DETAILED DESCRIPTION In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the present invention. Aspects of the invention provide a method and system for efficiently communicating data between an insurer and its repair shops, e.g., vehicle repair shops. An insurer may thus provide claim data to the vehicle repair shop, and the vehicle repair shop may provide repair cost information to the insurer. In one embodiment, the methods and systems described herein are particularly useful for insurers utilizing direct repair partners for servicing vehicles involved in insurance claims. Direct repair refers to a process whereby an insured takes his or her vehicle directly to a repair shop (referred to herein as a direct repair partner) without having to first visit an insurance adjustor to assess damage to the vehicle. Direct repair partner shops are typically preapproved by the insurer to perform the estimating work directly on premises, thereby reducing the insurer's expense of hiring insurance adjustors and maintaining physical premises in which adjustors perform their jobs, and saving the insured's time by not requiring the insured to visit an insurance adjustor prior to visiting a repair shop or, alternatively, by not having to visit numerous vehicle repair facilities to secure multiple estimates for submission to the insurer. The communicated data may include repair estimates, photos, data regarding the insured party and/or vehicle, other data that may be used by a vehicle repair shop, data obtained by a vehicle repair shop that is subsequently provided to the insurer, financial data associated with the vehicle repair shop and its repairs, performance data for the vehicle repair shop, or reinspection reports from the insurance company. The term “photos” may refer to photos in a variety of formats, including print or digital. FIG.1illustrates one example of a network architecture and data processing device that may be used to implement one or more illustrative aspects of the invention. Various components103,105,107, and108may be interconnected via a network101, such as the Internet. Other networks may also or alternatively be used, including private intranets, local LANs, wireless WANs, personal PANs, storage area networks (SANs), and the like. The components may include an insurance company data server103, web server105, and a client computer107. The insurance company data server103provides overall control and administration of data communication services according to aspects described herein. The insurance company data server103may be connected to the web server105through which users interact with the communicative system and software. The web server105may be for example a claim processing system which may be used to store assignment information for further processing and then translate this information into a format acceptable for the client computer107. Alternatively, the insurance company data server103may act as a web server itself and be directly connected to the Internet. The insurance company data server103may be connected to web server105through the network101(e.g., the Internet), or via some other network. 
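As one rough, illustrative sketch only (not part of the described system), the tiering of FIG. 1, an insurance company data server 103 behind a claim processing web server 105 reached by a client computer 107 at the shop, could be modeled as follows; the class names and fields are assumptions made for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class InsuranceDataServer:
    """Stands in for data server 103: holds claim and shop data and the control logic."""
    claim_db: dict = field(default_factory=dict)
    shop_db: dict = field(default_factory=dict)

@dataclass
class ClaimProcessingWebServer:
    """Stands in for web server 105: the externally exposed tier that clients reach."""
    backend: InsuranceDataServer

@dataclass
class RepairShopClient:
    """Stands in for client computer 107 at the repair shop premises."""
    web_server: ClaimProcessingWebServer

# Wiring mirrors FIG. 1: the shop's client talks to the web server, which talks to the data server.
client = RepairShopClient(ClaimProcessingWebServer(InsuranceDataServer()))
```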
Users may interact with the data server103using a remote computer107located on premises of a vehicle repair shop108. The remote/client computer107may be any conventional data processing device that can access the Internet, e.g., laptop computers, desktop computers, ultra-mobile PCs, Internet enabled mobile devices, etc. Client computers may also be located in any other location, and need not be limited to the premises of a repair shop. Client computers may interact with data server103and/or web server105, e.g., using a web browser to connect to the data server103via one or more externally exposed web sites hosted by web server105. Alternatively, each client computer107may have a “thin client” installed thereon, whereby the thin client provides an executable shell hosting a browser-window therein. The thin client thereby limits the toolbar menus (e.g., File, Edit, View, Favorites, Tools, Help, etc.), such as are typically found in browser applications such as Microsoft's Internet Explorer, that are available to a user while accessing the data server. The thin client also adds new toolbar menus to provide services in conjunction with the data server103and/or web server105, as is further described below. Servers and applications may be combined on the same physical machines, and retain separate virtual or logical addresses, or may reside on separate physical machines.FIG.1illustrates but one example of a network architecture that may be used, and those of skill in the art will appreciate that the specific network architecture and data processing device used may vary, and are secondary to the functionality that they provide, as further described below. Each component103,105,107may be any type of known computer, server, or data processing device. Data server103, e.g., may include a processor111controlling overall operation of the data server103. Data server103may further include RAM113, ROM115, network interface117, input/output interfaces119(e.g., keyboard, mouse, display, printer, etc.), and memory121. Memory121may further store operating system software123for controlling overall operation of the data server103, control logic125for instructing data server103to perform aspects of the invention as described herein, and other application software127providing secondary support or other functionality which may or might not be used in conjunction with aspects of the present invention. The control logic125may be referred to herein as the data server software or repair shop communication (RSC) software. Functionality of the data server software may refer to operations or decisions made automatically based on rules coded into the control logic, or made manually by a user providing input into the system. Memory121may also store data used in performance of one or more aspects of the invention, including a claim database129and a shop database131. The claim database129may store information regarding claims submitted by the insurer's insureds. Claim information may include, e.g., a date of accident, type of vehicle, insured's name, etc. The shop database131stores information about the various vehicle repair shops108with which the insurer works to repair customers' vehicles. The shop database131may store, for each vehicle repair shop108, shop contact information, available services (e.g., body shop, engine, transmission, audio/video, etc.), hours of operation, as well as indicate whether each shop is a direct repair partner or whether review by an insurance adjustor is required. 
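As an illustrative sketch, a record in each of these two databases might hold the fields named above; the Python field names themselves are assumptions:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClaimRecord:
    """One entry in the claim database 129."""
    claim_number: str
    insured_name: str
    accident_date: date
    vehicle_type: str

@dataclass
class ShopRecord:
    """One entry in the shop database 131."""
    shop_id: str
    contact_info: str
    services: list[str] = field(default_factory=list)  # e.g. ["body shop", "engine", "transmission"]
    hours_of_operation: str = ""
    is_direct_repair_partner: bool = False              # if False, review by an insurance adjustor is required
```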
In some embodiments, the claim database129and shop database131may be stored in a single database system. That is, the information can be stored in a single database, or separated into different logical, virtual, and/or physical databases, depending on system design. Those of skill in the art will appreciate that the functionality of the data server103as described herein may be spread across multiple data servers or data processing devices, for example, to distribute processing load across multiple computers, to segregate transactions based on geographic location, insurer, insured, type of insurance, etc. In addition, one or more aspects of the invention may be embodied in computer-usable data and computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer executable instructions may be stored on one or more computer readable media such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects of the invention, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein. Vehicle Insurance Claim Management Application (VICMA) Generally, the vehicle insurance claim management application (VICMA) improves communication and streamlines tasks between a vehicle repair shop and an insurance company in three general task groups: assignment management (comprising work requests, estimates/photos, and reinspections), financial management (e.g., electronic funds transfers (EFT)), and performance measurement. A work request also may be referred to as work assignment, repair request, service request, record transfer, vehicle claim or other terms. Whatever term is used, it is not meant to imply any particular type of relationship or obligation between the insurance company and a vendor. FIGS.2A and2Billustrate a method for the VICMA for use by an insurance company and a user of a computer system at a vehicle repair shop. The method ofFIGS.2A and2Bwill be described further in the following exemplary embodiment. FIG.3illustrates the insurance company system300, a claim processing system320, and the repair shop work station107fromFIG.1. The insurance company system300may include the claim database129and shop database131as well as a security system302. In addition, the claim processing system320may include a claim processing web server105, a translator application322, and an additional database324. The repair work station107may include the VICMA330as described further in this description. Within the repair shop work station107, there may also be a claim processing desktop application332and a web-browser/insurance company shop application340. 
While the shop application340is illustrated conceptually as residing at the repair shop workstation107, the shop application340may alternatively be accessed by and interact with the claim processing desktop application332using a web browser or browser shell/window. That is, the shop application340may be housed or served by web server105associated with claim processing system320, and merely accessed using a thin client claim processing desktop application332. Within the web-browser/insurance company shop application340the user may perform a number of different actions such as: authentication or login342, pre-load assignment data344, transfer estimates and data346, view insurance company estimates348, view EFT details350, view Key Performance Indicators (KPI)352, or view reinspection details354. Furthermore, there may be a repair estimating system334within the claim processing desktop application332. The following sections will further describe the interaction between each of these systems and applications. The VICMA330may comprise a number of modules which may include, but is not limited to, the following: an adapter module336, an assignment module326, a financial module339, and a performance module337. Each of these modules will be described in more detail in the below sections. Assignment Module The insurance company may offer assignments to the vehicle repair shop for either repairs or estimates as part of the first notice of loss (FNOL) process. After the vehicle repair shop has been offered the assignment and submitted the estimate, the vehicle repair shop typically completes the corresponding repairs upon approval by the insurance company and absent any special circumstances. The present invention may provide the vehicle repair shop with assignment data needed to prepare a repair estimate or repair the vehicle. Assignment data may include, but not be limited to, customer name, contact information, insurance claim number, assignment date, loss date, loss type, loss type detail, loss description, current vehicle location, location where vehicle may be sent, deductible amount, vehicle type, year/make/model, vehicle identification number (VIN), license plate number, towing company information, damage information, prior damage information, and vehicle safety status (drivable/non-drivable). One of the elements of the VICMA and the “direct connection” with the vehicle repair shop is the ability to provide assignment information and to offer real-time updates when any of the information changes during the course of the claim and/or repair. For example, an assignment might not have contained a deductible amount or a correct deductible amount at the initiation of the assignment. Through further investigation, the amount of the error could be discovered or corrected and the assignment information could be immediately updated to the vehicle repair shop that needs that data to conclude the repair and the associated financial processing. In other words, the VICMA and “direct connection” with the vehicle repair shop may expedite the repair for the customer. An assignment module326may be defined by computer readable instructions from the VICMA330. The assignment module326may comprise three functions: work requests, estimates/photos, and reinspections. 
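As an illustrative sketch of the assignment data listed above and of the real-time update of a corrected field such as the deductible amount, the dataclass below carries only a subset of the listed fields, and notify_shop is a hypothetical callback standing in for the direct connection to the shop:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AssignmentData:
    """Subset of the assignment fields the insurer pushes to the repair shop."""
    claim_number: str
    customer_name: str
    contact_information: str
    assignment_date: str
    loss_date: str
    loss_type: str
    loss_description: str
    vehicle_year_make_model: str
    vin: str
    license_plate: str
    damage_description: str
    drivable: bool
    deductible_amount: Optional[float] = None  # may be missing or wrong when the assignment is created

def push_deductible_correction(assignment: AssignmentData,
                               corrected_amount: float,
                               notify_shop: Callable[[AssignmentData], None]) -> None:
    """Correct a deductible discovered to be in error and immediately re-send the assignment."""
    assignment.deductible_amount = corrected_amount
    notify_shop(assignment)
```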
In one embodiment of the present invention, in a first step202, the insurance company system300, as part of the assignment module326, transfers (the term “transfer,” as used throughout, refers to uploading and/or downloading and/or pre-loading, depending on the specific situation, between two or more software applications residing on one or different computers or computer systems) a set of assignment data pertaining to a plurality of work requests. The work requests are received by the vehicle repair shop through the VICMA and the assignment module326. A work request may be referred to by other terms such as work assignment, repair request, service request, record transfer, vehicle claim or other terms. Whatever term is used, it is not meant to imply any particular type of relationship or obligation between the insurance company and a vendor. The assignment data may be stored in a claim processing system database324. The specific claim information is assigned to a given vehicle repair shop108and may be transferred to the vehicle repair shop108after the vehicle repair shop identity is authenticated by the VICMA330. Authentication or login342may be accomplished when the vehicle repair shop108enters a name and a password in the claim processing desktop332which queries the security application302to verify identity. The security application302may then query the shop database131or a similar database (e.g., which manages the relationships between the insurance companies and service providers to promote competition and provide customers with fact-based recommendations). The query of the shop database131may then validate that the user is legitimate while identifying the user's level of access to the insurance company information. Once the vehicle repair shop108is authenticated to the insurance company system300and the user is successfully logged in to the VICMA330, in a step204, the claim processing server105, as part of the assignment module326, transfers the assignment data to a vehicle repair estimating system334. This may include transferring the claim files assigned to the vehicle repair shop108by the insurance company. The claim processing system320may transfer or pre-load data344into the shop application estimating database by extracting information from the insurance company data and populating the necessary fields in the shop database131or claims database129, while generating an assignment request. The assignment request may include information about the claim and the estimate. The next step, a step206, the VICMA330, through an adapter module336, may transfer and translate the assignment data (e.g., estimate) to and from the vehicle repair estimating system334. There are two different scenarios where this may take place. First, if the customer's vehicle was inspected by the insurance company (e.g., drive-in, etc.), the vehicle repair shop108may receive the estimate as part of the transferred claim file. The estimates may then be directly transferred346to the vehicle repair shop's estimating system334for further processing. The processing may include viewing and verifying the estimate348or modifying and sending a revised estimate through the claim processing system320back to the insurance company system300. Second, in the case where the customer goes directly to the vehicle repair shop108, the vehicle repair shop108creates an estimate. 
Once the estimate is created, the vehicle repair shop108provides the insurance company system300with the estimate by transferring the estimate346through the claim processing system320where it may be translated through an adapter module336into a form acceptable by the insurance company. The adapter module336is defined by the computer readable instructions from the VICMA330. The VICMA330may have one or multiple adapter modules336which translate data between the VICMA330and a first or second (or third, etc.) vehicle repair estimating system334. If the vehicle repair shop108is using a different estimating system334, then the claim processing system320may translate the assignment data (e.g., claim and estimate data) into a format accepted by the vehicle repair shop's claims system. The claim processing system320may translate this data using a translation application322within the claim processing system320. Different vehicle repair shops108may use different estimating systems334. The translation application322will translate the output and input into each of these different estimating systems334. The VICMA330allows vehicle repair shops108to use whatever estimating systems334they choose. The claim processing system320may also encrypt the data transmitted over the internet to protect the privacy of the customer and to ensure the information is secure. In the next step,208, the estimate data may then be sent to the VICMA from the vehicle repair estimating system. The assignment module326may transfer the estimate data and assignment data from the vehicle repair estimating system334and then assign a corresponding repair to the vehicle repair shop108. In the next step,210, the VICMA may then review the estimate data for compliance with a set of front-end business rules. These front-end rules are normally contract-based. The VICMA may review the estimate data with respect to charges aligning with the contract the vehicle repair shop signed with the insurance company. These contract-based features may include labor rates, chargeable hours per particular task, or any other aspect of the work covered by the contract. If the front-end rules are not met, the vehicle repair shop108normally corrects the errors and resubmits the assignment request for further processing to the insurance company. This review could also include a review of the assignment data by the VICMA330. Following the estimation process and after the vehicle repair shop108has completed the review based on the front-end rules in step210, the estimate data or assignment data may be reviewed by a set of back-end rules by the insurance company system300. Generally, if the estimate data does not meet the back-end rules, a reinspection is required. During the reinspection, the VICMA, through the assignment module, may rerun the same set of front-end rules executed at the vehicle repair shop, and may also run a set of back-end rules (BERs). Front-end rules are normally contract-based. The VICMA may review the estimate with respect to charges aligning with the contract the vehicle repair shop signed with the insurance company. These contract-based features may include labor rates, chargeable hours per particular task, or any other aspect of the work covered by the contract. If certain errors are found, the insurance company system300may send the estimate back to the vehicle repair shop108to correct and re-submit. 
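The per-system translation described here might be organized as one adapter per third-party estimating system, selected from a registry. The adapter class, keys, and formats below are illustrative assumptions and do not reflect the actual data format of any estimating product:

```python
from typing import Protocol

class EstimateAdapter(Protocol):
    """Translates estimate data between the insurer's format and one estimating system 334."""
    def to_insurer_format(self, shop_estimate: dict) -> dict: ...
    def to_shop_format(self, insurer_estimate: dict) -> dict: ...

class ExampleSystemAdapter:
    """Hypothetical adapter for one vendor's estimating system."""
    def to_insurer_format(self, shop_estimate: dict) -> dict:
        # Map vendor-specific keys onto the insurer's canonical keys.
        return {"claim_number": shop_estimate["claimNo"], "total": shop_estimate["grandTotal"]}

    def to_shop_format(self, insurer_estimate: dict) -> dict:
        return {"claimNo": insurer_estimate["claim_number"], "grandTotal": insurer_estimate["total"]}

ADAPTERS: dict[str, EstimateAdapter] = {"example_system": ExampleSystemAdapter()}

def translate_for_shop(system_name: str, insurer_estimate: dict) -> dict:
    """Pick the adapter registered for the shop's chosen estimating system and translate."""
    return ADAPTERS[system_name].to_shop_format(insurer_estimate)
```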
Alternatively, if certain other errors are found during this front-end review, the insurance company system 300 may correct the identified errors itself and then evaluate the back-end rules. The back-end rules, as reviewed in step 212, may be based on or indicate whether the estimate aligns with the damage described (e.g., in the case of a front-end collision, the system would flag a rear tail light assembly indicated as needing repair). Each discrepancy may be scored and, at the end of the review, a final score is calculated. Depending on score thresholds, the back-end business rules define what further specific action should be taken. Example:
BER < 50: PASS; the claim is cleared for further processing.
BER > 50: generate a report and send the file to reinspection for output review.
The reinspection scores may be used as part of a performance rating method that measures the accuracy and effectiveness of the vehicle repair shop 108 relative to others in the market. The reinspection scores are then tabulated and may be displayed and stored using a reinspection report. The insurance company system 300 may create a reinspection report. In a step 214, the VICMA 330 may receive a set of inspection data from the insurance company. This inspection data may be in the form of the reinspection report. The reinspection report may include: claim number, owner name, estimate version, appraisal source, reinspection type, reinspection location, reinspector's name, repair phase, reinspection completion date, and reinspector notes. In a step 216, a read-only copy of the reinspection report may be downloaded and reviewed 354 by the vehicle repair shop 108 by linking through the web browser on the claims processing system desktop application 320. If discrepancies are listed in the reinspection report, the vehicle repair shop 108 then transfers a change request form through the claims processing translator system 322, which allows the vehicle repair shop 108 to enter corrected information using its estimating system 334. In a step 218, the VICMA updates the estimate data based on the reinspection and the reinspection report. Once the data is corrected, a supplement shop estimate may be transferred to the insurance company system 300 via the claim processing translator system 322. The final financial processing on completion of repairs (EFT) might not proceed until all required reinspection requests are corrected and supplement shop estimates are transferred to the insurance company. The change request form may also include additional quality information in the form of: estimate accuracy percentage, opportunity percentage, dollar accuracy, and dollar opportunity in terms of a percentage of costs. Examples:
Estimate Accuracy (%) = (Insurance Company estimate value / Shop estimate value) × 100%
Opportunity (%) = 100% − Estimate Accuracy %
Dollar Accuracy (USD) = Insurance Company Estimate (USD) × Estimate Accuracy / 100
Dollar Opportunity (USD) = Shop Estimate − Dollar Accuracy
Estimate accuracy may be defined as the ratio of the insurance company estimate to the vehicle repair shop estimate. When the shop estimate is more than the insurance company estimate, the value is less than 100%; when the vehicle repair shop estimate is less than the insurance company estimate, the value is greater than 100%. The reinspection report may also include estimate exceptions. Estimate exceptions may highlight information associated with the claims that the insurance company has identified as being in error.
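Before the exception details continue, the threshold example and the accuracy formulas above might be computed as in the following short sketch; the threshold value follows the example given, and the dollar figures in the usage comment are invented for illustration:

```python
def ber_disposition(ber_score: float, threshold: float = 50.0) -> str:
    """Apply the example back-end-rule threshold: below it the claim is cleared for
    further processing, otherwise the file is sent to reinspection for output review."""
    return "PASS" if ber_score < threshold else "REINSPECTION"

def accuracy_metrics(insurer_estimate: float, shop_estimate: float) -> dict:
    """Compute the change-request quality figures exactly as defined above."""
    estimate_accuracy_pct = insurer_estimate / shop_estimate * 100.0
    dollar_accuracy = insurer_estimate * estimate_accuracy_pct / 100.0
    return {
        "estimate_accuracy_pct": estimate_accuracy_pct,
        "opportunity_pct": 100.0 - estimate_accuracy_pct,
        "dollar_accuracy_usd": dollar_accuracy,
        "dollar_opportunity_usd": shop_estimate - dollar_accuracy,
    }

# Example: a $900 insurer estimate against a $1,000 shop estimate gives 90% accuracy,
# 10% opportunity, $810 dollar accuracy, and $190 dollar opportunity.
print(ber_disposition(42))            # -> "PASS"
print(accuracy_metrics(900.0, 1000.0))
```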
The types of information that may be highlighted may include vehicle information (vehicle year, mileage, equipment level). Other exceptions may include estimate line items. Estimate line items include listing of parts, labor and associated task duration (hours), and price. The reinspection report may also include a summary section that tabulates the identified tasks, hours, associated labor rates, and total amount of the reinspection costs. This information may also or alternatively be used by the performance module337and/or the financial module339. Financial Module. In a step220, the VICMA330provides the capability to track real-time financial status of the vehicle repair through the use of a financial module339or an Electronic Funds Transfer (EFT) system. The financial module339tracks two different types of claim transactions: verification requests and payment remittance. The financial module339provides for transaction information to be viewed as either including all claims assigned to the vehicle repair shop108(many vehicles) or by individual claims. For verification requests, the financial module339verifies that all transaction information is accurate and associated to the correct claim numbers. If errors are detected, the insurance company system300may list the error messages on the “EFT Financials” screen and a verification request may be submitted to the vehicle repair shop108to correct the error. When the vehicle repair shop108starts the vehicle repair, the vehicle repair shop108may enter the status change to “started” in the assignment screen600. The status change may be transmitted via the web link through the claim processing system320to be translated by the translation application322if needed. The status change may be registered in the claims database129. Once the vehicle repair shop108completes the repair, a user at the vehicle repair shop108updates the vehicle repair shop workstation107to “vehicle delivered.” The insurance company system300then may transfer a status change by the same process which will update the claim status in the claims database129to “complete,” in turn authorizing the EFT system to make the payment electronically. Therefore, there is no need for the vehicle repair shop108to wait for payment by conventional methods such as checks, etc. The EFT process may also begin when a user at the vehicle repair shop108updates the vehicle repair shop workstation107to “vehicle delivered” or “repair complete.” All transactions associated with the vehicle repair shop108or a particular claim may be viewed350by the vehicle repair shop user and/or insurance company's claim representative to quickly determining payment status or issues. Performance Module. In a step222, a performance module337compiles vehicle repair shop performance data, or Key Performance Indicator (KPI) data, that calculates a score and ranks the vehicle repair shop relative to other vehicle repair shops in the market. Finally, in a step224, the VICMA330gathers vehicle repair shop metrics by routing information through the claim processing translation system322. The KPI data may be compiled for individual claim transactions. When all data fields are captured for a given claim, the claim file may be added to a vehicle repair shop file that includes claim statistics for all claims the vehicle repair shop108has processed with the insurance company. The vehicle repair shop file may include scores for customer service, repair quality (pass ratio), or cycle time. 
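Two small illustrative sketches of what was just described follow: the status change that triggers electronic payment, and the per-shop performance file that completed claims roll up into. The field names are assumptions, and eft_system stands in for a hypothetical payment interface:

```python
from dataclasses import dataclass

EFT_TRIGGER_STATUSES = {"vehicle delivered", "repair complete"}

def record_status_change(claims_db: dict, claim_number: str, new_status: str, eft_system) -> None:
    """Register a shop-entered status change; when the repair is finished, mark the claim
    complete in the claims database and authorize the electronic payment."""
    claims_db[claim_number] = new_status
    if new_status in EFT_TRIGGER_STATUSES:
        claims_db[claim_number] = "complete"
        eft_system.authorize_payment(claim_number)  # hypothetical EFT call

@dataclass
class ShopPerformanceFile:
    """Roll-up of completed claims for one shop; further estimate metrics are described next."""
    shop_id: str
    customer_service_score: float
    repair_quality_pass_ratio: float
    cycle_time_days: float
```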
It may also include estimate metrics that measure the vehicle repair shop's ability to estimate total repair cost, average part amount for estimate, and average hours per estimate (with a breakdown of refinish, repair, and replace). The report may also include a measure of estimate accuracy that measures the difference between the vehicle repair shop's estimates and the final bill, difference between submitted and reinspection results. Another quality criterion may measure the number of recommended certifications the vehicle repair shop has relative to the desired level of certification the insurance company has set or that the collision repair industry has set as a desirable industry standard. The insurance company system300and shop database131may compile the individual vehicle repair shop files over a specified time (e.g., one month, six months, etc.) and then may calculate rankings relative to the local market. The report may be created and then viewed by the vehicle repair shop108by linking through the web browser on the claims processing application332. The vehicle repair shop108may have only view-only rights352and may have no access for changing any information. This aspect of the invention allows the performance module337to gather performance metrics and automatically update the KPI performance report when the individual claim files are completed. The invention further provides the vehicle repair shops108with the latest performance rating and ranking feedback which may more quickly help them identify ways to improve. Illustrative User Interface. FIGS.4-16illustrate a set of illustrative user interface screens which represent an aspect of the invention. Those skilled in the art will recognize that these are only example user interface screens and that a wide range and variety of user interface screens may be possible for this invention. As used herein, the term “exemplary” is not intended to mean “best” or “ideal,” but rather is used synonymously with the term “illustrative.” FIG.4illustrates an exemplary user interface screen400for authentication342that may be presented to a user of the repair shop application330. The authentication screen400may enable the user to enter their user name402and password404. The user may then select a “Login” button406to begin the authentication process for the user of the VICMA330. FIGS.5A and5Billustrate exemplary “Assignments” user interface screens502for the present invention. On each of these “Home” screens530, the user may select any one of the following links across the top of the screen which will take the user to a specified action within the VICMA330. These links may include: Assignments500, Find Assignment1100, Request Assignment1200, Financials1300, Reinspections1400, Shop Profile1500, or Shop Performance1600. Each of these categories will be described in further detail below. The user may select the “Assignments” link500, the Assignment screen502will be displayed. As illustrated inFIG.5A, the transferred claim assignments may be listed in the Assignments screen502. As illustrated inFIG.5A, the following categories may be listed for each claim assignment: customer504, vehicle506, preferred phone508, claim number510, date assigned512, promised delivery date514, and current status516. The user may also be able to change the status of a given claim through the “Change Status” link518. The viewable list of claims may be sorted by a category by selecting that given category area. 
As illustrated inFIG.5B, this exemplary interface screen502may have various views for the user. The user, from a drop-down selection box522, may select views such as: new, estimate complete, repair scheduled, repair started, repair complete, vehicle delivered, cancelled & closed, and all. In addition to the user having the ability to access and view this status information, others such as insurance agents, rental car companies, and customers may also have the ability to access and view this status information. To view further assignment details for an individual claim or customer, the user may select a specific name520in the customer504category.FIG.6illustrates an exemplary user interface screen for Assignment Details600. The Assignment Details screen600may list various information which can include status, assignment type, date assigned, contact information, promised delivery data, notes to shop, shop comments, claim number, named insured, loss date, loss type, loss type details, loss description, deductible amount, vehicle type, vehicle location, year/make/model, Vehicle Identification Number, license plate, damage description, prior damage, vehicle safety drivable, and rental. Also on this assignment detail screen600, the user may take various actions associated with that specific claim. First, the user may select a “Preload Assignment Data” button602. When the “Preload Assignment Data” button602is selected, the assignment data is transferred from the claim processing database324to the VICMA330.FIGS.7A-7Cillustrate exemplary user interface screens for preloading assignment data700. The preloading assignment data screen700lists the claim number702as well as the status of the transfer of the assignment data704and the importing of assignment data706. Second, onFIG.6, the assignment detail screen, the user may select an “Upload Estimate/Photo” button604. When the “Upload Estimate/Photo” button604is selected, the VICMA330uploads the estimate from the claim processing database324to the VICMA330. InFIG.8A, an exemplary user interface screen illustrates how the user may select the estimate800and find the estimate802for a given claim number804. Once the estimate is selected800, inFIG.8B, an exemplary user interface screen illustrates the system uploading the estimate810. While uploading the estimate, the system obtains the estimate812, validates the estimate814, translates the estimate816, and reviews the estimate818. A photo may also be uploading during this step along with the estimate. Furthermore, the user may also select the “Upload Photos Only” button606in order to only upload photographs. Third, onFIG.6, the assignment detail screen600, the user may select to “Change Status/Dates” link608. When the “Change Status/Dates” link608is selected, the user may change the status of the claim assignment or change various dates associated with the claim assignment on the Change Status/Dates user interface screen900. Some examples of the dates which may be changed are: estimate complete date902, repair scheduled date904, repair started906, promise delivery908, repair complete910, or vehicle delivered912. Also on the Change Status/Dates screen900, the user may select the “Cancel Assignment” link614. Fourth, onFIG.6, the assignment detail screen600, the user may select the “Add Comments” link610which may allow the user to include comments surrounding the particular claim assignment. 
Also, the user may select the “View History” link612which may allow the user to view the claim history from the initiation of the claim assignment through the completion of the claim assignment. Lastly, on the assignment detail screen600, the user may select the “Cancel Assignment” link614(this link is the same as the “Cancel Assignment” link onFIG.9) in order to cancel the selected claim assignment.FIG.10illustrates an exemplary Cancel Assignment user interface screen1000. On this Cancel Assignment screen1000, the user may select a reason from a “Cancellation Reason” drop-down menu1002. The user may select the “Find Assignment” link1100to search various assignments on the Find Assignment screen1102.FIG.11illustrates an exemplary Find Assignments user interface screen1102. The Find Assignment screen1102may allow the user to search the claim database129by customer last name1104or claim number. The VICMA330will then search the claim database129and list the search results1106and all claim assignments which match the given search criteria. The user may select the “Request Assignment” link1200to request an assignment on the Request Assignment screen1202.FIG.12illustrates an exemplary Request Assignment user interface screen1202. The Request Assignment screen1202may allow the user to request a certain assignment by claim number1204, customer name1206, vehicle make1208, or vehicle year1210. The user may then select the “Request Assignment” button1212and the VICMA330will then search the claim database129for the given claim request and then present the requested assignment on the Assignment Details screen600, as illustrated inFIG.6. The user may select the “Financials” link1300to display the Financials screen1302.FIGS.13A and13Billustrate exemplary user interface screens for the financial module339of the VICMA330.FIG.13Aillustrates the Financial user interface screen1302. The Financial screen1302may allow the user to view a customer's financial status in such categories as: customer name1304, claim number1306, last transaction date1308, transaction type1310(e.g., remittance or verification), amount1312, or status1314. There may also be a selection for a “View All” link1316which allows the user to see the details of the financial customer or claim number.FIG.13Billustrates the specific details from the “View All” link1316for the claim number1320. On this screen, the following information is listed specifically for each financial transaction associated with the selected claim number: customer name1322, transaction date1324, transaction type1326(e.g., remittance or verification), amount1328, or status1330. The user may select the “Reinspection” link1400to display the Reinspection Report screen1402.FIG.14Aillustrates an exemplary Reinspection Report selection user interface screen1402. The Reinspection Reports screen1402may list the available reinspection reports for a given month. The user may select the month from a month drop-down menu1404. After the user selects the month, the available reinspection reports may be listed by: customer name1406, claim number1408, reinspection date1410, change request1412, and estimate accuracy1414. A “View Detail” link1416may also be available for selection.FIG.14Billustrates a reinspection report when the “View Detail” link1416is selected. 
The reinspection report1420as illustrated inFIG.14B, may list general claim assignment details1422, such as: owner name, reinspector name, estimate version, reinspection location, appraisal source, repair phase, reinspection type, reinspection complete, or change request status. The reinspection report1420may also list the accuracy and opportunity1424, such as: estimate accuracy (%), dollar accuracy (USD), opportunity (%), or dollar opportunity (USD). The reinspection report1420may also list the estimate line item exceptions1426, parts summary exceptions1428, labor summary exceptions1430, material summary exceptions1432, additional charges summary exceptions1434, or vehicle information exceptions1436. As illustrated inFIG.15A, the user may select the Shop Profile user interface screen1500. The Shop Profile screen1500may list the store details of the selected shop1502. The following information may be listed for each selected shop1502: address, phone, e-mail, contact name, hours of operation, services, or closest major intersection. The user may also change the selected shop by selecting the “Change Shop” link1504. After selecting the “Change Shop” link1504, the user may be taken to a “Change Shop” user interface screen1506as illustrated in FIG.15B. The user may select a vehicle repair shop by selecting the shop from a drop-down selection list1508with various shops listed from the shop database. The user may select the “Shop Performance” link1600to display the Performance screen1602.FIG.16illustrates an exemplary Shop Performance user interface screen1602from the Shop Performance module1600on the VICMA330. The Shop Performance screen1602may include both the tier level1604and performance ranking (with “as of date”)1606. The Performance screen1602may also include the following categories: customer service1608, repair quality (e.g., pass ratio)1610, cycle time1612, estimate metrics1614, reinspection accuracy1616, and recommended certifications (e.g., I-CAR, ASE Blue, SP2, etc.)1618. The customer service category may further include: explanation of shop process, quality of work, care and concern, timely completion of repairs, or promise time (e.g., change to delivered on time). The estimate metrics category may further include: total repair cost, average part amount per estimate, average hours per estimate for refinish, repair, and replace, or the difference from estimate to the final bill. Each of these categories may have its own shop result number with an associated ranking. Non-Referral Vehicle Repair Shops. Additionally, another illustrative embodiment provides a method and system for efficiently communicating data between an insurer and a non-referral repair shop, e.g., vehicle repair shops that are normally not preapproved by the insurer to perform the estimating and repair work. Non-referral vehicle repair shops may also be referred to as non-direct repair shops. The methods and systems described herein are particularly useful for insurers utilizing non-referral vehicle repair shops for servicing vehicles involved in insurance claims. In this embodiment, the insured or claimant may be able to select a non-referral repair shop, not delegated or preapproved by the insurer, thereby allowing the insured or claimant to select any vehicle repair shop. 
As was described above, this communicated information or data may include repair estimates, photos, data regarding the insured party and/or vehicle, other data that may be used by a non-referral vehicle repair shop, and data obtained by a non-referral vehicle repair shop that is subsequently provided to the insurer. The term “photos” may refer to photos in a variety of formats, including print or digital among others. As one added benefit, this non-referral process will allow the customer to select a repair shop of their choice while reducing the insurance company's expense of hiring insurance adjustors. Generally, an insurance company may be required to utilize less insurance adjustors with this embodiment of the non-referral module. Additionally, this process will save the insured's time by not requiring the insured to visit an insurance adjustor at a drive-in facility, or alternatively not having to visit numerous vehicle repair facilities to secure multiple estimates for submission to the insurer. As was described above,FIG.1illustrates one example of a network architecture and data processing device that may be used to implement one or more aspects of the invention. Additionally, as was described above, one or more aspects of the invention may be embodied in computer-usable data and computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The computer executable instructions may be stored on one or more computer readable media such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects of the invention, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein. In addition to the adapter module336, the assignment module326, the financial module339, and the performance module337, the VICMA330as described above may also include a non-referral module338. The non-referral module338may be defined by computer readable instructions from the VICMA330. The non-referral module338may comprise various functions: rule based intelligence scores, desk intervention, field intervention, and/or change requests.FIG.17illustrates an insurance company system similar to the insurance system300illustrated inFIG.3with the added non-referral module338. The non-referral module338may be part of the VICMA330as shown inFIG.17. Without departing from the present invention, the non-referral module338may be part of or included with the insurance company system300as a separate module. FIG.18illustrates a method for the VICMA330for use by an insurance company and a user of a computer system at a non-referral vehicle repair shop. The method illustrated inFIG.18will be described further in the following exemplary embodiment. 
In one embodiment of the present invention, in a first step1802, a first notice of loss (FNOL) is received by the insurance company from the insured. This FNOL may be received in the form of a phone call, email, web site upload, or other methods applicable to the transfer of information. Following the FNOL, in a second step1804, the insurance company provides options to the insured for fixing their vehicle. One option would be for the insured to take their vehicle to a direct repair shop as was described above. Another option would be for the insured to select a non-referral repair shop for the repairs. A non-referral repair shop may be defined as not delegated or preapproved by the insurance company, thereby generally allowing the insured to select any available vehicle repair shop. In some instances, a repair shop may be chosen by the insured which is neither a referral shop nor a non-referral shop. In this instance, the insurance company may opt to send a field adjuster to inspect the damage to the vehicle or other methods as included with this invention or outside of this invention. Many reasons or factors may be utilized in an insured selecting a non-referral repair shop. First, a non-referral repair shop may be geographically closer to the insured and it may be more convenient to have their vehicle repaired at a non-referral repair shop. Second, the insured may have an existing positive relationship with a non-referral repair shop. In this instance, the insured may feel more comfortable having their vehicle repaired at the non-referral repair shop versus a repair shop recommended by the insurance company. Third, there may not be any direct repair shops in the insured's area and therefore, the only possible repair shop would be a non-referral repair shop. Following the selection from the insured, the VICMA330may determine whether the repair shop selected is a direct repair shop or a non-referral repair shop. As was stated above, if the insured selects a direct repair shop, the VICMA330proceeds as was described above, specifically inFIGS.2A and2B. If the insured selects a non-referral repair shop, the VICMA330proceeds as will be described below for a non-referral repair shop. The VICMA330may include the ability to recognize that the repair shop selected is a participating non-referral repair shop. When this participating non-referral repair shop is identified, the VICMA330may send the assignment to the non-referral repair shop. Similarly to as was described above for direct repair shops, the present invention may provide the non-referral repair shop with assignment data needed to prepare a repair estimate or repair the vehicle. Assignment data may include, but not be limited to, customer name, contact information, insurance claim number, assignment date, loss date, loss type, loss type detail, loss description, current vehicle location, location where vehicle may be sent, deductible amount, vehicle type, year/make/model, vehicle identification number (VIN), license plate number, towing company information, damage information, prior damage information, and vehicle safety status (drivable/non-drivable). Additionally, in the second step1804, a market relationship manager may allow the input of the non-referral repair shops. The market relationship manager may generally be a repository of information used in the management of repair shops, both non-referral repair shops and direct repair shops. 
The market relationship manager may also allow the creation of non-referral multi-shop organizations. The non-referral multi-shop organization may include the ability to group commonly-owned repair shops, such that a set of repair shops that are commonly-owned are grouped together. Additionally, the market relationship manager may allow for the conversion between referral repair shops (or direct repair shop) and non-referral repair shops. In a third step1806, the VICMA330and non-referral module338determines if the non-referral repair shop is in a staffed market or a non-staffed market. A staffed market may be defined as a market or area where field adjusters are available to make field inspections (as will be described below). A non-staffed market may be defined as a market or area where there are no field adjusters present or available in the area. In a fourth step1808, the VICMA330and non-referral module338receive an estimate uploaded from the non-referral repair shop. During this step, the non-referral repair shop receives the assignment data from the insurance company and then completes the estimate for the vehicle repair. The non-referral module338may include rules and activities to ensure that the customer-repair shop completes and uploads the estimate for the vehicle repair in a required amount of time defined as a future display date. In one illustrative aspect, in a staffed market, if the estimate is not uploaded prior to the future display date, the non-referral repair shop assignment may be canceled. Further, after the non-referral repair shop estimate assignment is canceled, a field estimate assignment may be assigned to a field adjuster in the staffed market. In another illustrative aspect, in a non-staffed market, if the estimate is not uploaded prior to the future display date, an automated task may be established to contact the non-referral repair shop and/or insured to ensure the estimate is completed in a timely manner. In a fifth step1810, after the estimate is transferred to the insurance company system300, the VICMA330and the non-referral module338may analyze and/or audit the estimate and calculate a score or rules-based intelligence (RBI) points for the estimate. The RBI points are cumulative for the subject estimate and any supplement estimate versions. The analysis/audit and calculation may include a line-by-line review of the estimate, such as “Repair Fender” and “Paint Fender.” Additionally, the analysis and calculation may include a review of the estimate in aggregate, such as the total number of labor hours or the total parts costs. The VICMA includes a proprietary set of rules that serve to audit the estimate both on the front and back end, i.e. front end rules and back end rules. The front end rules are used to alert the appraiser of errors or omissions in the preparation of the estimate and provide an opportunity for the appraiser to amend the estimate. Each back end rule may be assigned a RBI point value. The total of back end rules generated produces a cumulative RBI score for each estimate and any supplement estimate version. These back end rules may be generally related to and focused on identifying anomalies in an estimate. The individual proprietary set of rules may be based on and developed from anecdotal or experience from experienced damage evaluators or previous staff adjusters during their re-inspections. 
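As an illustrative sketch of how the cumulative RBI scoring could work, each back-end rule below carries a point value and is evaluated against every estimate version; the two rules shown are stand-ins, since the actual rule content and point values are described as proprietary:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BackEndRule:
    """A back-end rule: a check over one estimate version plus the RBI points it adds when it fires."""
    name: str
    points: int
    fires: Callable[[dict], bool]

# Illustrative rules only; the real rule set and point values are proprietary.
RULES = [
    BackEndRule("labor_rate_above_market", 30,
                lambda est: est["labor_rate"] > est["market_labor_rate_high"]),
    BackEndRule("frame_hours_on_drivable_vehicle", 25,
                lambda est: est["drivable"] and est["frame_repair_hours"] > 0),
]

def rbi_score(estimate_versions: list[dict]) -> int:
    """RBI points accumulate across the original estimate and any supplement versions."""
    return sum(rule.points
               for version in estimate_versions
               for rule in RULES
               if rule.fires(version))
```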
Additionally, the proprietary set of rules, exceptions, exclusions, and/or prioritizations may include statistically-based analysis of historical data. One illustrative rule or exception may be when the Chicago labor rate is known to be approximately $44-46 and the labor rate on the estimate is listed as $80. Another illustrative rule or exception may be for a drivable vehicle with an estimate for the hours of labor to repair the frame, where the appraised number of hours to repair the frame may be inconsistent with the typical damage to a drivable vehicle. Following the calculation of the RBI points for the estimate, in a sixth step1812, the VICMA330and the non-referral module338may determine the intervention level for the estimate. First, the intervention level may include three different levels: automatic approval1813A, desk intervention1813B, and field intervention1813C. With the automatic approval1813A, the VICMA330and the non-referral module388automatically approves the estimate. The desk intervention level1813B is defined by a review and/or approval by a desk intervention adjuster or inside technical reviewer. The field intervention level1813C is defined by a review and/or approval by a field adjuster. Further details of each of these intervention levels will be defined below. To determine the intervention level, the VICMA330and non-referral module338analyze the RBI score and whether the non-referral repair shop is in a staffed market or a non-staffed market. In determining the intervention level, the RBI score may be in a low-range, mid-range, or high-range. The ranges can be configured by the insurance company at an individual market level. Separate ranges can be established for catastrophe and non-catastrophe claims. Without departing from the present invention, the low-range, mid-range, and high-range may be determined and adjusted during the process by the insurance company. Additionally, these threshold settings of low-range, mid-range, and high-range may be configurable based on the changing market conditions and changes in the market and/or environment. The configurability of the threshold settings of low-range, mid-range, and high-range may be performed manually or automatically. A rule set editor application may automatically provide these market configurable threshold settings used for estimate routing and determining the intervention level based on changes in the market condition and environment. According to aspects of the present invention, if the RBI score is low-range, the VICMA330and non-referral module338may determine an automatic approval intervention level1813A. When the RBI score is low-range, the intervention level of an automatic approval1813A is not dependent on whether the non-referral repair shop is in a staffed market or a non-staffed market. Additionally, when the RBI score is mid-range, the VICMA330and the non-referral module338may determine a desk intervention level1813B. When the RBI score is mid-range, the intervention level of desk intervention1813B is not dependent on whether the non-referral repair shop is in a staffed market or a non-staffed market. These mid range RBI scored estimates are routed to a queue where the estimates may be held for review by the technical reviewer. When the RBI score is high-range and the non-referral repair shop is in a staffed market, the VICMA330and the non-referral module338may determine a field intervention1813C. 
These high range RBI scored estimates may be routed to a queue where they are dispatched to an field adjuster. However, when the RBI score is high-range and the non-referral repair shop is in a non-staffed market, the VICMA330and the non-referral module338may determine a desk intervention1813B, thereby automatically routing the claim to the desk review queue and designated the claim as a non-staffed area. Additionally, other factors and additional level determinations may be made when determining the intervention level without departing from the present invention. After the intervention level is determined, the intervention may be completed as was determined in step1812. For automatic approval1813A, the VICMA330or non-referral module338may automatically approve the estimate for the non-referral repair shop such that the non-referral repair shop may then begin repairs of the damaged vehicle. An automated status may be sent to the non-referral repair shop advising that the estimate has been approved. For desk intervention1813B, a desk intervention adjuster or internal/inside technical reviewer may review and analyze the estimate, photos, assignment, etc. for accuracy and errors. The inside technical reviewer may determine that the vehicle is a total loss wherein the total loss may then be handled as those processes and methods as known and used in the art. Additionally, the inside technical reviewer may determine that a field inspection is needed if there is a lack of detailed information in the estimate, photos, or assignment or if the inside technical reviewer feels that this estimate and vehicle need to be reviewed in person. If a field inspection is needed, the estimate and assignment may be sent, by way of the desk review queue to the Field queue, to the field intervention step1813C and a field intervention adjuster as will be described below in step1813C. Additionally the inside technical reviewer may determine that changes are required for the estimate. If changes are required, a change request process will begin as will be described below in step1814. If the vehicle is not a total loss, a field inspection is not needed, and no changes are required, the inside technical reviewer may then approve the estimate for the non-referral repair shop such that the non-referral repair shop may then begin repairs of the damaged vehicle. For field intervention1813C, a field intervention adjuster (which may also be labeled as field adjuster) may review both the non-referral repair shop and the vehicle. The field adjuster typically performs a physical inspection of the vehicle and may also review and analyze the estimate, photos, assignment, etc. for correctness and errors. The inside technical reviewer may determine that the vehicle is a total loss wherein the total loss may then be handled as those processes and methods as known and used in the art. Additionally, the field adjuster may determine that changes are required for the estimate. If changes are required, a change request process will begin as will be described below in step1814. If the vehicle is not a total loss and no changes are required, the field adjuster may then approve the estimate for the non-referral repair shop such that the non-referral repair shop may then begin repairs of the damaged vehicle. Without departing from the present invention, the field adjuster may approve the estimate by using a mobile communication device through a mobile client application from the non-referral repair shop or another remote location. 
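The routing of steps 1812 and 1813A-C reduces to a small decision table, sketched below; the range labels correspond to the configurable low-range, mid-range, and high-range thresholds discussed above:

```python
def intervention_level(rbi_range: str, staffed_market: bool) -> str:
    """Route an estimate: low scores auto-approve, mid scores go to desk review, high scores go
    to a field adjuster where one is available, otherwise to desk review flagged as non-staffed."""
    if rbi_range == "low":
        return "automatic_approval"   # 1813A
    if rbi_range == "mid":
        return "desk_intervention"    # 1813B
    if rbi_range == "high":
        return "field_intervention" if staffed_market else "desk_intervention"  # 1813C or 1813B
    raise ValueError(f"unknown RBI range: {rbi_range}")
```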
In a seventh step1814, the VICMA330and non-referral module338may process a review or change request when required. With the change request, the adjuster (either a desk intervention adjuster or field adjuster, as applicable) may attempt to reach an agreed price with the non-referral repair shop. If the desk intervention adjuster cannot reach the agreed price with the non-referral repair shop, the assignment/estimate may be sent to a field intervention adjuster for physical inspection and potential adjustment/changes. The assignment/estimate may be routed from the desk review queue to the field queue. If the field intervention adjuster cannot reach an agreed price with the non-referral repair shop, the assignment/estimate may be cancelled from the non-referral repair shop and the field adjuster may then create an estimate using one of the various estimating platform products known and used in the art. This new estimate may then be treated in the same manner as a regular field assignment/estimate as described above. However, if the adjuster is able to reach an agreed price with the non-referral repair shop, the adjuster and/or the non-referral module338may create a change request. The VICMA330and non-referral module338may then transfer the change request to the non-referral repair shop. After the non-referral repair shop addresses the change request and makes the appropriate changes to the estimate, the non-referral repair shop uploads and transfers the revised estimate back to the VICMA330(and/or the non-referral module at the insurance company) by electronically sending the revised estimate to a “reconciliation queue”. Again, the adjuster (either a desk intervention adjuster or field adjuster, as applicable) may review the revised estimate from the non-referral repair shop. If the adjuster agrees with the revised estimate from the non-referral repair shop, the adjuster may approve the new estimate. However, if the adjuster does not agree with the revised estimate from the non-referral repair shop, the adjuster submits a new change request via the VICMA330and through the non-referral module338and the above steps are repeated. Many external systems and components may provide additional features to aspects of this invention. For example, an external server or client server may add the functionality of sending the non-referral review requests throughout the process. Additionally, this external server may assist with and allow the approval of the non-referral estimates via the VICMA330. This external server may be, for example, a mobile client management system. Additionally, a scheduler application may allow the added functionality of identifying the non-referral and the referral shop locations from a field user interface. Based on this identification, the scheduler application may launch or enter the identification into the appropriate assignments, i.e., non-referral assignments or referral assignments. Additionally, a plurality of estimate routing rules may be utilized by the VICMA330. These estimate routing rules may define circumstances for routing the estimate based on the original estimate, supplements, conversions, change requests, and/or total losses. Other routing rules may be utilized without departing from this invention. Further, additional work queues or database files may be utilized to serve as repositories for specific estimates based on these estimate routing rules. 
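Purely as an illustration, the estimate routing rules discussed above amount to a configurable mapping from estimate events (original estimate, supplement, conversion, change request, total loss) to work queues. The sketch below is a minimal, hypothetical Python illustration of such a mapping; the event names, queue names, and default assignments are assumptions for illustration and would in practice be defined by the insurance company's own rule set.

```python
from enum import Enum


class EstimateEvent(Enum):
    ORIGINAL = "original estimate"
    SUPPLEMENT = "supplement"
    CONVERSION = "conversion"
    CHANGE_REQUEST = "change request"
    TOTAL_LOSS = "total loss"


# Hypothetical routing table: each event type maps to a work queue name.
# The queue names loosely mirror the queues mentioned in the text; the
# actual rule set would be configured by the insurer.
DEFAULT_ROUTING_RULES = {
    EstimateEvent.ORIGINAL: "rbi_scoring_queue",
    EstimateEvent.SUPPLEMENT: "desk_review_queue",
    EstimateEvent.CONVERSION: "desk_review_queue",
    EstimateEvent.CHANGE_REQUEST: "reconciliation_queue",
    EstimateEvent.TOTAL_LOSS: "total_loss_queue",
}


def route_event(event: EstimateEvent, rules: dict = DEFAULT_ROUTING_RULES) -> str:
    """Return the destination work queue for an incoming estimate event."""
    return rules[event]


if __name__ == "__main__":
    # A revised estimate returned by the repair shop lands in the
    # reconciliation queue for adjuster review.
    print(route_event(EstimateEvent.CHANGE_REQUEST))  # reconciliation_queue
```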
CONCLUSION Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
11861726
The figures depict preferred embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the systems and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION To allow parties to investigate and settle insurance claims more quickly and efficiently, a system of this disclosure allows the parties to conduct inspections of insured properties substantially in real time, and in a collaborative manner, using a single unmanned aerial vehicle (UAV) or several UAVs. In some embodiments, the system may include one or several servers configured to automatically set up an online inspection session in which insurers, customer representatives, and various third parties may participate. The various parties may join the online inspection session via workstations from their respective locations. As discussed below, the workstations need not be dedicated machines and in general may include personal non-portable computers, personal portable computers, and/or mobile devices (such as smartphones, laptops, tablets, phablets, smart watches, wearable electronics, etc.). To ensure safety, only one workstation, operated by an inspection technician, may be authorized to control the UAV, in some embodiments. The other participating parties may be allowed to view the video feed from the UAV, submit requests (e.g., “please capture additional images of the western wall”), submit comments, share documents, and/or otherwise actively participate in the inspection session. The system may then create a record of the inspection (“inspection record”) using the data captured by the UAV, as well as the input from the parties to the inspection. In some embodiments, the parties may digitally sign the inspection record prior to the system saving the inspection record in a database. Further, in some embodiments and/or scenarios, the authorized parties can prepare and submit an offer to a policy holder, potentially saving time and effort associated with a longer resolution process. In addition to storing inspection records, the system may store records descriptive of properties to be insured, as well as records for various inspection technicians. To make multiple inspections more efficient, the one or more servers operating as part of the system may generate a prioritized list of properties to be inspected and provisionally assign candidate inspection technicians for each property. The one or more servers may create the prioritized list in view of geography (e.g., to schedule inspections of nearby properties consecutively), availability of inspection technicians, particular expertise of inspection technicians, etc. The system may notify a user in a supervisory role regarding the provisional assignments and receive a confirmation (or a rejection) of the assignments. Generally speaking, the techniques for conducting collaborative real-time inspections may be implemented in one or more network servers, in one or more client devices, or in a system that includes several of these devices. However, for clarity, the examples below focus primarily on embodiments in which one or more server devices set up and facilitate online collaborative inspection sessions via a wide-area network, such as the Internet. 
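As a rough illustration, the safety constraint noted above (only the workstation operated by the inspection technician may control the UAV, while the other participants may view the feed, submit requests and comments, and share documents) can be expressed as a simple role-based permission check. The sketch below is a minimal, hypothetical Python illustration; the role and action names are assumptions and are not terms defined in this disclosure.

```python
from enum import Enum, auto


class Role(Enum):
    INSPECTION_TECHNICIAN = auto()
    CUSTOMER_REPRESENTATIVE = auto()
    INSURER_EMPLOYEE = auto()
    THIRD_PARTY = auto()


class Action(Enum):
    CONTROL_UAV = auto()
    VIEW_VIDEO_FEED = auto()
    SUBMIT_REQUEST = auto()
    SUBMIT_COMMENT = auto()
    SHARE_DOCUMENT = auto()


# Every participant may view the feed, submit requests and comments, and
# share documents; only the inspection technician may control the UAV.
COMMON_ACTIONS = {Action.VIEW_VIDEO_FEED, Action.SUBMIT_REQUEST,
                  Action.SUBMIT_COMMENT, Action.SHARE_DOCUMENT}
PERMISSIONS = {role: set(COMMON_ACTIONS) for role in Role}
PERMISSIONS[Role.INSPECTION_TECHNICIAN].add(Action.CONTROL_UAV)


def is_allowed(role: Role, action: Action) -> bool:
    """Return True if a participant with the given role may perform the action."""
    return action in PERMISSIONS[role]


if __name__ == "__main__":
    print(is_allowed(Role.THIRD_PARTY, Action.SUBMIT_COMMENT))           # True
    print(is_allowed(Role.CUSTOMER_REPRESENTATIVE, Action.CONTROL_UAV))  # False
    print(is_allowed(Role.INSPECTION_TECHNICIAN, Action.CONTROL_UAV))    # True
```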
Example Computing Environment for Conducting Collaborative Inspections FIG.1illustrates various aspects of an example computing environment implementing a collaborative inspection system10. The system10may include a server device12and several workstations14A-C, all of which may be communicatively interconnected via a communication network16, as described below. In an example configuration, an inspection technician may use the workstation14A to control an unmanned aerial vehicle (UAV)18to inspect an insured property, such as a house damaged by fire, water, wind, hail, and/or weather; a customer representative may use the workstation14B to monitor progress of the ongoing inspection, submit comments, and/or view aerial imagery in real time; and/or a third-party participant may use the workstation14C to similarly participate in the inspection substantially in real time. In other configurations, different groups of users may participate in online inspections via different group workstations. As discussed in more detail below, the server device12during operation may access databases20A-D storing data related to insured properties, inspection records, candidate inspection technician records, and/or UAV equipment, respectively. The server device12may include one or more processor(s)30, such as a central processing unit (CPU), coupled to a memory32via a digital bus or another type of link (not shown). The memory32may be tangible, non-transitory memory and may include any types of suitable memory modules, including random access memory (RAM), read-only memory (ROM), flash memory, other types of persistent memory, etc. The memory32may store data as well as instructions executable on the processor(s)30. These instructions may implement, among other functionality, an inspection control module34configured to set up online collaborative inspections, coordinate exchange of information between the workstations14A-C regarding inspection sessions, create inspection records to be stored in the database20B, etc., as discussed in more detail below. More generally, in various embodiments, the server device12may include hardware, firmware, and/or software components. It will be appreciated that although only one server device12is depicted inFIG.1, multiple servers12may be provided for the purpose of distributing server load, serving different web pages, etc. These servers12may include a web server, an entity-specific server (e.g., an Apple® server), a server that is disposed in a corporate or proprietary network, etc. The server device12may communicate with the workstations14A-C via the communication network16. The network16may be a proprietary network, a secure public Internet, a local area network (LAN), a wide area network (WAN), a virtual private network (VPN), or some other type of network, such as dedicated access lines, plain ordinary telephone lines, satellite links, combinations of these, etc. Where the communication network16comprises the Internet, data communication may take place over the network16via an Internet communication protocol. In some embodiments, the communication network16may be a password protected organizational network where only members of the organization having user accounts may access the network. In some embodiments, the databases20A-D may be implemented in a single device or a group of devices. In other embodiments, each of the databases20A-D may be implemented separately in a respective server or group of servers. 
The databases20A-D may be implemented as relational databases, for example, made up of tables stored on a non-transitory, computer-readable memory. The insured property database20A may store information about insured properties, including real properties. For example, a certain record in the insured property database20A may specify the boundaries of a parcel of land, the age of the structure built on the parcel of land, various properties of the structure (such as the number of rooms), whether or not the property includes a garage, the type of roof, the types of electrical and gas connections, etc. The inspection records database20B may store records describing ongoing or completed inspection sessions. For example, a certain record in this database may specify the date and time that the collaborative online inspection was conducted, the names and organizational roles of the participants, the comments submitted by the participants, whether or not the participants submitted any documents prior to or during the inspection, the type of the UAV18used for the inspection, etc. Further, the record may include photographs or even a video recording of the inspection as captured by the UAV18. Still further, the record may include authentication data such as digital signatures of the participants, for example. In this manner, the records in the inspection records database20B may be used as evidence in disputes or other procedures. The candidate inspection technician database20C may store information about inspection technicians who potentially may be available to participate in online collaborative inspections as operators of UAVs, such as the UAV18. An example record in the candidate inspection technician database20C may specify the availability of a certain inspection technician, various restrictions regarding the type of equipment he or she may be authorized to handle, etc. Further, the UAV database20D may store information about the available fleet of UAVs. The records in this database may include indications of technical capabilities of the UAVs (e.g., the video equipment, the number or quality of cameras, the range of flight, the amount of fuel remaining), as well as indications of the current locations of the corresponding UAVs (e.g., GPS coordinates). It is noted that, in addition to long-range UAVs, the fleet can include smaller, portable devices such as miniature or small UAVs, which inspectors can carry with them and launch while on-site. In some cases, the system10may automatically select candidate UAVs from among multiple UAVs in the fleet based on such considerations as weight, stabilization parameters, illumination and magnification capabilities, etc., in view of inspection parameters such as the approximate size of the property to be inspected, the approximate distance to the property, the current weather, etc. The system10then may provide an automatic recommendation to an operator workstation (see below) to reduce the amount of time required to select the equipment. More generally, the system10may include additional databases or, conversely, not include some of the databases illustrated inFIG.1. It is noted thatFIG.1illustrates the databases20A-D by way of example only. 
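By way of illustration only, the automatic selection of candidate UAVs mentioned above may be sketched as a filter-and-rank pass over the records in the UAV database20D. In the hypothetical Python sketch below, all field names, thresholds, and ranking criteria are assumptions made for illustration; the description above only lists the kinds of considerations (weight, stabilization, range, weather, property size) that such a selection might weigh.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class UavRecord:
    """Simplified UAV database record (fields are illustrative)."""
    uav_id: str
    weight_lbs: float
    range_miles: float
    has_stabilized_camera: bool
    max_zoom: float
    location: Tuple[float, float]   # current (latitude, longitude)


@dataclass
class InspectionParams:
    """Approximate parameters of the requested inspection."""
    distance_miles: float
    property_area_sqft: float
    windy: bool


def candidate_uavs(fleet: List[UavRecord], params: InspectionParams) -> List[UavRecord]:
    """Filter and rank fleet records against the inspection parameters."""
    candidates = []
    for uav in fleet:
        if uav.range_miles < params.distance_miles:
            continue                      # cannot reach the property
        if params.windy and not uav.has_stabilized_camera:
            continue                      # stabilization needed in windy conditions
        candidates.append(uav)
    # Prefer lighter aircraft, then greater zoom capability.
    return sorted(candidates, key=lambda u: (u.weight_lbs, -u.max_zoom))


if __name__ == "__main__":
    fleet = [
        UavRecord("uav-1", 28.0, 5.0, True, 10.0, (41.9, -87.6)),
        UavRecord("uav-2", 12.0, 1.0, False, 4.0, (41.9, -87.6)),
    ]
    params = InspectionParams(distance_miles=3.0, property_area_sqft=2400, windy=True)
    print([u.uav_id for u in candidate_uavs(fleet, params)])   # ['uav-1']
```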
With continued reference toFIG.1, the workstations14A-C may include, by way of example, various types of “mobile devices,” such as a smartphone, a tablet computer, a cell phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a wearable computing device, smart glasses, smart watches or bracelets, phablets, other smart devices, devices configured for wired or wireless RF (Radio Frequency) communication, etc. In general, any appropriately configured electronic device may interact with the server12via the network16. Thus, the workstations14A-C may include general-purpose devices or special-purpose devices developed specifically to operate in the system10. As illustrated inFIG.1, each of the workstations14A-C may include one or more processor(s)40, a memory42, and a user interface44, interconnected via one or several digital busses, for example (not shown). Similar to the processor(s)30, the processor(s)40may include general-purpose processors such as CPUs or special-purpose processing units, such as application-specific integrated circuits (ASICs). The memory42may include one or several non-transitory memory components, such as a RAM, a ROM, a flash drive, a hard disk, etc. The user interface44may include a touchscreen or a display-only screen, a speaker, one or more user-input devices such as a keyboard or a microphone, etc. In addition to the illustrated components, the workstations14A-C may include additional components such as a communication unit to communicate with the server12via any suitable wireless communication protocol network, such as a wireless telephony network (e.g., GSM, CDMA, LTE, etc.), a Wi-Fi network (802.11 standards), a WiMAX network, a Bluetooth network, etc. The workstations14A-C need not necessarily communicate with the network16via a wired connection. In some instances, the workstations14A-C may communicate with the network16via wireless signals (e.g., radio frequency (RF) communication and data transmission) and, in some instances, may communicate with the network16via an intervening wireless or wired device, which may include a wireless router, a wireless repeater, a base transceiver station of a mobile telephony provider, etc. The workstations14A-C may include devices which are used by members of an organization to access an organizational communication network, such as a local area network (LAN), a virtual private network (VPN), etc. Each of the workstations14A-C may interact with the server device12to receive web pages and/or server data, and may display the web pages and/or server data via a client application and/or an Internet browser. To this end, the memory42may include such software components as an operating system and various software applications (not shown to avoid clutter). The operating system, for example, may include Microsoft Windows®, OS X®, Linux®, Unix®, etc. The software applications may include, for example, a web browser such as Apple Safari®, Google Chrome™, Microsoft Internet Explorer®, or Mozilla Firefox® that may be implemented as a series of machine-readable instructions for receiving, interpreting, and/or displaying web page information from the server12while also receiving inputs from the user. The memory42also may store instructions, executable on the processor(s)40, that implement a property inspection client module46. 
In operation, the property inspection client module46may provide graphical user interface screens including a list of other participants in the inspection session, various icons and/or interactive controls for submitting comments or requests, submitting documents, zooming in or out on the photographs and/or the video feed, etc. Example functionality of the property inspection client module46is further discussed below. The memory42also may include data storage regions that include such data as user profiles, application data for the software applications, and/or other data necessary to interact with the server12through the communication network16. In some embodiments, the workstations14A-C may gain access to online inspection sessions upon verification of user accounts of the corresponding users. These users then may access secure data assets shared within the network based upon permissions associated with security groups corresponding to the user accounts. Moreover, some or all of the workstations14A-C may also include devices which may be used to set and/or change permissions for security groups to access secure data assets, and to place and/or remove user accounts from security groups. For example, some or all of the workstations14A-C may include a client device used by a system administrator and/or security analyst. Example Unmanned Aerial Vehicle The UAV18may be implemented, for example, as a UAV100illustrated inFIG.2. The UAV100may include a controller102that communicates with one or more proximity sensors104, one or more stabilization sensors106, a Global Positioning System (GPS) unit108, an image sensor110, and/or a communications unit112. The image sensor110may include one or more filters for infrared imaging, hyperspectral imaging, multispectral imaging, full spectral imaging, etc., or alternatively, the image sensor110may include one or more sensors which receive image data outside of the visible light spectrum, such as an infrared image sensor. The controller102may include a processor120that executes instructions from a computer-readable memory122to implement a control module124and a stabilization module126. The control module124may invoke the stabilization module126to retrieve data from the stabilization sensors106(i.e., sensors relating to avionics) to implement a control function, such as a control routine that performs PID (proportional-integral-derivative), fuzzy logic, nonlinear, or other control to maintain the stability of the UAV(s)100. For instance, the stabilization sensors106may include one or more of a directional speed sensor, a rotational speed sensor, a tilt angle sensor, an inertial sensor, an accelerometer sensor, or any other suitable sensor for assisting in stabilization of an aerial craft. Of course, the stabilization module126may implement any suitable technique of stabilizing the UAV100in a hover or stationary three-dimensional position. The control module124may retrieve data from the proximity sensors104. These proximity sensors104may include any sensor or technique that assists the control module124in determining a distance and a direction to the insured properties within the neighborhood. 
The one or more proximity sensors104may include optic flow sensors, ultrasonic sensors, infrared sensors, LIDAR (Light Detection and Ranging), and/or a stereo vision system (SVS) that may utilize the image sensors110(e.g., one or more cameras) to implement stereoscopic imaging techniques to capture aerial images of the neighborhood including the insured properties and to create 3D images of the insured properties. The control module124may also receive instructions from the workstation14A, for example (seeFIG.1), to capture aerial images at specific locations or time intervals. The GPS unit108may use “Assisted GPS” (A-GPS), satellite GPS, or any other suitable global positioning protocol or system that locates the position of the UAV(s)100. Moreover, the GPS unit108may also determine the position of the aerial images or of data points within the aerial images captured by the UAV(s)100, or the GPS unit108may be combined with the distance and direction sensors to determine the position of the aerial images, and positions of data points within an aerial image. For example, A-GPS utilizes terrestrial cell phone towers or Wi-Fi hotspots (e.g., wireless router points) to more accurately and more quickly determine the location of the device, while satellite GPS is generally more useful in remote regions that lack cell towers or Wi-Fi hotspots. The communication unit112may communicate with a server or a workstation via any suitable wireless communication protocol network, such as a wireless telephony network (e.g., GSM, CDMA, LTE, etc.), a Wi-Fi network (802.11 standards), a WiMAX network, a Bluetooth network, etc. Example Inspection Record FIG.3illustrates an example inspection record150, which the inspection control module34, or a similar module, may create and maintain in the inspection record database20B, for example (seeFIG.1). The inspection record150may include fields and sub-fields organized and stored in any suitable fashion, such as in multiple tables of a relational database. In some embodiments, certain fields or sub-fields are stored in one or more separate databases, and the inspection record150stores only a reference to the corresponding data. In any case, however, the inspection record150may define a data structure in which multiple pieces of information related to a certain inspection session are logically linked. One of ordinary skill in the art will recognize that the inspection record150is illustrated as an example only, and that the inspection record150in other embodiments may include additional fields or, conversely, omit some of the fields depicted inFIG.3. The example inspection record150may include a record identifier152, which may be a number or a string of alphanumeric characters, for example. The inspection record150may also include identifiers154of people who participated in the inspection, which may include an inspection technician operating the UAV, one or several customer representatives, one or several employees of the insurance company, one or several third-party representatives, etc. During an inspection session, the UAV may capture video and/or still photographic imagery of the property. For example, the inspection technician initially may direct the UAV to fly along the perimeter of the property and then fly over the property at a relatively high altitude. The inspection record150may include the video recording for these initial stages as data160. 
This data may be stored along with positioning data162, which may include GPS coordinates, for example, as well as a timestamp164. Similarly, the inspection record150may include photographs170, stored along with positioning data172and a timestamp174. Once one or several participants identify certain parts of the property as being “interesting” for the purposes of the inspection, the inspection control module34ofFIG.1may log these requests as part of comments180and, once the inspection technician directs the UAV to collect the desired photographic or video imagery, add the imagery to the data160and/or170. The inspection control module34may also attach documents, or references to documents, submitted by participants during an online inspection session (field184). In an example scenario, the server12may create a new inspection record150upon receiving an indication that an inspection technician has been approved for the inspection. The server12may then update the inspection record150during the inspection session in response to various inspection events. More specifically, the server12may update the inspection record150in response to receiving comments and requests from the participants via the respective workstations, documents received via the workstations, photographs and video imagery from the UAV, etc. The server12may then receive an indication that the inspection is completed and request that the participants enter a digital signature, a unique password, or other suitable authentication data. The server12may store the authentication data as part of the inspection record150. Example Method for Setting Up a New Inspection Session FIG.4depicts a flow diagram representing an example computer-implemented method200for setting up a collaborative online session. The method200may be executed on the server device12, for example. In some embodiments, the method200may be implemented as a set of instructions stored on a non-transitory computer-readable memory and executable on one or more processors of the server device12. For example, the method200may be performed by the inspection control module34ofFIG.1. For convenience, the method200is discussed below with reference to the inspection control module34. However, the method200in general may be executed by any suitable device or a group of devices, which may be organized according to any suitable hierarchical or distributed topology. At block202, the inspection control module34obtains a set of properties to be inspected. For example, referring back toFIG.1, the inspection control module34may query the database20A to determine for which properties an employee of the insurance company requested an inspection, how long the properties have awaited inspection, where these properties are located, what additional requirements exist for online inspections (e.g., a certain property may require that a particular kind of specialist or expert participate, while another property may not), how large the property is, and/or how long it would take to inspect it using a UAV, etc. Next, at block204, the inspection control module34obtains a set of records describing candidate inspection technicians (also referred to in this document as “UAV operators” or “pilots”). Each record may indicate the schedule of the corresponding inspection technician, his or her expertise, the type of equipment he or she is qualified to operate, etc. 
Using the information obtained at blocks202and204, the inspection control module34may generate a list of properties with provisional operator assignments (block206). The list may specify which operator is assigned to which property, e.g., {O1→P1, O1→P2, O2→P3, . . . ON→PL}. The inspection control module34may order the list so as to more efficiently utilize technicians' time and the available UAV equipment. It is noted that one inspection technician may be assigned to multiple properties and, in some implementations or scenarios, multiple inspection technicians may be assigned to the same property. In some implementations and/or scenarios, the inspection control module34may consider additional factors when generating the list of provisional assignments, such as whether inspection of a certain property requires mandatory presence of another party and, if so, when this party is available. More generally, in addition to the information obtained at blocks202and204, the inspection control module34may consider any suitable combination of factors. At block208, the inspection control module34may transmit indications of the provisional assignment of inspection technicians to properties to the potential participants. To this end, the inspection control module34may use email addresses, phone numbers, etc. to generate automated messages, for example, which may include selectable links to the future online inspection sessions. The inspection control module34also may transmit electronic messages to supervisors of the inspection technicians. In an example embodiment, the inspection control module34transmits the entire list of properties with provisional operator assignments, generated at block206, to a supervisor who approves the list in its entirety or partially. Once an indication of approval is received from a user in a supervisory role (condition check210), the flow proceeds to block214. Otherwise, the flow proceeds to block212, where another inspection technician may be selected. At block214, an online inspection session may be set up. For example, the inspection control module34may create a secure online session using Citrix GoToMeeting™ APIs or any suitable high-level and/or low-level APIs. The inspection control module34may configure the meeting to include the video feed from the UAV and video/audio/text input from each of the participants, for example. The inspection control module34may also set up a document repository to receive documents, notes, photographs, etc. from the participants. Still further, the inspection control module34may set up a comment/message repository to which formal comments from the participants are saved, to be included in the official record of the inspection (seeFIG.3). In some implementations, the property inspection client module46illustrated inFIG.1, or a similar software component, provides special-purpose controls for participating in an online collaborative inspection session. For example, the property inspection client module46may provide a button for submitting a request to reposition the camera of the UAV to inspect a specified portion of the property, a button for submitting a formal comment for the record, a button for submitting a document to the record, etc. 
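As a rough illustration, the generation of provisional operator assignments at block206may be sketched as a prioritization and load-balancing pass over the property and technician records obtained at blocks202and204. In the hypothetical Python sketch below, the prioritization by waiting time, the region and expertise matching, and all field names are assumptions made for illustration and are not requirements of the method200.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PropertyRecord:
    property_id: str
    region: str
    days_waiting: int
    required_expertise: str = "general"


@dataclass
class TechnicianRecord:
    technician_id: str
    regions: List[str]
    expertise: List[str]
    open_assignments: int = 0


def provisional_assignments(properties: List[PropertyRecord],
                            technicians: List[TechnicianRecord]) -> Dict[str, str]:
    """Return a property -> technician mapping, longest-waiting properties first."""
    assignments: Dict[str, str] = {}
    # Prioritize properties that have awaited inspection the longest.
    for prop in sorted(properties, key=lambda p: -p.days_waiting):
        eligible = [t for t in technicians
                    if prop.region in t.regions
                    and prop.required_expertise in t.expertise]
        if not eligible:
            continue   # left unassigned for manual handling by a supervisor
        # Balance load: pick the eligible technician with the fewest open assignments.
        chosen = min(eligible, key=lambda t: t.open_assignments)
        chosen.open_assignments += 1
        assignments[prop.property_id] = chosen.technician_id
    return assignments


if __name__ == "__main__":
    props = [PropertyRecord("P1", "north", 12), PropertyRecord("P2", "north", 3),
             PropertyRecord("P3", "south", 8, required_expertise="roof")]
    techs = [TechnicianRecord("O1", ["north"], ["general"]),
             TechnicianRecord("O2", ["south", "north"], ["general", "roof"])]
    print(provisional_assignments(props, techs))
    # {'P1': 'O1', 'P3': 'O2', 'P2': 'O1'}
```

The resulting mapping would then be transmitted to a supervisor for approval or rejection, as described above for blocks208-212.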
Method200may include additional, less, or alternate actions, including those discussed elsewhere herein, and/or may be implemented via a computer system, communication network, one or more processors (such as an insurance customer mobile device and/or a remote server associated with an insurance provider), and/or computer-executable instructions stored on non-transitory storage media or computer-readable medium. Example Method for Conducting an Inspection Session FIG.5depicts a flow diagram representing an example method250for conducting a collaborative online session. Similar to the method200, the method250may be performed by the inspection control module34ofFIG.1. For example, the inspection control module34may execute the method250to conduct an online session after setting up the session in accordance with the method200. More generally, the method250may be implemented as a set of instructions stored on a non-transitory computer-readable memory and executable on any one or more suitable processors. For convenience, the method250is discussed below with reference to the inspection control module34. At block252, an indication of a property to be inspected at multiple workstations using an UAV is received. For example, this indication may be received upon the participants confirming they joined the session, that they agree to the terms of use, etc. Also, as discussed above, the inspection control module34may receive an indication that a user in a supervisory role has approved the selection of the inspection technician and the UAV equipment. At block254, a message may be received indicative of a workstation from which the UAV will be controlled during the inspection session. More particularly, the inspection control module34may receive an indication that the inspection technician operates a particular workstation, so that the inspection control module34may configure appropriate privileges for the online session. At block256, aerial imagery collected from the UAV may be received at the inspection control module34and provided to the participating workstations substantially in real time. In some implementations, the inspection control module34may also store the video input in the corresponding inspection record (seeFIG.3), automatically log the time and the appropriate parameters of the camera which the UAV uses to capture the video feed (e.g., position, pitch, yaw, roll), automatically authenticate the video feed to make the record usable in subsequent proceedings, etc. At block258, comments, requests, and other data may be received from the participating workstations at the inspection control module34. In some implementations, the inspection control module34automatically distributes the received information to some or all of the participating workstations, substantially in real time. The inspection control module34may, for example, determine that a certain comment is being submitted formally for the record, and notifies each participant of the submission. In another instance, the inspection control module34may receive a request to reposition the camera of the UAV to view a certain area, and forward the request to the workstation being operated by the inspection technician. In yet another instance, the inspection control module34may receive a document to be added to the inspection record. Further, in some implementations, the inspection control module34automatically requests description of the property being inspected or a specified portion of the property from each participant. 
In other words, the inspection control module34may require that the participants describe what they see. In other implementations, the inspection control module34may automatically suggest that the participants describe what they see without necessarily requiring that such a description be included in the record. At block260, the database record may be finalized and saved in a persistent storage, in response to an indication that the online collaborative inspection session has been completed. In some implementations, the participants may indicate completion of the inspection session by activating appropriate controls and/or submitting respective digital signatures. In particular, the participants may formally affirm the results of the inspection and include final comments, if desired. Method250may include additional, less, or alternate actions, including those discussed elsewhere herein, and/or may be implemented via a computer system, communication network, one or more processors (such as an insurance customer mobile device and/or a remote server associated with an insurance provider), and/or computer-executable instructions stored on non-transitory storage media or computer-readable medium. Example Method for Generating an Inspection Record For further clarity,FIG.6depicts a flow diagram representing an example computer-implemented method for generating a database record descriptive of an online session, which may be implemented by the inspection control module34ofFIG.1, for example. The method300may begin at block302, where an inspection record may be created in a database to store information related to an online collaborative inspection session. In an example embodiment, the record may include information schematically illustrated inFIG.3. In general, the inspection record may be distributed among any number of tables, and may conform to any storage/indexing technique. For example, the inspection record may be created in a relational database, where each type of information, such as UAV type, property identifier, or inspection time is stored in a separate table, logically linked by shared indexes. At block304, aerial imagery may be added to the database record. The aerial imagery may include video and/or still photography. Location and time data may be added to the database record at block306. For example, a location/time stamp may specify from where, and at what time, a certain image was captured. At block308, participants' comments may be added to the inspection record. Further documents, which may conform to any desired format, may be added to the inspection record at block310. A generated signature may be appended to the database record at block312to prevent alterations by unauthorized parties after the inspection. Blocks304-310may be executed in any order, and any necessary number of times during an inspection session. For example, users may submit multiple comments and multiple documents as new video data is added to the inspection record. Method300may include additional, less, or alternate actions, including those discussed elsewhere herein, and/or may be implemented via a computer system, communication network, one or more processors (such as an insurance customer mobile device and/or a remote server associated with an insurance provider), and/or computer-executable instructions stored on non-transitory storage media or computer-readable medium. 
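By way of illustration only, the method300builds the inspection record incrementally and appends a generated signature at block312so that later alterations by unauthorized parties can be detected. The hypothetical Python sketch below mirrors the fields ofFIG.3only loosely and uses a keyed digest from the standard library as one possible reading of block312; it is not the authentication scheme required by this disclosure, which may instead rely on participant digital signatures or other authentication data.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class Media:
    """An image or video clip with its position and capture time (cf. data160-174)."""
    uri: str
    latitude: float
    longitude: float
    captured_at: str


@dataclass
class InspectionRecord:
    """Simplified in-memory analogue of the inspection record 150."""
    record_id: str
    participants: List[str] = field(default_factory=list)
    media: List[Media] = field(default_factory=list)
    comments: List[str] = field(default_factory=list)
    documents: List[str] = field(default_factory=list)
    seal: Optional[str] = None

    def add_media(self, item: Media) -> None:
        self.media.append(item)                      # blocks 304-306

    def add_comment(self, author: str, text: str) -> None:
        self.comments.append(f"{author}: {text}")    # block 308

    def finalize(self, secret_key: bytes) -> None:
        """Append a keyed digest over the record contents (one reading of block 312)."""
        body = asdict(self)
        body.pop("seal")
        payload = json.dumps(body, sort_keys=True).encode("utf-8")
        self.seal = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()

    def verify(self, secret_key: bytes) -> bool:
        """Recompute the digest and compare it with the stored seal."""
        body = asdict(self)
        stored = body.pop("seal")
        payload = json.dumps(body, sort_keys=True).encode("utf-8")
        expected = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
        return stored is not None and hmac.compare_digest(expected, stored)


if __name__ == "__main__":
    key = b"example-secret"                          # placeholder key for illustration
    record = InspectionRecord("INSP-0001", participants=["technician", "customer rep"])
    record.add_media(Media("photo_001.jpg", 41.88, -87.63,
                           datetime.now(timezone.utc).isoformat()))
    record.add_comment("customer rep", "Please capture the western wall.")
    record.finalize(key)
    print(record.verify(key))                        # True
    record.comments.append("added after sealing")
    print(record.verify(key))                        # False
```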
Example Method for Facilitating an Inspection Session at a Workstation FIG.7depicts a flow diagram representing an example computer-implemented method for facilitating an online collaborative inspection session at a workstation in accordance with an example aspect of the present disclosure. The method350may be implemented in the property inspection client module46depicted inFIG.1, for example. More generally, the method350may be implemented as a set of instructions executable on one or more processors of any suitable computing device. The method350begins at block352, where a user interface screen for requesting admission to an online collaborative inspection session may be provided. This screen may be displayed in response to the potential participant clicking on a URL linking to the online session. Next, at block354, interactive controls may be provided for submitting comments, requests, and documents. As discussed above, these controls may be provided in the form of buttons, for example. At block356, data received from the participant via the user interface may be forwarded to the appropriate server, such as the server12depicted inFIG.1, for example. The data may be forwarded to the server substantially in real time. Accordingly, at block358, data may also be received substantially in real time from the server. The data may include aerial imagery currently being captured by the UAV, as well as comments and documents forwarded to the server from other workstations. It is noted that blocks356and358may be executed in any order and multiple times during an inspection session. At block360, an indication that the inspection has been completed may be received, and the method350completes. Method350may include additional, less, or alternate actions, including those discussed elsewhere herein, and/or may be implemented via a computer system, communication network, one or more processors (such as an insurance customer mobile device and/or a remote server associated with an insurance provider), and/or computer-executable instructions stored on non-transitory storage media or computer-readable medium. Example Insurance Applications The present UAV System may dramatically advance the property loss adjustment process in many ways, from inspection through settlement. In one aspect, the UAV System may not replace in-person, on-site inspections. Rather, it may build upon them and provide real-time or near-real-time data to centralized operations for completing claim settlements. The fundamental operations of the UAV System may entail: (a) specialized personnel conducting on-site, UAV-assisted inspections (for these purposes, they may be referred to as “Inspection Technician” or “Technician”); (b) a centralized operation may assign the Inspection Technician a sequence of properties to inspect according to geography and possibly other factors (preferably, each assignment may include current information about the property, e.g., type of construction, cladding, number of stories, dimensions, etc.); (c) a Planned Inspection Sequence; and/or (d) carriers to facilitate the UAV transporting itself from a remote location. Redundancy may be built into one or more of the operations discussed herein. Most or all UAV operations may be provided with the option of direct control, such as by a licensed drone operator, which functionality may be backed up by remote control from a centralized operation, such as an insurance provider location or remote server. The present embodiments may provide numerous purposes and benefits. 
For instance, (1) enhanced customer satisfaction may be provided. First, a video record of an entire structure, damaged and undamaged, close-up and at a distance of all exterior surfaces. Insurance customer and representatives may be able to access and “see” what the insurance provider sees or has access to. Drone data may be available instantly by feed to customer/representative mobile device, or shortly after inspection via a DVD or CD left on-site, and soon thereafter on an insurance provider secure website. Second, real-time remote participation may diminish customers' need/preference to be present, especially because of immediate documentation for them. Also, both the insurance customer and insurance provider may enjoy less time and fewer problems associated with communication coordinating schedules. Third, on-site second inspections may become very rare exceptions. Public adjusters generally may not need to inspect on-site. Again, they may have remote access to the same data as the insurance provider. This feature may further streamline the damage estimate reconciliation process. These benefits may apply equally to internal re-inspection for quality and training. Fourth, ultimately, reduced adjustment cost may also reduce indicated rates and premiums. Thus, overall insurance cost savings may be provided to insurance customers as a whole, as well as enhance the overall customer experience. Another benefit of the present embodiments may be improved accuracy of insurance claim settlements. For instance, video documentation of the complete insured structure or property may aid damage assessment. The continuous video of the total structure may be segmented and labelled, e.g., “close-up of left elevation damage.” That digital or other video segmentation may be both faster and more accurate than selected photo documentation. Increased efficiency may also result. For example, inspection time may be greatly reduced by eliminating the need for manual inspection, reducing average time to prepare a damage estimate and pay a claim, and requiring fewer specialized personnel. Also, reduced travel costs may be achieved by requiring fewer on-site personnel due to (i) increased inspections per day; and/or (ii) increased task centralization. More centralized tasks may reduce adjuster travel expenses. Efficiencies may be gained by (a) reduced inspection times; and/or (b) the feasibility of co-occurring inspections of proximate properties in multi-loss occurrences (such as due to a major weather event, e.g., hurricane or high water). Benefits associated with employee safety may also be provided by the present embodiments. Very courageous damage inspectors may accept much physical risk to inspect structural property damage with conventional techniques, most pointedly with roof inspections. The new tools provided by the present embodiments may operate in a fashion so as to not put them in harm's way, and provide a remote control inspection tool, a UAS, with which they may effectuate complete, even more complete than now possible, inspections of insured structures claimed to have insured damage. Insurance provider employees may be able to document better than ever before the state of the insured structure as to which damages are claimed without endangering themselves by climbing ladders, or, as to large, complicated structures employing climbing harnesses. In one embodiment, a drone or UAV may be provided with the following specifications. 
First, a weight of no more than 30 pounds may be specified, so that any mishaps that may unfortunately occur cause only minimal damage to persons and property. Also, the drone or UAV may be programmed with flight technology that likewise limits risk. So-called helicopter applications are presently the state of the art, but other “propulsion” technologies may also be used, such as magnetic field control. Locational remote control may be utilized, meaning that the operator is visually connected to the drone or UAV, on location, not remotely. Also, the control of the device may be limited in distance to the perimeter of the property being examined. In other words, the device or drone may automatically drop to the ground in a controlled descent if it exceeds the defined perimeter. The preferred drone or UAV may also have high definition cameras both on top and bottom of its frame. Both cameras may have a 360° range (of view and/or image gathering capability). For instance, the bottom camera may capture the top of the house, i.e., images of the roof and roofing materials/shingles. And, more significantly, it may capture the dimensions of the structure (or house) to guide the flight of the drone or UAV around the structure in a controlled flight to facilitate the complete documentation of the exterior of the structure. To be more specific, the drone or UAV may initially assess the dimensions and physical aspects of the structure, and with computer-executable instructions may define its flight pattern around the structure, recording high-resolution video throughout its flight. Also, any obstacles in the vicinity of the home, such as vehicles, trees, bushes, fences, etc., may be determined from computer analysis of the image data acquired by the UAV, and the flight path may be adjusted to account for, and/or avoid, the obstacles detected. The UAV processors or controller may also use GPS (Global Positioning System) information acquired from a GPS unit mounted on the drone, and/or elevation information acquired from an altimeter mounted on the drone, to calculate and/or refine the flight path. One purpose of the lower, underside camera may be to document the lower side of the house, i.e., the siding, the façade, etc., which may not be achievable with a top-side camera on some drones and/or with some flight plans. In some embodiments, a single lower camera, mounted on the bottom of the drone or UAV, may be programmed to accomplish both tasks. However, in some embodiments, two cameras may be preferable. Example Roof Although the discussion below concerns primarily roofs, the techniques of this disclosure in general allow automatic collection of imagery related to an entire home to create an irrefutable, self-authenticating record of an inspection. For example, drones or UAVs also may collect images of exterior walls and other exterior surfaces (substantially parallel to the ground, substantially perpendicular to the ground, slanted relative to the ground), structures external to the home, etc., and other unmanned or remote-controlled devices can collect interior imagery. FIG.8depicts a roof of an example insured home800. The present embodiments may use aerial imagery data generated or collected by drones or UAVs for a number of insurance-related purposes. The drone aerial imagery data may be used to estimate, via computer analysis of the data, several insured property characteristics, including the slope, dimensions, length, and/or size of several roof segments802. 
The drone aerial imagery data also may be used to identify the type of roofing and shingles, and/or the manufacturer of the shingles or other roofing materials. The drone aerial imagery data also may be used to identify a number of stories for the insured property and/or structural characteristics thereof. After the characteristics of the insured property800are determined from computer analysis of the drone aerial imagery data, the characteristics (e.g., size of roof, type of roofing material, condition of roof, estimated damage to the insured home) may be used to generate a premium or discount for a new insurance policy covering the insured property800, update a premium or discount for an existing insurance policy covering the insured property800, estimate an insurance claim for the insured property800, estimate a replacement or repair cost for the insured property800, and/or handle insurance claims associated with an insurance policy covering the insured property800. Example Insurance-Related Purposes FIG.9depicts an example computer-implemented method of using aerial imagery data captured by drones for insurance-related purposes900. The method900may include collecting aerial imagery data of a property via a drone or UAV, and transmitting the data to a remote server902; analyzing the data at the remote server904; estimating characteristics of the property at the remote server906; generating or updating an insurance policy for the property at the remote server and communicating the insurance policy to an insured's mobile device908; collecting aerial imagery data of the insured property via a drone after an insurance-related event, and transmitting the post-event data to the remote server910; estimating damage to the insured property at the remote server912; facilitating or directing repair to the insured property via the remote server914; and/or proposing, handling, or adjusting an insurance claim at the remote server916. The method may include additional, less, or alternate actions, including those discussed elsewhere herein, and/or may be implemented via one or more processors, such as drone mounted processors, mobile device processors, and/or remote servers or processors associated with an insurance provider. The method900may include collecting or generating aerial imagery data of a property via a drone or UAV902. After which, the drone may include a transceiver that is configured to transmit the data to an insurance provider remote server via wireless communication or data transmission. The drone may be operated to capture image data of a roof and/or walls of a property, and may reveal several structural features of the property. In the case of a home, the data may reveal type of roofing; condition of roofing; slope, size, dimensions, etc. of each roof segment; number of floors; total roof area; total roof facets; GPS coordinates of the property; type, size, and/or condition of the yard; number and size of trees; etc. The method900may include analyzing the aerial image data at the remote server904. For instance, the remote server may perform computer analysis of the data generated by the drone, such as using various computer algorithms or known computer techniques on the data collected/received from the drone. The computer analysis of the aerial image data may result in or allow the remote server (and/or other processor(s)) to estimate various characteristics of the property906. 
In the case of a home, the computer analysis of the data may be used by a processor to estimate or determine a type of roofing or roofing material; condition of roofing; age of roofing; slope, size, dimensions, etc. of each roof segment; number of floors; total roof area; total roof facets; type, size, and/or condition of the yard; number and size of trees; etc. The method900may include generating or updating an insurance policy for the property at the remote server and communicating the insurance policy to an insured's mobile device908. Based upon the characteristics of the property determined from computer analysis of the drone data collected, a premium or discount for a new insurance policy covering the home (or other property) may be generated by the remote server, or an updated premium or discount for an existing insurance policy covering the home may be generated. The new or updated insurance policy and/or premium/discount may then be communicated to the home owner or insured, such as via wireless communication or data transmission from an insurance provider remote server to the insured's mobile device. The method900may include collecting or generating aerial imagery data of the insured property via a drone after an insurance-related event, and transmitting the post-event data to the remote server910. The insurance-related event may cause damage to the insured property, such as fire, water, wind, hurricanes, tornadoes, storm surge, flash flooding, hail, catastrophes, or weather events. The post-event drone data may be used to estimate damage to the insured property at the remote server912. For instance, the remote server may compare, such as via various software applications or algorithms, pre-event drone data (or other baseline data) with post-event data (or the current home condition) to estimate an amount of damage caused to the insured home by the insurance-related event, and/or a cause of the damage, such as wind, water, or hail. The remote server may identify a type of roofing or shingle material that was damaged, an amount or size of the damaged area, an amount and/or type of replacement roofing or shingle materials to repair the damage to the home, and/or a cost of repairing the damage. The method900may include facilitating or directing repair to the insured property via the remote server914. Based upon the location of the insured home; the availability, qualifications, and/or experience of contractors (such as by searching contractor information stored in a database accessible by the remote server); the type and amount of replacement materials; and/or the extent of damage, the insurance provider may schedule repair work for the insured (with their permission), and/or communicate the insured's best options for having the repair work timely and properly completed (such as via wireless communication with their mobile device). The method900may include proposing, handling, and/or adjusting an insurance claim at the remote server916. For instance, an insurance claim submitted by an insured may be adjusted based upon the post-event drone data showing actual damage to the insured property. As a result, accurate insurance claim handling may be facilitated. Also, based upon the post-event drone data, the remote server may generate a proposed insurance claim, and transmit the proposed insurance claim to the insured's mobile device for their review, modification, and/or approval. 
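Purely as an illustration, the comparison of pre-event and post-event drone data at steps910-912may be sketched as a diff of estimated property characteristics followed by a unit-cost repair estimate. In the hypothetical Python sketch below, the field names, the per-material repair costs, and the simple area-times-unit-cost model are assumptions made for illustration and are not part of the method900.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class RoofObservation:
    """Characteristics estimated from drone aerial imagery (illustrative fields)."""
    total_roof_area_sqft: float
    damaged_area_sqft: float
    material: str


# Hypothetical per-square-foot repair costs by roofing material.
REPAIR_COST_PER_SQFT = {"asphalt_shingle": 4.50, "metal": 9.00, "tile": 12.00}


def estimate_damage(baseline: RoofObservation,
                    post_event: RoofObservation) -> Dict[str, float]:
    """Compare pre-event (baseline) and post-event observations to price a repair."""
    # Only count damage that is new relative to the baseline observation.
    new_damage_sqft = max(0.0, post_event.damaged_area_sqft
                          - baseline.damaged_area_sqft)
    unit_cost = REPAIR_COST_PER_SQFT.get(post_event.material, 6.00)
    return {
        "new_damage_sqft": new_damage_sqft,
        "estimated_repair_cost": round(new_damage_sqft * unit_cost, 2),
    }


if __name__ == "__main__":
    before = RoofObservation(2200.0, 15.0, "asphalt_shingle")
    after = RoofObservation(2200.0, 340.0, "asphalt_shingle")
    print(estimate_damage(before, after))
    # {'new_damage_sqft': 325.0, 'estimated_repair_cost': 1462.5}
```

The resulting figures could then feed the proposed insurance claim transmitted to the insured's mobile device at step916, as described above.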
Example Claim Adjustment Workflow Using Aerial Inspections For further clarity,FIGS.10-15depict diagrams of an example workflow. As used in these diagrams, the term “I.O” refers to inspection operator, “C.E.” refers to centralized structure estimator, and “C.O.” refers to centralized contents operator. Further, the “set point” is the point from which, and to which, internal flight settings are circulated. The set point is analogous to the start/end point in land surveys. A data collection flight from the set point controlled by the inspection operator (“IO”) collects flight data by which the UAV will fly itself and record the entire structure. The structure dimensions are calculated by the UAV or by the I.O.'s handheld device. The data collected can be shared in real time with fire underwriter's risk record, for update and comparison of existing and newly-collected risk characteristics. InFIGS.10-15, steps can be executed sequentially, with the steps, events, or resources depicted higher on the page executed prior to the steps depicted lower down on the page. Some steps, however, can be executed concurrently. Further, it is noted that the sequence of some of the steps can be modified in alternate embodiments. More particularly,FIG.10illustrates initial assignment of a claim and transfer to automated inspection operator for the purposes of inspecting exterior and interior structures, as well as assignment of a centralized contents operator. Step1002corresponds to a claim being reported logged in an existing process. Catastrophe (CAT)-only considerations are processed at step1004. Referring first to the “exterior/interior structure” branch, transfer to an automated inspection operator occurs at step1010. An inspection operator may be provisionally assigned at step1012, subject to supervisor approval. Step1012may be associated with additional events such as management approval of transfer to automated provisional inspector operator assignment system (1014). Step1012may also be associated with resource1016, corresponding to the inspection operator as a new employment class, with the chief responsibility being site inspection. Further, step1012may be associated with automated assignment factors, such as proximity, file load, availability, etc. (1018). Inspection operator acceptance or rejection, and supervisor acceptance or rejection, occur at steps1022and1022, respectively. Final management sign-off occurs at step1030. Otherwise, if the rejection occurs, recycling to automated assignment occurs at step1024, and the flow returns to step1012. In the “contents” branch, a centralized contents operator is assigned at step1040, and supervisor approval is obtained at step1042. Next,FIG.11illustrates some of the stages of pre-inspection preparation, including file compilation and automated provisional assignment. Pre-inspection preparation may occur at step1102. In some cases, pre-inspection preparation can include a pre-inspection flight to capture dimensional data to facilitate and delimit a subsequent UAV inspection. File compilation may be conducted at step1110, which may be associated with such considerations as fire underwriting, comparable prior claims, actuarial data (1112), internal department data (1114), data transfer (1116), and US/local (e.g., local responders (1118). Step1120corresponds to centralized estimating by one or several centralized estimators. 
Automated provisional assignment may occur at step1122in view of various factors (1124), and the potential inspection may be transferred to an inspection operator and his or her supervisor for acceptance or rejection (1126). Failure to transfer results in recycling to automated assignment (1128). Otherwise, a provisionally assigned inspection operator may then work in collaboration with a supervisor (1130,1132) to accept (1136) or reject (1134) the potential inspection. Upon a rejection, the flow may return to step1122for a new automated provisional assignment. FIG.12illustrates some of the early stages of an aerial inspection supervised by an inspection operator. Step1202corresponds to operation of a UAV by an inspection operator. The inspection may include on-site as well as remote participants, participating in real time. The UAV detaches itself from a docking pad at step1204, is offloaded at step1206, and proceeds to a set point under the control of an inspection operator to inspect structure dimensions at step1208. In some of the embodiments, the UAV may be ground-transported to the structure and configured (programmatically and/or manually) to remain within specified boundaries, as determined by GPS coordinates or using other positioning techniques. Property perimeter and structure dimensions are captured and calculated at step1210. As part of autonomous operation, the UAV may determine structural dimensions and perform some of the inspection automatically at step1212. Property perimeter locations may be captured and used to prevent the UAV from departing the property at step1214. This step may be carried out autonomously, but redundancy may be ensured at step1216by an inspection operator taking control if he or she sees a risk of the UAV flying outside the perimeter of the property. Also, various methods (1240) may be employed at this point and a switch-over to manual flight and capture (1242) may occur. Inspection by a UAV takes place at step1220, when the UAV guides itself using the previously captured dimensional data. At least the following participants may participate in the inspection in real time, at this step: a public adjuster (1221), an engineer (1222), an appraiser (1223), a policyholder (1224), and/or a contractor (1225). To provide redundancy, the inspection operator may assume control of the UAV whenever he or she is concerned (1230). When necessary, he or she also can override the autopilot to obtain close-ups of damage or materials. Collaboration with a centralized estimator in real time also is possible at this time. FIG.13illustrates some of the stages of inspection scheduling and inspection file compilation. Inspection scheduling takes place at step1302. Assignment of final I.O. inspection preparation may occur at step1304, and the inspection file is completed and "sanitized" at step1306. Steps/items related to the inspection scheduling at step1302may include standard operating procedure (1310), I.O. schedule acceptance (1312), and possible iteration through candidates (rejection, acceptance, considering the next candidate at steps1314-1320). Referring again to step1306, inspection file transfer may be related to inspection sequence (1330) and/or a central estimator (1340). FIG.14illustrates some of the stages of a collaborative fly-around inspection and generation of an estimate, including an on-site settlement in some cases. A collaborative fly-around inspection begins at step1402with external participants1404and internal participants1406. 
The vendor of building materials may also be identified at step1410. Participants' requests for video and/or inspection of a selected area may be received at step1408. A real-time centralized estimate may be prepared at step1412with the participation of external and internal participants. An inspection operator may control the inspection at step1414, until the estimate is finalized at step1415. An on-site settlement may take place at step1418, and a real-time on-site reconciliation may take place at step1420. A real-time estimate may be provided to external participants at step1424. Regarding the inspection, an interior damage and contents inspection may occur at step1430. To this end, an automated inspection may be conducted at step1432using a handheld device (1434), a ground-operated unmanned device, which may be capable of accessing the attic (1436), and/or an interior UAV (1438). Replacement services may be considered at step1440, and may include real or near-real-time pricing, product identification, etc. This may be associated with contents (1442), a specialty inspection (1444), and dispute resolution (1446). Finally,FIG.15illustrates some of the stages of dispute resolution using the results of an aerial inspection as self-authenticating evidence. Dispute resolution, using inspection record as evidence, may be conducted at step1502. This step may be associated with appraisal (1506) followed by mediation (1508), and further followed by litigation (1510). Indisputable, self-authenticating evidence of damage may be used at step1512, so that the judge and jury may see what the participants saw, when necessary (1514). The computer-implemented methods and workflows ofFIGS.10-15may be implemented via one or more local or remote processors, and/or via computer-readable instructions stored on non-transitory computer-readable medium or media. Example Insurance Policy Adjustment Based Upon Roof Images In one aspect, a computer-implemented method for using drone data for insurance purposes and/or inspection or insuring properties may be provided. 
The method may include (1) receiving via wireless communication or data transmission, at or via one or more processors (such as at an insurance provider remote server), aerial data of a property (such as a home or other building), the aerial data being generated, collected, or captured via one or more cameras mounted on a drone, the aerial data further being transmitted from a transceiver mounted on the drone either directly or indirectly to the one or more (remotely located) processors; (2) estimating, at or via the one or more processors, a total roof area for the property via computer analysis performed on the aerial data received from the drone; (3) determining, at or via the one or more processors, a type or current condition of shingles or other roofing materials for the property via computer analysis performed on the aerial data received from the drone; (4) generating or updating, at or via the one or more processors, an insurance premium or discount for an insurance policy covering the property based upon (i) a total roof area for the property, and/or (ii) type or current condition of shingles or other roofing materials for the property determined via computer analysis performed on the aerial data received from the drone; and/or (5) transmitting, under the direction or control of the one or more processors, the insurance premium or discount for the insurance policy covering the property to a mobile device of the insured or home owner for their review, modification, or approval. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. For instance, the method may include (a) identifying, at or via the one or more processors, multiple roof segments of a roof of the property via computer analysis performed on the aerial data received from the drone; and/or (b) estimating, at or via the one or more processors, a slope, size, and/or dimension of each of the multiple roof segments via computer analysis performed on the aerial data received from the drone. The method may include estimating, at or via the one or more processors, damage to one or more of the multiple roof segments of the roof of the property via computer analysis performed on aerial data generated or collected by a drone after an insurance-related event that has caused damage to the property. The method may include estimating, at or via the one or more processors, a cost to repair the damage to the one or more of the multiple roof segments of the roof of the property via computer analysis performed on the aerial data generated or collected by a drone after an insurance-related event; and/or transmitting, under the direction or control of the one or more processors, the estimated damage and/or estimated cost to repair the damage to the mobile device of the insured or home owner for their review. Additionally or alternatively, the method may include estimating, at or via the one or more processors, damage to the roof of the property via computer analysis performed on aerial data generated or collected by a drone after an insurance-related event that has caused damage to the property, such as the one or more processors comparing pre-event drone (or aerial image) data with post-event drone (or aerial image) data. Example Damage Assessment from Post-Event Image Data In one aspect, a computer-implemented method for using drone data for insurance purposes and/or inspection or insuring properties may be provided. 
The method may include (1) receiving via wireless communication or data transmission, at or via one or more processors (such as at an insurance provider remote server), post-event aerial data of a property (such as a home), the post-event aerial data being generated, collected, or captured via one or more cameras mounted on a drone after an insurance-related event has caused damage to the property, the post-event aerial data further being transmitted from a transceiver mounted on the drone either directly or indirectly to the one or more (remotely located) processors; (2) storing, via the one or more processors, the post-event aerial data of the property generated by the drone in a non-transitory memory unit for subsequent access by the one or more processors; (3) retrieving, via the one or more processors, the post-event aerial data, as well as pre-event aerial data, from the non-transitory memory unit (or otherwise accessing the post-event and pre-event aerial data), the pre-event aerial data being of, or associated with, the property prior to the insurance-related event happening; (4) comparing, via the one or more processors, the post-event aerial data and the pre-event aerial data to (i) estimate damage to the property caused by the insurance-related event; (ii) estimate a cost of repairing the damage to the property or replacing damaged items on the property; (iii) determine or estimate an amount and/or type of replacement or repair materials (such as an amount or type of replacement shingles or other roofing materials); and/or (iv) estimate an insurance claim for an insured or owner of the property for their review and/or approval; and/or (5) transmitting, under the direction or control of the one or more processors, (i) the estimated damage; (ii) estimated cost of repair; (iii) estimated amount or type of replacement/roofing materials; and/or (iv) estimated insurance claim to a mobile device associated with the insured or owner of the property for their review, modification, and/or approval. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. For instance, the method may include (a) estimating, at or via the one or more processors, a total roof area for the property via computer analysis performed on the pre-event or post-event aerial data received from the drone; (b) determining, at or via the one or more processors, a type or current condition of shingles or other roofing materials for the property via computer analysis performed on the pre-event or post-event aerial data received from the drone; (c) generating or updating, at or via the one or more processors, an insurance premium or discount for an insurance policy covering the property based upon (1) a total roof area for the property, and/or (2) type or current condition of shingles or other roofing materials for the property determined via computer analysis performed on the pre-event or post-event aerial data received from the drone; and/or (d) transmitting, under the direction or control of the one or more processors, the insurance premium or discount for the insurance policy covering the property to a mobile device of the insured or home owner for their review, modification, and/or approval. 
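For illustration, comparing pre-event and post-event aerial data to estimate damaged area, replacement materials, repair cost, and a proposed claim might resemble the following sketch. The image masks, pixel scale, and unit costs are hypothetical assumptions, not a description of any particular embodiment.

    # Hypothetical sketch: compare pre-event and post-event aerial image masks of a
    # roof to estimate damaged area, repair materials, and a proposed claim amount.
    # The masks, pixel scale, and unit costs below are illustrative assumptions.

    def damaged_area_sqft(pre_mask, post_mask, sqft_per_pixel):
        """Count pixels that changed from intact (1) to damaged (0) between images."""
        damaged_pixels = sum(
            1
            for pre_row, post_row in zip(pre_mask, post_mask)
            for pre, post in zip(pre_row, post_row)
            if pre == 1 and post == 0
        )
        return damaged_pixels * sqft_per_pixel

    def proposed_claim(pre_mask, post_mask, sqft_per_pixel,
                       material_cost_per_sqft=4.50, labor_cost_per_sqft=3.00,
                       shingle_bundle_coverage_sqft=33.3):
        area = damaged_area_sqft(pre_mask, post_mask, sqft_per_pixel)
        bundles = -(-area // shingle_bundle_coverage_sqft)  # ceiling division
        repair_cost = area * (material_cost_per_sqft + labor_cost_per_sqft)
        return {
            "damaged_area_sqft": round(area, 1),
            "replacement_shingle_bundles": int(bundles),
            "estimated_repair_cost": round(repair_cost, 2),
        }

    if __name__ == "__main__":
        pre = [[1, 1, 1], [1, 1, 1]]     # 1 = intact roof pixel
        post = [[1, 0, 0], [1, 1, 0]]    # 0 = damaged roof pixel
        print(proposed_claim(pre, post, sqft_per_pixel=25.0))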
The method may include identifying, at or via the one or more processors, multiple roof segments of a roof of the property via computer analysis performed on the pre-event and/or post-event aerial data received from the drone; and/or estimating, at or via the one or more processors, a slope, size, and/or dimension of each of the multiple roof segments via computer analysis performed on the pre-event and/or post-event aerial data received from the drone. The method may include estimating, at or via the one or more processors, damage to one or more of the multiple roof segments of the roof of the property caused by a weather event (such as wind or hail) or other insurance-related event via computer analysis performed on post-event aerial data generated or collected by a drone after an insurance-related event that has caused damage to the property. Additionally or alternatively, the method may include estimating, at or via the one or more processors, a cost to repair the damage to the one or more of the multiple roof segments of the roof of the property via computer analysis performed on the post-event aerial data generated or collected by a drone after an insurance-related event; and/or transmitting, under the direction or control of the one or more processors, the estimated damage and/or estimated cost to repair the damage to a mobile device of an insured or home owner for their review. The method may also include estimating or identifying, at or via the one or more processors, damage to a roof of the property via computer analysis performed on post-event aerial data generated or collected by a drone after an insurance-related event that has caused damage to the property, such as the one or more processors comparing pre-event drone (or aerial) data with the post-event drone (or aerial) data. Example Determination of Property Characteristics In one aspect, a computer-implemented method for using drone data for insurance purposes and/or inspection or insuring properties may be provided. 
The method may include (1) receiving via wireless communication or data transmission, at or via one or more processors (such as at an insurance provider remote server), pre-event aerial data of a property (such as a home or other building), the pre-event aerial data being generated, collected, or captured via one or more cameras mounted on a drone prior to an insurance-related event that causes damage to the property, the pre-event aerial data further being transmitted from a transceiver mounted on the drone either directly or indirectly to the one or more processors; (2) storing, via the one or more processors, the pre-event aerial data of the property generated by the drone in a non-transitory memory unit for subsequent access by the one or more processors; (3) retrieving, via the one or more processors, the pre-event aerial data from the non-transitory memory unit (or otherwise accessing the pre-event aerial data), the pre-event aerial data being of or associated with the property prior to the insurance-related event happening; (4) identifying, via the one or more processors, multiple property characteristics from computer analysis of the pre-event aerial data, the multiple property characteristics including (a) a total roof area for the property, and/or (b) type or current condition of shingles or other roofing materials of a roof of the property determined via computer analysis performed on the pre-event aerial data received from the drone; (5) generating or updating, via the one or more processors, an insurance premium or discount for an insurance policy covering the property based upon the multiple property characteristics identified via computer analysis performed on the pre-event aerial data received from the drone; and/or (6) transmitting, under the direction or control of the one or more processors, the insurance premium or discount for the insurance policy covering the property to a mobile device (such as via wireless communication or data transmission) of the insured or home owner for their review, modification, and/or approval. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. 
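As a non-limiting illustration, assembling the multiple property characteristics noted above from drone image analysis outputs might resemble the following sketch. The ground sample distance, wear score, and classification thresholds are hypothetical assumptions.

    # Hypothetical sketch: turn drone image analysis outputs into property
    # characteristics (total roof area, roofing type/condition). The ground sample
    # distance, wear score, and thresholds are assumed values for illustration.

    def roof_area_sqft(roof_pixel_count: int, ground_sample_distance_ft: float) -> float:
        """Each image pixel covers a square of side `ground_sample_distance_ft`."""
        return roof_pixel_count * ground_sample_distance_ft ** 2

    def condition_label(wear_score: float) -> str:
        """Map a 0..1 wear score (assumed output of an image classifier) to a label."""
        if wear_score < 0.2:
            return "good"
        if wear_score < 0.5:
            return "fair"
        return "poor"

    def property_characteristics(roof_pixel_count: int,
                                 ground_sample_distance_ft: float,
                                 roofing_type: str,
                                 wear_score: float) -> dict:
        return {
            "total_roof_area_sqft": round(roof_area_sqft(roof_pixel_count, ground_sample_distance_ft), 1),
            "roofing_type": roofing_type,
            "roofing_condition": condition_label(wear_score),
        }

    if __name__ == "__main__":
        # e.g., 38,400 roof pixels at a 0.25 ft ground sample distance -> 2,400 sq ft
        print(property_characteristics(38_400, 0.25, "asphalt_shingle", 0.15))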
For instance, the method may include (a) receiving via wireless communication or data transmission, at or via one or more processors (such as at an insurance provider remote server), post-event aerial data of a property (such as a home or other building), the post-event aerial data being generated, collected, or captured via one or more cameras mounted on a drone after an insurance-related event has caused damage to the property, the post-event aerial data further being wirelessly transmitted from a transceiver mounted on the drone either directly or indirectly to the one or more (remotely located) processors; (b) comparing, via the one or more processors, the post-event aerial data and the pre-event aerial data to (1) estimate damage to the property caused by the insurance-related event; (2) estimate a cost of repairing the damage to the property or replacing damaged items on the property; (3) determine or estimate an amount and/or type of replacement or repair materials (such as replacement shingles or other roofing materials); and/or (4) estimate an insurance claim for an insured or owner of the property for their review and/or approval; and/or (c) transmitting, under the direction or control of the one or more processors, (i) the estimated damage; (ii) estimated cost of repair; (iii) estimated amount or type of replacement materials; and/or (iv) estimated insurance claim to a mobile device (such as via wireless communication and/or data transmission) associated with the insured or owner of the property for their review, modification, and/or approval. The method may also include (i) estimating, at or via the one or more processors, a total roof area for the property via computer analysis performed on the pre-event or post-event aerial data received from the drone; (ii) determining, at or via the one or more processors, a type or current condition of shingles or other roofing materials for the property via computer analysis performed on the pre-event or post-event aerial data received from the drone; (iii) generating or updating, at or via the one or more processors, an insurance premium or discount for an insurance policy covering the property based upon (1) a total roof area for the property, and/or (2) type or current condition of shingles or other roofing materials for the property determined via computer analysis performed on the pre-event or post-event aerial data received from the drone; and/or (iv) transmitting, under the direction or control of the one or more processors, the insurance premium or discount for the insurance policy covering the property to a mobile device of the insured or home owner for their review, modification, and/or approval. The method may further include identifying, at or via the one or more processors, multiple roof segments of a roof of the property via computer analysis performed on the pre-event and/or post-event aerial data received from the drone; and/or estimating, at or via the one or more processors, a slope, size, and/or dimension of each of the multiple roof segments via computer analysis performed on the pre-event and/or post-event aerial data received from the drone. 
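For illustration, the slope, size, and dimension estimates for individual roof segments might be combined into true (sloped) areas as in the following sketch. The segment dimensions and pitch values are hypothetical assumptions; the geometry simply corrects the plan-view footprint by the cosine of the pitch.

    # Hypothetical sketch: estimate the sloped (true) area of each roof segment from
    # its plan-view footprint, as seen in overhead aerial imagery, and its pitch.
    # The segment list and pitch values are illustrative assumptions.
    import math

    def segment_true_area_sqft(footprint_length_ft: float,
                               footprint_width_ft: float,
                               pitch_degrees: float) -> float:
        """An overhead image foreshortens a sloped plane by cos(pitch);
        dividing the footprint area by cos(pitch) recovers the true area."""
        footprint_area = footprint_length_ft * footprint_width_ft
        return footprint_area / math.cos(math.radians(pitch_degrees))

    def total_roof_area_sqft(segments) -> float:
        return sum(segment_true_area_sqft(*seg) for seg in segments)

    if __name__ == "__main__":
        # (length, width, pitch) per segment, e.g., two main planes and a porch roof
        segments = [(40.0, 15.0, 26.6), (40.0, 15.0, 26.6), (12.0, 6.0, 14.0)]
        print(round(total_roof_area_sqft(segments), 1))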
The method may include estimating, at or via the one or more processors, damage to one or more roof segments of the roof of the property via computer analysis performed on post-event aerial data generated or collected by a drone after an insurance-related event that has caused damage to the property; estimating, at or via the one or more processors, a cost to repair the damage to the one or more roof segments of the roof of the property via computer analysis performed on the post-event aerial data generated or collected by a drone after an insurance-related event; and/or transmitting, under the direction or control of the one or more processors, the estimated damage and/or estimated cost to repair the damage to a mobile device of an insured or home owner for their review and/or approval. Additionally or alternatively, the method may include estimating, at or via the one or more processors, damage to the roof of the property via computer analysis performed on post-event aerial data generated or collected by a drone after an insurance-related event that has caused damage to the property, such as the one or more processors comparing pre-event drone (or aerial) data with the post-event drone (or aerial) data. Example Flight Path Determination In one aspect, a computer-implemented method for using (high level or far away) drone image data to generate a flight path for the drone to subsequently capture up-close images of a property (and/or property damage or lack thereof) for insurance purposes may be provided. The method may include (1) capturing images of a property (such as a house or other building) via one or more cameras mounted on a drone; (2) analyzing the images (and/or associated image data) of the property via one or more processors mounted on the drone to estimate dimensions and/or height of the property; (3) analyzing the images (and/or associated image data) of the property via one or more processors mounted on the drone to determine any obstacles (e.g., trees, bushes, or vehicles) in the vicinity of the property; (4) calculating, via the one or more processors mounted on the drone, a flight path for the drone to take that leads the drone around the home and/or in close proximity (such as within a couple or a few feet of the house) based upon the estimated dimensions and/or height of the property, the flight path further being calculated to avoid any obstacles in the vicinity of the property that were detected from image analysis; (5) directing the drone to take the flight path calculated (either under the direction and control of (i) one or more drone mounted processors, or (ii) a licensed operator) to facilitate the one or more cameras of the drone capturing or generating up-close images or image data of the property and/or damage to the property (or lack thereof), the up-close images or image data intended for subsequent use in insurance-related activities (such as insurance claim handling, damage repair, claim estimation, damage estimations, insurance policy generation or adjustment, etc.). The method may include additional, less, or alternate actions, including those discussed elsewhere herein. For instance, the flight path may take the drone within a pre-determined threshold distance of the property to facilitate acquiring up-close images or image data of high quality/accuracy, the pre-determined threshold distance being 3-5 feet. 
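By way of illustration, the flight path calculation described above might be sketched as a simple rectangular orbit at a fixed standoff distance that skips waypoints near detected obstacles. The building dimensions, standoff distance, and obstacle model are hypothetical assumptions, not a description of any particular drone autopilot.

    # Hypothetical sketch: compute a rectangular orbit of waypoints around a
    # structure at a fixed standoff distance and capture altitude, skipping
    # waypoints that fall too close to detected obstacles. Values are assumed.
    import math

    def orbit_waypoints(length_ft, width_ft, height_ft,
                        standoff_ft=4.0, points_per_side=5, obstacles=(),
                        obstacle_clearance_ft=6.0):
        """Return (x, y, z) waypoints around a building centered at the origin.
        `obstacles` is an iterable of (x, y, radius) circles to avoid."""
        half_x = length_ft / 2.0 + standoff_ft
        half_y = width_ft / 2.0 + standoff_ft
        altitude = height_ft + standoff_ft          # fly just above the roofline
        corners = [(-half_x, -half_y), (half_x, -half_y),
                   (half_x, half_y), (-half_x, half_y)]
        waypoints = []
        for (x0, y0), (x1, y1) in zip(corners, corners[1:] + corners[:1]):
            for i in range(points_per_side):
                t = i / points_per_side
                x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
                if all(math.hypot(x - ox, y - oy) > r + obstacle_clearance_ft
                       for ox, oy, r in obstacles):
                    waypoints.append((round(x, 1), round(y, 1), round(altitude, 1)))
        return waypoints

    if __name__ == "__main__":
        # 40 ft x 30 ft house, 18 ft tall, with one tree canopy to avoid
        path = orbit_waypoints(40.0, 30.0, 18.0, obstacles=[(30.0, 0.0, 8.0)])
        print(len(path), "waypoints; first few:", path[:3])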
The method may include (i) transmitting the up-close images or image data, via a drone mounted transceiver, to an insurance provider remote server; (ii) estimating, via the remote server (or an associated processor), an amount of damage to the property and/or a cost to repair the damage; and/or (iii) handling or adjusting, via the remote server, an insurance claim associated with the property based upon the amount of damage, or cost to repair the damage, estimated from the up-close images or image data acquired by the drone mounted camera(s). The method may include transmitting the up-close images or image data, via a drone mounted transceiver, to an insurance provider remote server; generating or updating, via the remote server (or associated processor(s)), an insurance policy (or premium or discount) for the property based upon computer analysis of the up-close images or image data acquired by the drone mounted camera(s); and/or transmitting, via a transceiver associated with the remote server, the new or updated insurance policy to a mobile device of an insured or home owner for their review and/or approval. The dimensions and/or height of the property may be estimated or determined, and/or the obstacles may be located or locations determined, by one or more processors using GPS coordinate information from a GPS unit mounted on the drone and/or an altimeter mounted on the drone. Additionally or alternatively, the flight path may be calculated at least in part by using GPS coordinate information generated from a GPS unit mounted on the drone and/or elevation data generated from an altimeter mounted on the drone. In another aspect, a computer-implemented method for using (high level or far away) drone image data to generate a flight path for the drone to subsequently capture up-close images of a property (and/or property damage or lack thereof) for insurance purposes may be provided. The method may include (1) capturing (far away) images of a property (such as a house or other building) via one or more cameras mounted on a drone; (2) analyzing the images (and/or associated image data) of the property via one or more processors mounted on the drone to estimate dimensions and/or height of the property; (3) calculating, via the one or more processors mounted on the drone, a flight path for the drone to take that leads the drone around the home and/or in close proximity (such as within a couple or a few feet of the house) based upon the estimated dimensions and/or height of the property; (4) directing the drone to take the flight path calculated (either under the direction and control of (i) one or more drone mounted processors, or (ii) a licensed operator) to facilitate the one or more cameras of the drone capturing or generating up-close images or image data of the property and/or damage to the property (or lack thereof), the up-close images or image data intended for subsequent use in insurance-related activities (such as insurance claim handling, damage repair, claim estimation, damage estimations, insurance policy generation or adjustment, etc.). The method may include additional, less, or alternate actions, including those discussed elsewhere herein. For instance, the dimensions and/or height of the property may be estimated or determined by one or more processors using GPS coordinate information from a GPS unit mounted on the drone and/or an altimeter mounted on the drone. 
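For illustration only, estimating structure height from altimeter readings and wall length from GPS coordinates might resemble the following sketch. The readings and the equirectangular distance approximation are hypothetical assumptions, adequate only over short distances such as a single residential lot.

    # Hypothetical sketch: estimate building height from drone altimeter readings
    # taken at ground level and at the roofline, and estimate a wall length from
    # GPS coordinates recorded at two building corners. Readings are assumed.
    import math

    EARTH_RADIUS_FT = 20_902_231.0  # mean Earth radius in feet

    def height_ft(ground_altitude_ft: float, roofline_altitude_ft: float) -> float:
        return roofline_altitude_ft - ground_altitude_ft

    def gps_distance_ft(lat1, lon1, lat2, lon2):
        """Equirectangular approximation; adequate over a single residential lot."""
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0)
        y = lat2 - lat1
        return math.hypot(x, y) * EARTH_RADIUS_FT

    if __name__ == "__main__":
        print(round(height_ft(712.0, 730.5), 1), "ft tall")
        print(round(gps_distance_ft(41.8800, -87.6300, 41.8801, -87.6300), 1), "ft wall length")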
Additional Considerations Although the following text sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the description is defined by the words of the claims set forth at the end of this patent and equivalents. The detailed description is to be construed as example only and does not describe every possible embodiment since describing every possible embodiment would be impractical. Numerous alternative embodiments may be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims. It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term ‘______’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based upon any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this patent is referred to in this patent in a manner consistent with a single meaning, that is done for sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based upon the application of 35 U.S.C. § 112, sixth paragraph. Accordingly, the term “security group,” as used herein, may be used to refer to a group of user accounts, computer accounts, and/or other security groups which receive permission to access a certain secure data asset when the security group has permission to access the secure data asset. As used herein, the term “secure data asset” may be used to refer to computer hardware (e.g., servers and switches), software, and/or confidential information owned by an organization. For example, secure data assets may include confidential files, proprietary information, user account information, databases, network drives, data tables within a database, files within a network drive, etc. As used herein, the term “graph data structure,” or “graph” may be used to refer to a data structure used to model relationships between objects. The graph data structure may include a collection of nodes and edges (ordered or unordered pairs of nodes) which connect the nodes. The term “node” as used herein may be used to refer to a data point which represents an object. For example, nodes may represent users in an organizational network. A node may be displayed as a dot, a circle, and/or any other suitable indication of a data point. The term “edge” as used herein may be used to refer to an ordered or unordered pair of nodes that connects nodes which share some common property and/or attribute. For example, two nodes which represent users who belong to the same security group may be connected by an edge in the graph data structure. An edge may be displayed as an arc, a line, and/or any other suitable indication of a connection between nodes. The following additional considerations apply to the foregoing discussion. 
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware modules may provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. 
Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and may operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations. Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. 
The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Some embodiments may be described using the expressions "coupled" and "connected" along with their derivatives. For example, some embodiments may be described using the term "coupled" to indicate that two or more elements are in direct physical or electrical contact. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context. As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having" or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). In addition, the terms "a" or "an" are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description, and the claims that follow, should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise. This detailed description is to be construed as example only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One may implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this application.
The Figures depict preferred embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the systems and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION OF THE DRAWINGS The present embodiments may relate to, inter alia, systems and methods for analyzing the environment of a vehicle and determining at least one business opportunity based upon the environment. In an exemplary embodiment, the process is performed by an environment monitoring ("EM") computer device, also known as an environment monitoring ("EM") server. In the exemplary embodiment, a vehicle includes a vehicle computer device and a plurality of sensors. A process begins with the vehicle computer device in-transit from one location. While in-transit, the plurality of sensors may continuously scan the environment around the vehicle. For example, the sensors may take images of buildings, plants, and other vehicles as a part of normal operation while the vehicle is in-transit. These images may be in the visible spectrum, infrared spectrum, high-contrast, and/or three-dimensional (3D) images. In the exemplary embodiment, the vehicle controller is in communication with a database and an environment monitoring ("EM") computer device, also known as an EM server. The EM server is also in communication with one or more 3rdParty providers, such as via wireless communication or data transmission over one or more radio links or wireless communication channels. The vehicle computer device and EM server may include software that allows them to function as is described herein. The vehicle computer device may transmit the sensor data to the database. In some embodiments, the vehicle computer device may transmit the data continuously to the database. In other embodiments, the vehicle computer device may transmit the data when the vehicle is stopped, such as at a stoplight. In still other embodiments, the vehicle computer device may transmit the data to the database when the vehicle is connected to a network through a wired connection, such as at a recharging station. Alternatively, the vehicle may be connected to a wireless communication network through a wireless connection, such as at a wireless or other recharging station. Transmitting the data may occur at a convenient processing or data transmission time(s) based upon prioritization methods such as data transmission costs (e.g., cellular vs. free Wi-Fi) or computational costs (e.g., the vehicle is busy with autonomous or accident avoidance processing, so transmission may be delayed until the vehicle is parked or until the vehicle processing load has decreased). In the exemplary embodiment, the database stores all of the data received from the sensors. In some embodiments, the database may store the raw data feeds. In other embodiments, the database may store a sub-set of the data from the sensors. In some embodiments, the database may store sensor data from a plurality of vehicles. The database stores the data that allows the EM server to function as is described herein. In the exemplary embodiment, the EM server feeds the received sensor data through a comparative algorithm that contains historical data. In some embodiments, the EM server compares the received sensor data to historical sensor data from the same vehicle. 
In other embodiments, the EM server compares the sensor data to historical sensor data from other vehicles. In the exemplary embodiment, the EM server determines if there is an actionable change in the environment of the vehicle. In a first example, sensor data may contain images of a house that the vehicle drives past. In the exemplary embodiment, sensor data may include location data, such as from a GPS unit. Based upon the location of the vehicle at the time that sensor data was taken, the EM server may determine the address of the house. The EM server may compare the received images of the house to historical images of the house. Based upon the comparison, the EM server may determine that there is damage to the house that has occurred since the last time the house was sensed. The EM server may compare the sensor data of the house to sensor data of other houses and determine a potentially hazardous or dangerous condition of the house based upon the comparison. In these examples, the EM server determines that there is an actionable change, such as repairs that need to be made to the house, or preventive or mitigating actions that should be taken. In a second example, sensor data may contain images of a plant, such as a tree. The EM server may compare the sensor data of the tree to sensor data from other trees of the same type and determine that the tree has a disease or requires trimming to reduce various risks (such as theft or wildfire). In this example, the EM server determines that there is an actionable change, such as actions that need to be taken to improve the health of the tree. In a third example, sensor data may contain images of a public thoroughfare, such as a road or sidewalk. The EM server may determine that the public thoroughfare requires repair. In some embodiments, the EM server may determine a priority or severity of any needed repair or actionable item. In the exemplary embodiment, if the EM server determines that there are no actionable changes, the system continues scanning and analyzing the environment of vehicle. If the EM server determines that there is an actionable change, the EM server logs the change in the database. The EM server determines a 3rdParty to notify about the actionable change and transmits the actionable change to the 3rdParty. The 3rdParty may perform an action based upon the actionable item or changes. The EM server may refine one or more algorithms based upon the sensor data and the determined actionable item. In the exemplary embodiment, the 3rdParty may be a subscriber to a service that monitors for potential actionable items. For example, the 3rdParty may be a landlord that owns a plurality of rental buildings. The EM server may determine that one of the landlord's buildings is in need of repairs, that one of the trees in his yard has a disease, that one of the walkways near his building has a dangerous condition, and/or that one of his tenants is failing to perform proper maintenance, e.g., mow the lawn. The notification of the actionable item may inform the landlord of a previously unknown issue that requires action on his or her part. The 3rdParty may also be a homeowner's association and actionable items may include lawn maintenance, building changes, and other issues potentially related to the homeowner's charter. In other examples, the 3rdParty is a service provider, such as a tree trimmer, a roofer, or other construction company. 
In these examples, the 3rdParty may transmit one or more advertisements to a person associated with the actionable item, such as the homeowner. For example, the EM server may determine that there is damage to the siding of the house, determine one or more 3rdParties that may repair the issue, and/or notify those 3rdParties. In still other examples, the 3rdParty may be a municipal service provider, such as a road repair crew or a building inspector. In the example of a road repair crew, the actionable item may be one or more potholes or other potential hazards. In some embodiments, the hazard may be a broken water pipe and/or flooding on the road. In the example of a building inspector, the EM server may determine that a new addition or out building was added to a property and notify the building inspector that there may be a permitting issue. In another example, the EM server may compare the timing of traffic lights to determine if there is an issue, or if the timing of one or more lights may need to be adjusted. In still further examples, the sensors may observe a vehicular accident and the EM server may use sensor data to recreate the accident and provide the accident information to the police or the appropriate insurance companies. In this example, the vehicle may not be involved in the vehicular accident. In yet another example, the sensors may observe weather conditions. For example, during a hail storm, the sensors may measure the size of hail through images and the rate of hail based upon the sound of the hail hitting the vehicle or the ground. The EM server may receive sensor data about the hail from multiple vehicles in multiple locations to determine where the hail fell and how serious it was in different areas. Then the EM server may determine one or more construction companies that would be interested in this information for lead generation purposes. At least one of the technical problems addressed by this system may include: (i) discovering potential business opportunities; (ii) accurately monitoring conditions of one or more structures for users; (iii) improving the speed and accuracy of reconstructing a vehicular accident scenario; (iv) determining that a vehicular accident is occurring or may be occurring; and/or (v) reducing the severity of a vehicular accident. The technical effect achieved by this system may be at least one of: (i) automated discovery of potential business opportunities; (ii) automated warning of condition changes at one or more structures; (iii) automated detection of vehicular accidents as they are occurring; and/or (iv) automatically reacting to a vehicular accident to reduce the severity of the vehicular accident. 
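As a non-limiting illustration, the hail observations described above might be aggregated across multiple vehicles as in the following sketch. The grid size, severity rule, and report format are hypothetical assumptions.

    # Hypothetical sketch: aggregate hail observations reported by multiple vehicles
    # into a coarse grid of affected areas for lead-generation purposes. The grid
    # size, severity rule, and report format are illustrative assumptions.
    from collections import defaultdict

    def grid_cell(lat: float, lon: float, cell_deg: float = 0.01):
        """Bucket a GPS position into a roughly neighborhood-sized grid cell."""
        return (round(lat / cell_deg) * cell_deg, round(lon / cell_deg) * cell_deg)

    def summarize_hail(reports, cell_deg: float = 0.01):
        """reports: iterable of dicts with 'lat', 'lon', 'hail_size_in' keys."""
        cells = defaultdict(list)
        for r in reports:
            cells[grid_cell(r["lat"], r["lon"], cell_deg)].append(r["hail_size_in"])
        summary = {}
        for cell, sizes in cells.items():
            max_size = max(sizes)
            summary[cell] = {
                "reports": len(sizes),
                "max_hail_size_in": max_size,
                "severity": "severe" if max_size >= 1.0 else "minor",
            }
        return summary

    if __name__ == "__main__":
        reports = [
            {"lat": 41.881, "lon": -87.632, "hail_size_in": 1.25},
            {"lat": 41.882, "lon": -87.633, "hail_size_in": 0.75},
            {"lat": 41.950, "lon": -87.700, "hail_size_in": 0.50},
        ]
        for cell, info in summarize_hail(reports).items():
            print(cell, info)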
The methods and systems described herein may be implemented using computer programming or engineering techniques including computer software, firmware, hardware, or any combination or subset thereof, wherein the technical effects may be achieved by performing at least one of the following steps: (a) receiving, at an environment monitoring (“EM”) server, a plurality of data from at least one sensor associated with a vehicle, where the plurality of data includes at least one environmental condition; (b) analyzing, by the EM server, the plurality of data to determine the at least one environmental condition; (c) determining, by the EM server, at least one actionable item based upon the at least one environmental condition; (d) determining, by the EM server, at least one provider based upon the actionable item; and (e) transmitting a message to the at least one provider, wherein the message includes the at least one actionable item to facilitate communication to providers about potential actionable items. Additional technical effects may be achieved by performing at least one of the following steps: (a) receiving data from a sensor; (b) determining that a potential vehicular crash is imminent based upon the received data; and/or (c) performing at least one action to reduce a severity of the potential vehicular crash prior to impact to facilitate reducing injuries and/or damage caused by the vehicular crash. Exemplary Vehicle FIG.1depicts a view of an exemplary vehicle100. In some embodiments, vehicle100may be an autonomous vehicle capable of fulfilling the transportation capabilities of a traditional automobile or other vehicle. In these embodiments, vehicle100may be capable of sensing its environment and navigating without human input. In other embodiments, vehicle100is a manual vehicle, such as a traditional automobile that is directly controlled by a driver115. Vehicle100may include a plurality of sensors105and a vehicle computer device110, also known as a vehicle controller110. The plurality of sensors105may detect the current surroundings and location of vehicle100. Plurality of sensors105may include, but are not limited to, radar, LIDAR, Global Positioning System (GPS), video devices, imaging devices, cameras, audio recorders, and computer vision. Plurality of sensors105may also include sensors that detect conditions of vehicle100, such as velocity, acceleration, gear, braking, and other conditions related to the operation of vehicle100. In some embodiments, plurality of sensors105may detect the presence of driver115and one or more passengers120in vehicle100. In these embodiments, plurality of sensors105may detect the presence of fastened seatbelts, the weight in each seat in vehicle100, heat signatures, or any other method of detecting information about driver115and passengers120in vehicle100. Vehicle computer device110may interpret the sensory information to identify appropriate navigation paths, detect threats, and react to conditions. In some embodiments, vehicle computer device110may be able to communicate with one or more remote computer devices, such as mobile device125. In the example embodiment, mobile device125is associated with driver115and includes one or more internal sensors, such as an accelerometer. Mobile device125may be capable of communicating with vehicle computer device110wirelessly. In addition, vehicle computer device110and mobile device125may be configured to communicate with computer devices located remotely from vehicle100. 
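Purely by way of illustration, a record format that the vehicle computer device might use when transmitting sensor observations to the database and EM server could resemble the following sketch. The field names and the JSON encoding are hypothetical assumptions, not a required data format.

    # Hypothetical sketch: a minimal record the vehicle computer device might
    # transmit for each sensor observation. Field names are illustrative only.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class SensorObservation:
        vehicle_id: str
        timestamp_utc: str        # ISO 8601
        latitude: float           # from the GPS unit
        longitude: float
        sensor_type: str          # e.g., "camera", "lidar", "audio"
        payload_ref: str          # pointer to the stored image/point-cloud data

    def to_transmission_frame(observation: SensorObservation) -> str:
        """Serialize one observation for transmission over a wireless link."""
        return json.dumps(asdict(observation), separators=(",", ":"))

    if __name__ == "__main__":
        obs = SensorObservation("veh-100", "2024-05-01T14:03:22Z",
                                41.8781, -87.6298, "camera", "img/000123.jpg")
        print(to_transmission_frame(obs))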
While vehicle100may be an automobile in the exemplary embodiment, in other embodiments, vehicle100may be, but is not limited to, other types of ground craft, aircraft, and watercraft vehicles. Exemplary Process for Analyzing Vehicle Environment FIG.2illustrates a flow chart of an exemplary process200of analyzing the environment of a vehicle, such as of vehicle100shown inFIG.1. In the exemplary embodiment, vehicle controller110may be in communication with a database202and an environment monitoring (“EM”) computer device204, also known as an EM server204. EM server204may also be in communication with one or more 3rdParty providers206. Vehicle computer device110and EM server204may include software that allows them to function as is described herein. In the exemplary embodiment, vehicle100(shown inFIG.1) includes vehicle computer device110and a plurality of sensors105(shown inFIG.1). Process200begins with vehicle computer device110in-transit208from one location. While in-transit208, the plurality of sensors105may continually scan210the environment around vehicle100. For example, sensors105may take images of buildings, plants, and other vehicles as a part of normal operation while vehicle100is in-transit. These images may be in the visible spectrum, infrared spectrum, high-contrast, and/or three-dimensional (3D) images. Vehicle computer device110may transmit212the sensor data to database202. In some embodiments, vehicle computer device110may transmit212the data continuously to database202. In other embodiments, vehicle computer device110may transmit212the data when vehicle100is stopped, such as at a stoplight. In still other embodiments, vehicle computer device110may transmit212the data to database202when vehicle100is connected to a network through a wired connection, such as at a recharging station. In the exemplary embodiment, database202stores214all of the data received from sensors105. In some embodiments, database202may store214the raw data feeds. In other embodiments, database202may store214a sub-set of the data from sensors105. In some embodiments, database202may store sensor data from a plurality of vehicles100. Database202stores the data that allows EM server204to function as is described herein. In the exemplary embodiment, EM server204feeds216the received sensor data through a comparative algorithm containing historical data. In some embodiments, EM server204compares the received sensor data to historical sensor data from the same vehicle100. In other embodiments, EM server204compares the sensor data to historical sensor data from other vehicles. In the exemplary embodiment, EM server204determines218if there is an actionable change in the environment of vehicle100. In a first example, sensor data may contain images of a house that vehicle100drives past. In the exemplary embodiment, sensor data may include location data, such as from a GPS. Based upon the location of vehicle100at the time that sensor data was taken, EM server204may determine the address of the house. EM server204may compare the received images of the house to historical images of the house. Based upon the comparison, EM server204may determine that there is damage to the house that has occurred since the last time the house was sensed. EM server204may compare the sensor data of the house to sensor data of other houses and determine a potentially hazardous or dangerous condition of the house based upon the comparison. 
In these examples, EM server204determines that there is an actionable change, for example repairs that need to be made to the house. In a second example, sensor data may contain images of a plant, such as a tree. EM server204may compare the sensor data of the tree to sensor data from other trees of the same type and determine that the tree has a disease or requires trimming. In this example, EM server204determines that there is an actionable change, such as actions that need to be taken to improve the health of the tree. In a third example, sensor data may contain images of a public thoroughfare, such as a road or sidewalk. EM server204may determine that the public thoroughfare requires repair. In some embodiments, EM server204may determine a priority or severity of any needed repairs or actionable items. In the exemplary embodiment, if EM server204determines218that there are no actionable changes, system200continues scanning and analyzing the environment of vehicle100. If EM server204determines218that there is an actionable change, EM server204logs220the change in database202. EM server204determines222a 3rdParty206to notify about the actionable change and transmits the actionable change to the 3rdParty206. 3rdParty206may perform224an action based upon the actionable item or changes. EM server204may refine226one or more algorithms based upon the sensor data and the determined actionable item. In the exemplary embodiment, 3rdParty206is a subscriber to a service that monitors for potential actionable items. For example, 3rdParty206may be a landlord that owns a plurality of rental buildings. EM server204may determine222that one of the landlord's buildings is in need of repairs, that one of the trees in his yard has a disease, that one of the walkways near his building has a dangerous condition, and/or that one of his tenants is failing to perform proper maintenance, e.g., mow the lawn or perform repairs to the premises. The notification of the actionable item may inform the landlord of a previously unknown issue that requires action on his or her part. 3rdParty206may also be a homeowner's association and actionable items may include lawn maintenance, building changes, and/or other issues potentially related to the homeowner's charter. In other examples, 3rdParty206may be a service provider, such as a tree trimmer, a roofer, or other construction company. In these examples, 3rdParty206may transmit one or more advertisements to a person associated with the actionable item, such as the homeowner. For example, EM server204may determine218that there is damage to the siding of the house, determine222one or more 3rdParties206that may repair the issue, and notify those 3rdParties206. In still other examples, 3rdParty206may be a municipal service provider, such as a road repair crew or a building inspector. In the example of a road repair crew, the actionable item may be one or more potholes or other potential hazards. In some embodiments, the hazard may be a broken water pipe and/or flooding on the road. In the example of a building inspector, EM server204may determine218that a new addition or out building was added to a property and notify the building inspector that there may be a permitting issue. In another example, EM server204may compare the timing of traffic lights to determine if there is an issue or if the timing of one or more lights may need to be adjusted. 
In still further examples, sensors105may observe a vehicular accident and EM server204may use sensor data to recreate the accident and provide the accident information to the police or the appropriate insurance companies. In this example, vehicle100may not be involved in the vehicular accident. In yet another example, sensors105may observe weather conditions. For example, during a hail storm, sensors105may measure the size of hail through images and the rate of hail based upon the sound of the hail hitting vehicle100or the ground. EM server204may receive sensor data about the hail from multiple vehicles100in multiple locations to determine where the hail fell and how serious it was in different areas. Then EM server204may determine222one or more construction companies that would be interested in this information for lead generation purposes. In other examples, 3rdParty206may be an insurance provider. The EM server204may analyze the vehicle sensor data and/or other data received, as discussed elsewhere herein, for example using pattern recognition or machine learning techniques. The EM server204may determine preventive or mitigation recommendations. For instance, image data acquired via vehicle sensors may reveal shrubbery too close to an insured home, or trees with large limbs overhanging a roof of the insured home. Virtual recommendations may be generated by the EM server204, such as recommendations to trim vegetation surrounding the insured home, and transmitted to a customer's mobile device. If the customer verifies that the recommendations have been taken, then a homeowners insurance discount may be generated and applied to their insurance policy. Additionally or alternatively, the EM server204may determine that (i) the shingles on a roof of an insured home should be replaced; (ii) siding should be repaired or replaced; (iii) windows or storm windows should be upgraded; (iv) doors or garage doors should be upgraded; (v) trees are infected by insects and should be treated (such as via analysis of images of leaves); (vi) observed structural insulation efficiency; etc. Recommendations may be transmitted to the customer's mobile device for their review via wireless data transmission over one or more radio links or wireless communication channels. If the customer performs the suggested upgrade(s) to their home, an insurance discount may be generated and applied to their policy. After an insurance-related event, such as one that causes damage to an insured vehicle or an insured home, vehicle sensor data and/or other data may be analyzed to estimate an extent of damage to the insured vehicle or home, respectively. A virtual proposed insurance claim may be generated using the damage estimate, and transmitted to the customer's mobile device for their review and/or approval. In case of a vehicle collision, if damage is severe, the insured vehicle may be deemed a "total loss" for insurance purposes and the total loss claim handling process may commence. In the case that a current insurance-related event, such as a home fire or vehicle collision, is anticipated or has happened, emergency personnel may be requested to arrive at the scene to render aid. Exemplary Method for Analyzing Vehicle Environment FIG.3illustrates a flow chart of an exemplary computer-implemented process300for analyzing the environment of a vehicle as shown inFIG.2. Process300may be implemented by a computing device, for example EM server204(shown inFIG.2). 
In the exemplary embodiment, EM server204may be in communication with vehicle computer device110(shown inFIG.1) through a wireless communications network, such as a cellular network. In some embodiments, database202(shown inFIG.2) and EM server204are both part of vehicle computer device110and included in vehicle100(shown inFIG.1). In the exemplary embodiment, EM server204may receive305a plurality of data from at least one sensor105(shown inFIG.1) associated with vehicle100. In the exemplary embodiment, the plurality of data may include at least one environmental condition. Examples of an environmental condition include, but are not limited to, a condition of a building, a condition of vegetation, a condition of a public thoroughfare, a weather condition, and a vehicular accident that vehicle100was not involved in. Other examples of environmental conditions are listed above. In the exemplary embodiment, vehicle100includes a plurality of sensors105that provide data to vehicle controller110. Vehicle controller110transmits the sensor data to EM server204for analysis. In the exemplary embodiment, EM server204may analyze310the plurality of data to determine the at least one environmental condition. In some embodiments, EM server204may compare the received plurality of data to historical sensor data to determine the environmental condition. In other embodiments, EM server204may use algorithms of potential issues and known examples of environmental conditions to determine if one of the known examples is in the received sensor data. In the exemplary embodiment, EM server204may determine315at least one actionable item based upon the determined at least one environmental condition. EM server204may determine320at least one provider or 3rdParty206(shown inFIG.3) based upon the actionable item. In some embodiments, the 3rdParty206is a user, who set up a user account to receive information about the determined environmental condition or actionable items. Examples of 3rdParties206may include municipal agencies, landlords, and advertisers. For example, EM server204may determine320a landlord for the actionable item based upon the location of vehicle100at the time that the sensor data was taken. Using the GPS location information, EM server204may determine the address of the building being imaged and determine the landlord for that building, who has set-up an account to receive notifications. In the scenario where the 3rdParty206is an insurance provider, the actionable item may be to generate a quote and/or a discount for homeowners or auto insurance based upon EM server204analysis of the vehicle sensor data collected. For instance, features and status of a home or vehicle may be determined from processor analysis (such as performing pattern recognition or machine learning techniques on image data), and risk, or lack thereof, may be assessed or estimated by the processor. Additionally or alternatively, after an insurance-related event, such as a tornado or wind storm for an insured home or a vehicle collision for an insured vehicle, the amount and/or level of severity of damage to the insured asset may be estimated from the sensor data received. For instance, a vehicle with extensive damage of a high percentage of pre-collision vehicle value may be deemed a total loss for insurance purposes. Referring back toFIG.3, EM server204may then transmit325a message to the determined 3rdParty provider206. 
The message may include the actionable item, the environmental condition, sensor data, and/or any other information required and/or requested by the 3rdParty. In some embodiments, EM server204may collect sensor data from a plurality of vehicles100and use that sensor data to determine315a plurality of actionable items. In these embodiments, EM server204may transmit a batch message to 3rdParty206with actionable items associated with 3rdParty's interests. Exemplary Computer Network FIG.4depicts a simplified block diagram of an exemplary system400for implementing process200shown inFIG.2. In the exemplary embodiment, system400may be used for analyzing the environment of a vehicle based upon sensor data, determining one or more actionable items based upon the environment, and communicating with providers to perform those actionable items. As described below in more detail, environment monitoring (“EM”) server204may be configured to receive a plurality of data from at least one sensor105associated with vehicle100(both shown inFIG.1). The plurality of data includes at least one environmental condition. EM server204may also be configured to analyze the plurality of data to determine the at least one environmental condition, determine at least one actionable item based upon the at least one environmental condition, determine at least one provider206(shown inFIG.2) based upon the actionable item, and transmit a message to the at least one provider206. The message includes the at least one actionable item to facilitate communication to providers about potential actionable items. In the exemplary embodiment, user computer devices405are computers that include a web browser or a software application, which enables user computer devices405to access EM server204using the Internet or other network. More specifically, user computer devices405are communicatively coupled to the Internet through many interfaces including, but not limited to, at least one of a network, such as the Internet, a local area network (LAN), a wide area network (WAN), or an integrated services digital network (ISDN), a dial-up-connection, a digital subscriber line (DSL), a cellular phone connection, and a cable modem. User computer devices405may be any device capable of accessing the Internet including, but not limited to, a desktop computer, a laptop computer, a personal digital assistant (PDA), a cellular phone, a smartphone, a tablet, a phablet, wearable electronics, smart watch, or other web-based connectable equipment or mobile devices. In some embodiments, user computer device405is associated with the policyholder of an account associated with vehicle100. In other embodiments, user computer device405is associated with a third party, such as 3rdParty Provider206. A database server410may be communicatively coupled to a database202that stores data. In one embodiment, database202may include 3rdParty providers, sensor data, historical data, environmental conditions, and/or actionable items. In the exemplary embodiment, database202may be stored remotely from EM server204. In some embodiments, database202may be decentralized. In the exemplary embodiment, a user may access database202via user computer devices405by logging onto EM server204, as described herein. EM server204may be communicatively coupled with the user computer devices405. 
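One possible, purely illustrative organization of database202is shown below, using an in-memory SQLite database as a stand-in. The patent lists only the kinds of records stored (3rd Party providers, sensor data, environmental conditions, actionable items); the table and column names here are assumptions, and historical data is assumed to live in the same sensor_data table.

```python
"""A minimal sketch of how database 202 might be organized, for illustration only."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE providers (
    provider_id   INTEGER PRIMARY KEY,
    name          TEXT,
    category      TEXT            -- e.g. landlord, municipal agency, roofer
);
CREATE TABLE sensor_data (
    reading_id    INTEGER PRIMARY KEY,
    vehicle_id    TEXT,
    captured_at   TEXT,
    lat           REAL,
    lon           REAL,
    payload       BLOB            -- raw or summarized sensor output
);
CREATE TABLE environmental_conditions (
    condition_id  INTEGER PRIMARY KEY,
    reading_id    INTEGER REFERENCES sensor_data(reading_id),
    description   TEXT            -- e.g. 'siding damage', 'pothole', 'hail'
);
CREATE TABLE actionable_items (
    item_id       INTEGER PRIMARY KEY,
    condition_id  INTEGER REFERENCES environmental_conditions(condition_id),
    provider_id   INTEGER REFERENCES providers(provider_id),
    status        TEXT DEFAULT 'open'
);
""")
print([row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")])
```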
In some embodiments, EM server204may be associated with, or is part of a computer network associated with a manufacturer of vehicle100, or in communication with the manufacturer's computer network (not shown). In other embodiments, EM server204may be associated with a third party. In some embodiments, vehicle controller110may include EM server204. In other embodiments, EM server204may be remote from vehicle computer device110and may communicate with vehicle computer device110via a wireless connection, such as a cellular connection. In some embodiments, EM server204may be associated with, or is part of a computer network associated with an insurance provider, or in communication with the insurance provider's computer network (not shown). In other embodiments, EM server204may be associated with a third party and is merely in communication with the insurance provider's computer network. One or more vehicle computer devices110may be communicatively coupled with EM server204through the Internet or a cellular network. In the exemplary embodiment, vehicle computer devices110are computers included in vehicles100that include a software application, which enables vehicle computer devices110to access EM server204using the Internet or other network. More specifically, vehicle computer devices110are communicatively coupled to the Internet through many interfaces including, but not limited to, at least one of a network, such as the Internet, a local area network (LAN), a wide area network (WAN), or an integrated services digital network (ISDN), a dial-up-connection, a digital subscriber line (DSL), a cellular phone connection, and a cable modem. In some embodiments, vehicle computer device110may be capable of communicating with EM server204while in transit. In other embodiments, vehicle computer device110may be capable of communicating with EM server204while vehicle100is at rest, such as at a stoplight. In still other embodiments, vehicle computer device110may be capable of communicating with EM server204while vehicle100is parked, such as at a recharging station (not shown). Vehicle computer device110may also include one or more sensors105. Vehicle computer device110may be configured to receive data from sensors105and transmit sensor data to EM server204. In the exemplary embodiment, sensor105may be a configured to detect one or more conditions of the environment around vehicle100. In other embodiments, sensor105may be configured to detect one or more conditions of one or more occupants of vehicle100, such as driver115and/or passengers120(both shown inFIG.1). Exemplary Client Device FIG.5depicts an exemplary configuration of user computer device405shown inFIG.4, in accordance with one embodiment of the present disclosure. User computer device502may be operated by a user501. User computer device502may include, but is not limited to, user computer devices405(shown inFIG.4), vehicle controller110(shown inFIG.1), and mobile device125(shown inFIG.1). User computer device502may include a processor505for executing instructions. In some embodiments, executable instructions are stored in a memory area510. Processor505may include one or more processing units (e.g., in a multi-core configuration). Memory area510may be any device allowing information such as executable instructions and/or transaction data to be stored and retrieved. Memory area510may include one or more computer readable media. User computer device502may also include at least one media output component515for presenting information to user501. 
Media output component515may be any component capable of conveying information to user501. In some embodiments, media output component515may include an output adapter (not shown) such as a video adapter and/or an audio adapter. An output adapter may be operatively coupled to processor505and operatively coupleable to an output device such as a display device (e.g., a cathode ray tube (CRT), liquid crystal display (LCD), light emitting diode (LED) display, or “electronic ink” display) or an audio output device (e.g., a speaker or headphones). In some embodiments, media output component515may be configured to present a graphical user interface (e.g., a web browser and/or a client application) to user501. A graphical user interface may include, for example, an online store interface for viewing and/or purchasing items, and/or a wallet application for managing payment information. In some embodiments, user computer device502may include an input device520for receiving input from user501. User501may use input device520to, without limitation, select and/or enter one or more items to purchase and/or a purchase request, or to access credential information, and/or payment information. Input device520may include, for example, a keyboard, a pointing device, a mouse, a stylus, a touch sensitive panel (e.g., a touch pad or a touch screen), a gyroscope, an accelerometer, a position detector, a biometric input device, and/or an audio input device. A single component such as a touch screen may function as both an output device of media output component515and input device520. User computer device502may also include a communication interface525, communicatively coupled to a remote device such as EM server204(shown inFIG.2). Communication interface525may include, for example, a wired or wireless network adapter and/or a wireless data transceiver for use with a mobile telecommunications network. Stored in memory area510are, for example, computer readable instructions for providing a user interface to user501via media output component515and, optionally, receiving and processing input from input device520. A user interface may include, among other possibilities, a web browser and/or a client application. Web browsers enable users, such as user501, to display and interact with media and other information typically embedded on a web page or a website from EM server204. A client application allows user501to interact with, for example, EM server204. For example, instructions may be stored by a cloud service, and the output of the execution of the instructions sent to the media output component515. Processor505executes computer-executable instructions for implementing aspects of the disclosure. In some embodiments, the processor505is transformed into a special purpose microprocessor by executing computer-executable instructions or by otherwise being programmed. For example, the processor505may be programmed with the instruction such as illustrated inFIG.7. In some embodiments, user computer device502may include, or be in communication with, one or more sensors, such as sensor105(shown inFIG.1). User computer device502may be configured to receive data from the one or more sensors and store the received data in memory area510. Furthermore, user computer device502may be configured to transmit the sensor data to a remote computer device, such as EM server204, through communication interface525. 
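A rough sketch of this buffer-and-transmit behavior appears below. The JSON-over-HTTP transport, the endpoint URL, and the upload policy keyed to vehicle state are assumptions for illustration, not features recited for the described device.

```python
"""Sketch of buffering sensor readings locally and forwarding them to the EM server
when a (hypothetical) connection policy allows it."""
import json
import urllib.request


class SensorUploader:
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
        self.buffer: list[dict] = []

    def record(self, reading: dict) -> None:
        """Store a reading locally (the memory area of the device, in the patent's terms)."""
        self.buffer.append(reading)

    def can_upload(self, vehicle_state: str) -> bool:
        # Upload continuously, or only when stopped/parked/charging, depending on the embodiment.
        return vehicle_state in {"stopped", "parked", "charging"}

    def flush(self, vehicle_state: str) -> int:
        """Send buffered readings to the EM server and return how many were sent."""
        if not self.buffer or not self.can_upload(vehicle_state):
            return 0
        body = json.dumps(self.buffer).encode("utf-8")
        request = urllib.request.Request(
            self.endpoint, data=body,
            headers={"Content-Type": "application/json"})
        # In a real deployment this call would be authenticated and retried on failure.
        urllib.request.urlopen(request)
        sent = len(self.buffer)
        self.buffer.clear()
        return sent


uploader = SensorUploader("https://em-server.example/api/sensor-data")  # hypothetical URL
uploader.record({"lat": 41.88, "lon": -87.63, "speed_mph": 0, "image_ref": "frame_0142"})
print("buffered readings:", len(uploader.buffer))
```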
Exemplary Server Device FIG.6depicts an exemplary configuration of server204shown inFIG.4, in accordance with one embodiment of the present disclosure. Server computer device601may include, but is not limited to, database server410(shown inFIG.4), EM server204(shown inFIG.2), and vehicle controller110(shown inFIG.1). Server computer device601may also include a processor605for executing instructions. Instructions may be stored in a memory area610. Processor605may include one or more processing units (e.g., in a multi-core configuration). Processor605may be operatively coupled to a communication interface615such that server computer device601is capable of communicating with a remote device such as another server computer device601, mobile device125(shown inFIG.1), vehicle computer device110(shown inFIG.1), user computer device405(shown inFIG.4), and EM server204. For example, communication interface615may receive requests from user computer devices405via the Internet, as illustrated inFIG.4. Processor605may also be operatively coupled to a storage device634. Storage device634may be any computer-operated hardware suitable for storing and/or retrieving data, such as, but not limited to, data associated with database202(shown inFIG.2). In some embodiments, storage device634may be integrated in server computer device601. For example, server computer device601may include one or more hard disk drives as storage device634. In other embodiments, storage device634may be external to server computer device601and may be accessed by a plurality of server computer devices601. For example, storage device634may include a storage area network (SAN), a network attached storage (NAS) system, and/or multiple storage units such as hard disks and/or solid state disks in a redundant array of inexpensive disks (RAID) configuration. In some embodiments, processor605may be operatively coupled to storage device634via a storage interface620. Storage interface620may be any component capable of providing processor605with access to storage device634. Storage interface620may include, for example, an Advanced Technology Attachment (ATA) adapter, a Serial ATA (SATA) adapter, a Small Computer System Interface (SCSI) adapter, a RAID controller, a SAN adapter, a network adapter, and/or any component providing processor605with access to storage device634. Processor605may execute computer-executable instructions for implementing aspects of the disclosure. In some embodiments, the processor605may be transformed into a special purpose microprocessor by executing computer-executable instructions or by otherwise being programmed. For example, the processor605may be programmed with the instruction such as illustrated inFIG.3. Exemplary Vehicular Crash Detection FIG.7illustrates a flow chart of an exemplary computer-implemented process700for detecting a vehicular crash using system400shown inFIG.4. Process700may be implemented by a computing device, for example vehicle computer device110(shown inFIG.4). In some embodiments, process700may be implemented by EM server204(shown inFIG.2). In the exemplary embodiment, vehicle computer device110may be in communication with EM server204. In the exemplary embodiment, vehicle computer device110receives705data from at least one sensor105(shown inFIG.1). In the exemplary embodiment, at least one sensor105may be one or more of plurality of sensors105(shown inFIG.1) in vehicle100. Vehicle computer device110determines710that a potential vehicular crash is imminent based upon the received sensor data. 
For example, in the exemplary embodiment, sensor105is an external sensor and may show that another vehicle is about to collide with vehicle100. Alternatively, sensor105may be an impact sensor or any other sensor that allows vehicle computer device110to work as described herein. In some embodiments, vehicle computer device110generates a scenario model of the potential vehicular crash based upon the received sensor data. Scenario models may predict damage to vehicle100and injuries that may be experienced by driver115and passengers120(both shown inFIG.1) of vehicle100. In the exemplary embodiment, vehicle computer device110accesses a database, such as database202(shown inFIG.2). Database202may contain a plurality of crash scenarios and the sensor data associated with these crash scenarios. The scenarios may be based upon information from vehicle crash testing facilities, from past crashes that EM server204has analyzed, and/or from other sources that allow vehicle computer device110to operate as described herein. Vehicle computer device110compares the received sensor data with the different stored crash scenarios to generate a scenario model that is the most likely match for the imminent vehicular crash. In some further embodiments, vehicle computer device110may communicate the sensor data to EM server204, where EM server204may generate the scenario model. In some embodiments, vehicle computer device110generates a plurality of scenario models that may fit the sensor data received. Vehicle computer device110may then rank the generated scenarios based upon the likelihood or degree of certainty that the scenario is correct. In some further embodiments, vehicle computer device110may compare the degree of certainty to a predetermined threshold. In the exemplary embodiment, vehicle computer device110performs715at least one action to reduce the severity of the potential vehicular crash prior to impact. In some embodiments, the action that vehicle computer device110performs715may be to adjust the position or situation of vehicle100at the point of impact. In these embodiments, vehicle computer device110may determine a position of vehicle100to reduce damage to at least one of one or more occupants of the vehicle and the vehicle based upon the scenario model. Vehicle computer device110may instruct vehicle100to adjust its position to the determined position to lessen the impact. For example, vehicle computer device110may instruct vehicle100to turn one or more wheels to readjust the vehicle's position. In other examples, vehicle100may include hydraulics or some other component that allows vehicle100to raise or lower portions of itself. In these examples, vehicle computer device110may instruct vehicle100to raise or lower a portion of itself to redirect how forces may impact the vehicle during impact. In some further examples, vehicle100may have one or more inflatable external components that vehicle computer device110may be able to instruct vehicle100to inflate prior to impact to cause forces in the impact to be redirected. In another embodiment, vehicle computer device110may receive data from sensors105about driver115and passengers120of vehicle100. In this embodiment, vehicle computer device110may be able to use that sensor data to determine a position and a direction of facing of at least one occupant of the vehicle. Then using the scenario model, vehicle computer device110may be able to determine an advantageous direction of facing for the at least one occupant. 
Vehicle computer device110may then generate a sound through the audio system of vehicle100, such a horn or alarm sound. The sound would be generated to cause the at least one occupant to change to the advantageous direction of facing. For example, vehicle computer device110may generate a honking sound to cause the passenger to turn around to prevent or reduce potential injuries during the imminent vehicular crash. Exemplary Computer Device FIG.8depicts a diagram800of components of one or more exemplary computing devices810that may be used in system400shown inFIG.4. In some embodiments, computing device810may be similar to EM server204(shown inFIG.2). Database820may be coupled with several separate components within computing device810, which perform specific tasks. In this embodiment, database820may include 3rdParty providers822, sensor data824, environmental conditions826, and/or actionable items828. In some embodiments, database820is similar to database202(shown inFIG.2). Computing device810may include the database820, as well as data storage devices830. Computing device810may also include a communication component840for receiving305a plurality of data and transmitting325a message (both shown inFIG.3), such as via wireless communication or data transmission via radio links or wireless communication channels. Computing device810may further include an analyzing component850for analyzing310the plurality of data (shown inFIG.3). Moreover, computing device810may include a determining component860for determining315at least one actionable item and determining320at least one provider (both shown inFIG.3). A processing component870may assist with execution of computer-executable instructions associated with the system. Exemplary Vehicle Environment Analysis FIG.9illustrates a flow chart of another exemplary computer-implemented process for analyzing the environment of a vehicle shown inFIG.2. Process900may be implemented by a computing device, for example EM server204(shown inFIG.2). In the exemplary embodiment, EM server204may be in communication with vehicle computer device110(shown inFIG.1) through a wireless communications network, such as a cellular network, and/or over one or more radio links or wireless communication channels. In some embodiments, database202(shown inFIG.2) and EM server204are both part of vehicle computer device110and included in vehicle100(shown inFIG.1). In the exemplary embodiment, EM server204may receive905a plurality of data from at least one sensor105(shown inFIG.1) associated with vehicle100. In the exemplary embodiment, the plurality of data may include at least one environmental condition. Examples of an environmental condition include, but are not limited to, a condition of a building, a condition of vegetation, a condition of a public thoroughfare, a weather condition, and a vehicular accident that vehicle100was not involved in. Other examples of environmental conditions are listed above. In the exemplary embodiment, vehicle100includes a plurality of sensors105that provide data to vehicle controller110. Vehicle controller110transmits the sensor data to EM server204for analysis. In the exemplary embodiment, EM server204may analyze910the plurality of data to determine the at least one environmental condition. In some embodiments, EM server204may compare the received plurality of data to historical sensor data to determine the environmental condition. 
In other embodiments, EM server204may use algorithms of potential issues and known examples of environmental conditions to determine if one of the known examples is in the received sensor data. In the exemplary embodiment, EM server204may determine915at least one actionable item based upon the determined at least one environmental condition. EM server204may determine920an insurance policy associated with the at least one actionable item. For example, EM server204may determine920an associated insurance policy by determining an address associated with the at least one actionable item. Using the GPS location information, EM server204may determine the address of the building being imaged. EM server204may then look up the address in database202to determine920if there is an insurance policy associated with that address. EM server204may then generate925a proposed virtual insurance claim based upon the at least one actionable item. For example, the environmental condition may be damage to the siding of a house and the actionable item be the needed repairs. The house may be insured and EM server204may generate a proposed virtual claim for the damage based upon the insurance policy and the needed repairs. In some embodiments, EM server204may determine a cost and/or value of the virtual claim based upon the actionable item. In some further embodiments, EM server204may determine one or more recommended 3rdParty providers206to rectify the actionable items. In some further embodiments, EM server204may request bids from 3rdParty Providers206to determine the costs and/or values of actionable items. In still further embodiments, EM server204may generate925a plurality of potential virtual claims based upon the actionable items. EM server204may then rank the plurality of potential claims to determine which ones are the most appropriate for the actionable items before generating. EM server204may use other methods to determine the most appropriate virtual claims to generate. EM server204may present the proposed virtual insurance claim to the customer for their review and/or approval. For instance, the EM server204may transmit the proposed virtual insurance claim to the customer's mobile device or vehicle display, such as via wireless communication or data transmission over one or more radio frequency links or wireless communication channels. After which, the customer may approve the proposed virtual insurance claim, such as by pressing an icon of their mobile device or vehicle display. The approval may then be transmitted via wireless communication or data transmission back to the EM server204. After which, EM server204may transmit930the virtual insurance claims to an insurance provider, such as the one associated with the insurance policy. The insurance provider may then complete the virtual claim and/or determine other claims for the insurance policy based upon the actionable items and environmental conditions. Additionally or alternatively, after an insurance-related event, such as a tornado or wind storm for an insured home or a vehicle collision for an insured vehicle, the amount and/or level of severity of damage to the insured asset may be estimated from the sensor data received. For instance, a vehicle with extensive damage of a high percentage of pre-collision vehicle value may be deemed a total loss for insurance purposes. Referring back toFIG.9, EM server204may then generate925and transmit930one or more virtual claims to the insurance provider. 
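For illustration, the policy lookup and claim drafting of steps920and925might be sketched as follows; the policy table, the cost field, and the claim structure are simplified assumptions rather than elements of the described process.

```python
"""Hedged sketch of matching an actionable item to a policy by address (920)
and drafting a proposed virtual claim (925)."""
from dataclasses import dataclass, field
from typing import Optional
import uuid


@dataclass
class ActionableItem:
    address: str
    description: str          # e.g. "siding damage on north wall"
    estimated_cost: float     # could come from 3rd-party bids, per the description


@dataclass
class VirtualClaim:
    policy_number: str
    item: ActionableItem
    claim_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
    status: str = "proposed"  # becomes "approved" after customer review


POLICIES = {"123 Main St": "HO-004512"}  # stand-in for a policy lookup in database 202


def propose_claim(item: ActionableItem) -> Optional[VirtualClaim]:
    policy = POLICIES.get(item.address)
    if policy is None:
        return None  # no policy on file for this address, nothing to propose
    return VirtualClaim(policy_number=policy, item=item)


claim = propose_claim(ActionableItem("123 Main St", "hail damage to roof", 4_200.0))
if claim is not None:
    print(f"claim {claim.claim_id} ({claim.status}) on policy {claim.policy_number}: "
          f"{claim.item.description}, est. ${claim.item.estimated_cost:,.0f}")
```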
The virtual claims may include the actionable item, the environmental condition, sensor data, and/or any other information required and/or requested by the insurance provider. In some embodiments, EM server204may collect sensor data from a plurality of vehicles100and use that sensor data to determine915a plurality of actionable items. Exemplary Method for Analyzing Vehicle Environment FIG.10illustrates a flow chart of a further exemplary computer-implemented process for analyzing the environment of a vehicle shown inFIG.2. Process1000may be implemented by a computing device, for example EM server204(shown inFIG.2). In the exemplary embodiment, EM server204may be in communication with vehicle computer device110(shown inFIG.1) through a wireless communications network, such as a cellular network. In some embodiments, database202(shown inFIG.2) and EM server204are both part of vehicle computer device110and included in vehicle100(shown inFIG.1). In the exemplary embodiment, EM server204may receive1005a plurality of data from at least one sensor105(shown inFIG.1) associated with vehicle100. In the exemplary embodiment, the plurality of data may include at least one environmental condition. Examples of an environmental condition include, but are not limited to, a condition of a building, a condition of vegetation, a condition of a public thoroughfare, a weather condition, and a vehicular accident that vehicle100was not involved in. Other examples of environmental conditions are listed above. In the exemplary embodiment, vehicle100includes a plurality of sensors105that provide data to vehicle controller110. Vehicle controller110transmits the sensor data to EM server204for analysis. In the exemplary embodiment, EM server204may analyze1010the plurality of data to determine the at least one environmental condition. In some embodiments, EM server204may compare the received plurality of data to historical sensor data to determine the environmental condition. In other embodiments, EM server204may use algorithms of potential issues and known examples of environmental conditions to determine if one of the known examples is in the received sensor data. In the exemplary embodiment, EM server204may determine1015a condition of a building based upon the at least one environmental condition. For instance, features and status of the building may be determined from processor analysis (such as performing pattern recognition or machine learning techniques on image data), and risk, or lack thereof, may be assessed or estimated by the processor or EM server204. Using the GPS location information, EM server204may determine the address of the building being imaged. EM server204may determine1020an insurance product for the building based upon the determine condition of the building. For example, if the building is a home, then EM server204may determine1020a homeowner's insurance policy for the home based upon the condition of the home. EM server204may use the determined address to determine additional information that may be used in determining1020an insurance product. EM server204may then generate1025an insurance quote for the insurance product. In the exemplary embodiment, EM server204transmits the insurance quote and the determined product to a 3rdParty Provider206(shown inFIG.2), such as an insurance provider. In other embodiments, EM server204may transmit the insurance quote to the homeowner. The insurance provider may then provide the quote to the homeowner. 
The insurance quote and/or determined insurance product may include the environmental condition, sensor data, and/or any other information required and/or requested by 3rdParty206. In some embodiments, EM server204may collect sensor data from a plurality of vehicles100and use that sensor data to determine315a plurality of actionable items. In some embodiments, EM server204may determine at least one actionable item based upon the determined at least one environmental condition. EM server204may then adjust the insurance quote based upon the at least one actionable item. Exemplary Embodiments & Functionality In one aspect, a computer system for analyzing the environment of a vehicle may be provided. The computer system may include at least one processor in communication with at least one memory device. The at least one processor (local or remote to the vehicle) may be configured or programmed to: (1) receive a plurality of data from at least one sensor associated with a vehicle (such as via wireless communication or data transmission over one or more radio links or communication channels), where the plurality of data includes at least one environmental condition; (2) analyze the plurality of data to determine the at least one environmental condition; (3) determine at least one actionable item based upon the at least one environmental condition; (4) determine at least one provider based upon the actionable item; and/or (5) transmit a message to the at least one provider (such as via wireless communication or data transmission over one or more radio links or wireless communication channels), wherein the message includes the at least one actionable item. The environmental condition may be proximate to the vehicle at a point in time. The computer system may achieve the above results where the environmental condition is proximate to the vehicle at a plurality of separate points in time and the computer system determines the at least one environmental condition by comparing data associated with the plurality of separate points in time. The environmental condition may be, but is not limited to, a condition of a building, a condition of vegetation, a condition of a public thoroughfare, a weather condition, and a vehicular collision that the vehicle was not involved in. A further enhancement may be where the plurality of data includes location data, such as from a GPS unit. And the processor may determine a location associated with the at least one environmental condition based upon the plurality data including the location data. A further enhancement may be where the computer system may transmit a message to one or more emergency services based upon the scenario model. The one or more emergency services may include, but are not limited to, a towing service, an emergency medical service provider, a fire department, a police department, and/or some other emergency responder. The computer system may select the one or more emergency services to transmit to based upon the scenario model and the location of the vehicular crash. The computer system may achieve the above results by storing a database of historical sensor data based upon past sensor data that the vehicle observed. The computer system may then compare the database of historical sensor data to the received sensor data and determine the at least one environmental condition based upon the comparison. 
The computer system may also achieve the above results by storing a database of historical data from a plurality of vehicles and using that database to determine the at least one environmental condition. A further enhancement may be where the computer system may be configured to include a database of potential environmental conditions that may be compared to the received sensor data to determine the at least one environmental condition. The sensor data described herein may include, but is not limited to, pictures and/or images of around the vehicle, 3D scans of the environment around the vehicle, infrared images, the velocity of the vehicle, vibrational data, travel timing data, the acceleration of the vehicle, the location of the vehicle, the direction of travel of the vehicle, one or more changes in velocity, one or more changes in direction of the vehicle, a number of occupants in the vehicle, seatbelt sensor data, and seat occupant weight sensor data. A further enhancement may be where third parties have signed up with user accounts that are tied to locations. When the computer system detects an environmental condition at a location associated with a user account, the computer system may transmit a message to the corresponding third party about the environmental condition. A still further enhancement may be where the environmental condition is associated with a potential business opportunity and the third party transmits an advertisement associated with the environmental condition and/or the actionable item. The actionable item may be a product or service that the third party may provide to resolve the actionable item. Machine Learning & Other Matters The computer-implemented methods discussed herein may include additional, less, or alternate actions, including those discussed elsewhere herein. The methods may be implemented via one or more local or remote processors, transceivers, and/or sensors (such as processors, transceivers, and/or sensors mounted on vehicles or mobile devices, or associated with smart infrastructure or remote servers), and/or via computer-executable instructions stored on non-transitory computer-readable media or medium. Additionally, the computer systems discussed herein may include additional, less, or alternate functionality, including that discussed elsewhere herein. The computer systems discussed herein may include or be implemented via computer-executable instructions stored on non-transitory computer-readable media or medium. A processor or a processing element may be trained using supervised or unsupervised machine learning, and the machine learning program may employ a neural network, which may be a convolutional neural network, a deep learning neural network, or a combined learning module or program that learns in two or more fields or areas of interest. Machine learning may involve identifying and recognizing patterns in existing data in order to facilitate making predictions for subsequent data. Models may be created based upon example inputs in order to make valid and reliable predictions for novel inputs. Additionally or alternatively, the machine learning programs may be trained by inputting sample data sets or certain data into the programs, such as image, mobile device, vehicle telematics, autonomous vehicle, and/or intelligent home telematics data. The machine learning programs may utilize deep learning algorithms that may be primarily focused on pattern recognition, and may be trained after processing multiple examples. 
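As a deliberately tiny stand-in for the pattern-recognition models described above, the following nearest-centroid classifier is trained on a handful of labeled feature vectors and then labels a new observation. The features, labels, and method are invented for illustration; this is not the neural-network or deep-learning approach the text contemplates.

```python
"""A minimal supervised-learning sketch: train per-class centroids on labeled
feature vectors (e.g., summary statistics extracted from vehicle images) and
classify new observations by nearest centroid."""
from statistics import mean


def train_centroids(samples: list[tuple[list[float], str]]) -> dict[str, list[float]]:
    """Average the feature vectors for each label to form one centroid per class."""
    by_label: dict[str, list[list[float]]] = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: [mean(col) for col in zip(*rows)] for label, rows in by_label.items()}


def classify(features: list[float], centroids: dict[str, list[float]]) -> str:
    """Assign the label of the closest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))


# Toy features: [edge_density, brown_ratio, texture_variance]
training = [
    ([0.80, 0.10, 0.70], "hail_damage"),
    ([0.75, 0.15, 0.65], "hail_damage"),
    ([0.20, 0.60, 0.30], "diseased_tree"),
    ([0.25, 0.55, 0.35], "diseased_tree"),
]
model = train_centroids(training)
print(classify([0.78, 0.12, 0.66], model))  # -> "hail_damage"
```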
The machine learning programs may include Bayesian program learning (BPL), voice recognition and synthesis, image or object recognition, optical character recognition, and/or natural language processing—either individually or in combination. The machine learning programs may also include natural language processing, semantic analysis, and/or automatic reasoning. In supervised machine learning, a processing element may be provided with example inputs and their associated outputs, and may seek to discover a general rule that maps inputs to outputs, so that when subsequent novel inputs are provided the processing element may, based upon the discovered rule, accurately predict the correct output. In unsupervised machine learning, the processing element may be required to find its own structure in unlabeled example inputs. In one embodiment, machine learning techniques may be used to extract data about the mobile device or vehicle from device details, mobile device sensors, geolocation information, image data, and/or other data. In one embodiment, a processing element may be trained by providing it with a large sample of phone and/or online credentials with known characteristics or features. Such information may include, for example, fingerprint, device print, verification codes, PBQA, and/or passive voice analysis. Based upon these analyses, the processing element may learn how to identify characteristics and patterns that may then be applied to analyzing sensor data, authentication data, image data, mobile device data, and/or other data. For example, the processing element may learn, with the user's permission or affirmative consent, to identify the user based upon the user's device or login information. The processing element may also learn how to identify different types of environmental changes and associated actionable items based upon differences in the received sensor data. The processing element may further learn how to identify an environmental change and/or actionable item based upon partial or incomplete information and determine a level of certainty that the environmental change and/or actionable item is correct. Additional Exemplary Embodiments In still another aspect, a computer system for detecting a vehicular crash may be provided. The computer system may include at least one processor, sensor, and/or transceiver in communication with at least one memory device. The at least one processor may be programmed to (1) receive data from the at least one sensor; (2) determine that a potential vehicular crash is imminent based upon the received data; and/or (3) perform at least one action to reduce a severity of the potential vehicular crash prior to impact. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. For instance, the data from the at least one sensor may include speed, acceleration, and braking information; and/or image data associated with an area forward of a direction of travel of a vehicle that is acquired by a video recorder or camera mounted on the vehicle. Determining that a potential vehicular crash is imminent may be based upon applying object recognition techniques on the image data acquired by the video recorder or camera mounted on the vehicle. Determining that a potential vehicular crash is imminent may further be based upon vehicle speed, acceleration, and braking data, as sketched below. 
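A back-of-the-envelope version of that imminence check is shown below, assuming a constant-deceleration closing model and an illustrative two-second threshold; the distance to the recognized object is taken as already estimated from the image data.

```python
"""Sketch of a time-to-collision (TTC) imminence check from speed, acceleration,
and distance to a recognized object ahead. The model and threshold are assumptions."""
import math


def time_to_collision(distance_m: float, closing_speed_mps: float,
                      closing_accel_mps2: float = 0.0) -> float:
    """Smallest positive t solving distance = v*t + 0.5*a*t^2; inf if no impact occurs."""
    if abs(closing_accel_mps2) < 1e-9:
        return distance_m / closing_speed_mps if closing_speed_mps > 0 else math.inf
    a, b, c = 0.5 * closing_accel_mps2, closing_speed_mps, -distance_m
    disc = b * b - 4 * a * c
    if disc < 0:
        return math.inf
    roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf


def crash_imminent(distance_m: float, closing_speed_mps: float,
                   closing_accel_mps2: float = 0.0, threshold_s: float = 2.0) -> bool:
    return time_to_collision(distance_m, closing_speed_mps, closing_accel_mps2) <= threshold_s


# 25 m to a stopped truck while closing at 18 m/s (~40 mph) and braking at 3 m/s^2.
ttc = time_to_collision(25.0, 18.0, -3.0)
print(f"time to collision: {ttc:.2f} s, imminent: {crash_imminent(25.0, 18.0, -3.0)}")
```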
Determining that a potential vehicular crash is imminent may be based upon processor analysis of vehicle speed and acceleration data, and the image data acquired by a vehicle mounted video recorder or camera. The processor may generate a model of the potential vehicular crash based upon the received data to further analyze. The processor may also determine a position and a direction of facing of at least one occupant of the vehicle and use the model to determine an advantageous direction of facing for the at least one occupant. If one of the occupants is not facing in an advantageous way, the processor may generate a sound through the audio system to cause the at least one occupant to change to the advantageous direction of facing. The processor may also use the model to determine a position or orientation of the vehicle to reduce damage to at least one of one or more occupants of the vehicle and the vehicle itself. The processor may then instruct the vehicle to adjust position to the determined position. This may be done by instructing the vehicle to turn at least one wheel to adjust position and/or instructing the vehicle to raise or lower at least a portion of the vehicle. The processor may also instruct the vehicle to inflate a portion of the vehicle to redirect or lessen the impact. Determining that a potential vehicular crash is imminent may be based upon processor analysis of vehicle speed and acceleration data, and analysis of the image data acquired by a vehicle mounted video recorder or camera that determines whether an object in a direction of travel of the vehicle is within a predetermined or threshold distance for the given vehicle speed and acceleration. The sensor data may be analyzed to estimate a severity of the expected vehicular crash, and the estimated severity of the expected vehicular crash may be transmitted to a remote server via wireless communication or data transmission over one or more radio links or wireless communication channels. The estimated severity of the expected vehicular crash may be determined based upon vehicle speed, acceleration, and braking data acquired from mobile device-mounted sensors and/or vehicle-mounted sensors, and a size and type of the object determined to be in the direction of travel of the vehicle from performing object recognition techniques on the image data captured by one or more vehicle-mounted cameras or video recorders. The type of the object determined to be in the direction of travel of the vehicle may be a compact vehicle, sport-utility vehicle, truck, or semi-truck. The type of the object determined to be in the direction of travel of the vehicle may be a concrete pillar or support, a street sign, traffic light, or other road marking. The type of the object determined to be in the direction of travel of the vehicle may be an animal or a tree. For instance, the sensor data may include vehicle speed, acceleration, and braking information. The sensor data may further include image data of area in a direction of vehicle travel or otherwise forward of the moving vehicle, the image data being acquired from one or more video recorders or cameras mounted on the vehicle, a dashboard of the vehicle, or a mobile device traveling within the vehicle. The method may include analyzing, via the one or more processors, the image data using object recognition or pattern recognition techniques to identify objects forward of the moving vehicle. 
The method may include using the results of the object recognition or pattern recognition techniques performed on the image data to identify type of objects forward of the moving vehicle. The object forward of the moving vehicle identified may be a compact vehicle, sport utility vehicle, or a truck. The object forward of the moving vehicle identified may be a concrete pillar or support, a road sign, a traffic light, or mile marker. The object forward of the moving vehicle identified may be an animal or a tree. Determining, via the one or more processors, that a vehicle collision is imminent (or likely imminent) based upon analysis of the sensor data may include processor analysis of vehicle speed and acceleration data, and determining whether or not an object shown in image data is within a predetermined distance of the vehicle. The one or more processors may determine that based upon the sensor data (such as vehicle speed, acceleration, and braking) and distance to an object shown in the image data that a collision will occur in 0.5 seconds, 1 second, 2 seconds, 3 seconds, etc. For instance, a processor may determine that a vehicle collision is imminent if it is likely to occur within 1-3 seconds. Determining, via the one or more processors, an estimated severity of the vehicle collision based upon analysis of the sensor data may include processor analysis of vehicle speed and acceleration data, and determining a size and type of an object shown in image data forward of a direction of travel of the vehicle. Determining, via the one or more processors, an estimated severity of the vehicle collision based upon analysis of the sensor data may include processor analysis of vehicle speed and acceleration data, and determining a size and type of an object shown in image data forward of a direction of travel of the vehicle, and a distance to the object. Determining, via the one or more processors, whether the estimated severity is above a predetermined threshold may include estimating an amount of vehicle damage from the vehicle collision and estimating whether or not the vehicle will be drivable or not. ADDITIONAL CONSIDERATIONS As will be appreciated based upon the foregoing specification, the above-described embodiments of the disclosure may be implemented using computer programming or engineering techniques including computer software, firmware, hardware or any combination or subset thereof. Any such resulting program, having computer-readable code means, may be embodied or provided within one or more computer-readable media, thereby making a computer program product, i.e., an article of manufacture, according to the discussed embodiments of the disclosure. The computer-readable media may be, for example, but is not limited to, a fixed (hard) drive, diskette, optical disk, magnetic tape, semiconductor memory such as read-only memory (ROM), and/or any transmitting/receiving medium, such as the Internet or other communication network or link. The article of manufacture containing the computer code may be made and/or used by executing the code directly from one medium, by copying the code from one medium to another medium, or by transmitting the code over a network. These computer programs (also known as programs, software, software applications, “apps”, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. 
As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The terms "machine-readable medium" and "computer-readable medium," however, do not include transitory signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor. As used herein, a processor may include any programmable system including systems using micro-controllers, reduced instruction set circuits (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are examples only, and are thus not intended to limit in any way the definition and/or meaning of the term "processor." As used herein, the terms "software" and "firmware" are interchangeable, and include any computer program stored in memory for execution by a processor, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are examples only, and are thus not limiting as to the types of memory usable for storage of a computer program. In one embodiment, a computer program is provided, and the program is embodied on a computer readable medium. In an exemplary embodiment, the system is executed on a single computer system, without requiring a connection to a server computer. In a further embodiment, the system is run in a Windows® environment (Windows is a registered trademark of Microsoft Corporation, Redmond, Washington). In yet another embodiment, the system is run on a mainframe environment and a UNIX® server environment (UNIX is a registered trademark of X/Open Company Limited located in Reading, Berkshire, United Kingdom). The application is flexible and designed to run in various different environments without compromising any major functionality. In some embodiments, the system includes multiple components distributed among a plurality of computing devices. One or more components may be in the form of computer-executable instructions embodied in a computer-readable medium. The systems and processes are not limited to the specific embodiments described herein. In addition, components of each system and each process can be practiced independently and separately from other components and processes described herein. Each component and process can also be used in combination with other assembly packages and processes. As used herein, an element or step recited in the singular and preceded by the word "a" or "an" should be understood as not excluding plural elements or steps, unless such exclusion is explicitly recited. Furthermore, references to "example embodiment" or "one embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. The patent claims at the end of this document are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language is expressly recited, such as "means for" or "step for" language being expressly recited in the claim(s). 
This written description uses examples to disclose the disclosure, including the best mode, and also to enable any person skilled in the art to practice the disclosure, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the disclosure is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
11861728
The Figures depict aspects of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternate aspects of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION In general, data modeling is used in various contexts to assess risk in insurance, finance, and other industries and professions. For example, in life insurance assessments, data models may incorporate the analysis of mortality data, the production of life tables, and the application of compound interest. As another example, health insurance modeling may focus on the analysis of rates of disability, morbidity, mortality, fertility, and other factors. The systems and methods of the present disclosure offer a variety of data science and statistical data modeling methods. Generally, a data model may assess the linear or non-linear relationship between a response variable (i.e., a dependent variable), and a set of predictors (i.e., input or independent variables). For example, in property and casualty insurance ratemaking applications, the response variable may be one of: claim frequency, claim severity, pure premium, or loss ratio. Additionally, examples of ratemaking predictors may be: type of vehicle, age, or marital status for auto insurance; and construction type, building age, or amount of insurance (AOI) for homeowners insurance. According to some embodiments, techniques and platforms for building, managing, and combining data models are discussed. The techniques and platforms may build data models using a model build partition and a model assessment partition. The model build partition may incorporate a set of data transformation and modeler techniques, and the model assessment partition may incorporate a model level comparison and a variable level comparison. According to some embodiments, the systems and methods may be supported by a server computer, and data may be uploaded to and stored on the server. Additionally, the server may support client login, for example using a username and password, or other techniques. The systems and methods offer numerous benefits. In particular, the systems and methods effectively and efficiently enable users to input data modeling parameters, and automatically populate applications with the requisite programming code that reflects the inputted parameters. The systems and methods further employ data model combining and selecting techniques that effectively and efficiently identify accurate and useful data models for subsequent use. It should be appreciated that additional benefits are envisioned and realized. Exemplary Model Building Techniques FIG.1is an illustration of a technique100for building a single data model, according to certain aspects. As illustrated inFIG.1, the technique100may include a model build partition101and a model assessment partition102. The model build partition101may include the following components and subcomponents: a structure component103, a data transformation component104(with the following sub-components: binning111and sampling112), and a modeler component107(with the following sub-components: data113, exploratory data analysis (EDA)114, set ref115, variable selection116, benchmark model117, and challenger model comparison118). In one embodiment, setting a reference level115after EDA114in single tool may be a new and innovative approach. 
Also believed to be unique is building a benchmark model117and computing prediction statistics119before building challenger models118within a single tool. The model assessment partition102may include the following components and subcomponents: a model level comparison108(with the following sub-components: prediction statistics119and other plots120), and a variable level comparison109(with the following sub-components: main effects relativity plots121and interaction relativity plots122). Exemplary Structure Component Interface The structure component103may aim to automatically create a modeling folder structure using a programming language. According to some embodiments, a user such as an analyst may identify a location of the data.FIG.2illustrates an exemplary interface200associated with the structure component103. The interface200may include an input window201into which an individual may input a directory where modeling data is to be saved. Additionally, the interface200may include a root directory input window202into which an individual may input a root directory where the modeling output is to be saved. The interface200may further include a coding window205that may be displayed alongside the other windows201,202, or in a different location. In association with the individual inputting information into one or more of the windows201,202, the coding window205may automatically populate with program code that corresponds to the inputted information. For example, the coding window205automatically updates lines of program code for the inputted modeling data directory (“DataRaw”). In this regard, the platform enables automatic program code generation corresponding to the inputted information. The binning sub-component111, as discussed with respect toFIG.1, may aim to combine categories of insufficient data so that the model prediction may be stable/credible. According to some embodiments, the systems and methods may support the following functionalities: displaying a list of variables in a dataset to enable analysts or other individuals to correctly identify variables, transforming options for binned values within the platform, and enabling name assignment for newly-created binned variables (or enabling the platform to assign default names). Exemplary Data Transformation Component In general, the present embodiments may include a data transformation component. The objectives and functionality of the data transformation component may include preparing the data for modeling—which may include binning and random sampling. The objective and functionality of binning may be to combine categories of insufficient data. The new processes within the tool (later referred to as the SMART Tool herein) may include displaying a list of variables in the data set to help analysts correctly identify variables; transforming options for binned values within the tool; and/or allowing name assignment for the newly created binned variable or allowing the SMART Tool to assign default names. The objective and functionality of random sampling may be to divide the data into training and validation data. This division facilitates creation of build and validate datasets. The integration and organization of the data transformations within a single tool is believed to be a new approach to modeling processes. Exemplary Binning Component Interface FIG.3illustrates an example interface300associated with the binning sub-component111of the data transformation component. 
The interface300may include an input window301into which an individual may add variables to be binned. Additionally, the interface300may include a binning input window302into which an individual may specify the following information: the variable, the binning method (e.g., pseudo-quantile binning or bucket binning), a number of bins (e.g., ten (10)), a new variable name, and/or other information. The interface300may further include an output window303into which an individual may input a desired name for the output dataset. The interface300may further include a coding window305that may be displayed alongside the other windows301,302,303, or in a different location. In association with the individual inputting information into one or more of the windows301,302,303, the coding window305may automatically populate with program code that corresponds to the inputted information. For example, the coding window305automatically adds lines of program code for the inputted variables “R_MODEL_YEAR” and “R_TENURE”. For further example, the coding window305automatically updates a line of program code to name the output data set as “PDIL_Bin”. In this regard, the platform enables automatic program code generation corresponding to inputted information associated with variable binning. Exemplary Sampling Component Interface FIG.4illustrates an example interface400associated with the sampling sub-component112of the data transformation component. The interface400may include an input window401into which an individual may specify a dataset to be sampled (as shown: “DATA_OUT.PDIL_BIN”). Additionally, the interface400may include a stratification window402into which an individual may input a parameter(s) by which to stratify. The interface400may further include an options window403into which an individual may specify the following parameters: a sample percent (or a number of rows), a random seed number, and/or other information. The interface400may further include an output window404into which an individual may input a desired name for the output dataset. The interface400may further include a coding window405that may be displayed alongside the other windows401-404, or in a different location. In association with the individual inputting information into one or more of the windows401-404, the coding window405may automatically populate with program code that corresponds to the inputted information. For example, the coding window405automatically updates lines of program code for the inputted dataset to be sampled (“DATA_OUT.PDIL_BIN”). In this regard, the platform enables automatic program code generation corresponding to inputted information associated with data sampling. According to some embodiments, the data sub-component113of the modeler component107facilitates or performs various functionalities. In particular, the data sub-component113enables the identification of modeling data, dynamically selects the target variable and distribution parameters, selects unique identifiers, and identifies the types of explanatory variables, among other features. Exemplary Data Component Interface As shown inFIG.1, the modeler component107may include a data sub-component113. The objectives and functionality of the data sub-component113may include allowing for the identification of modeling data; dynamic selection of the target variable and distribution; selecting unique identifiers; and/or identifying the types of explanatory variables. 
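By way of a non-limiting illustration only, the identification of modeling data and the derivation of the response variable described above may be sketched in code along the following lines. The sketch is written in Python rather than the program code generated within the tool, and the dataset and column names (other than the unique identifier) are hypothetical.

# A minimal sketch, under the assumptions noted above, of declaring the modeling
# data and deriving a Frequency response from a target risk variable and a
# target exposure variable.
import pandas as pd

declaration = {
    "dataset": "pdil_bin.csv",          # hypothetical modeling dataset
    "model_type": "Frequency",          # Frequency, Severity, or Pure Premium
    "target_risk": "CLAIM_COUNT",       # hypothetical target risk variable
    "target_exposure": "EXPOSURE",      # hypothetical target exposure variable
    "distribution": "Poisson",          # e.g., Poisson or negative binomial
    "link": "log",
    "unique_id": "O_POLICY_NUMBER",
    "classification_vars": ["MARITAL_STATUS", "VEHICLE_TYPE"],
    "continuous_vars": ["DRIVER_AGE"],
}

df = pd.read_csv(declaration["dataset"])

# For a Frequency model the response is claim count per unit of exposure.
df["RESPONSE"] = df[declaration["target_risk"]] / df[declaration["target_exposure"]]
print(df[[declaration["unique_id"], "RESPONSE"]].head())

A declaration of this kind may then drive both the generated program code and the folder into which the resulting datasets are written.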
The dynamic selection of target variables and the associated distributions within a single tool is believed to represent a new approach to modeling processes. FIG.5illustrates an exemplary interface500associated with the data sub-component113of the modeler component107. The interface500may include an input window501into which an individual may specify various information and parameters, including: a dataset, a model type, a target risk variable(s), a target exposure variable, a distribution type (e.g., Poisson, negative binomial), a link function, a unique identifier, a set of classification variables, and a set of continuous variables. The interface500may further include a coding window505that may be displayed alongside the window501, or in a different location. In association with the individual inputting information into the window501, the coding window505may automatically populate with program code that corresponds to the inputted information. For example, the coding window505automatically updates a line of program code to reflect the selected Poisson distribution. In this regard, the platform enables automatic program code generation corresponding to inputted information associated with the data sub-component. According to some embodiments, the exploratory data analysis (EDA) sub-component114of the modeler component107facilitates or performs various functionalities. In particular, the EDA sub-component114calculates statistics of the observed actual target variable, automatically stores and manages the univariate analysis results to the corresponding modeling folder, facilitates data error identification, and provides a general overview of the data before a multivariate analysis, among other features. Exemplary EDA Component Interface As shown inFIG.1, the modeler component107may include an exploratory data analysis sub-component114. The objectives and functionality of the EDA sub-component114may include calculating statistics of the observed actual target variable; automatically storing and managing the univariate analysis results to the corresponding modeling folder; facilitating data error identification; and/or providing a general overview of the data before multivariate analysis. The automatic storage and management of univariate analysis within a single tool is believed to represent a new approach to modeling processes. FIG.6illustrates an exemplary interface600associated with the EDA sub-component114of the modeler component107. The interface600may include an input window601into which an individual may specify various information and parameters, including a dataset analysis selection. The input window601may further include information associated with the EDA sub-component, including univariate analysis, interaction detection, and output. The interface600may further include a data display window605that may be displayed alongside the window601, or in a different location. According to some embodiments, the data display window605may display various data associated with the EDA sub-component114. According to some embodiments, the variable selection sub-component116of the modeler component107facilitates or performs various functionalities. In particular, the variable selection sub-component116may facilitate the incorporation of multiple variable selection techniques into a single process, output selection results in a summarized table format, and automatically store and manage the variable selection results within a tool's data structure, among other features. 
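As a non-limiting sketch of one variable selection technique of the kind summarized above, the following Python example performs a simple forward selection, adding at each step the candidate effect whose inclusion most improves AIC. A Poisson frequency model with a log-exposure offset is assumed, and the column names are hypothetical.

# A minimal forward-selection sketch under the assumptions noted above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("pdil_bin.csv")                      # hypothetical modeling dataset
candidates = ["C(MARITAL_STATUS)", "C(VEHICLE_TYPE)", "DRIVER_AGE"]
selected = []

def fit_aic(terms):
    formula = "CLAIM_COUNT ~ " + (" + ".join(terms) if terms else "1")
    fit = smf.glm(formula, data=df, family=sm.families.Poisson(),
                  offset=np.log(df["EXPOSURE"])).fit()
    return fit.aic

best_aic = fit_aic(selected)                          # intercept-only baseline
improved = True
while improved and candidates:
    scores = {term: fit_aic(selected + [term]) for term in candidates}
    term, aic = min(scores.items(), key=lambda kv: kv[1])
    improved = aic < best_aic
    if improved:                                      # keep the effect only if AIC improves
        selected.append(term)
        candidates.remove(term)
        best_aic = aic

print("Selected effects:", selected, "AIC:", round(best_aic, 1))

Backward, stepwise, lasso, variance-based, or random forest selection, as described elsewhere herein, may be substituted for the selection loop, with the results written to a summary table in the same manner.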
Exemplary Variable Selection Component Interface As shown inFIG.1, the modeler component107may include a variable selection sub-component116. The objectives and functionality of the variable selection sub-component116may include facilitating the incorporation of multiple variable selection techniques into a single process; outputting the selection results in a summarized table format; and/or automatically storing and managing the variable selection results within the tool's data structure. The automatic storage and management of variable selection analysis within a single tool is believed to represent a new approach to modeling processes. FIG.7illustrates an exemplary interface700associated with the variable selection sub-component116of the modeler component107. The interface700may include an input window701into which an individual may specify various information and parameters, including: a dataset selection, a set of model effects, and a set of methods. The interface700may further include a data display window705that may be displayed alongside the window701, or in a different location. According to some embodiments, the data display window705may display various data associated with the variable selection sub-component116, including a summary table of the analysis. According to some embodiments, the challenger model sub-component118of the modeler component107facilitates or performs various functionalities. In particular, the challenger model sub-component118may facilitate the creation of generalized linear models and other statistical and data science models, output and organize parameter estimates in an easily-interpretable manner, compute prediction statistics and summarize general information about the model, automatically store and manage modeling results within the tool's structure and create the appropriate output files, among other features. Exemplary Challenger Model Component Interface As shown inFIG.1, the modeler component107may include a challenger model sub-component118. The objectives and functionality of the challenger model sub-component118may include facilitating the creation of generalized linear models and other statistical and data science models; outputting and organizing parameter estimates in an easily interpretable manner; computing prediction statistics and summarizing general information about the model; and/or automatically storing and managing modeling results within the tool's structure and creating the appropriate output files. The automatic storage and management of model results, parameters, and output within a single tool is believed to represent a new approach to modeling processes. FIG.8illustrates an exemplary interface800associated with the challenger model sub-component118of the modeler component107. The interface800may include an input window801into which an individual may specify various information and parameters, including: a data selection, model iteration information, and a set of model effects. The interface800may further include a coding window805that may be displayed alongside the window801, or in a different location. In association with the individual inputting information into the window801, the coding window805may automatically populate with program code that corresponds to the inputted information. In this regard, the platform enables automatic program code generation corresponding to inputted information associated with the challenger model sub-component118. 
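A minimal, non-limiting sketch of the benchmark and challenger comparison described above may take the following form; it assumes a Poisson frequency model, the build rows flagged by the sampling task, and otherwise hypothetical column names, and it compares the two iterations on their prediction statistics.

# Fit a benchmark model and a challenger model on the build data and compare
# prediction statistics (here AIC and BIC); names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("pdil_bin.csv")
build = df[df["Selected"] == 1]                       # rows flagged by the sampling task
offset = np.log(build["EXPOSURE"])                    # log-exposure offset for frequency

benchmark = smf.glm("CLAIM_COUNT ~ C(MARITAL_STATUS)",
                    data=build, family=sm.families.Poisson(), offset=offset).fit()

challenger = smf.glm("CLAIM_COUNT ~ C(MARITAL_STATUS) + C(VEHICLE_TYPE) + DRIVER_AGE",
                     data=build, family=sm.families.Poisson(), offset=offset).fit()

# Model-level comparison: lower AIC/BIC generally indicates the preferred iteration.
for name, fit in [("benchmark", benchmark), ("challenger", challenger)]:
    print(f"{name}: AIC={fit.aic:.1f}  BIC={fit.bic:.1f}")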
Exemplary Model & Variable Level Comparison As shown inFIG.1, the present embodiments may include a model level comparison component108and a variable level comparison component109. These components may provide a number of prediction statistics which compare the prediction accuracy between model iterations. Discussed further below,FIGS.9to11illustrate prediction statistics associated with the model level comparison, and the main and interaction effects of the variable level comparison, respectively. Exemplary Prediction Statistics Component Interface FIG.9illustrates an exemplary interface900associated with the prediction statistics sub-component119of the model-level comparison108. The interface900may include a model iteration selection window901, a model summary902, and a chart903. The model iteration selection window901enables an individual to select one or more model iterations for mapping or charting. The model summary902indicates various versions as well as data associated therewith (e.g., AIC, BIC, Lift, etc.). The chart903displays relevant data for the selected model iterations (as shown: 1_Val and 6_Val). Exemplary Main Effects Component Interface FIG.10illustrates an exemplary interface1000associated with the main effects sub-component121of the variable-level comparison109. The interface1000may include an effect selection window1001, a level table1002, and a relativity plot1003. The effect selection window1001may enable an individual to select one or more model effects (as shown: marital status). The level table1002may display data associated with the selected effect (as shown: a percentage breakdown of married versus single people). The relativity plot1003may display a relativity plot associated with the selected effect. The relativity line display1004allows specification of multiple model iterations to be displayed along with confidence intervals. Exemplary Relativity Plots Component Interface FIG.11illustrates an exemplary interface1100associated with the interaction relativity plots sub-component122of the variable-level comparison109. The interface1100may include a selection window1101, a level table1102, and a relativity plot1103. The selection window1101may enable an individual to select an interaction and an iteration. The level table1102may indicate various data associated with certain levels. The relativity plot1103may display a relativity plot associated with the selections. Exemplary Multiplicative Technique FIG.12Ais an illustration of an exemplary multiplicative technique1200for combining two models. In particular, the technique1200illustrates a first model1201and a second model1202, each of which may be generated or built according to the technique100as discussed with respect toFIG.1. The technique1200includes a multiplicative combiner1203which may take, as inputs, the first model1201and the second model1202, and may output a combined model1204.FIG.12Adepicts an optional refitting of the model1204after combination. According to some embodiments, the multiplicative combiner1203may multiply the values included in the first model1201and the second model1202to generate the combined model1204. Generally, the multiplicative process addresses theoretical issues associated with combining distributions in a multiplicative way including the appropriate reconstruction of combined parameter estimates. In certain embodiments, the theoretical mathematical constructs of the combined distribution are automatically created. 
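By way of a non-limiting illustration of the multiplicative combination ofFIG.12A, the predictions of a frequency model and a severity model may be joined on a unique identifier and multiplied into a combined pure premium prediction. The sketch below uses Python and hypothetical file and column names; the unique identifier mirrors the example used elsewhere herein.

# Multiplicative combination sketch: pure premium = frequency x severity.
import pandas as pd

freq = pd.read_csv("ROOT/FREQ/champion_scores.csv")   # columns: O_POLICY_NUMBER, pred_frequency
sev = pd.read_csv("ROOT/SEV/champion_scores.csv")     # columns: O_POLICY_NUMBER, pred_severity

combined = freq.merge(sev, on="O_POLICY_NUMBER", how="inner")

# Because both component models use a log link, multiplying the predictions is
# equivalent to adding their linear predictors before exponentiating.
combined["pred_pure_premium"] = combined["pred_frequency"] * combined["pred_severity"]

combined.to_csv("ROOT/PP/combined_scores.csv", index=False)

An optional refitting step, as depicted inFIG.12A, may then re-estimate the combined model on the pure premium response.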
Additionally, the technique1200includes a champion selector1205that may facilitate the selection, storage, and management of a champion model. Exemplary Additive Technique FIG.12Bis an illustration of an exemplary additive technique1210for combining two models. In particular, the technique1210illustrates a first model1211and a second model1212, each of which may be generated or built according to the technique100as discussed with respect toFIG.1. The technique1210includes an additive combiner1213which may take, as inputs, the first model1211and the second model1212, and may output a combined model1214.FIG.12Bdepicts an optional refitting of the model1214after combination. According to some embodiments, the additive combiner1213may add the values included in the first model1211and the second model1212to generate the combined model1214. Generally, the additive process addresses theoretical issues associated with combining distributions in an additive way including the appropriate reconstruction of combined parameter estimates. In certain embodiments, the theoretical mathematical constructs of the combined distribution are automatically created. Additionally, the technique1210includes a champion selector1215that may facilitate the selection, storage, and management of a champion model. Exemplary Multiplicative Model Combination Interface In general, the present embodiments may include a multiplicative combiner for the multiplicative process, such as shown inFIG.12A. The objectives and functionality of this combiner may include allowing analysts to combine multiple single models into a combined model. The multiplicative process addresses theoretical issues with combining distributions in a multiplicative way including the appropriate reconstruction of combined parameter estimates. Theoretical mathematical constructs of the combined distribution may be created automatically. The combiner may also facilitate selection, storage, and management of a champion model. More specifically,FIG.13Aillustrates an exemplary interface1300associated with the multiplicative model combination technique. The interface1300may include an input window1301into which an individual may specify various information and parameters, including: a frequency model dataset, a severity model dataset, a unique identifier, an exposure variable, model iteration numbers for each of the frequency and severity models, and an output dataset name. The interface1300may further include a coding window1305that may be displayed alongside the window1301, or in a different location. In association with the individual inputting information into the window1301, the coding window1305may automatically populate with program code that corresponds to the inputted information. For example, when the individual enters the unique identifier in the window1301, the coding window1305may automatically generate program code indicating the entered unique identifier (as shown: “O_POLICY_NUMBER”). In this regard, the platform enables automatic program code generation corresponding to inputted information associated with the multiplicative model combination technique. Exemplary Additive Model Combination Interface In general, the present embodiments may include an additive combiner for the additive process, such as shown inFIG.12B. The objectives and functionality of this combiner may include allowing analysts to combine multiple single models into a combined model. 
The additive process addresses theoretical issues with combining distributions in an additive way including the appropriate reconstruction of combined parameter estimates. Theoretical mathematical constructs of the combined distribution may be created automatically. The combiner may also facilitate selection, storage, and management of a champion model. More specifically,FIG.13Billustrates an exemplary interface1310associated with the additive model combination technique. The interface1310may include an input window1311into which an individual may specify various information and parameters, including: a modeling dataset, a unique identifier, an exposure variable, a set of component models, and an output dataset name. The interface1310may further include a coding window1315that may be displayed alongside the window1311, or in a different location. In association with the individual inputting information into the window1311, the coding window1315may automatically populate with program code that corresponds to the inputted information. For example, when the individual enters the unique identifier in the window1311, the coding window1315may automatically generate program code indicating the entered unique identifier (as shown: “O_POLICY_NUMBER”). In this regard, the platform enables automatic program code generation corresponding to inputted information associated with the additive model combination technique. Exemplary Champion Model Selection Interface The present embodiments may facilitate selection of a champion model. Champion model results may be stored and managed within the application structure. Work files, temporary models and output may be automatically managed and cleaned up as needed. The integration of folder structure and automated processes for evaluation, cleanup and management of the model results within a single tool is believed to represent a new approach to modeling processes. FIG.13Cillustrates an exemplary interface1320associated with selecting a champion model. The interface1320may include an input window1321into which an individual may specify various information and parameters, including an iteration number for a champion model. The interface1320may further include a coding window1325that may be displayed alongside the window1321, or in a different location. In association with the individual inputting information into the window1321, the coding window1325may automatically populate with program code that corresponds to the inputted information. For example, when the individual enters the iteration number in the window1321, the coding window1325may automatically generate program code indicating the entered iteration number (as shown: “1”). In this regard, the platform enables automatic program code generation corresponding to inputted information associated with the champion model selection. Exemplary Method of Enabling Management of Data Models FIG.14depicts a block diagram of an exemplary computer-implemented method1400of enabling management of data models. In particular, the method1400may be associated with building one or more models, combining the one or more models, and selecting a champion model. According to some embodiments, the method1400may be performed by a computing device, such as a server computer, configured with or configured to connect to a user interface, where a user or individual may interact with the user interface. 
It should be appreciated that the functionalities of the method1400are exemplary, and that additional or alternative functionalities are envisioned. The method1400may begin when the computing device generates a model build partition. In particular, the computing device may enable (block1405) the user to input, via the user interface, a storage location where one or more of the following may be stored: data, a set of inputs, and a set of model outputs. In some embodiments, the storage location may be local to the computing device or to another device (e.g., within a distributed database). The computing device may further enable (block1410) the user to input, via the user interface, data to be partitioned and/or a set of variables to be binned. In particular, the computing device may enable the user to input, for each of the set of variables, (i) a binning technique, (ii) a number of bins, and/or (iii) a binned value. The computing device may enable (block1415) the user to input, via the user interface, a set of identifications for (i) at least one of a training dataset and a validation dataset, and/or (ii) modeling data. According to certain embodiments, the modeling data may be associated with one or more different modeling techniques, including GLM, GAM, ELM, and/or others. In one implementation, the set of identifications for the modeling data may include a model type, a distribution, a link function, and/or a unique identifier. In an additional or alternative implementation, the set of identifications for the modeling data may include a target risk variable and a target exposure variable. The computing device may enable (block1420) the user to input, via the user interface, a set of selections associated with (i) an EDA, (ii) a variable selection, (iii) a set of model methods, (iv) a set of model ensemble processes, and/or (v) a challenger model comparison. In some embodiments, the set of selections associated with the EDA may include an input of whether to run the EDA using the validation dataset and the training dataset (i.e., the entire dataset), or using the training dataset. Further, the set of selections associated with the variable selection may include (i) whether to run the variable selection using the entire dataset or using the validation dataset, (ii) a set of model effects, and/or (iii) a set of variable selection techniques. Alternatively or additionally, the set of selections associated with the modeling methods may include (i) whether to generate the modeling output using the entire dataset or using the training dataset, (ii) a model iteration identification, and/or (iii) a set of model effects. The computing device may optionally enable (block1425) the user to input, via the user interface, (i) a stratification selection, (ii) a sample percent, and/or (iii) a random seed. The computing device may generate (block1430) the modeling output according to the model build partition. In some embodiments, the modeling output may be stored in the storage location inputted in block1405, for access by the computing device and/or other devices. The computing device may display (block1435), in the user interface, a set of results associated with generating the modeling output, where the set of results may include (i) a set of model level results, and/or (ii) a set of variable level results. 
In some embodiments, the set of model level results may include a set of prediction statistics, and the set of variable level results may include a set of main effects and/or a set of interaction relativity plots. In certain embodiments, the modeling output may include a first model output and a second model output (and optionally additional model outputs). The computing device may combine (block1440) multiple model outputs (i.e., the first model output and the second model output) using either an additive technique or a multiplicative technique, to generate a combined model(s). Additionally, the computing device may select (block1445) a champion model from any of the initial model outputs and the combined model(s). In some embodiments, the computing device may select the champion model according to various factors and parameters. Exemplary Computing Device FIG.15illustrates a hardware diagram of an exemplary computing device1510in which the functionalities as discussed herein may be implemented. In particular, the computing device1510may support the model building, comparing, and selecting functionalities as discussed herein. The computing system1510may include a processor1559as well as a memory1556. The memory1556may store an operating system1557capable of facilitating the functionalities as discussed herein as well as a set of applications1551(i.e., machine readable instructions). For example, one of the set of applications1551may be a modeling application1552configured to facilitate various functionalities discussed herein. It should be appreciated that one or more other applications1553are envisioned. The processor1559may interface with the memory1556to execute the operating system1557and the set of applications1551. According to some embodiments, the memory1556may also include modeling data1558, such as modeling data that may be used by the modeling application1552. The memory1556may include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), erasable programmable read-only memory (EPROM), random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), and/or other hard drives, flash memory, MicroSD cards, and others. In one implementation, the computing device1510may interface with external storage, such as one or more databases. Additionally or alternatively, the memory1556(and/or any external storage) may be included as part of a distributed database. The computing system1510may further include a communication module1555configured to communicate data via one or more networks1520. According to some embodiments, the communication module1555may include one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and configured to receive and transmit data via one or more external ports1554. For example, the communication module1555may receive, from an external electronic device, various datasets, modeling parameters, and/or the like. The computing device1510may further include a user interface1562configured to present information to a user and/or receive inputs from the user. As shown inFIG.15, the user interface1562may include a display screen1563and I/O components1564(e.g., ports, capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs). 
According to some embodiments, a user may access the computing device1510via the user interface1562to review information, make changes, input modeling parameters, and/or perform other functions. In some embodiments, the computing device1510may perform the functionalities as discussed herein as part of a “cloud” network or may otherwise communicate with other hardware or software components within the cloud to send, retrieve, or otherwise analyze data. In general, a computer program product in accordance with an embodiment may include a computer usable storage medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having computer-readable program code embodied therein, wherein the computer-readable program code may be adapted to be executed by the processor1559(e.g., working in connection with the operating system1557) to facilitate the functions as described herein. In this regard, the program code may be implemented in any desired language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via Golang, Python, Scala, C, C++, Java, Actionscript, Objective-C, Javascript, CSS, XML). In some embodiments, the computer program product may be part of a cloud network of resources. Smart Tool Overview The present embodiments may relate to, inter alia, a SMART (Statistical Modeler using Advanced Ratemaking Techniques) Tool, or Application, that may include two graphical-user interfaces (GUIs) that allow the user to model ratemaking information (e.g., frequency, severity, and/or pure premium). The SMART Tool may contain several custom tasks that help the user to produce a finished model. The SMART Tool, in one aspect, is designed to help users who may not have much SAS or other coding knowledge build a successful pricing model. However, being somewhat familiar with both environments (SAS Studio and Visual Analytics) and basic ratemaking principles, especially those covered in CAS Exam 5 Basic Ratemaking, may be helpful in using the SMART Tool to its full potential. A. Generalized Linear Models (GLMs) GLMs model the linear relationship between a response, or dependent, variable, and a set of predictors, also called input or independent variables. In property and casualty insurance ratemaking applications, the response variable may usually be one of the following: claim frequency, claim severity, pure premium, or loss ratio. Examples of ratemaking predictors are type of vehicle, age, or marital status for personal auto insurance; construction type, building age, or amount of insurance (AOI) for homeowners insurance. The GLM relationship is written as the following: g(μi)=β0+β1xi1+β2xi2+ . . . +βpxip, where g(μi) is the link transformation applied to the mean μi, β0 is the intercept, and βi is the coefficient applied to each xi. For ratemaking, a log link is applied, yielding multiplicative factors, which is the most commonly used rating plan in ratemaking, and where μi=exp(β0+β1xi1+β2xi2+ . . . +βpxip)=e^β0×e^β1xi1× . . . ×e^βpxip. B. Generalized Additive Models (GAMs) GAMs are models in which relationships between the individual predictors and the dependent variable follow smooth patterns that may be linear or nonlinear. The smooth relationships may be estimated simultaneously to predict the expected value of a response variable Y according to the following: g(E(Y))=β0+f1(x1)+f2(x2)+ . . . +fm(xm). 
In this equation, the response variable, Y, is related to a set of predictor variables, xi, where E(Y) denotes the expected value and g denotes the link function that links the expected value to the predictor variables xi. A set of functions fi denote functions with a specified parametric form (e.g., polynomial), non-parametric form, or semi-parametric form, that are estimated by non-parametric means. Generally, GAMs enable regularization, interpretability, flexibility, and automation. Accordingly, GAMs provide a regularized and interpretable solution, especially in situations in which the model contains nonlinear effects. C. Ensemble Learning Methods (ELMs) ELMs employ multiple learning algorithms using a finite set of alternative models to assess predictive performance. Generally, the goal of ELMs is to combine multiple hypotheses to form a hopefully better hypothesis. ELMs may be implemented using various algorithms, such as decision trees (e.g., Random Forest), and may be implemented sequentially or in parallel. Additionally, ELMs may be of various types, including: Bayes optimal classifier, bootstrap aggregating, boosting, Bayesian parameter averaging, Bayesian model combination, bucket of models, and stacking. Because ELMs employ multiple learning algorithms, the results may be more accurate than those of single learning algorithms, as the results may average out biases, reduce variance, and reduce instances of overfitting. D. Modeler The entire process may be executed on a dedicated server. Data may be obtained from various sources, but in order to use the SMART Tool application, the data should preferably be uploaded and stored on the server. The SMART Tool may allow the user to custom build models and perform their statistical analysis on the predictors they select. Two exemplary process flows described herein are suggestions on how to use the SMART Tool to its full potential. As detailed further herein,FIG.1and associated text explains the process to build a single model, which includes a model build partition and a model assessment partition.FIGS.12A and12Band associated text explain the process to build a combined model, withFIG.12Afocusing on the multiplicative process, andFIG.12Bfocusing on the additive process. In one embodiment, visualization may take place in SAS Visual Analytics or other program, which may be on the dedicated server. Datasets created in the Modeler may be automatically pushed to the server with each iteration performed. E. Server Files and Folders Creating Files and Folders may be considered Task A in one embodiment. The server's Files and Folders, shown inFIG.18, may be where the permanent files are located. This may be where a user will access the SMART Tool or application tasks. The SMART Tool or application may also create a ROOT directory in the first step to hold the datasets and files created by each task. Libraries may be where the datasets are stored. The SMART Tool or application may also create several libraries, or folders/files, that hold datasets created from, or during, the modeling process. A process flow may be used to visualize the order of the tasks. In one embodiment, the user may use a combination of tasks to create their own customized process flow. For instance, to create a custom process flow, the user may be able to click an icon and select a Process Flow tab or button from a dropdown list or menu. After which, a user may drag and drop various tasks to add them to a new process flow that may be graphically depicted. 
Also, the user may be allowed to remove a task, such as by clicking upon an icon and hitting delete. In order to have accurate models, the data may need to be properly cleaned and prepared beforehand. A “response variable” may be calculated from selected target risk variables and target exposure variables in the tool prior to building a statistical model. Exemplary response variables may include, for insurance-related embodiments, calculated Frequency, calculated Severity, and/or calculated Loss Ratio. It should be noted that users should make sure the variable types they are using are correct. For example, if the user wants to use a numeric variable as a categorical variable in the modeling, then they will need to change its variable type to a categorical variable before using the Modeler. Otherwise, the variable will be filtered out, or be modeled incorrectly due to the underlying code. F. Define Libraries and Create Folders The first step in building a model (Task A) may be to tell the SMART Tool or application where the data is located, and where to store new data. This task may ask for two user inputs, as shown inFIG.19. The first user input may be a DATA_IN (SOURCE) Directory. This may be the server path where the input dataset is located. The second user input may be a DATA_OUT (ROOT) Directory. This may be the server path where the output datasets created by the tool are located. If the indicated directory does not exist, this task will create one. Datasets from two potential Task B's (such as Random Sample and Variable Binning) may appear here. In one embodiment, the ROOT Directory may have five (5) subdirectories for insurance-related embodiments: (1) FREQ: holds frequency datasets and associated files; (2) SEV: holds severity datasets and associated files; (3) PP: holds pure premium datasets or combined risk premium datasets and associated files; (4) FINAL: holds the champion models selected; and/or (5) TEMP: temporary files are kept here, such as the intermediate dataset from EDA. This task may also create SAS or other libraries with corresponding names. This way, datasets located under each subdirectory may also be located in the library of the same name. It should be noted that this task may need to be run once for each component, and each time the user starts a SAS Studio session. For example, if the user is building frequency and severity models using the same dataset, Task A (which may include defining libraries and/or creating files and folders) needs to be run only once. If the user logs out or desires to build a different component, they will need to run Task A again. G. Variable Binning Variable Binning may be an optional Task B, but is recommended. Binning creates groups for continuous variables that may facilitate modeling. As examples, “pseudo-quantile binning” may return an equal number of results per bin. Currently the default option, pseudo-quantile is an approximation of quantile binning, and returns similar results. “Bucket binning” may create bins of equal width. And, “winsorized binning” may be similar to bucket binning except that the tails are cut off to ensure a smoother binning result and remove outlier influence. H. Random Sampling Like the previous task, Random Sampling is another optional Task B that is recommended, but not required. The user may choose to use only one Task B, or both in any order. The random sampling task may be based upon stratified sampling. 
The data may be partitioned into two groups based upon the selected stratifying variable(s), then a sample from each group may be taken based upon a predefined portion. In one embodiment, the sampled variables may be indicated in a “Selected” column created in this task by assigning each value either a “0” or “1”, and the GLM (or GAM, ELM, or the like) created or programmed by the system may use the sampled data to do the modeling. Further, a random seed may be used to specify the initial seed for random number generation. This feature may be useful if the user wants to compare or replicate several sampling attempts later by making sure the initial seed is the same for all attempts. I. Model Declaration The Model Declaration task may be used to specify the target risk variable, target exposure variable, model type (e.g., Frequency, Severity, or Pure Premium for insurance-related embodiments), and predictors and/or interaction variables used in the Variable Selection task. A Response Variable may be calculated from a target risk variable and/or a target exposure variable. In insurance-related embodiments, in a Frequency Model, the Response Variable may be Claim Count per exposure, or a frequency variable. In a Severity Model, the Response Variable may be Loss Amount per event, or a severity variable. In a Pure Premium Model, the Response Variable may be Loss Amount per exposure, or a pure premium variable. A Pure Premium Model may also be used to refit Pure Premium models previously created as a combination of frequency and severity models. A Weight Variable may allow users to give more ‘weight’ to rows that carry greater risk. In a Frequency Model, usually Exposure (i.e., Earned House Years) may be used, and the Response and Weight variables may become Frequency=Claim Count/Exposure. In a Severity Model, Claim Count may be used, and Severity=Loss/Claim Count. In a Pure Premium Model, Exposure (EHY) may be used, and PP=Loss/Exposure. In some embodiments, a drop down menu may be used to select the specified model type. This feature may be used to direct output datasets to the right folder when they are modeled. The user may select which distribution best fits the data. For instance, Poisson may be used for Frequency. Negative Binomial may also be used for Frequency. Gamma may be used for Severity and/or Pure Premium. Tweedie may be used for Pure Premium, and may use a combination of Poisson (Frequency) and Gamma (Severity). A link function may be used that provides flexibility in transforming the equation and relating the model prediction to its predictors. A Log Link may have the property of producing a multiplicative structure, making it the most intuitive link function to use in ratemaking. A unique identifier may be used to identify the correct rows to merge together in future tasks, and may use a Policy Number or another similar unique variable. Variable Selection is highly recommended to ensure the model is using statistically valid predictors. All potential predictors may be selected to be used in the model, including those to be used for interaction terms. Continuous Variables may be used that are numeric variables that represent a measurement on a continuous scale, such as age or AOI. Offset variables may also be used. For instance, an offset may be a ‘fixed’ variable that is part of the rating plan, but is not given a coefficient in the GLM, such as a base rate or deductible. The GLM equation is represented as g(μi)=β0+β1xi1+β2xi2+ . . . +βpxip+offset. 
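A minimal, non-limiting sketch of such a declaration for a Severity model, assuming a Gamma distribution, a log link, Claim Count as the weight variable, and a fixed rating term entering as an offset, is shown below; it uses Python's statsmodels rather than the code generated within the tool, and all column and file names are hypothetical.

# Severity GLM sketch under the assumptions noted above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("severity_build.csv")
df = df[df["CLAIM_COUNT"] > 0]                        # severity is defined only where claims exist

sev_model = smf.glm(
    "SEVERITY ~ C(CONSTRUCTION_TYPE) + BUILDING_AGE",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    var_weights=df["CLAIM_COUNT"],                    # weight variable: claims per row
    offset=np.log(df["BASE_RATE"]),                   # 'fixed' rating term with no coefficient
).fit()

# With the log link, exponentiated coefficients read directly as multiplicative relativities.
print(np.exp(sev_model.params))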
Preferably, the offset variable is a continuous variable. Some variables might have a combined effect on the target variable. In other words, the effect of one predictor might depend upon the value of another, and vice-versa. The combined effect of these variables is referred to as an interaction term. A variable selection method may be performed to further narrow down the predictors. The output is a variable selection summary that the user can use. As examples, “backward selection” may start with all predictors, and each step may delete the variable with the least contribution until a stopping condition is reached. “Forward selection” may start with no predictors, and each step may add the most significant variable until a stopping condition is reached. “Stepwise selection” may be used, and may be a modification of forward selection, but effects selected may be removed later. Stepwise selection uses the same methods of adding and removing effects as forward and backward selection, respectively. If at any step an effect is not significant, it may be removed. “Lasso selection” may be used, in which the sum of the absolute values of the regression coefficients may be constrained to be smaller than a specified parameter. “Variance based selection” may identify a group of variables that jointly explain the maximum amount of variance in the response variables. “Random forest selection” may be used, which generates a random forest that evaluates variable importance using decision trees. In some embodiments, the two previously created frequency and severity models may be combined into one Pure Premium model. Additionally or alternatively, Pure Premium models from all components may be combined. For example, the user may create an All Peril Pure Premium Model from separate fire, water, weather, hurricane, etc. Peril Pure Premium Models. Exemplary Process Flow FIG.16illustrates an exemplary computer-implemented process flow1600. The process flow1600may include determining input data location1602, and/or allowing the user to select files or folders with the input data to be input into the model(s) created by the SMART Tool. For instance, in insurance-related embodiments, the input data may include ratemaking input data; customer data; data related to historical or current insurance claims; data related to premiums, discounts, loss, exposures; and other types of input data, including that discussed elsewhere herein. The process flow1600may include setting up file, folder, and/or library structures1604, such as the structures discussed elsewhere herein. The structures may include structures for input and for output (i.e., results generated by the model(s)). In one embodiment, the user may enter input and output folders or files, and the system may automatically build or set up the input and/or output folder and file structures. The process flow1600may include data transformation1606. For instance, variable binning and random sampling may be used to transform the input data. The data may be grouped into appropriate sizes that increase the accuracy of the modeling. Initial data groups may also be combined to increase modeling accuracy. Other transformations may be performed on the input data, including those transformations discussed elsewhere herein. As discussed elsewhere herein, the process flow1600may include exploratory data analysis; variable selection; and model creation1608(e.g., GAM, GLM, ELM, or other data science models). 
The variable selection may determine which variables should be selected. For instance, miles; home, vehicle or customer age; home or vehicle features; smart home features; autonomous or semi-autonomous vehicle features; state features; geographic related features; home or vehicle telematics data features; and/or other data features, including those discussed elsewhere herein, may be selected for various insurance-related or rating models, including those related to auto or homeowners insurance. Turning briefly toFIGS.3to5, for instance, on the left hand side of the user interface, the user may be presented with a series of options or selections. The right hand side depicts the programming code, such as source code, that would have to be typed in by the user if they were not using the user interface. The user interface and process described herein alleviate the need for the user to be a programmer and/or to understand how to write the programming code, such as SAS. For instance, the user selections of options on the left hand side of the user interface automatically populate the programming code on the right hand side of the user interface. The present embodiments allow users to build code by selecting options or icons—with the resulting code being free from programming errors. In other words, the user interface acts as a template to efficiently or effectively build error-free code, such as SAS code, without any programming knowledge. As a result, the user can focus on the options or selections that they would like in their model, and not on the actual coding itself. Machine learning and/or artificial intelligence techniques may also be used to increase the accuracy of the models created, including GLM, GAM, and/or ELM models. For instance, as new claim data becomes available, machine learning techniques may be used to further revise the models to account for the new information. This may be especially useful as newer technologies become available to customers—such as new makes and models of vehicles, including electric or hybrid vehicles, or new autonomous or semi-autonomous features for auto insurance; or new smart or intelligent home features, or new types of construction or construction materials for homeowners or renters insurance. The process flow1600may include displaying the modeling results1610, such as on a display screen. After the GLM or other model (e.g., GAM or ELM) is created, and the input data is analyzed or processed by the GLM or other model (e.g., GAM or ELM), a processor may translate the model output. Analysis may then be performed to determine how “good” the model is. For instance, confidence levels may be estimated. Also, for auto insurance models, frequency and severity models, and/or other models (e.g., property damage or bodily injury models) may be combined into ensembles in order to increase accuracy. Certain embodiments may relate to, inter alia, insurance, banking, or financial services. As an example,FIG.17illustrates an exemplary modeling folder structure for an auto insurance-related embodiment. The SMART Tool may automatically create a modeling folder structure using the programming language created by the SMART Tool. Analysts may identify the location of data within the tool using the structure. As shown inFIG.17, the file or folder structure may include a high level folder associated with Coverage, and sub-folders may include Frequency, Severity, Pure Premium, Final, Temp, and/or other sub-folders. 
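As a non-limiting sketch of the folder structure creation reflected inFIG.17, the ROOT directory and its subdirectories may be created programmatically along the following lines; the directory paths are hypothetical, while the subdirectory names mirror those described above.

# Create the ROOT directory and its five subdirectories if they do not exist.
from pathlib import Path

DATA_IN = Path("/modeling/DataRaw")        # hypothetical location of the input dataset
ROOT = Path("/modeling/Coverage")          # hypothetical root for modeling output

SUBDIRS = ["FREQ", "SEV", "PP", "FINAL", "TEMP"]

ROOT.mkdir(parents=True, exist_ok=True)
for name in SUBDIRS:
    (ROOT / name).mkdir(exist_ok=True)     # one folder (and library) per component

print("Input data expected under:", DATA_IN)
print("Created:", [str(ROOT / name) for name in SUBDIRS])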
Additional Exemplary Embodiments In one aspect, a computer-implemented method in a computing device of enabling the management of data models and/or creating models may be provided. The method may include (1) generating, by a computer processor, a model build partition, including enabling a user to input, via a user interface: a storage location, file, folder, or library where a modeling output is to be stored; a set of variables to be binned; a set of identifications and/or user selections for (i) modeling data, and/or (ii) a data model and/or modeling methods to be created and/or programmed by the computer processor; and/or a set of selections associated with (i) an exploratory data analysis (EDA), (ii) a variable selection, (iii) a set of modeling methods, and/or (iv) model ensemble processes; (2) generating, by the computer processor, the modeling output according to the model build partition; and/or (3) displaying, in the user interface, a set of results associated with generating the modeling output, the set of results including: (a) a set of model level results, and/or (b) a set of variable level results. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In one implementation, enabling the user to input the set of variables to be binned may include enabling the user to input, for each of the set of variables, (i) a binning technique, (ii) a number of bins, and/or (iii) a binned value. Generating the model build partition may further include enabling the user to input, via the user interface, (i) a stratification selection, (ii) a sample percent, and/or (iii) a random seed. Enabling the user to input the set of identifications for the modeling data may include enabling the user to input (i) a model type, (ii) a distribution, (iii) a link function, and/or (iv) a unique identifier. Additionally or alternatively, enabling the user to input the set of identifications for the modeling data may include enabling the user to input a target risk variable and/or a target exposure variable. Enabling the user to input the set of selections associated with the exploratory data analysis (EDA) may include enabling the user to input whether to run the EDA using the entire dataset or using the training dataset. Enabling the user to input the set of selections associated with the variable selection may include enabling the user to input (i) whether to run the variable selection using the entire dataset or using the training dataset, (ii) a set of model effects, and/or (iii) a set of variable selection techniques. Enabling the user to input the set of selections associated with the modeling methods may include enabling the user to input (i) whether to generate the modeling output using the entire dataset or using the training dataset, (ii) a model iteration identification, and/or (iii) a set of model effects. Displaying the set of model level results may include displaying, in the user interface, a set of prediction statistics. Displaying the set of variable level results may include displaying, in the user interface, a set of main effects and/or a set of interaction relativity plots. The modeling output may include multiple model outputs, and the method may further include combining the multiple model outputs using either an additive technique or a multiplicative technique. In an implementation, the method may further include, after combining the first model output and the second model output: selecting a champion model. 
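A minimal, non-limiting sketch of champion model selection may compare candidate iterations on their stored prediction statistics and persist the selection into the FINAL folder; the file names and the selection criterion (lowest AIC, then BIC) below are hypothetical.

# Champion selection sketch under the assumptions noted above.
import shutil
import pandas as pd

stats = pd.read_csv("ROOT/PP/prediction_statistics.csv")   # hypothetical columns: iteration, AIC, BIC

champion = stats.sort_values(["AIC", "BIC"]).iloc[0]        # pick lowest AIC, ties broken by BIC
iteration = int(champion["iteration"])
print(f"Champion model: iteration {iteration}")

# Persist the champion so that temporary iterations can later be cleaned up.
shutil.copy(f"ROOT/PP/model_iteration_{iteration}.pickle",
            "ROOT/FINAL/champion_model.pickle")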
In another aspect, a computer system for enabling the management of data models and/or creating a data model may be provided. The system may include a user interface; a memory storing a set of computer-executable instructions; and/or a processor interfaced with the user interface and the memory, and configured to execute the computer-executable instructions to cause the processor to: (1) generate a model build partition, including enabling a user to input, via the user interface: a storage location, file, folder, or library where a modeling output is to be stored; a set of variables to be binned; a set of identifications and/or user selections for (i) modeling data, and/or (ii) modeling methods to be created and/or programmed by the computer processor; and/or a set of selections associated with (i) an exploratory data analysis (EDA), (ii) a variable selection, (iii) a set of modeling method(s); and/or (iv) model ensemble process(es); (2) generate the modeling output according to the model build partition; and/or (3) cause the user interface to display a set of results associated with generating the modeling output, the set of results including: (a) a set of model level results, and/or (b) a set of variable level results. The system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In one implementation, the set of variables to be binned may include, for each of the set of variables, (i) a binning technique, (ii) a number of bins, and/or (iii) a binned value. In one implementation, the set of identifications for the modeling data may include a target risk variable and/or a target exposure variable. The set of selections associated with the exploratory data analysis (EDA) may include whether to run the EDA using the entire dataset and/or using the training dataset. The set of selections associated with the variable selection may include (i) whether to run the variable selection using the entire dataset or using the training dataset, (ii) a set of model effects, and/or (iii) a set of variable selection techniques. The set of selections associated with the data model may include (i) whether to generate the modeling output using the entire dataset or using the training dataset, (ii) a model iteration identification, and/or (iii) a set of model effects. The set of model level results may include a set of prediction statistics. The set of variable level results may include a set of main effects and/or a set of interaction relativity plots. The modeling output may include a first model output and a second model output, and the processor is further configured to: combine the multiple model outputs using either an additive technique or a multiplicative technique. The processor may be further configured to, after combining the multiple model outputs: select a champion model. In another aspect, a computer-implemented method of building a data model and then modeling ratemaking information may be provided. 
The method may include, via one or more processors: (1) accepting user input that identifies a file or folder from which to retrieve ratemaking input data; (2) accepting user input that identifies a file or folder in which to store results generated from a user-defined data model created using user-selections acting upon the ratemaking input data; (3) accepting user selected variables or other selections (such as user selected icons or buttons) related to the user-defined data model to be created; (4) translating the user selected variables or other selections into a programming language code that can be compiled, executed, and/or run by one or more processors to create the user-defined data model that is defined by the user selected variables or other selections (or otherwise creating the programming language code from the user selected variables or other selections to alleviate the need for the user to have coding knowledge); (5) compiling, executing, and/or running the programming language code to create the user-defined data model; (6) feeding the ratemaking input data into the user-defined data model created to model the ratemaking input data and generate modeling results, or otherwise modeling the ratemaking input data and generating the modeling results; and/or (7) displaying the modeling results, or other results generated by the user-defined data model acting upon the ratemaking input data to facilitate modeling ratemaking information. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. For instance, the user selected variables may include claim frequency, claim severity, pure premium, or loss ratio, and the data model may be related to insurance. The user selected variables may include type of vehicle, make, model, vehicle age, driver age, or marital status, and the user-defined data model may be related to auto insurance. The user selected variables may also relate to autonomous or semi-autonomous vehicle features, systems, or technologies. The user selected variables may include construction type, building age, or amount of insurance, and the user-defined data model may be related to homeowners insurance. The user selected variables may also relate to smart or interconnected home features, systems, or technologies. The ratemaking input data and/or the user-defined data model may be related to auto, homeowners, pet, renters, life, health, commercial, personal articles, or other types of insurance. Other embodiments are also envisioned in which the input data and/or user-defined data model are not related to insurance. For instance, the input data and/or user-defined data model may be related to banking or financial services. In another aspect, a computer system configured to build a data model and then model ratemaking information may be provided.
The computer system may include one or more processors and/or memory units, and/or a graphical user interface, configured to: (1) accept user input that identifies a file or folder from which to retrieve ratemaking input data; (2) accept user input that identifies a file or folder to which store results generated from a user-defined data model created using user-selections acting upon the ratemaking input data; (3) accept user selected variables or other selections (such as user selected icons or buttons) related to the user-defined data model to be created; (4) translate the user selected variables or other selections into a programming language code that can be compiled, executed, and/or run by the one or more processors to create the user-defined data model that is defined by the user selected variables or other selections (or otherwise create the programming language code from the user selected variables or other selections to alleviate the need for the user to have coding knowledge); (5) execute or run the programming language code to create the user-defined data model; (6) feed the ratemaking input data into the user-defined data model created to model the ratemaking input data and generate modeling results, or otherwise model the ratemaking input data and generate modeling results; and/or (7) display the modeling results, or other results generated by the user-defined data model acting upon the ratemaking input data to facilitate modeling ratemaking information. The system may include additional, less, or alternate functionality and/or components, including those discussed elsewhere herein. In one implementation, the user selected variables may include claim frequency, claim severity, pure premium, or loss ratio, and the data model may be related to insurance. The user selected variables may include type of vehicle, make, model, vehicle age, driver age, or marital status, and the user-defined data model may be related to auto insurance. Additionally or alternatively, the user selected variables may include construction type, building age, or amount of insurance, and the user-defined data model may be related to homeowners insurance. The ratemaking input data and/or the user-defined data model may be related to auto, homeowners, pet, renters, life, health, commercial, personal articles, or other types of insurance. In another aspect, a computer-implemented method of building a data model and then model input data may be provided. 
The method may include, via one or more processors: (1) accepting user input that identifies a file or folder from which to retrieve input data; (2) accepting user input that identifies a file or folder to which store results generated from a user-defined data model created using user-selections acting upon the input data; (3) accepting user selected variables or other selections (such as user selected icons or buttons) related to the user-defined data model to be created; (4) translating, or otherwise converting, the user selected variables or other selections into a programming language code (such as the object code shown in some of the Figures herein) that can be compiled, executed, and/or run by one or more processors to create the user-defined data model that is defined by the user selected variables or other selections (or otherwise creating the programming language code from the user selected variables or other selections to alleviate the need for the user to have coding knowledge); (5) compiling, executing, and/or running the programming language code to create the user-defined data model; (6) feeding the input data into the user-defined data model created to model the input data and generate modeling results, or otherwise modeling the input data and generating the modeling results; and/or (7) displaying the modeling results, or other results generated by the user-defined data model acting upon the input data to facilitate modeling input data. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In one implementation, the input data may be ratemaking input data, and the user selected variables may include claim frequency, claim severity, pure premium, or loss ratio, and the data model may be related to insurance. The input data may be ratemaking input data, and the user selected variables may include type of vehicle, make, model, vehicle age, driver age, or marital status, and the user-defined data model may be related to auto insurance. The input data may be ratemaking input data, and the user selected variables may include autonomous or semi-autonomous vehicle feature, system, or technology, and the user-defined data model may be related to auto insurance. The input data may be ratemaking input data, and the user selected variables may include construction type, building age, or amount of insurance, and the user-defined data model may be related to homeowners insurance. The input data may be ratemaking input data, and the user selected variables may include smart or intelligent home system, technology, or feature, and the user-defined data model may be related to homeowners insurance. The input data may be ratemaking input data, and the ratemaking input data and/or the user-defined data model may be related to auto, homeowners, pet, renters, life, health, commercial, personal articles, or other types of insurance. In another aspect, a computer system configured to build a data model and then model input data may be provided. 
The computer system may include one or more processors, and/or a graphical user interface, configured to: (1) accept user input that identifies a file or folder from which to retrieve input data; (2) accept user input that identifies a file or folder to which store results generated from a user-defined data model created using user-selections acting upon the input data; (3) accept user selected variables or other selections (such as user selected icons or buttons) related to the user-defined data model to be created; (4) translate the user selected variables or other selections into a programming language code (such as object or source code) that can be compiled, executed, and/or run by the one or more processors to create the user-defined data model that is defined by the user selected variables or other selections (or otherwise create the programming language code from the user selected variables or other selections to alleviate the need for the user to have coding knowledge); (5) compile, execute, and/or run the programming language code to create the user-defined data model; (6) feed the input data into the user-defined data model created to model the input data and generate modeling results, or otherwise model the input data and generate modeling results; and/or (7) display the modeling results, or other results generated by the user-defined data model acting upon the input data to facilitate modeling input data. The system may include additional, less, or alternate functionality and/or componentry, including that discussed elsewhere herein. In one implementation, the input data may be ratemaking input data, and the user selected variables may include claim frequency, claim severity, pure premium, or loss ratio, and the data model may be related to insurance, and/or the user selected variables may include type of vehicle, make, model, vehicle age, driver age, or marital status, and the user-defined data model may be related to auto insurance. The input data may be ratemaking input data, and the user selected variables may include autonomous or semi-autonomous vehicle feature, system, or technology, and the user-defined data model may be related to auto insurance, and/or may include construction type, building age, or amount of insurance, and the user-defined data model may be related to homeowners insurance. The input data may be ratemaking input data, and the user selected variables may include smart or intelligent home system, technology, or feature, and the user-defined data model may be related to homeowners insurance. Additionally or alternatively, the ratemaking input data and/or the user-defined data model may be related to auto, homeowners, pet, renters, life, health, commercial, personal articles, or other types of insurance. In another aspect, a graphical user interface configured to facilitate building a data model and then model input data may be provided. 
The graphical user interface may be configured to: (1) accept user input that identifies a file or folder from which to retrieve input data; (2) accept user input that identifies a file or folder in which to store results generated from a user-defined data model created using user-selections acting upon the input data; (3) accept user selected variables or other selections (such as user selected icons or buttons) related to the user-defined data model to be created; (4) display programming language code that can be compiled, executed, and/or run by one or more processors to create the user-defined data model that is defined by the user selected variables or other selections, the programming language code being generated or otherwise processor-created by one or more processors using, at least in part, the user selected variables or other selections to alleviate the need for the user to have coding knowledge; and/or (5) display modeling results generated by the input data that is retrieved being fed into or otherwise analyzed by the user-defined data model created using the programming language code that is generated or otherwise processor-created based at least in part upon the user selected variables or other selections, and/or other results generated by the user-defined data model acting upon the input data to facilitate modeling input data. The user interface may include additional, less, or alternate functionality, including that discussed elsewhere herein. In one implementation, the input data may be ratemaking input data, and the user selected variables may include claim frequency, claim severity, pure premium, or loss ratio, and the data model may be related to insurance; type of vehicle, make, model, vehicle age, driver age, or marital status; and/or autonomous or semi-autonomous vehicle feature, system, or technology, and the user-defined data model may be related to auto insurance. The input data may be ratemaking input data, and the user selected variables may include construction type, building age, or amount of insurance, and the user-defined data model may be related to homeowners insurance. The input data may be ratemaking input data, and the user selected variables may include smart or intelligent home system, technology, or feature, and the user-defined data model may be related to homeowners insurance. Additionally or alternatively, the input data may be ratemaking input data, and the ratemaking input data and/or the user-defined data model may be related to auto, homeowners, pet, renters, life, health, commercial, personal articles, or other types of insurance. Additional Exemplary GLM Embodiments In another aspect, a computer-implemented method in a computing device of enabling the management of data models may be provided.
The method may include (1) generating, by a computer processor, a model build partition, including enabling a user to input, via a user interface: a storage location where a modeling output is to be stored; a set of variables to be binned; a set of identifications and/or user selections for modeling data, and/or a Generalized Linear Model (GLM) to be created and/or programmed by the computer processor; and/or a set of selections associated with (i) an exploratory data analysis (EDA), (ii) a variable selection, and/or (iii) a challenger model comparison; (2) generating, by the computer processor, the modeling output according to the model build partition; and/or (3) displaying, in the user interface, a set of results associated with generating the modeling output, the set of results including: (a) a set of model level results, and/or (b) a set of variable level results. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer system for enabling the management of data models may be provided. The system may include: a user interface; a memory storing a set of computer-executable instructions; and/or a processor interfaced with the user interface and the memory, and configured to execute the computer-executable instructions to cause the processor to: (1) generate a model build partition, including enabling a user to input, via the user interface: a storage location where a modeling output is to be stored; a set of variables to be binned; a set of identifications and/or user selections for modeling data, and/or a Generalized Linear Model (GLM) to be created and/or programmed by the computer processor, and/or a set of selections associated with (i) an exploratory data analysis (EDA), (ii) a variable selection, and/or (iii) a challenger model comparison; (2) generate the modeling output according to the model build partition, and/or (3) cause the user interface to display a set of results associated with generating the modeling output, the set of results including: (a) a set model level results, and/or (b) a set of variable level results. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In another aspect, a computer-implemented method for building a Generalized Linear Model (GLM) model and then model ratemaking information may be provided. The method may include, via one or more processors: (1) accepting user input that identifies a file from which to retrieve ratemaking input data; (2) accepting user input that identifies a file to which store results generated from a user-defined GLM created using user-selections acting upon the ratemaking input data; (3) accepting user-selected variables related to the user-defined GLM to be created; (4) translating the user-selected variables into a programming language code that can be compiled, executed, and/or run by one or more processors to create the user-defined GLM that is defined by the user-selected variables; (5) executing the programming language code to create the user-defined GLM; (6) feeding the ratemaking input data into the user-defined GLM created to model the ratemaking input data and generate modeling results; and/or (7) displaying the modeling results to facilitate modeling ratemaking information. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. 
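As a non-limiting illustration of steps (3) through (7) of the GLM ratemaking method above, the following Python sketch assumes hypothetical user selections, column names, and an input file, and uses the statsmodels package as just one possible way to execute a generated GLM specification:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Step (3): hypothetical user selections captured via the user interface
    selections = {
        "target": "claim_count",
        "exposure": "earned_exposure",
        "predictors": ["driver_age", "vehicle_age", "marital_status"],
        "distribution": "Poisson",      # the log link is the statsmodels default for Poisson
    }

    # Step (4): translate the selections into an executable model specification (a formula)
    formula = selections["target"] + " ~ " + " + ".join(selections["predictors"])

    # Steps (5)-(6): run the generated specification against the ratemaking input data
    data = pd.read_csv("ratemaking_input.csv")   # hypothetical file identified by the user
    glm_result = smf.glm(
        formula=formula,
        data=data,
        family=sm.families.Poisson(),
        exposure=data[selections["exposure"]],
    ).fit()

    # Step (7): display the modeling results
    print(glm_result.summary())

In this sketch the "translation" step reduces to building a formula string that the library executes; a production system could instead emit full source code for user review, consistent with the embodiments above, and string-valued columns such as marital_status are treated as categorical factors by the formula interface.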
In another aspect, a computer system configured to build a Generalized Linear Model (GLM) model and then model ratemaking information may be provided. The computer system may include one or more processors, and/or a graphical user interface, configured to: (1) accept user input that identifies a file from which to retrieve ratemaking input data; (2) accept user input that identifies a file to which store results generated from a user-defined GLM created using user-selections acting upon the ratemaking input data; (3) accept user-selected variables related to the user-defined GLM to be created; (4) translate the user-selected variables into a programming language code that can be compiled, executed, and/or run by the one or more processors to create the user-defined GLM that is defined by the user-selected variables; (5) execute the programming language code to create the user-defined GLM; (6) feed the ratemaking input data into the user-defined GLM created to model the ratemaking input data and generate modeling results; and/or (7) display the modeling results to facilitate modeling ratemaking information. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In another aspect, a computer-implemented method of building a Generalized Linear Model (GLM) model and then model input data may be provided. The method may include, via one or more processors: (1) accepting user input that identifies a file from which to retrieve input data; (2) accepting user input that identifies a file to which store results generated from a user-defined GLM created using user-selections acting upon the input data; (3) accepting user-selected variables related to the user-defined GLM to be created; (4) translating the user-selected variables into a programming language code that can be compiled, executed, and/or run by one or more processors to create the user-defined GLM that is defined by the user-selected variables; (5) executing the programming language code to create the user-defined GLM; (6) feeding the input data into the user-defined GLM created to model the input data and generate modeling results; and/or (7) displaying the modeling results to facilitate modeling input data. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. Additional Exemplary GAM Embodiments In one aspect, a computer-implemented method in a computing device of enabling the management of data models may be provided. The method may include (1) generating, by a computer processor, a model build partition, including enabling a user to input, via a user interface: a storage location where a modeling output is to be stored; a set of variables to be binned; a set of identifications and/or user selections for modeling data, and/or a Generalized Additive Model (GAM) to be created and/or programmed by the computer processor; and/or a set of selections associated with (i) an exploratory data analysis (EDA), (ii) a variable selection, and/or (iii) a challenger model comparison; (2) generating, by the computer processor, the modeling output according to the model build partition; and/or (3) displaying, in the user interface, a set of results associated with generating the modeling output, the set of results including: (a) a set of model level results, and/or (b) a set of variable level results. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. 
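For the GAM embodiments, a comparable non-limiting sketch is shown below; it assumes hypothetical data and uses the third-party pyGAM package as one possible GAM implementation (the embodiments are not limited to any particular library):

    import numpy as np
    from pygam import PoissonGAM, s, f

    # Hypothetical ratemaking data: driver_age (continuous) and vehicle_type (integer factor)
    rng = np.random.default_rng(0)
    driver_age = rng.integers(18, 80, size=200)
    vehicle_type = rng.integers(0, 3, size=200)
    claim_count = rng.poisson(0.05 + 0.002 * driver_age)

    X = np.column_stack([driver_age, vehicle_type])

    # A smooth (spline) term on driver_age and a factor term on vehicle_type
    gam = PoissonGAM(s(0, n_splines=10) + f(1)).fit(X, claim_count)

    gam.summary()                                       # model level results
    grid = gam.generate_X_grid(term=0)
    partial = gam.partial_dependence(term=0, X=grid)    # variable level (main effect) results

The spline term plays the role of the smoothed or binned variables discussed above, while the partial dependence values correspond to the main-effect plots displayed as variable level results.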
In another aspect, a computer system for enabling the management of data models may be provided. The system may include: a user interface; a memory storing a set of computer-executable instructions; and a processor interfaced with the user interface and the memory, and configured to execute the computer-executable instructions to cause the processor to: (1) generate a model build partition, including enabling a user to input, via the user interface: a storage location where a modeling output is to be stored; a set of variables to be binned; a set of identifications and/or user selections for modeling data, and/or a Generalized Additive Model (GAM) to be created and/or programmed by the computer processor; and/or a set of selections associated with (i) an exploratory data analysis (EDA), (ii) a variable selection, and/or (iii) a challenger model comparison; (2) generate the modeling output according to the model build partition, and/or (3) cause the user interface to display a set of results associated with generating the modeling output, the set of results including: (a) a set of model level results, and/or (b) a set of variable level results. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In another aspect, a computer-implemented method for building a Generalized Additive Model (GAM) model and then model ratemaking information may be provided. The method may include, via one or more processors: (1) accepting user input that identifies a file from which to retrieve ratemaking input data; (2) accepting user input that identifies a file in which to store results generated from a user-defined GAM created using user-selections acting upon the ratemaking input data; (3) accepting user-selected variables related to the user-defined GAM to be created; (4) translating the user-selected variables into a programming language code that can be compiled, executed, and/or run by one or more processors to create the user-defined GAM that is defined by the user-selected variables; (5) executing the programming language code to create the user-defined GAM; (6) feeding the ratemaking input data into the user-defined GAM created to model the ratemaking input data and generate modeling results; and/or (7) displaying the modeling results to facilitate modeling ratemaking information. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer system configured to build a Generalized Additive Model (GAM) model and then model ratemaking information may be provided.
The computer system may include one or more processors, and/or a graphical user interface, configured to: (1) accept user input that identifies a file from which to retrieve ratemaking input data; (2) accept user input that identifies a file in which to store results generated from a user-defined GAM created using user-selections acting upon the ratemaking input data; (3) accept user-selected variables related to the user-defined GAM to be created; (4) translate the user-selected variables into a programming language code that can be compiled, executed, and/or run by the one or more processors to create the user-defined GAM that is defined by the user-selected variables; (5) execute the programming language code to create the user-defined GAM; (6) feed the ratemaking input data into the user-defined GAM created to model the ratemaking input data and generate modeling results; and/or (7) display the modeling results to facilitate modeling ratemaking information. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In another aspect, a computer-implemented method for building a Generalized Additive Model (GAM) model and then model input data may be provided. The method may include, via one or more processors: (1) accepting user input that identifies a file from which to retrieve input data; (2) accepting user input that identifies a file in which to store results generated from a user-defined GAM created using user-selections acting upon the input data; (3) accepting user-selected variables related to the user-defined GAM to be created; (4) translating the user-selected variables into a programming language code that can be compiled, executed, and/or run by one or more processors to create the user-defined GAM that is defined by the user-selected variables; (5) executing the programming language code to create the user-defined GAM; (6) feeding the input data into the user-defined GAM created to model the input data and generate modeling results; and/or (7) displaying the modeling results to facilitate modeling input data. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. Additional Exemplary ELM Embodiments In one aspect, a computer-implemented method in a computing device of enabling the management of data models may be provided. The method may include (1) generating, by a computer processor, a model build partition, including enabling a user to input, via a user interface: a storage location where a modeling output is to be stored; a set of variables to be binned; a set of identifications and/or user selections for modeling data, and/or an Ensemble Learning Method (ELM) to be created and/or programmed by the computer processor; and/or a set of selections associated with (i) an exploratory data analysis (EDA), (ii) a variable selection, and/or (iii) a challenger model comparison; (2) generating, by the computer processor, the modeling output according to the model build partition; and/or (3) displaying, in the user interface, a set of results associated with generating the modeling output, the set of results including: (a) a set of model level results, and/or (b) a set of variable level results. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer system for enabling the management of data models may be provided.
The computer system may include: a user interface; a memory storing a set of computer-executable instructions; and/or a processor interfaced with the user interface and the memory, and configured to execute the computer-executable instructions to cause the processor to: (1) generate a model build partition, including enabling a user to input, via the user interface: a storage location where a modeling output is to be stored; a set of variables to be binned; a set of identifications and/or user selections for modeling data, and/or an Ensemble Learning Method (ELM) to be created and/or programmed by the computer processor; and/or a set of selections associated with (i) an exploratory data analysis (EDA), (ii) a variable selection, and/or (iii) a challenger model comparison; (2) generate the modeling output according to the model build partition, and/or (3) cause the user interface to display a set of results associated with generating the modeling output, the set of results including: (a) a set model level results, and/or (b) a set of variable level results. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In another aspect, a computer-implemented method for building an Ensemble Learning Method (ELM) model and then model ratemaking information may be provided. The method may include, via one or more processors: (1) accepting user input that identifies a file from which to retrieve ratemaking input data; (2) accepting user input that identifies a file to which store results generated from a user-defined ELM created using user-selections acting upon the ratemaking input data; (3) accepting user-selected variables related to the user-defined ELM to be created; (4) translating the user-selected variables into a programming language code that can be compiled, executed, and/or run by one or more processors to create the user-defined ELM that is defined by the user-selected variables; (5) executing the programming language code to create the user-defined ELM; (6) feeding the ratemaking input data into the user-defined ELM created to model the ratemaking input data and generate modeling results; and/or (7) displaying the modeling results to facilitate modeling ratemaking information. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer system configured to build an Ensemble Learning Method (ELM) model and then model ratemaking information may be provided. The computer system may include one or more processors, and/or a graphical user interface, configured to (1) accept user input that identifies a file from which to retrieve ratemaking input data; (2) accept user input that identifies a file to which store results generated from a user-defined ELM created using user-selections acting upon the ratemaking input data; (3) accept user-selected variables related to the user-defined ELM to be created; (4) translate the user-selected variables into a programming language code that can be compiled, executed, and/or run by the one or more processors to create the user-defined ELM that is defined by the user-selected variables; (5) execute the programming language code to create the user-defined ELM; (6) feed the ratemaking input data into the user-defined ELM created to model the ratemaking input data and generate modeling results; and/or (7) display the modeling results to facilitate modeling ratemaking information. 
The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In another aspect, a computer-implemented method of building an Ensemble Learning Method (ELM) model and then model input data may be provided. The method may include, via one or more processors: (1) accepting user input that identifies a file from which to retrieve input data; (2) accepting user input that identifies a file in which to store results generated from a user-defined ELM created using user-selections acting upon the input data; (3) accepting user-selected variables related to the user-defined ELM to be created; (4) translating the user-selected variables into a programming language code that can be compiled, executed, and/or run by one or more processors to create the user-defined ELM that is defined by the user-selected variables; (5) executing the programming language code to create the user-defined ELM; (6) feeding the input data into the user-defined ELM created to model the input data and generate modeling results; and/or (7) displaying the modeling results to facilitate modeling input data. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. Additional Considerations Although the text herein sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the invention is defined by the words of the claims set forth at the end of this patent. The detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One could implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims. It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term ‘______’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based upon any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this disclosure is referred to in this disclosure in a manner consistent with a single meaning, that is done for sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. The patent claims at the end of this patent application are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language is expressly recited, such as “means for” or “step for” language being explicitly recited in the claim(s). The systems and methods described herein are directed to an improvement to computer functionality, and improve the functioning of conventional computers. At various points herein, methods have been described as involving a first, second, and/or third block of a blockchain. It should be appreciated that the labels first, second, and third are used for ease of explanation and do not necessarily imply the involvement of multiple blocks.
To this end, all transactions described as being included in a first, second, and/or third block may, in implementations, be included in just a single block of the blockchain. Additionally, although the systems and methods described herein describe functionality at particular nodes of the blockchain, such descriptions are done for ease of explanation. To this end, any functionality described as occurring at two separate nodes may be implemented at a single node. Similarly, any functionality described as occurring at a single node may be implemented across any number of nodes. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (code embodied on a non-transitory, tangible machine-readable medium) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a module that operates to perform certain operations as described herein. In various embodiments, a module may be implemented mechanically or electronically. Accordingly, the term “module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which modules are temporarily configured (e.g., programmed), each of the modules need not be configured or instantiated at any one instance in time. For example, where the modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different modules at different times. Software may accordingly configure a processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time. Modules can provide information to, and receive information from, other modules. Accordingly, the described modules may be regarded as being communicatively coupled. Where multiple of such modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the modules.
In some embodiments in which multiple modules are configured or instantiated at different times, communications between such modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple modules have access. For example, one module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further module may then, at a later time, access the memory device to retrieve and process the stored output. Modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of exemplary methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm), while in other example embodiments the one or more processors or processor-implemented modules may be distributed across a number of geographic locations. Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. Some embodiments may be described using the expressions “coupled” and “connected” along with their derivatives. For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, yet still co-operate or interact with each other. The embodiments are not limited in this context.
As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description, and the claims that follow, should be read to include one or at least one and the singular also includes the plural unless it is obvious that it is meant otherwise. As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). This detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One could implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this application. Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for system and a method for assigning mobile device data to a vehicle through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims. The particular features, structures, or characteristics of any specific embodiment may be combined in any suitable manner and in any suitable combination with one or more other embodiments, including the use of selected features without corresponding use of other features. In addition, many modifications may be made to adapt a particular application, situation or material to the essential scope and spirit of the present invention. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein and are to be considered part of the spirit and scope of the present invention. While the preferred embodiments of the invention have been described, it should be understood that the invention is not so limited and modifications may be made without departing from the invention. 
The scope of the invention is defined by the appended claims, and all devices that come within the meaning of the claims, either literally or by equivalence, are intended to be embraced therein. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.
The Figures depict preferred embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the systems and methods illustrated herein may be employed without departing from the principles of the invention described herein. DETAILED DESCRIPTION Traditionally, businesses, customers, and central authorities, such as those involved in a subrogation claim, have stored information related to transactions, and records of transactions, in databases or ledgers. Often these databases and ledgers are held by the participants and must be reconciled to achieve consensus as to the validity of the information stored therein. Alternatively, a central authority may be responsible for determining the validity of information stored in a database or a ledger and functioning as an arbiter of consensus for interested parties, such as a recorder of deeds, an asset exchange, etc. A blockchain (also referred to herein as a distributed ledger or a shared ledger) is a way of achieving a distributed consensus on the validity or invalidity of information in the chain. In other words, the blockchain provides a decentralized trust to participants and observers. As opposed to relying on a central authority, a blockchain is a decentralized database in which a transactional record of changes to the ledger is maintained and validated by each node of a peer-to-peer network. The distributed ledger is composed of groupings of transactions organized together into a “block,” and ordered sequentially (thus the term “blockchain”). Nodes may join and leave the blockchain network over time and may obtain, from peer nodes, blocks that were propagated while the node was gone. Nodes may maintain addresses of other nodes and exchange addresses of known nodes with one another to facilitate the propagation of new information across the network in a decentralized, peer-to-peer manner. The nodes that share the ledger form what is referred to herein as the distributed ledger network. The nodes in the distributed ledger network validate changes to the blockchain (e.g., when a new transaction and/or block is created) according to a set of consensus rules. The consensus rules depend on the information being tracked by the blockchain and may include rules regarding the chain itself. For example, a consensus rule may require that the originator of a change supply a proof-of-identity such that only approved entities may originate changes to the chain. A consensus rule may require that blocks and transactions adhere to format requirements and supply certain meta information regarding the change (e.g., blocks must be below a size limit, transactions must include a number of fields, etc.). Consensus rules may include a mechanism to determine the order in which new blocks are added to the chain (e.g., through a proof-of-work system, proof-of-stake, etc.). Additions to the blockchain that satisfy the consensus rules are propagated from nodes that have validated the addition to other nodes that the validating node is aware of. If all the nodes that receive a change to the blockchain validate the new block, then the distributed ledger reflects the new change as stored on all nodes, and it may be said that distributed consensus has been reached with respect to the new block and the information contained therein. Any change that does not satisfy the consensus rules is disregarded by validating nodes that receive the change and is not propagated to other nodes.
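As a non-limiting sketch only, the consensus checks just described might be expressed along the following lines (in Python, with illustrative size limits, required transaction fields, and a proof-of-work difficulty that are assumptions of this example rather than part of the disclosure):

    import hashlib
    import json

    MAX_BLOCK_BYTES = 1_000_000                      # illustrative format rule: block size limit
    REQUIRED_TX_FIELDS = {"originator", "signature", "payload"}
    POW_DIFFICULTY = 4                               # required leading zero hex digits

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def satisfies_consensus_rules(block):
        """Return True only if the block meets the format, identity, and ordering rules."""
        if len(json.dumps(block).encode()) > MAX_BLOCK_BYTES:
            return False                             # block exceeds the size limit
        for tx in block.get("transactions", []):
            if not REQUIRED_TX_FIELDS <= set(tx):
                return False                         # transaction is missing required fields
        return block_hash(block).startswith("0" * POW_DIFFICULTY)   # proof-of-work check

    # A validating node would propagate a received block to its peers only when
    # satisfies_consensus_rules(block) returns True; otherwise the change is disregarded.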
Accordingly, unlike a traditional system which uses a central authority, a single party cannot unilaterally alter the distributed ledger unless the single party can do so in a way that satisfies the consensus rules. The inability to modify past transactions leads to blockchains being generally described as trusted, secure, and immutable. Third party intermediaries who assist in the resolution of subrogation claims may thus be disintermediated from the process by a decentralized blockchain. The validation activities of nodes applying consensus rules on a blockchain network may take various forms. In one implementation, the blockchain may be viewed as a shared spreadsheet that tracks data such as the ownership of assets. In another implementation, the validating nodes execute code contained in “smart contracts” and distributed consensus is expressed as the network nodes agreeing on the output of the executed code. Blockchains may be deployed in a public, decentralized, and permissionless manner, meaning that any party may view the shared ledger, submit new information to be added to the ledger, or join the network as a validating node. Other blockchains are private (e.g., permissioned ledgers) and keep chain data private among a group of entities authorized to participate in the blockchain network. The present embodiments relate to systems and methods for using a blockchain to record and manage information related to resolution of a subrogation claim (e.g., a “subrogation” blockchain). The subrogation blockchain may be either a public or permissioned ledger. Exemplary Shared Ledger for Resolving Subrogation Claims with Determination of Fault FIG.1depicts an exemplary shared ledger system100for resolving subrogation claims with determination of fault in accordance with one aspect of the present disclosure. When an insured party, such as the owner of not-at-fault vehicle104, experiences a covered loss, for example in a collision with at-fault vehicle102, the owner of not-at-fault vehicle104may submit an insurance claim110to an insurer106. The insurer106may have a contractual obligation to remit a payment112to the owner of the not-at-fault vehicle in exchange for assignment of any legal claim the owner of the not-at-fault vehicle may have against the owner or operator of at-fault vehicle102for damage and expenses associated with the collision. After insurer106has remitted payment112to the owner of the not-at-fault vehicle104and received assignment of the vehicle owner's claim against the owner or operator of at-fault vehicle102, the insurer106may initiate a process of managing and resolving the legal claim against the owner or operator of the at-fault vehicle102or against an insurer108of the at-fault vehicle (e.g., a subrogation claim). The shared ledger system100includes a blockchain118accessible by network participants via a network116(e.g., a private or public packet switched network). To begin the blockchain subrogation claim resolution process, the insurer106broadcasts a subrogation claim or transaction114to the blockchain118. The blockchain118may be a network wherein participating network nodes validate changes to a shared ledger based upon transactions broadcast by other network participants. The transaction114, and other transactions disclosed herein, may include information relating to the subrogation claim that may be modified by subsequent transactions broadcast over the network116.
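By way of a non-limiting example, a broadcast subrogation transaction might carry the claim information together with a cryptographic proof-of-identity of the broadcasting insurer, as discussed further below. The Python sketch here uses the cryptography package's Ed25519 keys, and all field names and values are hypothetical:

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Hypothetical claim payload assembled by the claimant insurer
    claim = {
        "type": "subrogation_claim",
        "claimant_insurer": "insurer_106",
        "defendant_insurer": "insurer_108",
        "claimed_amount": 4250.00,
    }
    payload = json.dumps(claim, sort_keys=True).encode()

    # The claimant signs the payload with its private key (its public key is published)
    private_key = Ed25519PrivateKey.generate()
    transaction = {
        "payload": claim,
        "signature": private_key.sign(payload).hex(),
        "public_key": private_key.public_key().public_bytes(
            Encoding.Raw, PublicFormat.Raw).hex(),
    }
    # broadcast(transaction)  # hypothetical call that submits the transaction to the network

    # A validating node checks the proof-of-identity before accepting the transaction;
    # verify() raises InvalidSignature if the payload or signature was tampered with.
    Ed25519PublicKey.from_public_bytes(bytes.fromhex(transaction["public_key"])).verify(
        bytes.fromhex(transaction["signature"]), payload)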
In another implementation, validators on the blockchain118are configured to maintain a state database and execute code in smart contracts deployed by network participants. A smart contract on the blockchain118may expose methods and maintain the state of data relating to a subrogation claim by the insurer106against the insurer108relating to an insured loss covered by the insurer106. Methods of the smart contract may be public methods available to be called by any of the network participants. Alternatively or additionally, methods of a smart contract on the blockchain118may include private methods that may only be called by certain parties. A smart contract on the blockchain118may have a contract owner who may establish and/or alter the smart contract. A smart contract owner may assign permissions to certain network participants. After a subrogation claim114has been lodged by insurer106, information relating to the covered loss may be collected by other parties that were involved in the collision to evaluate the subrogation claim. For example, a hospital120may broadcast a damages transaction126to the blockchain118that includes evidence of medical bills incurred as part of the collision. An automotive services repair provider122may broadcast a transaction128to supply information regarding repair services estimated or rendered as a result of the collision. A government entity124may supply evidence relating to the collision in vehicle transaction130. Vehicle transaction130may include information relating to one or more of the vehicles102and104involved in the collision relevant to the subrogation claim (e.g., registration status, registered owner, legal title information, police reports regarding the collision, driving records of the drivers involved in the collision, lien information regarding the vehicles102and104, etc.). When entities broadcast transactions to the blockchain118to initiate or add data to a subrogation claim, the transactions may be accompanied by a proof-of-identity of the entity broadcasting the transaction. In one implementation, a cryptographic proof-of-identity is included in transactions sent to the blockchain. For example, each of the entities106,108,120,122, and124may own private cryptographic keys that are associated with public cryptographic keys known to belong to the entity (e.g., public cryptographic keys associated with each of the entities may be published by a trusted third party or proven to other network participants, etc.). An entity wishing to broadcast a transaction to the blockchain118may sign a cryptographic message in the transaction with the entity's private cryptographic key to prove the identity of the entity broadcasting the transaction. In this way, other network participants may be provided with cryptographic proof that the information contained in the broadcast transaction was originated by the participating entity. After entities such as the hospital120, automotive repair services provider122, government agency124, etc. have supplied information relevant to the subrogation claim, the subrogation claim defendant insurer108may broadcast one or more subrogation transactions132to the blockchain118to indicate acceptance or rejection of the various components of the subrogation claim.
For example, if the subrogation claim defendant108disputes that medical care provided by the hospital120was caused by the collision, and thus would form a proper basis for liability to the subrogation claimant insurer106, the subrogation defendant insurer108may broadcast a transaction marking the damages transaction126as disputed or not agreed. In response, the insurer106may broadcast a transaction114to respond to the subrogation defendant108's rejection, such as lowering the damages claimed as part of the medical costs incurred at the hospital120, adding more evidence of the nature of the medical services rendered by the hospital120, etc. If the subrogation defendant insurer108accepts the damages associated with the subrogation claim brought by the claimant insurer106, the subrogation defendant insurer108may broadcast a transaction132to the blockchain118to indicate a resolution of the subrogation claim for the amount reflected by the blockchain118. Alternatively, or additionally, the subrogation defendant insurer108may broadcast a transaction to the blockchain118reflecting payment of the subrogation claim and/or may broadcast a transaction sending a token having value that circulates on the blockchain118to the insurer claimant106. One or more of the parties to the subrogation claim (e.g., the insurers, the parties to the loss, etc.) may demand resolution of the subrogation claim by a third party. One example of a third party who may resolve the subrogation claim on the blockchain118is an arbitrator140. The arbitrator140may become involved in resolving the subrogation claim in any of several ways. In one implementation, the subrogation claim or transactions114broadcast to the network by the subrogation claimant106may include an arbitration demand. In other implementations, the subrogation defendant108may issue an arbitration demand in its subrogation transactions132. The parties to the subrogation claim may first attempt to resolve the dispute without arbitration, and may only demand arbitration if the claim fails to resolve. The parties may instead demand arbitration before attempting to resolve the dispute on the blockchain118without arbitration. An arbitration demand may be accepted by the arbitrator140on-chain by broadcasting an acceptance/decision transaction142. The acceptance/decision transaction142may mark the subrogation claim as accepted by the arbitrator140and therefore no longer in a pool of available arbitration cases on the chain118. In some implementations, the subrogation claim on the chain118(e.g., a smart contract representing the claim, a set of related transactions in an UTXO model chain, etc.) is also accepted by the parties to the claim as indicated by acceptance transactions (e.g., after the arbitrator140has cleared a conflict check or otherwise approved by the parties). An arbitration demand on the chain118may include a demand for an arbitrator who holds a qualification issued by a professional organization, who holds a membership in a professional society or organization, is whitelisted by the parties, or who holds a credential for deciding disputes in arbitration. In this manner, arbitrators140may perform freelance work. The arbitrators may accept any arbitration demands broadcasted to the network, as opposed to conventional systems where arbitrators may be employed by a particular insurer to hear non-party cases. 
For example, as a member of an arbitration group, an insurer may be expected to provide an arbitrator to hear a non-party case each time the insurer submits a party case to arbitration and obtains an arbitrator from another insurer which is not a party to the arbitration for the submitted case. Once an arbitrator140has accepted an arbitration demand, the arbitrator140may broadcast additional transactions142to the blockchain118to establish a schedule for the arbitration, to request evidence from the parties, to request damages and/or liability calculations, to request acceptance of damages and/or liability calculations, to request counter-proposals to damages and/or liability, to request settlement proposals and/or acceptance, etc. The parties may broadcast transactions to the blockchain118to indicate their proposals, acceptance/rejection, etc. of the requests by the arbitrator. In one implementation, the transactions broadcast by the parties to the subrogation claim and/or the arbitrator include sending data to a smart contract on the blockchain118. For example, the transactions may include sending a zero amount of a token on the chain to the smart contract to call functions on the smart contract. Functions on the smart contract may allow the parties to flip flags, set data, and perform changes to the state data of the contract that represents the subrogation claim. In some implementations, arbitration does not successfully resolve a subrogation claim between the parties. As such, one of the parties may withdraw from arbitration by broadcasting a subrogation transaction indicating the withdrawal. In some implementations, any smart contract state from the on-chain arbitration proceedings is preserved for future dispute resolution (e.g., evidence introduced, positions taken, calculations proposed/accepted, etc.) because it resides on the immutable blockchain118. In some implementations, parties may indicate that lawyers144are desired to advance the subrogation claim, such as in a court of law for example. If a party to the subrogation claim desires the involvement of lawyers, it may be indicated on the blockchain118by sending data to a smart contract (e.g., by flipping a flag in the smart contract state indicating interest, or by adding a transaction to a UTXO model chain that indicates the interest and association with the subrogation claim, etc.). Lawyers144may monitor the blockchain118for available cases. If a lawyer144notices an available subrogation claim, the lawyer144may examine data already on-chain to evaluate the merits of the claim and to decide whether the lawyer144should take the case. Another third party that may be involved in the subrogation claim on the blockchain118is a collections provider, or collection agency134. Upon resolution of a claim, the prevailing party may experience difficulty in collecting any judgment owed. As such, the prevailing party may indicate interest in collection services from a collection agency134in much the same way as the parties indicate desire for arbitration or for lawyers to take the case. A subrogation judgment may be assigned to the collection agency134, or the collection agency134may charge a fee for the collection services rendered. The collection agency134may broadcast collection transactions136to the blockchain118to indicate status of collection on judgments from the at-fault party102and/or the at-fault insurer108. 
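As a non-limiting illustration of the interaction pattern described above, the following Python sketch (the class, method, and key names are hypothetical and are not drawn from the figures) shows how a zero-value transaction naming a contract method might be routed to a subrogation smart contract so that a party can flip a flag, such as an arbitration demand, a withdrawal from arbitration, or an indication of interest in lawyers or collection services.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    sender: str          # identity (e.g., public key) of the caller
    to: str              # contract address
    value: int           # token amount; zero for pure method calls
    method: str          # name of the contract method to invoke
    args: dict = field(default_factory=dict)

class SubrogationContract:
    """Hypothetical contract state for a single subrogation claim."""

    def __init__(self, claim_id, claimant, defendant):
        self.claim_id = claim_id
        self.parties = {claimant, defendant}
        self.state = {
            "arbitration_demanded": False,
            "withdrawn_from_arbitration": False,
            "lawyer_requested": False,
            "collections_requested": False,
        }

    def _require_party(self, sender):
        if sender not in self.parties:
            raise PermissionError("only parties to the claim may call this method")

    def demand_arbitration(self, sender, **_):
        self._require_party(sender)
        self.state["arbitration_demanded"] = True

    def withdraw_from_arbitration(self, sender, **_):
        self._require_party(sender)
        self.state["withdrawn_from_arbitration"] = True

    def request_lawyer(self, sender, **_):
        self._require_party(sender)
        self.state["lawyer_requested"] = True

    def request_collections(self, sender, **_):
        self._require_party(sender)
        self.state["collections_requested"] = True

def apply_transaction(contract: SubrogationContract, tx: Transaction) -> None:
    # A validating node would route the zero-value call to the named contract method.
    handler = getattr(contract, tx.method)
    handler(tx.sender, **tx.args)

# Example: the prevailing party flags interest in collection services.
contract = SubrogationContract("SUBRO-001", claimant="pubkey_claimant", defendant="pubkey_defendant")
apply_transaction(contract, Transaction("pubkey_claimant", "contract_addr", 0, "request_collections"))
```

In a deployed system the dispatcher would run inside each validating node, and the caller's identity would be established by the cryptographic proof-of-identity discussed above rather than by a bare sender string.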
The parties to the subrogation claim and entities who are not parties to the subrogation claim may transmit evidence or data relating to fault to the blockchain118for a determination of fault. For example, vehicle104and vehicle102may broadcast telematics data to the blockchain118from a time period around a crash that gave rise to the subrogation claim. The telematics data may be analyzed by the parties to the subrogation claim to determine whether the data gives rise to an indication of fault. Other types of data may be broadcast to the blockchain regarding fault, such as data from road sensors, IoT sensors, or video cameras, biometric data regarding a driver, weather data, etc. Exemplary Validating Nodes in a Distributed Ledger System for Resolving Subrogation Claims FIG.2depicts an exemplary shared ledger system200for resolving subrogation claims with fault determination in accordance with one aspect of the present disclosure. The system200includes a shared subrogation ledger212and a plurality of nodes202,204,206,208, and210. Each node maintains a copy of the subrogation ledger212. As changes are made to the subrogation ledger212, each node that receives the change via the network214updates its respective copy of the shared subrogation ledger212. A consensus mechanism may be used by the nodes202-210in the shared ledger system200to decide whether it is appropriate to make received changes to the subrogation ledger212. Each node in the system therefore has its own copy of the subrogation ledger212, which is identical to every other copy of the subrogation ledger212stored by the other nodes. The shared ledger system200is more robust than a central authority database system because of the shared ledger's decentralized nature. As such, there is no single point of failure on the shared ledger system200as there would be in a centralized system. Exemplary Transaction Flow & Block Propagation Flow FIG.3depicts exemplary validating network nodes and an exemplary transaction flow300on a shared ledger network for resolving subrogation claims with fault determination in accordance with one aspect of the present disclosure.FIG.3includes two time frames320and322represented by the left and right sides of the dotted line, respectively, Node A302and Node B304, a set of transactions308A-308D, a set of blocks of transactions309A-309D, a distributed ledger310, and a blockchain318. The block propagation flow300may begin with Node A302receiving transaction306at time320. When Node A302confirms that transaction306is valid, Node A302may add the transaction to a newly generated block308. As part of adding the transaction306to block308, Node A302may solve a cryptographic puzzle and include the solution in the newly generated block308as proof of the work done to generate the block308. In other embodiments, the transaction306may be added to a pool of transactions until a sufficient number of transactions in the pool exist to form a block. Node A302may transmit the newly created block308to the network at312. Before or after propagating the block308, Node A302may add the block308to its copy of the blockchain318. The transactions309A-309D may include updates to a state database316. The state database316may contain current values of variables created by smart contracts deployed on the blockchain318. Validated blocks such as block308may include transactions affecting state variables in state database316. At time322Node B304may receive the newly created block308via the network at312. 
Node B304may verify that the block of transactions308is valid by checking the solution to the cryptographic puzzle provided in the block308. If the solution is accurate, then Node B304may add the block308to its blockchain318and make any updates to the state database316as directed by the transactions in block308. Node B304may then transmit the block308to the rest of the network at314. Exemplary Node FIG.4depicts exemplary components of a network node400on a shared ledger network for resolving subrogation claims with fault determination in accordance with one aspect of the present disclosure. Node400is capable of performing the functionality disclosed herein. Node400may include at least one processor402, memory404, a communication module406, a set of applications408, external ports410, user interface412, a blockchain manager414, smart contracts416, operating system418, a display screen420, and input/output components422. In some embodiments, the node400may generate a new block of transactions or may broadcast transactions to other network nodes by using the blockchain manager414. Similarly, the node400may use the blockchain manager414in conjunction with the smart contracts416stored in memory404to execute the functionality disclosed herein. The memory404may further include chain data424including, for example, a state database of the blockchain for storing state of smart contracts deployed thereon. In other embodiments, the smart contracts416operate independently of the blockchain manager414or other applications. In some embodiments, node400does not have a blockchain manager414, or smart contracts416stored at the node. In some embodiments, the node400may have additional or fewer components than those described. The components of the node400are described in more detail below. The node400, as part of a decentralized ledger system112, or another decentralized or centralized network, may be used as part of systems that interact with and/or manipulate data and transactions associated with the automotive claims process, the vehicle loss history process, and/or the vehicle identification number lifecycle process. Exemplary Smart Contract State FIG.5depicts an exemplary smart contract state500in a shared ledger network for resolving subrogation claims with fault determination in accordance with one aspect of the present disclosure. A smart contract may be deployed by any participant in the subrogation blockchain network (e.g., a subrogation claimant) to establish a contract state506for a particular subrogation claim. The deployed smart contract may expose methods and data to other participants in the subrogation blockchain network. Some of the data in the smart contract state may be private data that may only be altered by calling a method of the smart contract or only altered by authorized blockchain participants. One way of altering the smart contract state506is to broadcast a transaction to the subrogation blockchain502. If the broadcast transaction satisfies consensus rules, network validators may include the transaction in a block504. Inclusion in the blockchain502of a transaction sending data to the smart contract may cause validating nodes to update a state database, thus allowing network participants access to a rich state mechanism to manage the subrogation process and ultimately to resolve the subrogation claim. Subrogation smart contract state506may include pieces of data to identify and track the subrogation claim. 
For example, a contract owner may select a unique ID for the subrogation claim such that subsequent transactions and data sent to the smart contract can identify the contract by ID number. The contract owner may also specify an identity of the subrogation claimant and defendant. In at least one implementation, the subrogation claimant and defendant are identified by cryptographic public keys assigned to the respective entities. Subsequent data sent to the smart contract may include a message signed by private keys corresponding to the public keys identifying the subrogation claimant and defendant in the smart contract, thus providing cryptographic proof that the transaction was originated by one of the parties to the dispute. The private and public keys may be managed solely by the parties to minimize the attack surface for any attackers that might attempt to forge a transaction (e.g., the parties generate public/private cryptographic key pairs offline and only provide the public key to other network participants). A party's private keys may be generated according to a securely stored seed value (e.g., on a piece of physical paper or multiple copies of a piece of paper) such that the private keys may be recovered in the case of a data loss. The smart contract state506may further include information regarding the basis of the subrogation claim such as the policy holder and a description of the damages suffered (e.g., date, time, place, etc.). A subrogation blockchain network participant may monitor the blockchain502for any subrogation claims that identify the participant as a subrogation defendant. As such, it is not necessary for a subrogation claimant to notify a subrogation defendant “off-chain” of the existence of the claim. A subrogation claimant may additionally make such a notification to the subrogation defendant as a redundant communication. Resolving a subrogation claim may require assembling evidence of the damages suffered by a policy holder. The evidence could take the form of expert or witness statements (e.g., a statement from a treating doctor that an injury and/or medical care was the result of a collision, statement of a witness to a collision tending to establish the fault of the subrogation defendant's insured vehicle driver, etc.). The evidence could also take the form of documentary evidence (e.g., report from an authorized automotive repair services provider of damage to a vehicle as the result of a collision, a repair estimate, a repair bill for repair services rendered, a certification from a government entity that a vehicle involved in a collision had a valid registration, etc.). As evidence regarding the subrogation claim is collected from the various entities involved (medical, auto repair, governmental, etc.), these entities may broadcast transactions to the blockchain502to reflect the status of the loss and to provide the evidence therefor to other network participants, specifically the subrogation claimant and defendant. For example, a doctor who treated a patient for injuries sustained in a collision may broadcast a transaction sending data to the smart contract to connect the patient's injuries to the collision. The evidence may take the form of a cryptographically signed statement from the doctor attesting to the injuries. The evidence could take the form of a digitized X-ray or other medical record tending to prove the existence of an injury. 
The evidence could further take the form of medical bills issued by the hospital for services rendered for the injuries. Like the subrogation claimant and defendant, a doctor or hospital may own a private cryptographic key that corresponds to a public cryptographic key known by the other network participants to belong to the hospital or doctor. By signing any submitted evidence with the private cryptographic key, the hospital or doctor may provide cryptographic proof of identity to the subrogation defendant that the evidence was truly submitted by the doctor or hospital. A subrogation defendant may choose to rely solely on the cryptographic proof offered by the doctor/hospital without separately contacting the doctor/hospital to verify the data. In this way, the blockchain502reduces time and cost associated with resolving a subrogation claim. The subrogation defendant may also submit comments in response to the evidence by broadcasting transactions to the blockchain502. Additionally, the subrogation claimant may submit comments in response to the subrogation defendant's comments by broadcasting transactions to the blockchain502and the subrogation defendant and claimant may have a discussion back and forth that is broadcasted to the blockchain502. The comments that form the discussion back and forth may be stored as part of the smart contract state data for recordkeeping purposes. Another aspect of the subrogation smart contract state506associated with a subrogation claim is the smart contract data. Smart contract data may be thought of as analogous to the private and public data of an object created according to an object-oriented programming paradigm: some data may be updated directly from outside the object, while other data may be updated only in limited ways, such as by calling a method of the smart contract. In at least one implementation, smart contract data includes an indication (e.g., a flag) as to whether the parties to the subrogation claim accept evidence in the smart contract as representative of the damages owed by the subrogation defendant. These flags may be set according to methods in the smart contract that require the caller to prove its identity. The method may only permit, for example, a subrogation defendant to set a flag indicating the subrogation defendant's acceptance of a component of the damages of the subrogation claim. For example, once sufficient evidence relating to the cost of a medical treatment has been included in the smart contract, a subrogation claimant may indicate its approval of the evidence by setting a flag. A subrogation defendant, upon review of the medical evidence, may choose to either set its corresponding flag to indicate its acceptance of the medical evidence or it may decline to do so if it disputes the veracity of the evidence. As such, the smart contract state tracks the various components of the damages owed and refines points of dispute for the parties to the subrogation claim. When all sources of evidence for the value of the subrogation claim have been approved by the subrogation claimant and defendant, the value of the claim has been determined and agreed upon, and a subrogation defendant may mark the settlement as agreed by sending data to the smart contract. Additionally, the subrogation defendant may mark the settlement as paid. In at least one implementation, the blockchain502includes a circulating token having value with which the subrogation defendant may pay the subrogation claimant. 
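As a non-limiting illustration of the flag-based acceptance tracking just described, the following Python sketch (class, field, and key names are hypothetical and not drawn from the figures) shows one possible shape for the smart contract state: each damages component carries per-party acceptance flags that may only be set by the corresponding party, and the settlement can be marked agreed only once every component has been accepted by both the claimant and the defendant.

```python
class SubrogationClaimState:
    """Hypothetical sketch of per-component acceptance flags in a subrogation smart contract."""

    def __init__(self, claim_id, claimant_key, defendant_key):
        self.claim_id = claim_id
        self.claimant_key = claimant_key
        self.defendant_key = defendant_key
        # component name -> {"evidence": [...], "claimant_ok": bool, "defendant_ok": bool}
        self.components = {}
        self.settlement_agreed = False
        self.settlement_paid = False

    def add_evidence(self, component, evidence_hash):
        entry = self.components.setdefault(
            component, {"evidence": [], "claimant_ok": False, "defendant_ok": False})
        entry["evidence"].append(evidence_hash)

    def accept_component(self, caller_key, component):
        # Each party may set only its own acceptance flag.
        entry = self.components[component]
        if caller_key == self.claimant_key:
            entry["claimant_ok"] = True
        elif caller_key == self.defendant_key:
            entry["defendant_ok"] = True
        else:
            raise PermissionError("caller is not a party to this subrogation claim")

    def mark_settlement_agreed(self, caller_key):
        if caller_key != self.defendant_key:
            raise PermissionError("only the subrogation defendant may mark the settlement agreed")
        if not all(c["claimant_ok"] and c["defendant_ok"] for c in self.components.values()):
            raise ValueError("not all damages components have been accepted by both parties")
        self.settlement_agreed = True

# Example usage with a hypothetical evidence hash placeholder:
state = SubrogationClaimState("SUBRO-001", "pubkey_claimant", "pubkey_defendant")
state.add_evidence("hospital_bills", "sha256:placeholder")
state.accept_component("pubkey_claimant", "hospital_bills")
state.accept_component("pubkey_defendant", "hospital_bills")
state.mark_settlement_agreed("pubkey_defendant")
```

In an actual deployment these setters would be contract methods invoked by signed transactions, and the evidence entries would typically be cryptographic hashes of the underlying documents rather than the documents themselves.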
The smart contract data may also include an indication (e.g., a flag) as to whether each of the parties to the subrogation claim has provided an offer or a counter-offer and the corresponding amount of the offer or counter-offer. These flags may be set according to methods in the smart contract that require the caller to prove its identity. The method may only permit, for example, the opposing party to set a flag indicating the opposing party's counter-offer and the amount of the counter-offer. For example, when the subrogation defendant submits an offer, only the subrogation claimant may set a flag indicating a counter-offer and the amount of the counter-offer. When the subrogation defendant or claimant accepts the latest offer or counter-offer, the settlement may be marked as agreed for the latest offer or counter-offer amount. The offers and counter-offers that represent the negotiation back and forth may be stored as part of the smart contract state data for recordkeeping purposes. The smart contract data may also be used to make an arbitration demand. In one implementation, a party may send data to the smart contract to alter a smart contract state including an arbitration flag. The flag may be set to true to indicate a desire for arbitration. In other implementations, other data structures may be used to indicate a desire for arbitration, such as a list that may include more information than a simple arbitration demand as depicted inFIG.5. For example, the list may include requirements of the arbitrator such as qualifications, a whitelist of approved arbitrators, conditions of the arbitration (e.g., evidence rules, time frame, cost, damages caps, etc.). In some implementations, the block of transactions504may organize the transactions it has received into a Merkle Tree to facilitate access to the stored transactions. The transactions may be hashed using a cryptographic hash algorithm, such as the algorithms discussed above, and the hash of each transaction may be stored in the tree. As the tree is constructed, the hashes of adjacent nodes at the same level may be hashed together to create a new node that exists at a higher level in the tree. Therefore, the root of the tree, or the node at the top of the tree, is dependent upon the hash of each transaction stored below in the tree. Each transaction may include a set of data. The set of data may include identifying data for the transaction, and transaction data identifying the nature of the transaction and what the transaction entails (e.g., input and output addresses, a transaction value, a document hash value, a timestamp, a transaction fee value, etc.). In one implementation, documents stored "on" a blockchain are documents that have been hashed according to a cryptographic hashing algorithm (e.g., SHA-256) and the resulting output hash has been included in a transaction in a block that has been accepted by the network nodes as satisfying the consensus rules of the blockchain. As such, the documents may be later verified or validated by comparing the hash of the documents to the hash stored on the blockchain. For example, if a set of documents results in a SHA-256 hash that was recorded on a blockchain on a certain date, then the blockchain provides cryptographic proof that the documents existed as of that date. 
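As a non-limiting illustration of the Merkle-tree organization and document hashing just described, the short Python sketch below (standard library only; the transaction contents and the repair-estimate document are hypothetical) builds a Merkle root from transaction hashes and then checks a document against a hash that was previously recorded in a transaction.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(tx_hashes):
    """Combine adjacent hashes level by level until a single root remains."""
    level = list(tx_hashes)
    if not level:
        return sha256(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def document_matches_recorded_hash(document: bytes, recorded_hash_hex: str) -> bool:
    """Later verification: re-hash the document and compare to the on-chain value."""
    return hashlib.sha256(document).hexdigest() == recorded_hash_hex

# Example with hypothetical transactions and a hypothetical repair-estimate document:
txs = [b"tx: damages evidence", b"tx: counter-offer", b"tx: arbitration demand"]
root = merkle_root([sha256(t) for t in txs])
estimate = b"repair estimate for vehicle 104"
recorded = hashlib.sha256(estimate).hexdigest()      # value stored in a transaction
print(root.hex(), document_matches_recorded_hash(estimate, recorded))
```

Because the root depends on every transaction hash beneath it, altering any single transaction changes the root, which is what allows the tree to facilitate tamper-evident access to the stored transactions.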
One way of storing a document on a blockchain is to broadcast a transaction including a hash of the document to the network, which will be included in a block if the transaction satisfies all of the consensus rules of the network. In some implementations, the blockchain is a permissioned ledger, meaning only authorized network participants may broadcast transactions. In other implementations, only some authorized network participants may make certain transactions. For example, vehicle telematics data tending to show which vehicle was at fault in a collision may be uploaded by the vehicle to the blockchain502contemporaneously with or subsequent to a collision. Only a cryptographic hash of the data may be included in the blockchain502such that the data may be verified using the blockchain even if it is obtained by a party off-chain. Validating network nodes may verify that the signed transaction or signed message was signed by the private cryptographic key corresponding to the published public cryptographic key owned by the authorized vehicle manufacturer. In at least one implementation, a valid proof-of-identity may be applied as a consensus rule by the blockchain network. As such, any transaction attempting to add a new VIN number to the blockchain without a cryptographic proof-of-identity matching an identity authorized to add a new VIN number is rejected by the network as non-compliant with the consensus rule. Exemplary Subrogation Evidence Transactions FIG.6depicts an exemplary transaction600on a shared ledger network for resolving subrogation claims with fault determination in accordance with one aspect of the present disclosure. The transaction600may modify a prior transaction or it may send data to a smart contract deployed on the blockchain602. An originator of the transaction600may broadcast the transaction to nodes on the blockchain network and the transaction600will be included in block604if it is a valid transaction. The transaction600may include various information regarding the transaction's changes to the subrogation claim managed by the blockchain602. For example, the transaction600may include the unique subrogation contract ID, the originator of the transaction, a description of the damages, and data including evidence of the damages suffered. After a collision, a repair facility typically takes control of the vehicle. In some cases the repair facility may provide a rental car, or substitute transportation to the vehicle owner. The repair facility secures authorization to repair the vehicle from the vehicle owner. Once this is secured, the repair facility identifies potential areas of prior damage/betterment, develops a repair plan, and prepares a repair estimate. The repair facility may request parts from suppliers, finalize any parts orders, update the estimate accordingly, and generally manage the repair of the vehicle. As part of the repair process, the repair facility may provide photographic evidence of the damage done to the vehicle. These photographs may then be uploaded to the blockchain after they have been hashed so as to ensure that any private information is protected, but also that the photographs provided are valid. Evidence of the damage done to the vehicle may also be provided from evidence oracles608. These evidence oracles may be devices connected to the internet that record information about the physical environment around them. 
For example, the evidence oracles may be connected video cameras, traffic cameras, road sensors, motion sensors, environmental conditions sensors (e.g., measuring atmospheric pressure, humidity, etc.), as well as other Internet of Things (IoT) devices. Evidence oracles608record information occurring in the physical world and may transmit that information to a distributed ledger where it can be used in the subrogation claims process. For example, an evidence oracle may collect data on a traffic intersection. This intersection may be of interest to insurers if it has a history of being a dangerous intersection where accidents frequently occur. The data may be packaged into a transaction, such as transaction606. The data from the evidence oracle608may include a transaction ID, an originator (identified by a cryptographic proof-of-identity, and/or a unique oracle ID), an evidence type, such as video and audio evidence, and a cryptographic hash of the evidence. In another implementation, the evidence is not stored as a cryptographic hash, but is directly accessible in block604by an observer or other network participant, such as the participants in a subrogation claim. Other examples of smart contract data added to the subrogation claim include an arbitration schedule and/or arbitration decisions. An arbitration schedule may include deadlines for the parties to submit evidence, brief questions of law, bring or respond to filed motions, participate in conferences with the arbitrator, etc. The arbitration schedule need not be transmitted directly to the parties due to the blockchain's availability to the parties. The arbitrator may require the parties to monitor the subrogation claim state on the blockchain602to determine when deadlines arise. Another type of smart contract data is an arbitration decision. The arbitration decision may cover decisions on categories or calculations of damages, admission of evidence, settlement proposals, or the case as a whole. FIG.7depicts an exemplary transaction700on a shared ledger network for resolving subrogation claims with fault determination in accordance with one aspect of the present disclosure. The transaction700may modify a prior transaction or it may send data to a smart contract deployed on the blockchain702. An originator of the transaction700may broadcast the transaction to nodes on the blockchain network and the transaction700will be included in block704if it is a valid transaction. The transaction700may include various information regarding the transaction's changes to the subrogation claim managed by the blockchain702. For example, the transaction700may include the unique subrogation contract ID, the originator of the transaction, and information relating to the subrogation claim. In the example illustrated inFIG.7, the transaction700includes data relating to one of the vehicles involved in a collision. As such, the blockchain702will include evidence tending to show the value of the vehicle such as its make/model, year, color, options, etc. In one implementation, transaction700is broadcast by a government entity and includes data relating to the registration and insurance of the vehicle. One type of smart contract data shown inFIG.8includes the identity of a third party source of information. In the example illustrated inFIG.8, new smart contract state data introduced by a new transaction includes information regarding the status of the vehicle, such as whether it is an autonomous or semi-autonomous vehicle. 
An identity of the entity supplying the status of the vehicle may be included in the smart contract (e.g., a governmental body, insurer, registrar, etc.). FIG.8depicts an exemplary transaction800representing vehicle repair in a shared ledger network for resolving subrogation claims with fault determination associated with one aspect of the present disclosure. After a collision, a vehicle808, such as an autonomous or conventional vehicle, that was involved in the collision is inspected/repaired810by an authorized repair facility812. The authorized repair facility broadcasts transaction806to subrogation blockchain802to be included in a block, such as block804. The transaction806includes data to update the state of the subrogation claim, such as a transaction ID, an originator (identified by a cryptographic proof-of-identity), a unique subrogation arbitration claim ID, a description of damages, a cryptographic hash of damages evidence, and a description of services rendered and cost. In another implementation, the evidence is not stored as a cryptographic hash, but is directly accessible in block804by an observer or other network participant, such as the subrogation claim defendant. The transaction800may also include a smart contract data field to allow a subrogation claim defendant to indicate approval or rejection of the damages and evidence therefor. Exemplary Approval of Fault Determination on a Subrogation Blockchain FIG.9depicts an exemplary transaction900representing a fault determination on a subrogation blockchain902. A subrogation claimant insurer904inspects a block906on the blockchain902. The block906may include a variety of data relating to fault of the subrogation claim disputed on the blockchain902. In block906, one category of data relating to fault is on-board vehicle diagnostics. A vehicle may broadcast on-board vehicle diagnostic data to the blockchain902, directly or indirectly, after a crash that gives rise to the subrogation claim. The on-board vehicle diagnostics may provide evidence of control signals received by a vehicle, position and velocity of a vehicle, and data relating to driver actions and road conditions in a time period leading up to a crash. Other types of data relating to fault in block906include IoT sensor data from an environment of a vehicle crash. IoT sensor data may include road condition data, traffic data, weather data, air quality data, vehicle position and velocity data, and forensic data relating to a vehicle crash giving rise to a subrogation claim. The IoT data may be obtained by a party and broadcast to the blockchain902, broadcast directly from the IoT devices themselves to the blockchain902at the request of a party, broadcast automatically to the blockchain902after a crash, etc. Another type of data relating to fault in block906includes biometric data. Biometric data may be collected by sensors inside a vehicle regarding any of the occupants of the vehicle (heart rate, breathing rate, etc.) or it may be collected after a crash (e.g., police-administered sobriety test). A camera or other sensor inside a vehicle may record footage of a driver's posture, driving decisions, and/or other information regarding the interior of the vehicle (e.g., children in back seat, whether seatbelts were fastened, other distractions to the driver inside the vehicle, etc.). In some implementations, a claims auditor908determines fault based upon the data regarding fault on the blockchain902. 
The claims auditor908may analyze the data relating to fault at the request of one of the parties to the subrogation claim, an arbitrator, etc. The analysis performed by the claims auditor908may include, without limitation, a forensic analysis of vehicle control decisions and actions prior to a crash, analysis of driver actions, analysis of vehicles not involved in a crash and/or the subrogation claim on the blockchain902, etc. FIG.10is a signal diagram1000of an exemplary process flow for resolving a subrogation claim in a shared ledger network associated with one aspect of the present disclosure. When a subrogation claimant1002pays an insured loss (1012) for which the subrogation claimant's insured was not at fault, the subrogation claimant1002deploys a smart contract (1014) to a shared ledger network1004for resolving subrogation claims. Operation1014may include depositing a bond with the smart contract in a token having value such that the rules of the smart contract determine when and to whom the bond will be released depending on the outcome of the subrogation claim (or lack of resolution of the subrogation claim). The subrogation claimant1002sends data to the smart contract (1016) either as part of operation1014or separately to populate the smart contract with information pertaining to the subrogation claim. Third parties1006send data to the smart contract (1017) to supply evidence of the claimed loss. The information may include estimates of repair to covered property that was at-loss or for repairs that had already occurred and costs that had already been incurred (e.g., auto repairs, temporary transportation expense, etc.). The transaction1017may include a cryptographic proof of identity of the third party1006such that only trusted entities may supply data to the shared ledger network1004. Operation1017repeats with different third parties1006until no more third parties1006add data to the shared ledger1004. Data sent to the smart contract (1017) may include data relating to fault in the subrogation claim. The data relating to fault may be sent at the request of one of the parties to the subrogation claim or by the parties themselves. Third parties may also include autonomous sources or autonomous vehicles. Devices that have networking connectivity may detect the occurrence of a loss and may report loss data to other third parties1006or directly to the blockchain. In one implementation, a vehicle (such as a smart or autonomous vehicle) includes sensors that can detect when a collision has occurred (e.g., airbag deployed, crash codes reported by on-board electronics, etc.). The vehicle may autonomously transmit the data to the blockchain or to other third parties who may, in turn, send the data to the blockchain. The vehicle data may be sent automatically upon detection of a collision and it may be signed with a cryptographic key on the vehicle that cryptographically proves the data originated with the vehicle that was involved in the collision. Due to the immutable nature of a blockchain, autonomous uploading of vehicle crash sensor data also cryptographically proves the time at which the data existed. For example, if vehicle crash sensor data becomes part of a shared ledger as of a certain time and date, it proves the information existed as of that time and date. If crash sensor data becomes part of the shared ledger within, for example, 10 minutes of a collision, then the blockchain proves the data existed as of 10 minutes after the collision. 
The closer in time the data was recorded to the actual event it describes, the more the parties to the subrogation claim may rely on the blockchain to provide evidence that the crash data was not manipulated or modified. A subrogation defendant1010sends data to the smart contract (1020) to indicate acceptance or rejection of the components of the subrogation claim and/or the entire claim. At block1026, an arbitration request is made to a third party if one of the parties has demanded arbitration. The arbitration demand may be made on-chain by altering the state data of a smart contract associated with the subrogation claim. The subrogation claim is arbitrated at1030and, if the subrogation claim is settled (1022), the subrogation defendant makes a subrogation settlement payment (1024). If the subrogation claim is not successfully arbitrated at1030, the work flow returns ("NO") at block1022to sending data to the smart contract. Exemplary Shared Ledger Operations FIG.11depicts exemplary operations1100for resolving a subrogation claim in a shared ledger network associated with one aspect of the present disclosure. A payment operation1102pays a claim for an insured loss. If the entity that performs the payment operation1102believes it is entitled to a subrogation payment from another insurer because the other insurer's customer was at-fault in the event that caused the insured loss, then the entity that performs the payment operation1102deploys a subrogation claim. The deploying operation1104may include broadcasting a transaction to a shared ledger network identifying the subrogation defendant or the deploying operation1104may include deploying a smart contract to a shared ledger to manage and resolve the subrogation claim. A broadcasting operation1106broadcasts an update to the subrogation claim deployed to the shared ledger. The broadcasting operation1106may initiate components of the subrogation claim, such as damages incurred in various areas, and may include data relating to fault. The broadcasting operation1106may be part of the deploying operation1104or the broadcasting operation may be a separate operation. A receiving operation1108receives a subrogation claim settlement payment upon resolution of the subrogation claim. The subrogation claim settlement payment may be disbursed according to the rules of a smart contract deployed on the blockchain. FIG.12depicts exemplary operations1200for resolving a subrogation claim in a shared ledger network associated with one aspect of the present disclosure. A monitoring operation1202monitors a blockchain for an indication of a subrogation claim including data regarding fault. The monitoring operation may be performed by a potential subrogation claim defendant or it may be performed by an independent entity on behalf of a subrogation claim defendant. In one implementation, insurers are subject to a common blockchain agreement that obligates the insurers to periodically check the blockchain for any pending claims against the insurer, such as subrogation claims. A determining operation1204determines a party at fault in the subrogation claim based at least in part on the data relating to fault in the subrogation claim on the blockchain. A broadcasting operation1206broadcasts to a blockchain network an indication of acceptance of the evidence regarding the insured loss if the evidence satisfies the acceptance condition. In one implementation, the broadcasting operation1206broadcasts a transaction that modifies a portion of the ledger representing the subrogation claim. 
A cryptographically signed message may be included in the transaction signifying the acceptance. In another implementation, the broadcasting operation1206sends data to a smart contract deployed on the blockchain, the data sent to the smart contract indicating the acceptance of the evidence. A remitting operation1210remits a payment to the subrogation claimant in settlement of the subrogation claim. The remitting operation1210may be performed off-chain or it may involve transmitting a token having value on the blockchain. In at least one implementation, a smart contract is programmed to disburse the subrogation settlement claim to the subrogation claimant upon the satisfaction of certain conditions (e.g., the acceptance of the subrogation defendant). Exemplary Computer-Implemented Methods for Settling a Subrogation Claim by a Shared Ledger with Data Relating to Fault In one aspect, a computer-implemented method of settling a subrogation claim by a shared ledger may be provided. The method may include, via one or more processors, servers, and/or associated transceivers: (1) paying a claim to an insured for an insured loss; (2) generating and/or deploying an electronic or virtual subrogation claim to a shared ledger including data relating to fault (such as by transmitting the subrogation claim via wireless communication or data transmission over one or more radio frequency links or communication channels), the subrogation claim identifying a subrogation defendant and including information regarding the insured loss; (3) broadcasting an update to the subrogation claim deployed to the shared ledger (such as via wireless communication or data transmission over one or more radio frequency links or communication channels); and/or (4) receiving (such as via wireless communication or data transmission over one or more radio frequency links or communication channels) a subrogation claim settlement payment upon resolution of the subrogation claim to facilitate resolving subrogation claims. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer-implemented method of settling a subrogation claim by a shared ledger may be provided. The method may include, via one or more processors, servers, and/or associated transceivers: (1) paying a claim to an insured for an insured loss; (2) generating a subrogation claim, the subrogation claim including data relating to fault and identifying a subrogation defendant and including information regarding the insured loss; (3) transmitting or otherwise deploying the subrogation claim to a shared ledger; (4) broadcasting an update to the subrogation claim deployed to the shared ledger; and/or (5) receiving a subrogation claim settlement payment upon resolution of the subrogation claim. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer-implemented method of settling a subrogation claim by a shared ledger may be provided. 
The method may include, via one or more processors, servers, and/or transceivers: (1) monitoring a blockchain for an indication of a subrogation claim with data relating to fault, the subrogation claim identifying a subrogation claimant and including evidence regarding an insured loss; (2) determining whether the evidence regarding the insured loss satisfies an acceptance condition; (3) broadcasting to a blockchain network an indication of acceptance of the evidence regarding the insured loss if the evidence regarding the insured loss satisfies the acceptance condition; and/or (4) remitting a payment to the subrogation claimant in settlement of the subrogation claim. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer-implemented method of settling a subrogation claim by a shared ledger may be provided. The method may include, via one or more processors, servers, and/or associated transceivers: (1) monitoring a blockchain for an indication of a subrogation claim, the subrogation claim including data relating to fault and identifying a subrogation claimant and including evidence regarding an insured loss; (2) determining whether the evidence regarding the insured loss satisfies an acceptance condition; (3) generating an indication of acceptance of the evidence regarding the insured loss if the evidence regarding the insured loss satisfies the acceptance condition; (4) broadcasting (such as via wireless communication or data transmission over one or more radio frequency links or communication channels) to a blockchain network the indication of acceptance of the evidence; and/or (5) remitting a payment to the subrogation claimant in settlement of the subrogation claim. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. Exemplary Validating Network Node on a Shared Ledger In one aspect, a validating network node on a shared ledger network may be provided. The node may include (1) a transceiver configured to exchange shared ledger data with peer network nodes, the shared ledger data including subrogation claim transactions including data relating to fault; (2) a storage media configured to store a copy of the shared ledger; and/or (3) a transaction validator configured to apply a set of consensus rules to shared ledger data received from the peer network nodes, the transaction validator being further configured to append shared ledger data received from peer nodes to the copy of the shared ledger if the shared ledger data satisfies the consensus rules. The node may include additional, less, or alternate functionality, including that discussed elsewhere herein. Additional Considerations This detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. Numerous alternate embodiments may be implemented, using either current technology or technology developed after the filing date of this application. With the foregoing, a user may be an insurance customer who may opt in to a rewards, insurance discount, or other type of program. After the insurance customer provides their affirmative consent, an insurance provider remote server may collect data from the customer's mobile device, smart home controller, smart or autonomous vehicle, or other smart devices—such as with the customer's permission or affirmative consent. 
The data collected may be related to smart home functionality (or home occupant preferences or preference profiles), smart or autonomous vehicle functionality, and/or insured assets before (and/or after) an insurance-related event, including those events discussed elsewhere herein. In return, risk-averse insureds, such as home or vehicle owners, or home, vehicle, or apartment occupants may receive discounts or insurance cost savings related to home, renters, personal articles, auto, mobile, and other types of insurance from the insurance provider. In one aspect, smart or interconnected home data, autonomous or smart vehicle data, and/or other data, including the types of data discussed elsewhere herein, may be collected or received by an insurance provider remote server, such as via direct or indirect wireless communication or data transmission from a smart home controller, mobile device, or other customer computing device, after a customer affirmatively consents or otherwise opts in to an insurance discount, reward, or other program. The insurance provider may then analyze the data received with the customer's permission to provide benefits to the customer. As a result, risk-averse customers may receive insurance discounts or other insurance cost savings based upon data that reflects low risk behavior and/or technology that mitigates or prevents risk to (i) insured assets, such as homes, personal belongings, or vehicles, and/or (ii) home or apartment occupants, or vehicle passengers. Furthermore, although the present disclosure sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the description is defined by the words of the claims set forth at the end of this patent and equivalents. The detailed description is to be construed as exemplary only and does not describe every possible embodiment since describing every possible embodiment would be impractical. Numerous alternative embodiments may be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. 
These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (e.g., code embodied on a machine-readable medium or in a transmission signal) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In exemplary embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware modules may provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. 
A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and may operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a business or home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations. Unless specifically stated otherwise, discussions herein using words such as "processing," "computing," "calculating," "determining," "presenting," "displaying," or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. As used herein any reference to "one embodiment" or "an embodiment" means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment. Some embodiments may be described using the expression "coupled" and "connected" along with their derivatives. For example, some embodiments may be described using the term "coupled" to indicate that two or more elements are in direct physical or electrical contact. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context. 
As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having" or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). In addition, the articles "a" and "an" are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description, and the claims that follow, should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise. The patent claims at the end of this patent application are not intended to be construed under 35 U.S.C. § 112(f) unless traditional means-plus-function language is expressly recited, such as "means for" or "step for" language being explicitly recited in the claim(s).
70,314
11861730
DETAILED DESCRIPTION The present embodiments may relate to, inter alia, maintaining a distributed ledger to enforce a plurality of smart contracts associated with a plurality of autonomous vehicles. In some aspects, the present embodiments relate to autonomous vehicle operation monitoring and/or assessment. The operation of the autonomous vehicles may impact the obligations of various parties associated with the autonomous vehicle, for example, an operator of the autonomous vehicle, a manufacturer of the autonomous vehicle, an insurer of the operator, an insurer of the autonomous vehicle, and/or other parties. To this end, the present embodiments may leverage the use of a distributed ledger and/or smart contracts to codify and/or automatically enforce these obligations. A distributed ledger is a transactional record that is maintained at each node of a peer to peer network. Commonly, the distributed ledger is comprised of groupings of transactions bundled together into a “block.” When a change to the distributed ledger is made (e.g., when a new transaction and/or block is created), each node must form a consensus as to how the change is integrated into the distributed ledger. Upon consensus, the agreed upon change is pushed out to each node so that each node maintains an identical copy of the updated distributed ledger. Any change that does not achieve a consensus is ignored. Accordingly, unlike a traditional, centralized ledger, a single party cannot unilaterally alter the distributed ledger. In an application of distributed ledgers, each new block may be cryptographically linked to the previous block in order to form a “blockchain.” More particularly, to create a new block, each transaction within a block may be assigned a hash value (i.e., an output of a cryptographic hash function, such as SHA-2 or MD5). These hash values may then be combined together utilizing cryptographic techniques (e.g., a Merkle Tree) to generate a hash value representative of the entire new block. This hash value may then be combined with the hash value of the previous block to form a hash value included in the header of the new block, thereby cryptographically linking the new block to the blockchain. To this end, the precise value utilized in the header of the new block is dependent on the hash value for each transaction in the new block, as well as the hash value for each transaction in every prior block. According to aspects, the hash value generated for the new block may be used as an input to a cryptographic puzzle that manipulates a nonce value. When a solution to the cryptographic puzzle is found, the solving node publishes the solution and the other nodes then verify that the solution is the correct solution. Because the solution also depends on the particular hash values for each transaction within the blockchain, if the solving node attempted to modify any transaction, the solution would not be verified by the other nodes. More particularly, if a single node attempts to modify a prior transaction within the blockchain, a cascade of different hash values are generated for each tier of the cryptographic combination technique. This results in the header for one or more blocks being different than the corresponding header(s) in every other node that did not make the exact same modification. As a result, the solution generated by the modifying node would not solve the cryptographic puzzle presented to any node without the identical modification. 
Thus, the version of the new block generated by the modifying node is readily recognized as including an improper modification and is rejected by the consensus. This inability to modify past transactions leads to blockchains being generally described as trusted, secure, and/or immutable. A smart contract is a computer protocol that enables the automatic execution and/or enforcement of an agreement between different parties. The smart contract may include one or more trigger conditions that, when satisfied, correspond to one or more actions. For some smart contracts, which action(s) from the one or more actions are performed is determined based upon one or more decision conditions. An enforcement entity corresponding to the smart contract may subscribe to one or more data streams including data related to a trigger condition and/or a decision condition. Accordingly, the enforcement entity may route the data streams to the smart contract so that the smart contract may detect that a trigger condition has occurred and/or analyze a decision condition to direct the enforcement entity to perform one or more actions. As an example, a pay-per-trip insurance agreement may include a maximum distance the autonomous vehicle may traverse in each trip. In this example, a driver and the pay-per-trip insurer may generate a smart contract to insure a particular trip. In response, the enforcement entity may receive an odometer data stream from the covered vehicle. If the autonomous vehicle incurs liability during the trip (e.g., a trigger event occurred), the smart contract may automatically analyze the odometer data feed to determine whether the autonomous vehicle was operated within the bounds of the maximum distance in the insurance agreement (e.g., a decision condition). Accordingly, the smart contract may direct the performance of an action to automatically assign liability to an operator or the insurer based upon the odometer data feed. In another example, an insurer of an autonomous vehicle and a manufacturer of an autonomous vehicle may generate a smart contract to divide the liability for damage to and/or caused by the autonomous vehicle. In particular, the insurer may agree to cover liability incurred during manual operation and the manufacturer may agree to cover liability incurred during autonomous operation. The enforcement entity for this smart contract may subscribe to a data feed indicative of a control state of the autonomous car. Accordingly, in response to the autonomous vehicle incurring liability (e.g., a trigger event occurred), the smart contract may direct the performance of an action to generate a claim and assign it to the appropriate entity based upon the control state (e.g., a decision condition). Of course, sensors monitoring an autonomous vehicle may be leveraged to facilitate many other types of liability arrangements in a generated smart contract. Given the relative ease of modifying computer files, including a smart contract computer file, and the parties' competing incentives, there needs to be a system that all parties trust to fairly and accurately regulate and enforce the smart contract. For at least the above reasons, a distributed ledger and/or a blockchain system may be utilized to establish such a trusted system. To this end, the distributed ledger may be leveraged to record the smart contract and/or the data related to the trigger conditions and/or decision conditions of the smart contract.
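To make the trigger-condition, decision-condition, and action structure described above concrete, the following is a minimal Python sketch of a pay-per-trip arrangement of the kind just described; the class, field names, and the 25-mile limit are hypothetical illustrations rather than part of the disclosed system.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PayPerTripContract:
        vehicle_id: str
        max_trip_miles: float

        def trigger(self, transaction: dict) -> bool:
            # Trigger condition: the transaction reports a liability-inducing event.
            return transaction.get("liability", 0) == 1

        def decide(self, odometer_miles: float) -> str:
            # Decision condition: was the trip within the agreed maximum distance?
            return "insurer" if odometer_miles <= self.max_trip_miles else "operator"

        def enforce(self, transaction: dict, odometer_miles: float) -> Optional[dict]:
            # Action: direct the enforcement entity to generate a claim assigning liability.
            if not self.trigger(transaction):
                return None
            return {"action": "generate_claim",
                    "vehicle": self.vehicle_id,
                    "assign_liability_to": self.decide(odometer_miles)}

    contract = PayPerTripContract(vehicle_id="AV-105a", max_trip_miles=25.0)
    print(contract.enforce({"liability": 1}, odometer_miles=18.2))  # within the limit -> insurer

In this sketch the enforcement entity simply calls enforce() with the routed transaction and the subscribed odometer reading; everything else about how the data arrives is left to the surrounding system.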
More particularly, the data utilized to determine the presence of a trigger condition and/or to analyze decision conditions to determine an action may be recorded within a transaction included in the distributed ledger. By recording this data in the distributed ledger, there is a public and trusted record of the smart contract and the reasoning behind any action performed as directed by the smart contract. As a result, the parties that generated the smart contract may automatically enforce their contracts in a transparent and objective manner. For at least this reason, an entity that regularly generates smart contracts, such as an insurer, may establish a distributed ledger to govern and enforce a plurality of its smart contracts. According to certain aspects, the distributed ledger may be a public ledger (each node may readily view the underlying data of each transaction), a private ledger (the underlying data needs an encryption key to be viewed), or a combination of public and private ledger aspects. According to certain aspects, an electronic device associated with each vehicle may execute an application to monitor operational autonomous vehicle data that is relevant to the enforcement of a smart contract. The application may interpret the operational data to generate a “transaction” or a time-stamped record of the relevant operational data. In one embodiment, the transaction may include an identification of the autonomous vehicle or operator, a time of the transaction, and an indication of one or more vehicle conditions relevant to a smart contract. In one embodiment, the application may process the operational data to create the indication of the vehicle condition. For example, the application may process an airbag activation event to determine that the autonomous vehicle was involved in a collision. As a result, the application may generate a transaction that indicates a liability-inducing event occurred. The transaction may further include data relating to one or more decision conditions that the smart contract analyzes to determine an action to perform in response to the trigger condition. As another example, the presence of microvibrations in a steering wheel may indicate that a vehicle operator does not have his or her hands on the steering wheel, such as is likely to occur in an autonomous operation mode. Accordingly, the application may monitor the presence of these microvibrations. In one scenario, when a transaction indicative of a trigger condition is generated, the application polls the microvibration sensor to determine a control state of the vehicle and generates an indication of the control state to include in the transaction. In another scenario, when the application detects that the microvibrations appear or disappear, the application may generate a transaction indicative of a shift to autonomous or manual operation, respectively. In one aspect, the application may receive an indication from an enforcement entity that the enforcement entity is subscribing to a data stream associated with the autonomous vehicle. For example, when a new smart contract is generated, the enforcement entity may subscribe to one or more data streams related to a trigger condition and/or decision condition associated therewith. Accordingly, in response to the subscription request, the application may monitor one or more sensors relevant to trigger conditions and/or the decision conditions associated with the smart contract.
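A minimal Python sketch of how the on-vehicle application might handle such a subscription and emit a transaction only when a monitored condition changes state is shown below; the sensor interface, class, and field names are assumptions made for illustration.

    import time

    class VehicleMonitorApp:
        def __init__(self, vehicle_id, sensors):
            self.vehicle_id = vehicle_id
            self.sensors = sensors      # mapping: sensor name -> callable returning a reading
            self.watched = {}           # condition name -> sensor name
            self.last_state = {}

        def subscribe(self, condition, sensor_name):
            # The enforcement entity subscribes to a data stream for a trigger or
            # decision condition; the app begins monitoring the relevant sensor.
            self.watched[condition] = sensor_name

        def unsubscribe(self, condition):
            self.watched.pop(condition, None)

        def poll(self):
            # Returns a transaction only when a watched condition changes state.
            for condition, sensor_name in self.watched.items():
                state = self.sensors[sensor_name]()
                if self.last_state.get(condition) != state:
                    self.last_state[condition] = state
                    return {"vehicle_id": self.vehicle_id,
                            "timestamp": time.time(),
                            "condition": condition,
                            "state": state}
            return None

    # Example: a (simulated) microvibration sensor reporting a constant reading.
    app = VehicleMonitorApp("AV-105a", {"microvibration": lambda: 1})
    app.subscribe("control_state", "microvibration")
    print(app.poll())   # first reading establishes state and yields a transaction
    print(app.poll())   # no change -> no transaction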
It should be appreciated that the electronic device may monitor these sensors for other purposes (e.g., controlling the operation of the autonomous vehicle). Accordingly, as it is used herein, “monitoring” may refer to the act of monitoring these sensors for the purpose of enforcing a smart contract and/or generating a transaction that may be included in the distributed ledger. In some embodiments, after the enforcement and/or termination of the smart contract, the application may receive an indication that the enforcement entity is unsubscribing from a data stream associated with the autonomous vehicle. Accordingly, the application may cease monitoring the relevant sensors in response to the unsubscribe request. According to certain aspects, not all events that incur liability have the same priority. For example, an autonomous vehicle involved in a “fender bender” may be a lower priority than an autonomous vehicle involved in a head-on collision. However, in some embodiments, the enforcement entity may not compile and analyze the block of transactions until a threshold time has elapsed. As a result, in an emergency situation, precious time may be wasted waiting for the block to be compiled. Accordingly, the application may include in each transaction a priority indication. When the application transmits to the enforcement entity a transaction with an urgent priority indication, the reception of the urgent transaction may trigger the enforcement entity to compile a new block that includes any pending transactions received after the prior compilation period. Consequently, the urgent transaction may be processed faster than would be possible in a traditional blockchain implementation. It should be appreciated that while the block size may generally vary based upon the number of transactions received, the aperiodic compilation of a block in response to an urgent transaction may cause a greater variation in the block size than a traditional blockchain. In one aspect, the systems and methods discussed herein address a challenge that is particular to blockchains. As an example, the challenge relates to reducing the amount of data included in the blockchain to keep the size of the blockchain within reasonable bounds. To this end, by the application processing the operational data of the autonomous vehicles to create state-based transactions instead of operational data-based transactions, fewer transactions are generated. Instead of a transaction being generated each sample period of the operational data, a transaction may only be generated in response to a change in state of a condition relevant to a smart contract. This may reduce the number of transactions generated at each autonomous vehicle, enabling the enforcement entity to compile blocks having a smaller, faster-to-process block size. Moreover, because the blocks are transmitted to each validation entity, decreasing the block size may reduce network congestion, enabling each block to be validated faster. As a result, the systems and methods are necessarily rooted in computer technology in order to overcome the aforementioned shortcomings that specifically arise in the realm of blockchain technology. Moreover, in one aspect, the enforcement entity may further reduce the size of the blockchain by pruning old blocks and/or transactions.
For example, when a driver purchases a smart contract insurance product for a single trip, if no collision occurs during that trip, there may be no need to maintain the transactions relating to the trip upon its conclusion. Similarly, even if a collision occurs, once liability for any damage has been assigned and processed, there may be no need to include the data in the blockchain. Accordingly, the enforcement entity may analyze the blockchain for any transactions that may be pruned from the blockchain. However, as described above, simply removing the transaction from the distributed ledger may change one or more hash values within the block, causing the pruned block to be rejected when consensus is sought. Accordingly, pruning may involve removing the underlying transaction data but maintaining the header and/or hash value of the transaction. Because the header and/or hash value generally requires less storage than the underlying data, pruning can reduce the block size of older blocks while still being able to form a consensus on a blockchain that includes the pruned block. In another aspect, to further reduce the size of the blocks, the systems and methods disclosed may reduce duplicate and/or correlated transactions generated by one or more vehicles. To this end, when one or more autonomous vehicles collide with one another, the applications monitoring the operation of the respective autonomous vehicles may exchange operational data for a period of time leading up to the collision. Based upon the operational data respectively corresponding to each of the one or more vehicles, one of the applications may generate a single transaction that includes the condition of each of the one or more autonomous vehicles involved in the collision. In some embodiments, this transaction may include decision condition data indicating a relative fault for each of the one or more autonomous vehicles based upon the analysis of the respectively corresponding sets of operational data. Accordingly, the single transaction may be routed to a plurality of smart contracts associated with any one of the one or more autonomous vehicles involved in the collision. Exemplary Environments for Maintaining Distributed Ledger FIG.1Adepicts an exemplary environment100maintaining a distributed ledger for the enforcement of a plurality of smart contracts. AlthoughFIG.1Adepicts certain entities, components, and devices, it should be appreciated that additional or alternate entities and components are envisioned. As illustrated inFIG.1A, the environment100may include a plurality of autonomous vehicles105a-f. As it is generally used herein, the term “autonomous vehicle” refers to any vehicle with autonomous (or even semi-autonomous) capabilities. Thus, “autonomous vehicle” is not limited to fully autonomous vehicles (SAE level 5) and includes even partially automated vehicles (SAE level 2). It should be appreciated that in fully autonomous vehicles, an “operator” may include a person that provides navigational input to the autonomous vehicle and/or a person located within the vehicle at a location wherein the person is capable of engaging manual control should the need and/or desire arise. As illustrated on the autonomous vehicle105a, the autonomous vehicle105amay include one or more sensors101a-bthat monitor the operational status of the autonomous vehicle105a.
The sensors101may include, for example, a pressure sensor, a gyroscope, an accelerometer, an odometer, a vibration sensor, a microphone, an image sensor, a temperature sensor, and/or a radar or LIDAR sensor. Some of the sensors101may be included in the autonomous vehicle105aby a manufacturer of the vehicle105a, and others of the sensors101may be retrofitted onto the vehicle105aat some point after manufacture. For example, a fleet manager may retrofit the vehicle105awith a particular type of sensor that relates to a smart contract frequently generated by the fleet manager. The autonomous vehicle105amay further include an electronic device103configured to interpret operational data generated by the sensors101. AlthoughFIG.1Aillustrates the electronic device103as a processing unit of the vehicle105ainterconnected to the sensors101via a communication bus of the vehicle105a, in other embodiments the electronic device103may be a personal electronic device (e.g., a mobile phone, a tablet, a laptop computer, a smart watch, smart glasses, other types of wearable electronics, an on-board diagnostic monitor, and so on) associated with an operator of the vehicle105a. In these embodiments, the personal electronic device may receive the operational data via a wireless interface (e.g., a Bluetooth interface, a Wi-Fi interface, or other known wireless communication interfaces) or a wired interface (e.g., an OBD port, a USB interface, an auxiliary interface, or other known wired communication interfaces). Additional information describing the operation of autonomous vehicles may be found in co-owned U.S. patent application Ser. No. 14/713,249 entitled “AUTONOMOUS VEHICLE OPERATION FEATURE MONITORING AND EVALUATION OF EFFECTIVENESS,” the entire disclosure of which is hereby incorporated by reference. Regardless of the particular type of electronic device, the electronic device103may include an application configured to analyze the operational data generated by the sensors101. More particularly, the application may be configured to analyze the operational data to detect a plurality of conditions (e.g., trigger conditions or decision conditions) associated with the vehicle105a. Periodically and/or in response to a change in condition, the application may generate a transaction that incorporates one or more of the detected conditions. According to certain aspects, the transaction may include indications of the one or more conditions, an identifier of the vehicle105aand/or the operator of the vehicle105a, a timestamp, an indication of a priority, and/or a portion of the operational data upon which the one or more detected conditions may be based. The electronic device103may transmit generated transactions via an antenna104. AlthoughFIG.1Aillustrates the antenna104as being separate from the electronic device103, it should be appreciated that for some types of electronic devices, such as a mobile phone, the antenna104may be included in the electronic device103itself. According to certain aspects, the electronic device103may also be configured to receive control signals from a command center and/or other remote computing device (not depicted) to remotely control the operation of the vehicle105a. In some scenarios, the control signals are indicative of a remote user actively controlling the vehicle105a(e.g., the remote user is piloting the vehicle105aas a drone operator would pilot a drone).
In other scenarios, the control signals are indicative of particular actions the autonomous systems of the vehicle105ashould undertake. For example, an ambulance may broadcast a control signal to nearby autonomous vehicles to cause the autonomous vehicles to yield to the ambulance and/or pull over to the side of the road. The plurality of autonomous vehicles105a-fmay be configured to communicate with an enforcement server115via one or more communication networks110. The networks110may facilitate any data communication between the plurality of autonomous vehicles105a-fand the enforcement server115via any standard or technology (e.g., GSM, CDMA, TDMA, WCDMA, LTE, EDGE, OFDM, GPRS, EV-DO, UWB, IEEE 802 including Ethernet, WiMAX, and/or others). According to present embodiments, the plurality of autonomous vehicles105a-ftransmit generated transactions to the enforcement server115via the networks110. In some embodiments, the networks110may include a mesh or ad hoc network wherein a portion of the plurality of autonomous vehicles105a-ffunctions as nodes of the mesh or ad hoc network. Thus, in some embodiments, a transaction generated at the autonomous vehicle105amay be routed to, for example, the autonomous vehicle105cand the autonomous vehicle105fprior to the enforcement server115. It should be appreciated that the standard or technology used to communicate between and among the plurality of autonomous vehicles105a-fis not necessarily the same standard or technology utilized to communicate between one of the plurality of autonomous vehicles105a-fand the enforcement server115. In addition to the transactions, in some embodiments, one or more of the plurality of autonomous vehicles105may exchange operational data over the mesh or ad hoc network in response to the one or more of the plurality of autonomous vehicles being involved in a collision. According to certain aspects, the enforcement server115may be configured to compile new blocks to add to a blockchain and to enforce a plurality of smart contracts. AlthoughFIG.1Aillustrates a single enforcement server115, it should be appreciated that in some embodiments, the enforcement server115may be a plurality of interconnected servers, for example, in a cloud computing environment. In one aspect, the enforcement server115may periodically compile a plurality of transactions received from the plurality of autonomous vehicles105. The enforcement server115may also aperiodically compile a plurality of transactions received from the plurality of autonomous vehicles105in response to receiving an urgent transaction. After the new block is compiled, the enforcement server115may transmit the new block to the plurality of autonomous vehicles105and/or dedicated validation entities135to generate a solution to incorporate the block into the blockchain and/or to form a consensus on the solution. AlthoughFIG.1Aillustrates the dedicated validation entities135as being separate from the enforcement server115, it should be appreciated that the enforcement server115may itself include a module dedicated to generating a solution to the cryptographic puzzle and/or forming a consensus on the solution. In another aspect, the enforcement server115may analyze a smart contract database (not depicted) to determine whether any transactions compiled into the new block are associated with a smart contract.
To this end, the enforcement server115may extract from each transaction one or more indications identifying an autonomous vehicle and/or an operator of the autonomous vehicle and route the transaction to a respectively corresponding one or more smart contracts that govern the identified autonomous vehicle and/or operator. In one scenario, the transaction may include a plurality of data relating to the status of a trigger condition and/or one or more decision conditions. In response, the particular smart contract may direct the enforcement server115to perform an action to enforce the particular smart contract. For example, the action may be to generate and/or file an insurance claim. Depending on the action, the enforcement server115may execute one or more third party applications125to carry out the action. In the insurance claim example, a third party insurer may include an application configured to generate and/or process the insurance claim based upon data included in the transaction. As another example, an emergency response entity (e.g., an EMT) may include an application in the third party applications125to dispatch a responder to a location of an autonomous vehicle. In some scenarios, a decision condition requires the analysis of data not generated at an autonomous vehicle. As an example, a decision condition may be related to a weather condition at the time liability occurred (e.g., the presence of rain when the liability was incurred). Accordingly, the smart contract may interact with one or more third party applications125to retrieve this additional decision condition data. In this example, one of the third party applications125may be a weather service application capable of outputting weather conditions at the location of the autonomous vehicle at the time indicated by the timestamp of the transaction. In one aspect, the smart contract may modify the transaction to include the additional condition data (assuming the transaction has not been compiled into a block) and/or generate a new transaction that indicates the additional condition data. The exemplary environment100may include additional, fewer, or alternate equipment or components, including those discussed elsewhere herein. Further, in some embodiments, the actions described as being performed by the enforcement server115may additionally or alternatively be performed at one or more of the autonomous vehicles105a-f. Turning now toFIG.1B, depicted is another exemplary environment150for maintaining a distributed ledger associated with autonomous vehicles. AlthoughFIG.1Bdepicts certain entities, components, and devices, it should be appreciated that additional or alternate entities and components are envisioned. As illustrated inFIG.1B, the environment150may include a distributed ledger145. The distributed ledger145may be maintained via a network of nodes, including one or more autonomous vehicles105and/or an enforcement server115. The nodes may have access to the distributed ledger145and/or generate data included in the distributed ledger145. As described above, the distributed ledger145may not be changed without first forming a consensus on the change. Accordingly, as depicted byFIG.1B, the distributed ledger145may be considered separate from any individual node, even though the individual nodes may store local copies of the distributed ledger145. According to certain aspects, as described with respect toFIG.1A, the autonomous vehicle105may include a plurality of sensors101a-b, an electronic device103, and/or an antenna104.
The autonomous vehicle105may communicate with the enforcement server115via the electronic device103and/or the antenna104. As illustrated, the enforcement server115may include a blockchain manager117. The blockchain manager117may be a software program, engine, and/or a module that is executed by one or more processors interconnected with the enforcement server115. In one embodiment, the blockchain manager117may compile a plurality of transactions into a block, update the distributed ledger145to include a block, route transaction data to one or more smart contracts, and/or automatically enforce one or more smart contracts. According to certain aspects, an operator of the enforcement server may interact with a management interface119to control aspects of the distributed ledger145and/or set control parameters associated with the blockchain manager117. For example, a period for which blocks are generated may be set via the management interface119. In an aspect, the plurality of smart contracts associated with the distributed ledger145may be stored in a smart contract database130. AlthoughFIG.1Bdepicts the smart contract database130as a part of the enforcement server115, the smart contract database may be maintained within the distributed ledger145. According to certain aspects, one or more public devices123may access data stored at the enforcement server via a public interface121. The public interface121may be a read-only interface that prevents the one or more public devices123from writing transactions to the distributed ledger145. To this end, the one or more public devices123may be used, for example, to view data maintained within the distributed ledger145, to view the status of one or more smart contracts associated with the distributed ledger145, to compile statistics regarding data maintained in the distributed ledger, and so on. Additionally or alternatively, one or more third party applications125may interact with the distributed ledger145via an API127of the enforcement server115. The third party applications125may be associated with one or more entities associated with an autonomous vehicle. For example, the third party applications125may include an application to generate and/or file an insurance claim, send a repair request, send a tow request, contact an emergency service provider, and so on. It should be appreciated that althoughFIG.1Bdepicts the third party applications125as separate from the enforcement server115, in some embodiments a portion of the third party applications125may be stored locally at the enforcement server115. The exemplary environment150may include additional, fewer, or alternate equipment or components, including those discussed elsewhere herein. Further, in some embodiments, the actions described as being performed by the enforcement server115may additionally or alternatively be performed at one or more of the autonomous vehicles105. Exemplary Flow Diagrams for Maintaining Distributed Ledger Turning now toFIG.2A, illustrated is an exemplary flow diagram200associated with compiling a plurality of transactions into blocks. As illustrated, each transaction may include several components. A first component may include an identification of a vehicle and/or a driver associated with the transaction. The vehicle identification may be a VIN, an identifier assigned by a fleet operator, a license plate, or any other identifier that corresponds to a particular autonomous vehicle.
The driver identification may be a name, a policy or account number, a username, or any other identifier that corresponds to a person operating the autonomous vehicle. Although the transaction illustrated byFIG.2Aonly depicts a single vehicle or driver identification, it should be appreciated that some transactions may include an identification of a plurality of vehicles and/or drivers. For example, if multiple autonomous vehicles are involved in a collision, a single transaction may include a vehicle and/or driver identification for each autonomous vehicle involved in the collision. Further, each transaction may include a timestamp indicating a time the transaction was generated and/or a time the underlying data was measured. Another component of each transaction may be a transaction information component. The transaction information may include a plurality of condition data that are analyzed by a smart contract associated with the vehicle and/or driver. In some aspects, the vehicle and/or driver identification and the timestamp may be viewed as a transaction header, whereas the transaction information may be viewed as the transaction payload. The transaction information may include an indication that a trigger condition occurred, an indication related to one or more decision conditions, and/or any underlying operational data generated by sensors within an autonomous vehicle that the condition indications are based upon. For example, the transaction information may include an indication that a liability-inducing event occurred (e.g., a trigger condition) and/or an indication that the autonomous vehicle was operated in a manual operation mode (e.g., a decision condition). According to illustrated embodiments, a plurality of transactions may be compiled into a block. In one scenario, a plurality of transactions generated by a plurality of autonomous vehicles205a-zare compiled into a block245a. For example, each of the plurality of autonomous vehicles may transmit transactions to an enforcement server (such as the enforcement server115as described with respect toFIG.1) for compilation into the block245a. In another example, an autonomous vehicle205acompiles a plurality of transactions generated at the autonomous vehicle205ainto a block245bthat only includes transactions associated with the autonomous vehicle205a. In this example, the autonomous vehicle205amay transmit to the enforcement server the generated block245bfor distribution to a plurality of validation entities that attempt to solve a cryptographic puzzle based upon the header of the generated block245band/or form a consensus on said solution. The exemplary flow diagram200may include additional, fewer, or alternate actions, including those discussed elsewhere herein. Turning now toFIG.2B, depicted is an example flow diagram250indicating the generation of transactions in response to the operation of an autonomous vehicle, such as the autonomous vehicle105aas described with respect toFIG.1A. The autonomous vehicle may be associated with a smart contract230. In some embodiments, the smart contract230is stored within a distributed ledger and/or at an enforcement server. Although the smart contract230depicted by the flow diagram250is associated with assigning liability based upon autonomous or manual control of the autonomous vehicle, it is envisioned that other smart contracts may be associated with arrangements based upon the detection of other events.
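The header/payload split described above, with the transaction information carrying condition indications such as control state and liability status, might be represented as in the following minimal Python sketch; the field names and the choice of hash are illustrative assumptions rather than the disclosed transaction format.

    import hashlib, json, time

    def make_transaction(vehicle_id, driver_id, control, liability, raw_data=None):
        txn = {
            "header": {"vehicle_id": vehicle_id,      # header-like portion
                       "driver_id": driver_id,
                       "timestamp": time.time()},
            "info": {"control": control,              # decision condition indication
                     "liability": liability,          # trigger condition indication
                     "raw_data": raw_data or {}},     # optional underlying sensor data
        }
        # A hash over the transaction may later be combined with other transaction
        # hashes when the transaction is compiled into a block.
        txn["hash"] = hashlib.sha256(json.dumps(txn, sort_keys=True).encode()).hexdigest()
        return txn

    print(make_transaction("AV-105a", "policy-42", control=0, liability=0))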
The flow diagram250may begin at block252where an operator of the autonomous vehicle is manually operating the autonomous vehicle and no liability has been incurred. For example, the operator may have powered on the autonomous vehicle and begun to drive towards a destination. An electronic device associated with the autonomous vehicle may generate a transaction to be included in a block245cof the distributed ledger. The transaction information component of this transaction may include a flag that indicates that the autonomous vehicle is being operated manually (“Control: 0”) and/or a flag that indicates that the autonomous vehicle has not incurred liability (“Liability: 0”). At block254, the operator of the autonomous vehicle may have engaged autonomous control functionality associated with the autonomous vehicle. For example, the operator may have instructed the autonomous vehicle to automatically drive to the destination. In response, the electronic device may generate a transaction to be included in block245dof the distributed ledger. The transaction information component of this transaction may include a flag that indicates that the autonomous vehicle is being operated autonomously (“Control: 1”) and/or a flag that indicates that the autonomous vehicle has not incurred liability (“Liability: 0”). At block256, the operator of the autonomous vehicle may have disengaged autonomous control functionality associated with the autonomous vehicle. For example, the operator may resume manual control by interacting with the manual controls (e.g., a steering wheel, brake pedal, etc.). In response, the electronic device may generate a transaction to be included in block245eof the distributed ledger. The transaction information component of this transaction may include a flag that indicates that the autonomous vehicle is being operated manually (“Control: 0”) and/or a flag that indicates that the autonomous vehicle has not incurred liability (“Liability: 0”). At block258, the autonomous vehicle may have incurred liability. For example, the autonomous vehicle may have experienced a collision. In the illustrated scenario, the autonomous vehicle may have deployed an airbag in response to detecting the collision. According to aspects, the electronic device may generate a transaction to be included in block245fof the distributed ledger. The transaction information component of this transaction may include a flag that indicates that the autonomous vehicle is being operated manually (“Control: 0”) and/or a flag that indicates that the autonomous vehicle has, in fact, incurred liability (“Liability: 1”). At block260, one or more actions associated with enforcing the smart contract may be performed. To this end, when transactions included in the block245fare routed to the smart contract230, the smart contract230may analyze the transaction generated at block258; more particularly, the transaction information component of the transaction generated at block258. Based on this transaction information, the smart contract230may determine one or more actions to enforce the smart contract230. As an example, the smart contract230may cause the enforcement server to generate an insurance claim that assigns liability for damage incurred in the collision to the operator of the autonomous vehicle. It should be appreciated that although blocks245c-fare depicted inFIG.2Bas separate blocks, in some scenarios, one or more of the blocks245c-fmay actually be the same block.
For example, the events at blocks254and256may have occurred within the block compilation period associated with the distributed ledger. The exemplary flow diagram250may include additional, fewer, or alternate actions, including those discussed elsewhere herein. For example, as described elsewhere herein, the autonomous vehicle may be operated remotely. Accordingly, the control state may include an indication that the autonomous vehicle is being remotely operated. In this example, when a liability-inducing event occurs, the smart contract230may cause the enforcement server to generate an insurance claim that assigns liability to an entity associated with a remote operator of the autonomous vehicle. Exemplary Distributed Ledger Communication Referring toFIG.3, illustrated is an exemplary signal diagram300associated with maintaining a distributed ledger associated with a plurality of smart contracts. In particular,FIG.3may include a plurality of autonomous vehicles305(such as the plurality of autonomous vehicles105a-fas described with respect toFIG.1), an enforcement server315(such as the enforcement server115as described with respect toFIG.1), dedicated validation entities335(such as the dedicated validation entities135as described with respect toFIG.1), and/or a smart contracts database330. Autonomous vehicles within the plurality of autonomous vehicles may be associated with an electronic device (such as the electronic device103as described with respect toFIG.1) executing an application. It should be appreciated that the electronic device may be any electronic device (e.g., an on-board computer, a smartphone, a desktop computer, a laptop, a tablet, a phablet, a netbook, a notebook, a smart watch, smart glasses, smart contact lenses, a wearable electronics device, another mobile device, etc.). The signal diagram300may begin when one or more of the plurality of autonomous vehicles305detects (320) a change in a condition. In one scenario, the change in condition is associated with a decision condition of one or more smart contracts governing the autonomous vehicle. For example, the decision condition may relate to whether the vehicle is being operated in a manual or an autonomous mode as determined, for example, by detecting the presence of microvibrations in a steering wheel and/or a control signal communicated over a communication bus of the autonomous vehicle. In another example, the decision condition may relate to a distance traversed by the autonomous vehicle as determined, for example, by an odometer sensor. In another scenario, the change in condition is associated with a trigger condition of one or more smart contracts governing the autonomous vehicle. For example, the change in condition may be the autonomous vehicle incurring liability, such as a liability in response to damage to the autonomous vehicle. In this example, the trigger condition may be detected by detecting a deployment of an airbag, detecting an output from a front or side impact sensor, and/or detecting a malfunction or other abnormal condition for one or more sensors of the autonomous vehicle. The one or more autonomous vehicles of the plurality of autonomous vehicles305may then generate (324) a transaction that indicates the detected change in condition. As described above with respect to the flow diagram200, the transaction may include an indication of an identity of one or more autonomous vehicles and/or operators thereof, a timestamp, and/or a plurality of transaction data.
According to certain aspects, when multiple autonomous vehicles of the plurality of autonomous vehicles305are involved in a collision, the involved autonomous vehicles may communicate with one another to generate a single transaction. To generate this single transaction, the involved autonomous vehicles may exchange operating data describing a time period straddling the collision to generate a transaction that indicates relative fault for the collision and/or one or more amounts of liability incurred. In some further aspects, the autonomous vehicles may exchange diagnostic data to determine which autonomous vehicle should generate the transaction. For example, an antenna of a particular vehicle may have been damaged in the collision, causing data transmissions therefrom to be susceptible to additional noise and/or data loss. Accordingly, a different autonomous vehicle involved in the collision may be assigned the task of generating the transaction. According to other aspects, as part of generating the transaction, the autonomous vehicle may determine a priority of the transaction. To this end, not every transaction may have the same priority. For example, a transaction may indicate that significant damage occurred to a vehicle, rendering the vehicle inoperable and/or placing a passenger in a dangerous environment (e.g., the transaction indicates there is a leak in the gas tank). Accordingly, the autonomous vehicle may assign this transaction an urgent priority. On the other hand, in one example, for a transaction that indicates a shift between manual and autonomous control, or a transaction that indicates relatively minor damage (e.g., the autonomous vehicle experienced light damage to a bumper), the autonomous vehicle may assign the transaction a normal or other non-urgent priority. After the one or more autonomous vehicles of the plurality of autonomous vehicles305generate the transactions, the one or more autonomous vehicles may transmit (328) the transactions to the enforcement server315via a communication network. In some embodiments, the communication network may include an ad hoc or mesh network comprised of the plurality of autonomous vehicles305. At some point after receiving the transactions, the enforcement server315may compile (332) a new block of the distributed ledger that includes the transactions. As part of compiling the block, the enforcement server315may generate a hash value for each transaction included in the block. The enforcement server315may then cryptographically combine these hash values, such as through the use of a Merkle Tree, to generate a hash value of the block as a whole. The enforcement server315may include the hash value of the block as a whole in a header of the block. In one embodiment, the enforcement server315may compile the block periodically (e.g., every five minutes, every ten minutes, etc.). It should be appreciated that the period may change over time in an attempt to keep the block size below a threshold size. Generally, as more autonomous vehicles are included in the plurality of autonomous vehicles305, more transactions are generated. As a result, over time, using a fixed period may result in more and more transactions being included in each block, thereby increasing the size of the average block. Accordingly, the enforcement server315may adjust (e.g., shorten) the compilation period to ensure that the average block size does not exceed the threshold size despite the reception of a greater volume of transactions.
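As a minimal sketch of the compilation step just described (the pairwise Merkle-style combination, field names, and data layout are illustrative assumptions rather than the patented format), the following Python combines transaction hashes into a single root value that is then tied to the previous block's header hash, cryptographically linking the new block to the chain.

    import hashlib, json

    def h(data: str) -> str:
        return hashlib.sha256(data.encode()).hexdigest()

    def merkle_root(txn_hashes):
        # Combine transaction hashes pairwise until a single root remains.
        level = list(txn_hashes) or [h("")]
        while len(level) > 1:
            if len(level) % 2:                  # duplicate the last hash on odd counts
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def compile_block(transactions, previous_header_hash):
        txn_hashes = [h(json.dumps(t, sort_keys=True)) for t in transactions]
        root = merkle_root(txn_hashes)
        header_hash = h(previous_header_hash + root)   # links the block to the chain
        return {"header": {"prev": previous_header_hash,
                           "merkle_root": root,
                           "hash": header_hash},
                "transactions": transactions}

    pending = [{"vehicle_id": "AV-105a", "control": 1, "liability": 0},
               {"vehicle_id": "AV-105c", "control": 0, "liability": 1, "priority": "urgent"}]
    block = compile_block(pending, previous_header_hash=h("genesis"))
    print(block["header"]["hash"])

Because the header hash depends on every transaction hash and on the previous header, modifying any compiled transaction changes the header and is detectable by the other nodes, as discussed above.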
Further, according to aspects, the enforcement server315may compile the block aperiodically upon the reception of a transaction having an urgent priority. The enforcement entity may transmit (336) the compiled block to one or more nodes of the distributed ledger. The nodes may include the dedicated validation entities335(336b) and/or a portion of the plurality of autonomous vehicles305(336a). The nodes that receive the block may attempt to generate (340) a solution to a cryptographic puzzle that is based upon the hash value included in the header of the block. After a particular node finds a solution to the cryptographic puzzle, the node may transmit the solution to the other nodes to verify the solution. The other nodes, such as the portion of the plurality of autonomous vehicles305, the enforcement server315, and/or the dedicated validation entities335, may then form a consensus (344) on the solution found by the particular node. More particularly, the other nodes may vote to approve the block's inclusion into the distributed ledger upon successfully verifying the solution. Consensus may be formed when over half of the nodes have voted for the inclusion of the block. It should be appreciated that finding the solution to the cryptographic puzzle involves significantly more processing power than verifying the solution. Accordingly, in some embodiments, pools of nodes may coordinate their processing power in an attempt to jointly find the solution to the cryptographic puzzle. To this end, the enforcement server315may determine that a portion of the plurality of autonomous vehicles305participated in finding the verified solution to the cryptographic puzzle. To encourage participation in finding the solution, the enforcement server315may credit, with a currency, a respective account associated with each autonomous vehicle (and/or operator thereof) that participated in finding the verified solution. In some embodiments, the currency may be reward points. In other embodiments, the currency may be a cryptocurrency. It should be appreciated that in some scenarios, when multiple autonomous vehicles participated in finding the verified solution, each autonomous vehicle may have made an unequal contribution to the overall processing power. Accordingly, the enforcement server315may divide the credit between and among the portion of the plurality of autonomous vehicles305in accordance with the respective processing power contributed to finding the solution. The signal diagram300continues when the enforcement server315routes (348) the plurality of transactions compiled into the block to the smart contract database330. The smart contract database330may be maintained at the enforcement server315and/or within the distributed ledger itself. In one embodiment, routing a transaction may include extracting the indication(s) of the autonomous vehicle and/or the operator from the transaction and utilizing the indication(s) to query the smart contract database330. If a particular smart contract matches the query (e.g., the smart contract governs an autonomous vehicle and/or operator thereof indicated by the transaction), routing may further include the particular smart contract processing the transaction information included in the transaction. To this end, the particular smart contract may determine whether a trigger condition occurred and/or analyze a plurality of condition data to determine one or more actions to perform in response to the trigger condition occurring.
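A minimal Python sketch (hypothetical names and data layout) of the routing step: the enforcement server extracts the vehicle identifier from each compiled transaction, looks up the smart contracts that govern that vehicle, and collects whatever actions those contracts direct it to perform. The liability-splitting contract shown mirrors the insurer/manufacturer example discussed elsewhere herein, but it is a sketch under those assumptions, not the disclosed implementation.

    def route_transactions(block, contract_db):
        actions = []
        for txn in block["transactions"]:
            vehicle_id = txn.get("vehicle_id")
            for contract in contract_db.get(vehicle_id, []):
                action = contract(txn)      # contract inspects trigger/decision data
                if action is not None:
                    actions.append(action)
        return actions

    def manual_vs_autonomous_contract(txn):
        # Trigger condition: liability incurred; decision condition: control state.
        if txn.get("liability") != 1:
            return None
        responsible = "manufacturer" if txn.get("control") == 1 else "insurer"
        return {"action": "generate_claim",
                "vehicle": txn["vehicle_id"],
                "assign_liability_to": responsible}

    contract_db = {"AV-105c": [manual_vs_autonomous_contract]}
    block = {"transactions": [{"vehicle_id": "AV-105c", "control": 0, "liability": 1}]}
    print(route_transactions(block, contract_db))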
It should be appreciated that a plurality of transactions may be routed to a plurality of smart contracts. Accordingly, the plurality of smart contracts within the smart contract database330may generate a plurality of actions to enforce the smart contracts. In one example, a particular smart contract may govern a relationship between an insurer and a manufacturer of a particular autonomous vehicle. In particular, the insurer may agree to cover liability incurred while the particular autonomous vehicle is operated in a manual mode; whereas the manufacturer may agree to cover liability incurred while the particular autonomous vehicle is operated in a fully or semi-autonomous mode. In this example, if the particular autonomous vehicle incurs liability, the particular autonomous vehicle may generate a transaction indicating the liability. When the transaction is routed to the particular smart contract, the particular smart contract may detect the presence of a trigger condition (i.e., incurring liability). Accordingly, the particular smart contract may then analyze condition data to determine whether the particular autonomous vehicle was operated in a manual or autonomous mode (e.g., a decision condition). If the particular autonomous vehicle was operated in a manual mode, the particular smart contract may determine that an action is to automatically generate an insurance claim that assigns liability to the insurer and/or the operator of the particular autonomous vehicle. On the other hand, if the particular autonomous vehicle was operated in an autonomous mode, the particular smart contract may determine that an action is to automatically generate an insurance claim that assigns liability to the manufacturer. In a similar example, a particular smart contract may govern a relationship between an insurer and a remote operator of a particular autonomous vehicle. In particular, the insurer may agree to cover liability incurred while the particular autonomous vehicle is operated in a manual mode; whereas the remote operator may agree to cover liability incurred while the particular autonomous vehicle is remotely operated. In this example, if the particular autonomous vehicle incurs liability, the particular autonomous vehicle may generate a transaction indicating the liability. When the transaction is routed to the particular smart contract, the particular smart contract may detect the presence of a trigger condition (i.e., incurring liability). Accordingly, the particular smart contract may then analyze condition data to determine whether the particular autonomous vehicle was operated in a manual or remote mode (e.g., a decision condition). If the particular autonomous vehicle was operated in a manual mode, the particular smart contract may determine that an action is to automatically generate an insurance claim that assigns liability to the insurer and/or the operator of the particular autonomous vehicle. On the other hand, if the particular autonomous vehicle was operated in a remote mode, the particular smart contract may determine that an action is to automatically generate an insurance claim that assigns liability to the remote operator. In another example, a particular smart contract may govern a relationship between an insurer and an operator of a particular autonomous vehicle. In particular, the insurer may agree to only cover liability incurred while the particular autonomous vehicle is operated within a mileage limit. 
Similarly, in this example, if the particular autonomous vehicle incurs liability, the particular autonomous vehicle may generate a transaction indicating the liability. When the transaction is routed to the particular smart contract, the particular smart contract may detect the presence of a trigger condition (i.e., incurring liability). Accordingly, the particular smart contract may then analyze condition data to determine whether the particular autonomous vehicle traversed a distance that exceeds the mileage limit (e.g., a decision condition). If the particular autonomous vehicle traversed a distance that exceeds the mileage limit, the particular smart contract may determine that an action is to automatically generate an insurance claim that assigns liability to the operator of the particular autonomous vehicle. Conversely, if the particular autonomous vehicle has not traversed a distance that exceeds the mileage limit, the particular smart contract may determine that an action is to automatically generate an insurance claim that assigns liability to the insurer. In any event, the plurality of smart contracts in the smart contract database may transmit (352) the one or more determined actions to the enforcement server315, which executes (356) the actions. In one embodiment, the enforcement server315may include a plurality of third party applications (such as the third party applications125as described with respect toFIG.1) that may assist in the execution of the actions. For example, a manufacturer or insurer may provide an application that enables the enforcement server315to generate, file, and/or subrogate a claim with the manufacturer or insurer. As another example, the enforcement server315may interact with an application provided by an incident response service provider (e.g., a police entity, an EMT, a tow service, a fire department, an autonomous vehicle dispatch, etc.) to execute one or more actions to ensure the safety of persons affected by the event that incurred liability. According to certain aspects, the enforcement server315may analyze (360) the distributed ledger to determine that a particular block of the distributed ledger includes one or more transactions that are no longer relevant to the plurality of smart contracts in the smart contract database330. For example, a transaction in the particular block may include transaction information relating to a decision condition when a trigger condition did not occur (e.g., an autonomous vehicle completed a trip without incurring liability). As another example, the transaction may be older than a threshold age (e.g., older than thirty days). In response, in order to reduce the overall size of the distributed ledger, the enforcement server315may prune the transaction that is no longer relevant from the particular block. As explained above, the hash value corresponding to the particular block is dependent on the header of each transaction. Accordingly, pruning may involve deleting the underlying data from the transaction (e.g., raw transaction information) while maintaining the header of the transaction. Thus, the amount of data stored in each block may be reduced without impacting the cryptographic link that secures the distributed ledger. In some embodiments, the pruned transaction and/or the data associated therewith may be copied to an archival database for record-keeping. It should be appreciated that signal diagram300may include additional, fewer, and/or alternative actions, including those discussed elsewhere herein.
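Returning to the pruning step (360) described above, the following minimal Python sketch (hypothetical structure) removes a stale transaction's underlying data while retaining its hash, so the block header, and thus the cryptographic link securing the chain, is left undisturbed; the optional archive copy models the record-keeping noted above.

    def prune_transaction(block, txn_index, archive=None):
        txn = block["transactions"][txn_index]
        if archive is not None:
            archive.append(dict(txn))             # optional record-keeping copy
        txn["info"] = None                        # drop the underlying data
        txn["pruned"] = True                      # identifiers and hash remain intact
        return block

    archive = []
    block = {"header": {"hash": "abc123"},
             "transactions": [{"hash": "t1", "vehicle_id": "AV-105a",
                               "info": {"control": 0, "liability": 0}}]}
    prune_transaction(block, 0, archive)
    print(block["transactions"][0], len(archive))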
For example, in some embodiments, some of the actions described above with respect to the enforcement server315may be alternatively performed by one or more of the plurality of autonomous vehicles305, and/or vice versa. Exemplary Methods of Maintaining Distributed Ledger Referring toFIG.4, depicted is a block diagram of an exemplary computer-implemented method400of processing transactions included in a distributed ledger. The method400may be facilitated by an electronic device associated with an autonomous vehicle, such as the electronic device103associated with the autonomous vehicle105aas described with respect toFIG.1, that may be in direct or indirect communication with an enforcement server, such as the enforcement server115as described with respect toFIG.1. The method400may begin by the electronic device monitoring a plurality of vehicle sensors (block405). More particularly, the electronic device may monitor a plurality of data generated by the plurality of vehicle sensors. For example, the electronic device may monitor the output of an accelerometer, a gyroscope, a brake sensor, an impact sensor, an image/video sensor, an audio sensor, a pressure sensor, and/or any other sensor that monitors a condition associated with the autonomous vehicle. In some embodiments, the sensor data may be communicated over a communication bus associated with the autonomous vehicle. Additionally or alternatively, the sensor data may be communicated directly to the electronic device via a wireless or wired connection. At block410, the electronic device may analyze the sensor data to detect a change in a monitored condition. In one embodiment, the autonomous vehicle may be associated with one or more smart contracts, each associated with a trigger condition and/or a decision condition. An application executing on the electronic device may receive a subscription request to provide data streams relating to trigger conditions and/or decision conditions associated with one or more smart contracts. The application may be able to associate a trigger and/or decision condition with one or more sensors that are relevant to the requested data streams. For example, for a trigger condition of incurring liability, the application may cause the electronic device to monitor an impact sensor, an airbag deployment sensor, a gyroscope, an accelerometer, a window integrity sensor, or even a sensor of another autonomous vehicle and/or a smart infrastructure device proximate to the autonomous vehicle. In some embodiments, the application may associate the trigger and/or decision condition not only with the one or more relevant sensors, but also with one or more expected ranges of values associated with one or more statuses of the trigger condition. In the above example, an accelerometer may be associated with a threshold value that, when exceeded, is indicative that the autonomous vehicle experienced a collision. Accordingly, for the accelerometer, an expected range of values for a status indicating no liability may be values below the threshold, and an expected range of values for a status indicating liability may be values that exceed the threshold. For some sensors, for example a window integrity sensor, the expected range may be a Boolean value or another similar flag that indicates a status beyond raw measured statistics (e.g., the range includes TRUE or “SHATTERED,” as opposed to, say, 8 m/s²).
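The association of a trigger condition with relevant sensors and expected value ranges might look like the following minimal Python sketch; the 8 m/s² threshold and the sensor names simply echo the illustrative values above and are assumptions, not values prescribed by the disclosure.

    SENSOR_RULES = {
        # Each rule maps a raw reading to a trigger-condition status.
        "accelerometer": lambda v: "liability" if v > 8.0 else "no_liability",
        "window_integrity": lambda v: "liability" if v == "SHATTERED" else "no_liability",
        "airbag": lambda v: "liability" if v is True else "no_liability",
    }

    def classify(readings):
        # Any single sensor indicating liability sets the trigger-condition status.
        statuses = {name: SENSOR_RULES[name](value) for name, value in readings.items()}
        overall = "liability" if "liability" in statuses.values() else "no_liability"
        return overall, statuses

    print(classify({"accelerometer": 3.2, "window_integrity": "INTACT", "airbag": False}))
    print(classify({"accelerometer": 12.7, "airbag": True}))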
In any event, the electronic device may monitor the data generated by the relevant sensors to detect whether a status for a trigger condition and/or a decision condition has changed. Additionally or alternatively, as described elsewhere herein, the status (and/or ranges of values indicative thereof) may be associated with an urgent, normal, low, and/or any other priority level. At block415, the electronic device may generate a transaction that describes the detected change in status for a trigger condition and/or a decision condition. As described elsewhere herein, the transaction may include an indication of the autonomous vehicle and/or an operator thereof, a time stamp, and/or a plurality of transaction information. In some embodiments, the transaction information includes an indication of the status of the trigger and/or decision condition. Additionally or alternatively, the transaction information may include at least a portion of the raw data that indicated the change in the trigger and/or decision condition. At block420, the electronic device may transmit the generated transaction to the enforcement server for processing and/or compilation into a block to be added to the distributed ledger. In some embodiments, instead of the enforcement server compiling the transactions into the block, the electronic device may instead compile a plurality of transactions the electronic device generated (and/or received from another autonomous vehicle) into the proposed block. In these embodiments, transmitting the transaction to the enforcement server may include transmitting the proposed block that includes the transaction to the enforcement server. At block425, in embodiments wherein the electronic device is also a validation entity, the electronic device may attempt to solve a cryptographic puzzle based upon the header of the proposed block. It should be appreciated that because there are typically several validation entities attempting to solve the same cryptographic puzzle, the electronic device may never actually solve the cryptographic puzzle. To this end, once one validation entity claims to have solved the cryptographic puzzle, that validation entity may transmit its solution to the other validation entities for verification. At block430, in embodiments wherein the electronic device is also a validation entity, the electronic device may attempt to form a consensus on the solution. In scenarios in which another validation entity claims to have solved the cryptographic puzzle, the electronic device may attempt to verify the received solution. If the solution is successfully verified, the electronic device may vote to include the block in the distributed ledger. In scenarios in which the electronic device solved the cryptographic puzzle, the electronic device may be considered to have attempted to form the consensus when the electronic device sent the proposed solution to the other validation entities. It should be appreciated that the method400may include additional, fewer, or alternative actions, including those described elsewhere herein. Referring now toFIG.5, depicted is a block diagram of an exemplary computer-implemented method500of processing transactions included in a distributed ledger.
The method500may be facilitated by an enforcement server, such as the enforcement server115as described with respect toFIG.1, that may be in direct or indirect communication with a plurality of autonomous vehicles, such as the plurality of autonomous vehicles105a-fas described with respect toFIG.1, and interconnected with a smart contract database that stores a plurality of smart contracts. The method500may begin by the enforcement server receiving a plurality of transactions from the plurality of autonomous vehicles (block505). As described elsewhere herein, the transactions may include an indication of an autonomous vehicle that generated the transaction and/or an operator thereof, a time stamp, and/or a plurality of transaction information. According to aspects, the enforcement server may continue to receive transactions until a threshold amount of time has expired and/or an urgent priority transaction is received. At block510, the enforcement server may compile the received transactions into a proposed block. As described elsewhere, compiling the block may involve cryptographically combining, such as through the use of a Merkle Tree, a hash value associated with each of the transactions to form a hash value associated with the block as a whole. This hash value may be included in the header of the proposed block. At block515, the enforcement server may distribute the proposed block to a plurality of validation entities that attempt to solve a cryptographic puzzle based upon the hash value included in the header and a nonce value. In some embodiments, the plurality of validation entities may include a portion of the plurality of autonomous vehicles. At some point, the plurality of validation entities may form a consensus on the proposed block. Upon consensus, the proposed block may be included in the distributed ledger. At block520, the enforcement server may route the transactions included in the now-included block to the plurality of smart contracts. As described above, routing may involve extracting an indication of the autonomous vehicle and/or operator thereof, utilizing the indication to query the database of smart contracts, and/or inputting the transaction information from the transaction into the smart contracts that match the query. If the transaction indicated that a trigger condition occurred, the smart contract may output an action that is to be performed to enforce the smart contract. At block525, the enforcement server may automatically execute the action that is to be performed to enforce the smart contract. For some actions, the enforcement server may utilize one or more third party applications accessible by the enforcement server. As an example, an emergency response entity may make an application accessible that enables the enforcement server to initiate an emergency response. For other actions, the enforcement server may be able to perform the action without the assistance of a third party application. As an example, the enforcement server may be able to generate and transmit a text message to an emergency contact without the assistance of a third party application. It should be appreciated that the method500may include additional, less, or alternative actions, including those described elsewhere herein.
Exemplary Enforcement Server
FIG.6illustrates a diagram of an exemplary enforcement server615(such as the enforcement server115as discussed with respect toFIG.1) in which the functionalities as discussed herein may be implemented.
It should be appreciated that the enforcement server615may be associated with a distributed ledger that governs a plurality of smart contracts, as discussed elsewhere herein. The enforcement server615may include a processor622, as well as a memory678. The memory678may store an operating system679capable of facilitating the functionalities as described herein. The enforcement server615may also store a set of applications675(i.e., machine readable instructions). For example, one application of the set of applications675may be a blockchain manager684configured to compile transactions into blocks and/or to route transactions to smart contracts. As another example, the set of applications675may include one or more third party applications685to assist in executing an action to enforce a smart contract. It should be appreciated that other applications may be included in the set of applications675. The processor622may interface with the memory678to execute the operating system679and the set of applications675. According to some embodiments, the memory678may also include a plurality of smart contracts680. The blockchain manager684may access the smart contracts680to facilitate the enforcement of the smart contracts680. The memory678may include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), erasable programmable read-only memory (EPROM), random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), and/or other hard drives, flash memory, MicroSD cards, and others. The enforcement server615may further include a communication module677configured to communicate data via one or more networks610. Network(s)610may include a mesh network comprised of one or more autonomous vehicles. According to some embodiments, the communication module677may include one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and configured to receive and transmit data via one or more external ports676. In some embodiments, the communication module677may include separate transceivers configured to interact with the local and remote networks separately. The enforcement server615may further include a user interface681configured to present information to a user and/or receive inputs from the user. As shown inFIG.6, the user interface681may include a display screen682and I/O components683(e.g., ports, capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs, speakers, microphones). According to the present embodiments, the user may access the enforcement server615via the user interface681to monitor the distributed ledger, update software executing at the enforcement server615and/or perform other functions. In some embodiments, the enforcement server615may perform the functionalities as discussed herein as part of a "cloud" network, or may otherwise communicate with other hardware or software components within the cloud to send, retrieve, and/or otherwise analyze data.
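As a non-limiting illustration of the compile-and-route behavior attributed to the blockchain manager684above, the following sketch builds a proposed block whose header carries a combined transaction hash and then feeds each transaction to the smart contracts registered for the indicated vehicle. The transaction fields, the pairwise hashing scheme, and the toy contract are assumptions made for this example rather than a prescribed implementation; in practice, a query against the smart contract database and the consensus step would replace the in-memory dictionary used here.

```python
# Illustrative sketch only: field names such as "vehicle_id" and "status" are assumptions.
import hashlib, json, time

def tx_hash(tx: dict) -> str:
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

def combine_hashes(hashes: list) -> str:
    """Pairwise-combine transaction hashes (Merkle-style) into a single root hash."""
    while len(hashes) > 1:
        if len(hashes) % 2:                      # duplicate the last hash if the count is odd
            hashes.append(hashes[-1])
        hashes = [hashlib.sha256((a + b).encode()).hexdigest()
                  for a, b in zip(hashes[0::2], hashes[1::2])]
    return hashes[0]

def compile_block(transactions: list, prev_block_hash: str) -> dict:
    """Build a proposed block whose header carries the combined transaction hash."""
    return {
        "header": {
            "prev_hash": prev_block_hash,
            "tx_root": combine_hashes([tx_hash(t) for t in transactions]),
            "timestamp": time.time(),
        },
        "transactions": transactions,
    }

def route_transactions(block: dict, smart_contracts: dict) -> list:
    """Look up contracts by the vehicle indication and feed them the transaction info."""
    actions = []
    for tx in block["transactions"]:
        for contract in smart_contracts.get(tx["vehicle_id"], []):
            action = contract(tx)                # a contract returns an action, or None
            if action:
                actions.append(action)
    return actions

# Example: one toy contract that reacts to a liability trigger for vehicle "AV-1".
contracts = {"AV-1": [lambda tx: "notify_insurer" if tx.get("status") == "liability" else None]}
block = compile_block([{"vehicle_id": "AV-1", "condition": "liability", "status": "liability"}], "0" * 64)
print(block["header"]["tx_root"][:16], route_transactions(block, contracts))
```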
In general, a computer program product in accordance with an embodiment may include a computer usable storage medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having computer-readable program code embodied therein, wherein the computer-readable program code is adapted to be executed by the processor622(e.g., working in connection with the operating system679) to facilitate the functions as described herein. In this regard, the program code may be implemented in any desired language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via C, C++, Java, Actionscript, Objective-C, Javascript, CSS, XML). In some embodiments, the computer program product may be part of a cloud network of resources.
Exemplary Autonomous Vehicle Event Distributed Ledger
FIG.7depicts an exemplary computer-implemented method of building, using, and/or maintaining a distributed ledger and/or blockchain related to autonomous vehicle transactions and/or events700. The method700may include recording autonomous vehicle or system transactions and/or events702. The transactions and/or events may detail an amount of usage, and/or type of usage, for each autonomous or semi-autonomous system or technology on an autonomous or semi-autonomous vehicle. The transactions and/or events may also detail an amount of operation and usage of the autonomous or semi-autonomous vehicle, including operation or usage under (i) autonomous (or processor) control, and (ii) human control, as well as detail which human of a household or company was driving the autonomous or semi-autonomous vehicle, and include a timestamp of when they were driving. The method700may include compiling the autonomous vehicle or system transactions and/or events recorded into a log of events704. For instance, the log may include transactions and/or events directed or related to autonomous vehicle usage (time of day, type of operation, type of control (autonomous/processor vs. human), location or area driven, miles driven, type of weather or environment driven in, etc.). The log may include transactions and/or events directed to individual or specific autonomous or semi-autonomous system or technology usage (such as whether used or not, an amount or time of usage, a type of setting, etc.). The transactions and/or events recorded in the distributed ledger and/or blockchain may be determined or sensed by various smart sensors and/or processors. The transactions and/or events may detail (i) a setting that an autonomous or semi-autonomous system or technology is used at; (ii) how long the autonomous or semi-autonomous system or technology was used or employed (or not used or employed); (iii) the time of day or year during which the autonomous or semi-autonomous system or technology was used or employed (or not used or employed); (iv) the type of road (such as highway, rural, or city) on which the autonomous or semi-autonomous system or technology was used or employed (or not used or employed); (v) an identification of the human driver and/or how many passengers were in the vehicle; (vi) the environmental or weather conditions (rain, ice, snow, etc.)
during which the autonomous or semi-autonomous system or technology was used or employed (or not used or employed); (vii) whether the autonomous or semi-autonomous system or technology was used or employed (or not used or employed) when recommended to be employed by a smart vehicle controller or other processor; (viii) the status, operational status, or working condition of each autonomous or semi-autonomous system or technology of an autonomous or semi-autonomous vehicle; (ix) a maintenance log of each autonomous system or technology on the autonomous or semi-autonomous vehicle; (x) whether recommended periodic maintenance has been performed on the autonomous or semi-autonomous vehicle, and/or on individual autonomous or semi-autonomous systems mounted on the vehicle; (xi) whether each autonomous system or technology is working as intended or not, such as via system self-checks; (xii) autonomous vehicle and/or autonomous vehicle system warranty information; (xiii) make, model, type, and/or version of autonomous vehicle system or software; (xiv) a timestamp of switching events during which control of the vehicle went from manual operation to autonomous operation, or vice versa; and/or (xv) a timestamp of engagement events during which an autonomous system was engaged or disengaged. The method700may use the transactions and/or events recorded and/or the log of events recorded (for the vehicle as a whole, and for individual autonomous or semi-autonomous vehicle systems or technologies) for various actions. The actions may include generating a usage-based insurance quote based upon past or expected future autonomous system or technology usage (which may be determined or predicted based upon the transactions and/or events recorded and/or the log of events recorded). The actions may include generating or estimating a current or actual value of the autonomous or semi-autonomous vehicle based upon the events recorded and/or the log of events recorded. The actions may include determining that the autonomous or semi-autonomous vehicle was involved in a vehicle collision, and/or assigning a percentage of fault for the vehicle collision (such as to the driver, the autonomous vehicle or system, or another vehicle or driver) based upon the events recorded and/or the log of events recorded. The actions may include handling insurance claims based upon the events recorded and/or the log of events recorded. In one embodiment, the method700may employ machine or deep learning techniques on the autonomous vehicle transactions and/or events recorded in the log of events, and/or the blockchain or distributed ledger to determine an action or end result. For instance, the recorded events may be input into a machine learning algorithm trained to determine a usage-based insurance quote, a vehicle actual or replacement valuation, and/or that the autonomous vehicle was involved in a vehicle collision based at least in part on the recorded events associated with the autonomous vehicle, such as events related to individual autonomous vehicle feature or system usage (and/or amount thereof). The method700may use the transactions and/or events recorded and/or the log of events recorded to update a blockchain708or form a consensus among distributed nodes that the blockchain should be updated, such as a blockchain related to point-in-time operator control and/or to an individual autonomous or semi-autonomous vehicle.
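As a non-limiting illustration of how such a log of recorded events might be rolled up into usage features for the actions described above (for example, as inputs to a usage-based quote or a valuation model), consider the following sketch. The event types and field names are hypothetical and do not represent a prescribed schema; the resulting feature dictionary is merely the sort of summary that could be supplied to the machine learning techniques mentioned above.

```python
# Illustrative sketch only: the event fields ("control", "miles", "system", "engaged",
# "hours") are hypothetical stand-ins for the kinds of usage events described above.
from collections import defaultdict

def summarize_event_log(events: list) -> dict:
    """Roll recorded autonomous vehicle events up into usage features that could
    feed actions such as usage-based quoting or vehicle valuation."""
    miles = defaultdict(float)          # miles driven under each control mode
    system_time = defaultdict(float)    # hours each autonomous system was engaged
    for e in events:
        if e["type"] == "trip_segment":
            miles[e["control"]] += e["miles"]
        elif e["type"] == "system_usage" and e["engaged"]:
            system_time[e["system"]] += e["hours"]
    total = sum(miles.values()) or 1.0
    return {
        "autonomous_mile_share": miles["autonomous"] / total,
        "manual_mile_share": miles["manual"] / total,
        "system_hours": dict(system_time),
    }

log = [
    {"type": "trip_segment", "control": "autonomous", "miles": 42.0},
    {"type": "trip_segment", "control": "manual", "miles": 8.0},
    {"type": "system_usage", "system": "adaptive_cruise_control", "engaged": True, "hours": 0.7},
]
print(summarize_event_log(log))
```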
For instance, the autonomous vehicle related events and/or actions determined may be distributed among nodes in a public or private network via wireless communication or data transmission over one or more radio links. The method700may update the blockchain at each node using the events, the individual events recorded, and/or the actions determined (including those mentioned above), and use the events and/or actions to update or generate one or more smart contracts710. For instance, the smart contracts may relate to insurance quote generation, usage-based insurance quotes and contracts, and/or insurance claim handling. The types of transactions and/or events recorded may relate to autonomous or semi-autonomous vehicle-related functionality or technology that replace human driver actions, and may include and/or be related to the following types of functionality: (a) fully autonomous (driverless); (b) limited driver control; (c) vehicle-to-vehicle (V2V) wireless communication; (d) vehicle-to-infrastructure (and/or vice versa) wireless communication; (e) automatic or semi-automatic steering; (f) automatic or semi-automatic acceleration; (g) automatic or semi-automatic braking; (h) automatic or semi-automatic blind spot monitoring; (i) automatic or semi-automatic collision warning; (j) adaptive cruise control; (k) automatic or semi-automatic parking/parking assistance; (l) automatic or semi-automatic collision preparation (windows roll up, seat adjusts upright, brakes pre-charge, etc.); (m) driver acuity/alertness monitoring; (n) pedestrian detection; (o) autonomous or semi-autonomous backup systems; (p) road mapping systems; (q) software security and anti-hacking measures; (r) theft prevention/automatic return; (s) automatic or semi-automatic driving without occupants; and/or other functionality. With the present embodiments, the events or transactions recorded may relate to autonomous or semi-autonomous vehicle technology or functionality directed toward: automatic or semi-automatic steering; automatic or semi-automatic acceleration and/or braking; automatic or semi-automatic blind spot monitoring; automatic or semi-automatic collision warning; adaptive cruise control; and/or automatic or semi-automatic parking assistance. Additionally or alternatively, the autonomous or semi-autonomous technology or functionality may include and/or be related to: driver alertness or responsive monitoring; pedestrian detection; artificial intelligence and/or back-up systems; navigation or GPS-related systems; security and/or anti-hacking measures; and/or theft prevention systems. In one aspect, a computer-implemented method of building, utilizing, and/or maintaining an autonomous vehicle-related event blockchain may be provided. 
The method may include one or more of the following, and the sequence of actions may be rearranged: (1) detecting and/or recording, via one or more processors, sensors, and/or transceivers, autonomous vehicle events, the autonomous vehicle events including autonomous vehicle system or technology usage or operational events; (2) compiling, via the one or more processors, the autonomous vehicle events into a log of recorded autonomous vehicle events; (3) determining, via the one or more processors, an action to implement based upon the autonomous vehicle events recorded and/or the log of recorded autonomous vehicle events; (4) forming a consensus with other distributed nodes to update and/or otherwise updating, via the one or more processors, an autonomous vehicle-related blockchain to reflect or otherwise show (i) the autonomous vehicle events recorded, (ii) the log of recorded autonomous vehicle events, and/or (iii) the action to implement; and/or (5) distributing, via the one or more processors and/or transceivers, the log of recorded autonomous vehicle events and/or the autonomous vehicle-related blockchain to a public or private network of distributed nodes to facilitate maintaining the shared ledger of autonomous vehicle events up-to-date. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. For instance, the method may include updating a smart contract based upon the blockchain, events records, and/or actions determined to implement. The action determined to implement based upon the autonomous vehicle events recorded and/or the log of recorded autonomous vehicle events may be (i) estimating an actual value of the autonomous vehicle; (ii) estimating a replacement cost of the autonomous vehicle; (iii) generating a usage-based insurance quote for the autonomous vehicle and/or a specific trip; and/or (iv) to determine that the autonomous vehicle was involved in a vehicle collision, and to commence a claim handling process. The autonomous vehicle events recorded may be related to at least one of the following autonomous vehicle systems, features, or technologies: driver alertness monitoring; driver responsiveness monitoring; pedestrian detection; artificial intelligence; a back-up system; a navigation system; a positioning system; a security system; an anti-hacking measure; a theft prevention system; and/or remote vehicle location determination. In another aspect, a computer system configured to build and utilize an autonomous vehicle-related event distributed ledger and/or blockchain may be provided. 
The computer system may include one or more processors, sensors, and/or transceivers configured to perform one or more of the following, and the order of actions may be rearranged: (1) detect and/or record autonomous vehicle events, the autonomous vehicle events including autonomous vehicle system or technology usage or operational events; (2) compile the autonomous vehicle events into a log of recorded autonomous vehicle events; (3) determine an action to implement based upon the autonomous vehicle events recorded and/or the log of recorded autonomous vehicle events; (4) form a consensus with other distributed nodes to update and/or otherwise update an autonomous vehicle-related blockchain to reflect or otherwise show (i) the autonomous vehicle events recorded, (ii) the log of recorded autonomous vehicle events, and/or (iii) the action to implement; and/or (5) distribute the log of recorded autonomous vehicle events and/or the autonomous vehicle-related blockchain to a public or private network of distributed nodes to facilitate maintaining the shared ledger of autonomous vehicle events up-to-date. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In one aspect, a computer-implemented method for maintaining a distributed ledger and/or blockchain of autonomous vehicle-related transactions and/or events pertaining to one or more smart contracts and/or autonomous vehicles may be provided. The method may include the following actions, and the actions may be performed in various orders: (1) receiving, at one or more processors, one or more autonomous vehicle-related transactions and/or events from one or more autonomous vehicles, the autonomous vehicle-related transactions and/or events indicative of at least one of a trigger condition or a decision condition associated with one or more smart contracts; (2) compiling, by the one or more processors, the one or more autonomous vehicle-related transactions and/or events into a block of transactions and/or events; (3) distributing, by the one or more processors, the block of transactions and/or events to a plurality of validation entities via wireless or data transmission over one or more radio links or wireless communication channels to form a consensus on an update to (and/or whether or not to update) the distributed ledger and/or blockchain; (4) routing, by the one or more processors, the one or more autonomous vehicle-related transactions and/or events within the block to respective or corresponding smart contracts, wherein a particular transaction corresponding to a particular smart contract indicates that a trigger condition for the particular smart contract has occurred; and/or (5) automatically executing, by the one or more processors, an action the particular smart contract directs should be performed in response to the particular trigger condition, the action determined based upon a decision condition included in a transaction and/or event routed to the particular smart contract. The method may include additional, less, or alternate actions, including those discussed elsewhere herein, and may be implemented via computer systems and/or non-transitory computer readable medium. In another aspect, a computer-implemented method for maintaining a distributed ledger or blockchain of transactions or events pertaining to autonomous vehicles may be provided. 
The method may include one or more of the following: (1) receiving, at one or more processors, one or more autonomous vehicle-related transactions or events from one or more autonomous vehicles, the transactions indicative of at least one of a trigger condition; (2) compiling, by the one or more processors, the one or more autonomous vehicle-related transactions or events into a block of transactions or events; (3) distributing, by the one or more processors, the block of autonomous vehicle-related transactions or events to a plurality of validation entities or nodes within a communication network to form a consensus on whether or not to update the distributed ledger; and/or (4) when a consensus is formed, updating, by the one or more processors, the distributed ledger at the plurality of validation entities or nodes to facilitate maintaining a distributed ledger or blockchain associated with autonomous vehicle-related transactions or events up-to-date. The method may include additional, less, or alternate actions, including those discussed elsewhere herein, and may be implemented via computer systems and/or non-transitory computer readable medium. For instance, the method may include routing, by the one or more processors, the plurality of transactions or events within the block to a plurality of smart contracts, wherein a particular transaction or event corresponding to a particular smart contract may indicate that a trigger condition for the particular smart contract has occurred. The method may include automatically executing, by the one or more processors, an action the particular smart contract directs should be performed in response to the particular trigger condition, the action may be determined based upon a decision condition included in a transaction or event routed to the particular smart contract. The trigger condition for the particular smart contract may be related to a particular autonomous vehicle incurring liability or being involved in a vehicle collision; and/or the particular transaction or event may indicate that the particular autonomous vehicle incurred liability or was involved in a vehicle collision. The decision condition for the particular smart contract may be a control state of the particular autonomous vehicle; and/or the particular transaction or event may indicate whether the particular autonomous vehicle was being autonomously or manually operated. In another aspect, a computer-implemented method of building, utilizing, and/or maintaining an autonomous vehicle-related event distributed ledger or blockchain may be provided. 
The method may include one or more of the following, and in various orders: (1) detecting and/or recording, via one or more processors, sensors, and/or transceivers, autonomous vehicle-related events, the one or more autonomous vehicle-related events including autonomous vehicle system or technology usage or operational events; (2) compiling, via the one or more processors, the one or more autonomous vehicle events into a log of recorded autonomous vehicle-related events; (3) determining, via the one or more processors, an action to implement based upon the autonomous vehicle-related events recorded and/or the log of recorded autonomous vehicle events; and/or (4) distributing, via the one or more processors and/or transceivers, the log of recorded autonomous vehicle-related events and/or the action to implement to a public or private network of distributed nodes (such as via wireless communication or data transmission over one or more radio links or wireless communication channels) to facilitate maintaining a shared ledger of autonomous vehicle-related events up-to-date. The method may include additional, less, or alternate actions, including those discussed elsewhere herein, and may be implemented via computer systems and/or non-transitory computer readable medium.
Exemplary Autonomous Vehicle Embodiments
In one aspect, a computer-implemented method for maintaining a distributed ledger of transactions pertaining to a plurality of smart contracts may be provided. The method may include (1) monitoring, by one or more processors, a plurality of sensors associated with a vehicle; (2) detecting, by the one or more processors, a change in a condition of the vehicle, the condition being associated with a smart contract of the plurality of smart contracts that governs the vehicle and/or an operator of the vehicle; (3) generating, by the one or more processors, a transaction describing the detected change in the condition of the vehicle; and/or (4) transmitting, to a server, the transaction. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. For instance, the method may include compiling, by the one or more processors, the transaction into a block of transactions, the block being an update to the distributed ledger. Transmitting the transaction to the server may include transmitting, to the server, the block of transactions. The method may include receiving, from the server, a subscription request indicating the condition of the vehicle; and associating, by the one or more processors, the condition of the vehicle with a set of the plurality of sensors associated with the vehicle. The subscription request may indicate one or more expected ranges of values for outputs of the set of the plurality of sensors, the expected ranges of values being associated with one or more states corresponding to the condition of the vehicle and/or one or more operational states of one or more autonomous vehicle systems or technologies, including those discussed herein. Detecting the change in the condition of the vehicle may include detecting, by the one or more processors, that an output of a sensor (such as an autonomous vehicle system or technology sensor, or other vehicle-mounted sensor) of the set of the plurality of sensors changed from a first expected range of values to a second expected range of values. The change in the condition of the vehicle may be indicative of a collision with another vehicle.
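As a non-limiting illustration of the subscription and range-transition mechanism just described, the following sketch records which sensors and expected value ranges back a subscribed condition, and reports when a reading crosses from a first expected range into a second (here, a hypothetical accelerometer reading crossing into a range treated as indicative of a collision). The message format, sensor name, and range boundaries are assumptions made for this example.

```python
# Illustrative sketch only: the subscription format and sensor names are assumptions.

def apply_subscription(subscription: dict, registry: dict) -> None:
    """Record which sensors (and which expected ranges) back a subscribed condition."""
    registry[subscription["condition"]] = {
        sensor: spec["ranges"] for sensor, spec in subscription["sensors"].items()
    }

def range_transition(registry: dict, condition: str, sensor: str, old: float, new: float):
    """Return (old_state, new_state) if the reading moved between expected ranges."""
    ranges = registry[condition][sensor]
    def state(v):
        return next((name for name, (lo, hi) in ranges.items() if lo <= v <= hi), None)
    before, after = state(old), state(new)
    return (before, after) if before != after else None

registry = {}
apply_subscription(
    {"condition": "collision",
     "sensors": {"accelerometer_g": {"ranges": {"normal": (0.0, 4.0), "collision": (4.0, 100.0)}}}},
    registry,
)
# A jump from 0.8 g to 6.2 g crosses from the "normal" range into the "collision" range.
print(range_transition(registry, "collision", "accelerometer_g", 0.8, 6.2))
```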
The method may include receiving, from the other vehicle, operating data generated by one or more sensors of the other vehicle, and the sensors of the other vehicle may be associated with an autonomous vehicle system of the other vehicle. The method may include analyzing, by the one or more processors, the operating data to determine a relative fault between a party associated with the vehicle and a party associated with the other vehicle, and/or a relative fault between the vehicles if the vehicles are autonomous or semi-autonomous vehicles, or relative fault between different autonomous vehicle systems or technologies mounted on one or more vehicles. The method may include analyzing, by the one or more processors, the operating data to determine that the vehicle, and not the other vehicle, should generate the transaction. The operating data may indicate that an antenna associated with the other vehicle is damaged. The method may include receiving, from a node of the distributed ledger, a proposed block to add to the distributed ledger; and attempting, by the one or more processors, to solve a cryptographic puzzle based upon a header of the proposed block. When the one or more processors solve the cryptographic puzzle, the method may include transmitting, to one or more nodes of the distributed ledger, a solution to the cryptographic puzzle. The method may include receiving, from a node of the distributed ledger, a proposed solution to the cryptographic puzzle; verifying, by the one or more processors, the proposed solution to the cryptographic puzzle; and/or communicating, with one or more nodes of the distributed ledger, whether or not the proposed solution to the cryptographic puzzle was verified in an attempt to form a consensus on the proposed solution. The vehicle may be an autonomous vehicle, and the change in a condition of the vehicle may be a change in operation or operational state (such as on, off, hi, low, etc.) of an autonomous vehicle system or technology mounted on the autonomous vehicle. The autonomous vehicle system or technology may be associated with or related to: driver alertness monitoring; driver responsiveness monitoring; pedestrian detection; artificial intelligence; a back-up system; a navigation system; a positioning system; a security system; an anti-hacking measure; a theft prevention system; and/or remote vehicle location determination. In another aspect, a computer system for maintaining a distributed ledger of transactions pertaining to a plurality of smart contracts may be provided. The computer system may include one or more processors; a communication module adapted to communicate with one or more nodes of the distributed ledger; and a non-transitory program memory coupled to the one or more processors and storing executable instructions that, when executed by the one or more processors, cause the computer system to: (1) monitor a plurality of sensors associated with a vehicle; (2) detect a change in a condition of the vehicle, the condition being associated with a smart contract of the plurality of smart contracts that governs the vehicle and/or an operator of the vehicle, or governs an autonomous vehicle system or technology mounted on the vehicle; (3) generate a transaction describing the detected change in the condition of the vehicle; and/or (4) transmit, to a server, the transaction. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein.
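As a non-limiting illustration of the puzzle-solving and verification steps referenced above, the following sketch uses a toy leading-zero proof-of-work over a SHA-256 digest of a block header and a nonce. The difficulty level, header fields, and hash construction are assumptions made for this example; a given distributed ledger may use a different puzzle and different consensus rules entirely. The asymmetry the sketch relies upon, an expensive search but a cheap verification, is what allows other validation entities to check a claimed solution and vote on including the block.

```python
# Illustrative sketch only: a toy proof-of-work puzzle stands in for whatever
# cryptographic puzzle and consensus rules a given distributed ledger actually uses.
import hashlib, json

DIFFICULTY = 4  # number of leading hex zeros required; an assumption for this example

def header_digest(header: dict, nonce: int) -> str:
    payload = json.dumps(header, sort_keys=True) + str(nonce)
    return hashlib.sha256(payload.encode()).hexdigest()

def attempt_solution(header: dict, max_tries: int = 2_000_000):
    """Search nonce values; another validation entity may well find a solution first."""
    for nonce in range(max_tries):
        if header_digest(header, nonce).startswith("0" * DIFFICULTY):
            return nonce
    return None

def verify_solution(header: dict, nonce: int) -> bool:
    """A received solution is cheap to check, which is what the consensus vote relies on."""
    return header_digest(header, nonce).startswith("0" * DIFFICULTY)

header = {"prev_hash": "0" * 64, "tx_root": "ab" * 32}
nonce = attempt_solution(header)
if nonce is not None:
    # Broadcasting the nonce and voting to include the block would happen here.
    print("solved with nonce", nonce, "verified:", verify_solution(header, nonce))
```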
In another aspect, a computer-implemented method for maintaining a distributed ledger of transactions pertaining to one or more smart contracts may be provided. The method may include (1) monitoring, by one or more processors, one or more sensors associated with a vehicle; (2) detecting, by the one or more processors, a change in a condition of the vehicle, the condition being associated with a smart contract that governs the vehicle and/or an operator of the vehicle, or governs an autonomous vehicle system or technology mounted on the vehicle; (3) generating, by the one or more processors, a transaction describing the detected change in the condition of the vehicle; and/or (4) transmitting, to a server, the transaction. The method may include compiling, by the one or more processors, the transaction into a block of transactions, the block being an update to the distributed ledger, and/or transmitting, to the server, the block of transactions. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer system for maintaining a distributed ledger of transactions pertaining to one or more smart contracts may be provided. The computer system may include one or more processors; a communication module adapted to communicate with one or more nodes of the distributed ledger; and a non-transitory program memory coupled to the one or more processors and storing executable instructions that, when executed by the one or more processors, cause the computer system to: (1) monitor one or more sensors associated with an autonomous or other vehicle; (2) detect a change in a condition of the vehicle, the condition being associated with a smart contract that governs the vehicle and/or an operator of the vehicle; (3) generate a transaction describing the detected change in the condition of the vehicle; and/or (4) transmit, to a server, the transaction. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In another aspect, a computer-implemented method for maintaining a distributed ledger of transactions pertaining to an autonomous vehicle may be provided. The method may include (1) monitoring, by one or more processors, one or more sensors associated with an autonomous vehicle; (2) detecting, by the one or more processors, a change in a condition of the autonomous vehicle, the condition being associated with operation, or an operational state, of an autonomous vehicle system or technology mounted on the autonomous vehicle; (3) generating, by the one or more processors, a transaction describing the detected change in the condition of the vehicle; and/or (4) transmitting, to a server, the transaction. The method may include compiling, by the one or more processors, the transaction into a block of transactions, the block being an update to the distributed ledger, and/or transmitting, to the server, the block of transactions. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer system for maintaining a distributed ledger of transactions pertaining to an autonomous vehicle may be provided. 
The computer system may include one or more processors; a communication module adapted to communicate with one or more nodes of the distributed ledger; and a non-transitory program memory coupled to the one or more processors and storing executable instructions that, when executed by the one or more processors, cause the computer system to: (1) monitor one or more sensors associated with an autonomous vehicle; (2) detect a change in a condition of the vehicle, the condition being associated with operation, or an operational state, of an autonomous vehicle system or technology mounted on the autonomous vehicle; (3) generate a transaction describing the detected change in the condition of the autonomous vehicle; and/or (4) transmit, to a server, the transaction. The system may include additional, less, or alternate functionality, including that discussed elsewhere herein. In another aspect, a computer-implemented method for maintaining a distributed ledger of transactions pertaining to one or more smart contracts may be provided. The method may include (1) monitoring, by one or more processors, a plurality of sensors associated with an autonomous vehicle; (2) detecting, by the one or more processors, a change in a condition of the autonomous vehicle, the condition being associated with a smart contract (such as a warranty, maintenance, service, or other contract) that governs an autonomous vehicle system or technology employed by or mounted on the autonomous vehicle; (3) generating, by the one or more processors, a transaction describing the detected change in the condition of the autonomous vehicle; and/or (4) transmitting, to a server, the transaction. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. In another aspect, a computer system for maintaining a distributed ledger of transactions pertaining to one or more smart contracts may be provided. The computer system may include: one or more processors; a communication module adapted to communicate with one or more nodes of the distributed ledger; and a non-transitory program memory coupled to the one or more processors and storing executable instructions that, when executed by the one or more processors, cause the computer system to: (1) monitor one or more sensors associated with an autonomous vehicle; (2) detect a change in a condition of the vehicle, the condition being associated with a smart contract that governs an autonomous vehicle technology or system employed by or mounted on the autonomous vehicle; (3) generate a transaction describing the detected change in the condition of the autonomous vehicle; and/or (4) transmit, to a server, the transaction. The system may include additional, less, or alternate functionality, including that discussed elsewhere herein.
ADDITIONAL CONSIDERATIONS
An authoritative, trusted, immutable, distributed, shareable, secure system may be needed to record whether a human driver is controlling a vehicle, and/or whether the vehicle is acting autonomously. The record may include crash sensor data so that crash information is correlated with driver control information. Blockchain technology may be used to store the transactions of control instances (from autonomous to human control to autonomous, for example). These control instances may be stored into blocks as they occur. Accordingly, this data may be included in the distributed ledger environment of the blockchain. In this environment, a consensus system may fix the events/blocks immutably and securely.
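As a non-limiting illustration of how such control instances might be recorded, the following sketch builds hash-chained control/crash event records of the kind a consensus system could then fix into a block. The field names (control state, airbag flag, geolocation, speed) are assumptions made for this example rather than a prescribed record format.

```python
# Illustrative sketch only: field names are assumptions showing how a control-instance
# record might be built and hash-chained before being fixed into a block by consensus.
import hashlib, json, time

def control_event(vehicle_id: str, new_state: str, airbag_deployed: bool,
                  geolocation: tuple, speed_mph: float, prev_hash: str) -> dict:
    """Build a control/crash event record whose hash chains it to the prior record."""
    body = {
        "vehicle_id": vehicle_id,
        "control_state": new_state,          # e.g., "autonomous" or "human"
        "airbag_deployed": airbag_deployed,  # crash flag as reported at the time of the event
        "geolocation": geolocation,
        "speed_mph": speed_mph,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

# A shift from autonomous to human control, followed later by an airbag deployment,
# produces two chained records that share the same timestamp source as the sensor data.
e1 = control_event("AV-1", "human", False, (44.98, -93.27), 31.0, "0" * 64)
e2 = control_event("AV-1", "human", True, (44.98, -93.27), 0.0, e1["hash"])
print(e1["hash"][:16], "->", e2["hash"][:16])
```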
At the same time and from separate systems, vehicle sensor data may also be stored into the block at various intervals by block management software (for example, an airbag deployed flag may remain at "0" prior to deployment, but becomes fixed to "1" after airbags deploy). This information may denote whether a crash has occurred. In an aspect, this information is also synched to the same timestamps as the driver control events. The data of automobile control and accident sensor flags may be streamed via the various systems in the car by existing technologies in the automobile. For example, a vehicle's steering wheel, when in autonomous mode, may be exposed to micro-vibrations; when a human driver grabs the wheel, the micro-vibrations may stop. This may be indicative of the steering system shifting to manual control. This shift may be logged in a control status event. The blockchain management software in the automobile may take this reported information (the status changing from autonomous to human control) and write into the block the timestamp, geolocation, speed, and any other data as prescribed (by the manufacturer, insurers, consumer groups, regulatory agencies, law enforcement, and/or other organizations that may have input into what data should be reported). The blockchain driver control and accident reporting system may be software composed of data aggregation from various automotive systems in a vehicle, a blockchain management system for writing driving control events to a block, and/or a means of passing this information (with the auto ID and user key signature/public ID) to a distributed ledger blockchain platform (such as Ethereum or a similar system) via mobile cellular, satellite, Wi-Fi, and/or other wireless data transmission technologies. In some scenarios, the driver control and accident reporting blockchain may have public interfaces that allow visibility into the data. In an embodiment, a private blockchain interface may also be used by auto manufacturers, law enforcement, insurers, and regulatory agencies. An element of smart contracts may also be enabled in the system. Depending on the sequence of events in the blockchain, terms of the smart contract may be executed immediately, such as sending a tow truck to the geolocation if tow assistance is a part of the policy, filing a legal action by a subrogation team of an insurer against an auto manufacturer (for example, if an accident occurs while the autonomous vehicle is under autonomous control), conducting a policy review, filing a police report request with the jurisdiction of the roadway, processing claim awards (for example, a partial payment, if the deductible is met, to handle car rental or minor medical expenses), sending a cancellation notice for the policy, and so on. In embodiments in which the blockchain is associated with an automotive or insurance consortium, and in a collision involving two or more vehicles containing this blockchain driver control and accident monitoring system, it is possible that information (geolocation, timestamps, other information) can be shared between and among actors. This data may indicate that two cars collided, and the insurance information may be contained in the blockchain. The smart contract may be enforced, for example, to dispatch insurance claims personnel, and/or begin an automatic process to contact the other insurance companies involved. In some aspects, customers may opt in to a rewards, loyalty, or other program.
The customer may allow a remote server, such as an enforcement server, to collect sensor, telematics, vehicle, mobile device, and other types of data discussed herein. With customer permission or affirmative consent, the data collected may be analyzed to provide certain benefits to customers. For instance, insurance cost savings may be provided to lower risk or risk averse customers. As described herein, rewards, including cryptocurrency, may be awarded to accounts associated with the customer. The other functionality discussed herein may also be provided to customers in return for them allowing collection and analysis of the types of data discussed herein, as well as participating in the validation of the data discussed herein. Although the text herein sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the invention is defined by the words of the claims set forth at the end of this patent. The detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One could implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims. It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term ‘______’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based upon any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this disclosure is referred to in this disclosure in a manner consistent with a single meaning, that is done for sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based upon the application of 35 U.S.C. § 112(f). Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (code embodied on a non-transitory, tangible machine-readable medium) or hardware. 
In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a module that operates to perform certain operations as described herein. In various embodiments, a module may be implemented mechanically or electronically. Accordingly, the term “module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which modules are temporarily configured (e.g., programmed), each of the modules need not be configured or instantiated at any one instance in time. For example, where the modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different modules at different times. Software may accordingly configure a processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time. Modules can provide information to, and receive information from, other modules. Accordingly, the described modules may be regarded as being communicatively coupled. Where multiple of such modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the modules. In embodiments in which multiple modules are configured or instantiated at different times, communications between such modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple modules have access. For example, one module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further module may then, at a later time, access the memory device to retrieve and process the stored output. Modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. 
In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations. Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. This description, and the claims that follow, should be read to include one or at least one and the singular also includes the plural unless it is obvious that it is meant otherwise. As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. This detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One could implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this application. Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for system and a method for assigning mobile device data to a vehicle through the disclosed principles herein. 
Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims. The particular features, structures, or characteristics of any specific embodiment may be combined in any suitable manner and in any suitable combination with one or more other embodiments, including the use of selected features without corresponding use of other features. In addition, many modifications may be made to adapt a particular application, situation or material to the essential scope and spirit of the present invention. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein and are to be considered part of the spirit and scope of the present invention. While the preferred embodiments of the invention have been described, it should be understood that the invention is not so limited and modifications may be made without departing from the invention. The scope of the invention is defined by the appended claims, and all devices that come within the meaning of the claims, either literally or by equivalence, are intended to be embraced therein. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.
DETAILED DESCRIPTION
The subject matter of the present invention is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventor has contemplated that claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. Moreover, although the terms "step" and/or "block" may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. As one skilled in the art will appreciate, embodiments of the invention may be embodied as, among other things, a method, system, or set of instructions embodied on one or more computer-readable media. Accordingly, the embodiments may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. In an embodiment, the invention takes the form of a computer-program product that includes computer-usable instructions embodied on one or more computer-readable media. Computer-readable media include both volatile and nonvolatile media, removable and nonremovable media, and contemplate media readable by a database, a switch, and various other network devices. By way of example, and not limitation, computer-readable media comprise media implemented in any method or technology for storing information, including computer storage media and communications media. Examples of stored information include computer-useable instructions, data structures, program modules, and other data representations. Computer storage media examples include, but are not limited to, information-delivery media, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVDs), holographic media or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage, other magnetic storage devices, and other computer hardware or storage devices. These technologies can store data momentarily, temporarily, or permanently. As described herein, embodiments of the invention are directed to enabling improvements in loss ratio estimation in insurance underwriting and pricing, in particular in health insurance. A recent concern regarding credit-based scoring systems, in particular insurance risk models derived from them, is that lenders may take proactive actions to reduce potential losses by lowering revolving credit limits. Some contend that this might spuriously lower a consumer's Insurance Risk Score, penalizing consumers in the form of higher premiums and less favorable coverage. In addition, a majority of the credit characteristics calculate credit utilization as a function of a consumer's revolving credit limits combined with original installment loan amounts. This different approach dilutes the potential impact associated with the lowering of revolving credit limits. Various credit parameters, ranging from less severe (payments more than 60 days delinquent) to more severe (bankruptcy), may be included in data that are available from secondary sources.
Based on these parameters, if a key measure of credit quality is having a debt payment that is 60 days or more past due, then the use of credit characteristics may have a disparate impact on lower income households. Empirical evidence from individual experience may confirm this. It is a common misconception that during a recession virtually every consumer's credit score (and, hence, Insurance Risk Score) will decline. Examining recession-associated changes is instructive in that it reveals how the predictive ability of credit-based Insurance Risk Scores is preserved even in the case when almost 100 percent of the population experiences a decline in their Credit Risk Score. Statistically, such a shift in the entire population would likely have little impact on insurance rates (except to the extent that actual loss performance deteriorates), since the Insurance Risk Scores differentiate risk among groups with varying degrees of loss expectancy. These groupings would still exist as would the ability of insurers to differentiate between them statistically even if a bad economy caused the credit and insurance scores of every member of the population to decline. Conversely, an improving economy raises all credit and insurance scores, and the insurer's ability to distinguish among groups is not in any way impaired. In an embodiment, the action of lowering revolving credit limits would not significantly affect the individual's Insurance Risk Score, insofar as the Bayesian power spectral density of the de-meaned and de-trended credit utilization ratio time series is not materially changed. Revolving credit utilization ratio characteristics are included in Credit Risk Score models, but, by themselves, they are frequently not included in the calculation of Insurance Risk Scores. Based upon empirical evidence, relatively few credit utilization characteristics, of dozens that have been tested, are found to be highly correlated to insurance loss ratio. The role of stress is important to recognize because it is a precursor to more serious mental health conditions, to less healthful lifestyle choices and behavior, and to exacerbation of existing health conditions, resulting in worsening of insurance loss ratio and claims experience. Stress is widespread in society and in the workplace. Hundreds of research studies have examined how aspects of jobs, organizational behavior, and activities of daily living can create stress for consumers and can contribute to mental health conditions and other physical health problems. Events in one's family can be a major source of stress that can manifest itself in the workplace. Many persons in the prime of their working years are stressed by caring for both young children and for an aging parent. Many caregivers experience significant employment-related consequences from having to balance greater amounts of time devoted to providing family support with time at work. For some people, that stressful path reaches a point where the burdens of family care and working a job can no longer both be managed. Many of the health problems of insured individuals can be attributable to worsening public health, with poor diets, growing obesity, smoking and more sedentary lifestyles all playing a part. Some can also be explained by growing levels of workplace ‘stress’, personal debt, and family breakdown and their links to depressive illness. Of course, part of the solution rests with government. 
It must take the lead in the public health arena, encouraging and educating citizens to make healthier choices in their lives. For individuals, it means taking more proactive personal responsibility for their lifestyle choices, health and wellbeing. However, employers and insurers have a role to play too. Partly, this is accomplished by instituting certain incentives for the insured to behave in specific ways that are salutary for health. The incentives may be based on contracting by the insured to receive health promotion-related rewards or discounted insurance premia, such as for smoking cessation, weight loss, or pedometer-measured walking 10,000 steps per day. In like fashion, with effective management various acute and chronic stressors of daily life that impose a significant burden on physical and psychological health may be reduced, averting significant adverse physiological, emotional, behavioral, and financial outcomes. Among these, cardiovascular disease continues to be a leading cause of spending and mortality in the United States, and ischemic heart disease is the most common type of heart disease. Cardiovascular disease-related health insurance claims are therefore a convenient index of statistical relationships to stress or other factors. Established risk factors for ischemic heart disease include diabetes mellitus, disorders of lipid metabolism, high blood pressure, cigarette smoking, obesity, and physical inactivity. The role of the work environment or work climate in the development of heart disease and other health challenges is of great interest. Much of the focus is on the role of job stress, financial stress, and perceived employment insecurity, as these factors have all been shown to contribute to heart disease. However, it is difficult to devise objective and longitudinal measures of physical and psychological stressors that would be practical to use in insurance plan management, health plan management and insurance underwriting. The epidemiological literature is replete with studies demonstrating the relationship between modifiable health risks and morbidity and mortality. However, there is less direct evidence on the association between modifiable health risks and individual health care expenditures. Recent reviews of published studies examining the financial impact of health promotion programs have concluded that there are good correlational data to suggest that high levels of stress, excessive body weight, and multiple risk factors are associated with increased health care costs and illness-related absenteeism. Recent reviews have also concluded that health promotion programs are associated with reduced health care costs. A major step forward was taken when Goetzel and colleagues used the Health Enhancement Research Organization (HERO) database to examine the association between ten modifiable health risks and health care expenditures. The focus of this study and the central unit of analysis was the individual employee. The study sought to document increased health care expenditures associated with certain health risks at the individual level. It was found that employees at high risk for poor health outcomes had significantly higher expenditures than did employees at lower risk in seven of ten risk categories: those who reported themselves as depressed (70% higher expenditures), at high stress (46%), with high blood glucose levels (35%), at extremely high or low body weight (21%), with high blood pressure (12%), and with a sedentary lifestyle (10%). 
Employees with multiple risk profiles for specific disease outcomes had higher expenditures than did those without these profiles for the following diseases: heart disease (228% higher expenditures), psychosocial problems (147%), and stroke (85%). Researchers have concluded that stress and other common modifiable health risks are associated with increases in the likelihood of incurring health expenditures and in the magnitude of those expenditures. Productivity and health have been important themes in job stress research for several decades. Some researchers have called for “new models” to help explain the relationship between stress and productivity. A prominent argument of research using models of job strain is that traditional bureaucratic and Frederick Taylor-esque (i.e., ‘scientific management’) work organization and management principles stifle the full use of human capital. It is crucial, therefore, that workers and employers find the optimal balance between job demands and decision-making autonomy so that the goals of individual well-being and productivity can be achieved and sustained. There is abundant evidence that working conditions in which workers experience a combination of high job demands and low decision-making latitude are associated with a range of psychological and physical health problems. The ‘demand-control’ model of stress has been used to predict the risk of heart disease, depression, and other illnesses for which lost productivity costs and increased insurance claims can be calculated. These relationships are stronger if workers participate in the design and implementation process. In terms of effective interventions, research suggests that lifestyle and work redesigns that afford greater autonomy and decision-making authority, more skill discretion, more social supports, and decreased physical and psychological demands are associated with better health, lower health services utilization, and fewer medical insurance claims. A number of studies suggest that the impact of debt on mental health may be mediated by personal attitudes towards debt, or more specifically ‘debt worry’. It is possible, for example, that participants' attitudes towards debt as recorded in the studies also reflect other personal concerns or variables that may not be measured by a study (for example, current income, expected future income, family financial situation). Where unmeasured, or not controlled for, these variables may also impact on measures of a person's mental health or psychological wellbeing. Similarly, anxiety about debt might reflect a person's general anxiety or psychological outlook. People who score higher on measures of anxiety or depression might be more likely to have a negative view of their finances. Although studies indicate a correlation between actual debts and debt worries, there is also evidence that the relationship between the two is more complex, and may additionally be affected by other factors. Credit-based Insurance Risk Scores and Credit Risk Scores are not identical. Credit Risk Scores are designed to predict the likelihood of individual default, while Insurance Risk Scores are designed to predict claims loss ratio. Credit Risk Scores are generally more volatile because they tend to rely more upon various forms of revolving credit utilization, including recent new account openings and recent account delinquencies, than Insurance Risk Scores. 
Although different aspects of utilization, account openings, and delinquency are contained within Insurance Risk Scores, these credit characteristics are defined differently and are not weighted as heavily as they are in Credit Risk Scores. Prior art Insurance Risk Scores, when compared to Credit Risk Scores, tend to place more emphasis on credit characteristics that demonstrate a consumer's depth of credit history as reflected by the number and type of accounts maintained over time and a longer-term view of account delinquency likelihood. Recent results show that borrowers who experience a decline of 10% in their FICO score (credit quality) after insurance coverage origination increase their credit line utilization by 15.5%. The present technology augments these tendencies by incorporating basis characteristics of credit utilization ratio time series Bayesian power spectral density. For individuals as well as businesses, one of the most commonly used measures is the “current ratio.” The current ratio measures financial liquidity, the extent to which current liabilities are covered by current assets, calculated by dividing current assets by current liabilities. The current ratio is the most commonly used measure of short-term solvency. Debt management ratios measure the extent to which an entity is using debt financing, or financial leverage. Debt management ratios denote the degree of risk or safety afforded to creditors. The debt ratio, or ratio of total debt to total assets, measures the percentage of funds provided by creditors. Total debt includes both current liabilities and long-term debt. The lower the ratio, the greater the protection afforded creditors in the event of liquidation. A debt ratio that exceeds the industry average raises a red flag and may make it costly for an entity to borrow additional funds without first raising more equity capital. If the entity earns more on investments financed with borrowed funds than it pays in interest, the return on the owners' capital is magnified, or “leveraged.” Entities with relatively high debt ratios have higher expected returns when the economy is normal, but they are exposed to risk of loss when the economy goes into a recession. Entities with low debt ratios are less risky, but also forgo the opportunity to leverage up their return on equity. For public businesses, analysts use two procedures to examine the entity's debt: (1) They check the balance sheet to determine the extent to which borrowed funds have been used to finance productive assets as contrasted with covering operating expenses, and (2) they review the income statement to see the extent to which fixed charges are covered by operating profits. Neither procedure is readily accomplished for entities who are private individuals. The credit utilization ratio also measures solvency and leverage. Based on data from credit bureaus and credit card issuers, several investigators have recently found negative correlation between the credit utilization ratio and Credit Risk Score that is even stronger than the correlation between credit limit and Credit Risk Score: low-score consumers have much higher credit utilization rates than those with higher scores. Causality for this relation may run the other way as well: high credit card utilization rates may cause low Credit Risk Scores over time. 
Nevertheless, this finding—that consumers with higher credit utilization rates used debit cards more frequently—could imply that consumers with a lower credit score experience lasting credit limitations—due to lower credit limits, or greater liquidity needs in the past, or both. Other attempts or efforts are deficient due to: (1) Omission of basis characteristics that objectively quantify stress experienced by the insured over time. (2) Excessive false-negative rate (financial loss for the insurer; adverse selection; negative percentage error of actual compared to estimated or budgeted amount, covered by contracted premium payments). (3) Excessive false-positive rate (financial gain for the insurer; positive percentage error of actual compared to estimated or budgeted amount, covered by contracted premium payments). False-positive errors lead to premium price-setting at a higher level than would have been necessary to insure plan solvency, causing the cost to the insured to be higher. (4) Heteroskedasticity (scale-dependent variance) of credit utilization ratio and other raw measures of the insured's liquidity, such that use of standard deviation, median absolute deviation from the median, or other measures of dispersion have, in general, low predictive accuracy and precision in regard to estimating future insurance loss ratio, claims incidence, or services utilization intensity. (5) Many potentially relevant variables that have strong statistical associations with health insurance loss ratio are proscribed by law and/or the Comptroller of the Currency in U.S. (and may be similar in other jurisdictions), e.g. Medical history and records; Consumer buying habits; Bank checking and savings account information; Income; Marital status; family status; Race, age, religion, receipt of public assistance, disability, gender, national origins. (6) Metrics drawn from a covered individual's self-reported data may have deficiencies in some circumstances, such as being subjective; impractical to solicit as a self-report very frequently (e.g. more than twice per year), propensity for bias, non-reporting, or fraudulent reporting, leading to ‘adverse selection’. (7) Failure of conventional insurance risk scoring variables to discover the detailed multi-scale dynamics of the physical and psychologic sequellae of stress and their impact on health services utilization over time. An embodiment establishes a method for ameliorating these limitations and providing objective, quantitative means for predicting the loss ratio. In particular, a method is employed that accurately characterizes physical and psychological stress associated with frequent or unexpected changes in financial liquidity. Turning now toFIG.1, there is presented an example operating environment100suitable for practicing an embodiment. Example operating environment100includes a computerized system for compiling and/or running an embodiment of an information architecture that performs decision support recommendation service. With reference toFIG.1, an Electronic Insurance Record (EIR) system, such as agency EIR system160containing an insurance claims database, is communicatively coupled to network175, which is communicatively coupled to computer system120. In an embodiment, components of operating environment100that are shown as distinct components may be embodied as part of or within other components of environment100. For example, an EIR system160may be implemented in computer system120. 
Similarly, a single EIR system may perform functions for one or more remote EIR systems (not shown). In an embodiment, network175includes the Internet and/or one or more public networks, private networks, other communications networks such as a cellular network, or similar network(s) for facilitating communication among devices connected through the network. Network175may be determined based on factors such as the source and destination of the information communicated over network175, the path between the source and destination, or the nature of the information. For example, intra-organization or internal communication may use a private network or virtual private network (VPN). Moreover, in some embodiments, items shown communicatively coupled to network175may be directly communicatively coupled to other items shown communicatively coupled to network175. In an embodiment, operating environment100may include a firewall (not shown) between a first component and network175. In such an embodiment, the firewall may reside on a second component located between the first component and network175, such as on a server (not shown), or reside on another component within network175, or may reside on or as part of the first component. An embodiment of electronic insurance record (EIR) system160includes one or more data stores of insurance claims records, which may be stored on storage121, and may further include one or more computers or servers that facilitate the storing and retrieval of the claims records. In an embodiment, an EIR system160is implemented as a cloud-based platform or is distributed across multiple physical locations. EIR system160may further include record systems, which store real-time or near-real-time user information, such as purchasing information, loyalty card information, or health record information indicative of insurance claims. AlthoughFIG.1depicts an exemplary EIR system160, it is contemplated that an embodiment relies on other servers (not shown) that provide purchasing information service, loyalty card information or health record information from an Electronic health record System. Example operating environment100further includes risk analyst system140including an Insurance Risk Scoring program and user interface. System140is communicatively coupled to an EIR system160. Although environment100depicts an indirect communicative coupling between system140and EIR system160through network175, it is contemplated that an embodiment of system140is communicatively coupled to EIR system160directly. Example operating environment100further includes computer system120, which may take the form of a server, which is communicatively coupled through network175to EIR system160, storage121, and system140. An embodiment of system140includes a user interface operated by a software application or set of applications on a client computing device such as a personal computer, laptop, smartphone, or tablet computing device. In an embodiment, the application includes Risk Analysis and classification system reporting insurance risk through a screen display to a user who operates system140. In an embodiment, the application is a Web-based application or applet. A user application facilitates accessing and receiving information from a user, server or EIR system160about a specific patient or set of patients for which Insurance Risk is to be evaluated and the application displays results, recommendations, prices, policies, or risk results, for example. 
In an embodiment, system140also facilitates receiving policies for an applicant from a policy generation system which may reside on system160, for example. System140may be used for providing Risk Analysis information, such as the information as illustrated and discussed in connection withFIGS.4A-14. In an embodiment, EIR system160is a workstation that receives a risk indication such as a loss ratio prediction, or a loss ratio category from system140and EIR system160generates a policy and a price based on a risk indication. In an embodiment, EIR system160comprises an electronic display that presents the results of risk analysis to a user/analyst. In an embodiment, EIR system160emits an indication of an incentive program to reduce the premium for the user to present to an applicant, and provides this information in a message to the user of system140, where system140is a personal communication device. In an embodiment, a personal communication device is a computer, a pager, a laptop computer, a computer workstation, a desktop computer, a tablet, a wired telephone, a wireless telephone, cellular telephone, personal digital assistant, or smartphone. In an embodiment, system160provides a short message service (SMS) message, email, audible tone, audible announcement, or a display message. An embodiment of system140takes the form of a user interface and application, which may be embodied as a software application operating on one or more mobile computing devices, tablets, smartphones, front-end terminals in communication with back-end computing systems, laptops, or other computing devices. In an embodiment, system140includes a Web-based application or set of applications usable to manage user services provided by an embodiment. For example, in an embodiment, system140facilitates processing, interpreting, accessing, storing, retrieving, and communicating information acquired from credit rating agency systems1(190), i (191) or N (142). In an embodiment, system140includes functionality for processing user-derived information locally or for communicating the information to computer system120or system160, where it may be processed. In an embodiment, the processing may be carried out or facilitated by one or more software agents, as described below. In an embodiment, the processing functionality, which may occur on system140, and/or computer system120, includes signal conditioning, such as removing noise or erroneous information. In an embodiment, processing functionality is operable to process user-derived information, such as credit data derived from a soft pull from a credit rating agency from system190. In an embodiment, a soft-pull is performed over an interval periodically, e.g. daily, weekly, bi-weekly, monthly, bi-monthly, quarterly, or yearly for an applicant and accumulated data is stored in storage121. In an embodiment, the processing includes classifying the user-derived information acquired for a particular time interval into a category. Computer system120comprises one or more processors operable to receive instructions and process them accordingly, and may be embodied as a single computing device or multiple computing devices communicatively coupled to each other. In an embodiment, processing actions performed by system120are distributed among multiple locations such as one or more local clients and one or more remote servers. 
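One part of that processing, the periodic soft-pull accumulation described above, can be sketched in R as follows. The function fetch_soft_pull is a hypothetical stand-in for a query to one of the credit rating agency systems (190, 191, or 142); its name, the random placeholder values it returns, and the 24-month example are assumptions for illustration only, while the monthly cadence and the zoo-based ordered series follow the embodiments described in this passage.

```r
# Hedged sketch of periodic soft-pull accumulation. fetch_soft_pull() is a
# hypothetical placeholder for a credit rating agency query; it is assumed to
# return one credit utilization ratio value (0-100) per request.
library(zoo)

fetch_soft_pull <- function(applicant_id, as_of_date) {
  # Placeholder only: a real deployment would call the agency interface.
  runif(1, min = 0, max = 100)
}

accumulate_cur <- function(applicant_id, dates) {
  values <- vapply(dates, function(d) fetch_soft_pull(applicant_id, d), numeric(1))
  zoo(values, order.by = dates)          # ordered monthly observations for storage
}

months <- seq(as.Date("2022-01-01"), by = "month", length.out = 24)
cur_series <- accumulate_cur("applicant-001", months)
head(cur_series)
```

In practice the accumulated series would be written to storage 121 and later classified per time interval, as described above.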
In an embodiment, system120comprises one or more computing devices, such as a server, desktop computer, laptop, or tablet, cloud-computing device or distributed computing architecture, a portable computing device such as a laptop, tablet, ultra-mobile P.C., or a mobile phone. An embodiment of computer system120includes computer software stack125, which in some embodiments operates in the cloud, as a distributed system on a virtualization layer within computer system120. An embodiment of software stack125includes operating system129. Operating system129may be implemented as a platform in the cloud. Operating system129is capable of hosting a number of services such as122,124,126, and128. An embodiment of services122,124,126, and128run as a local or distributed stack in the cloud, on one or more personal computers or servers such as system120, and/or a computing device140running an insurance system risk scoring application. In an embodiment, system140operates in conjunction with software stack125. In an embodiment, variables indexing service122and records/documents ETL service124provide services that facilitate retrieving frequent item sets, extracting database records, and cleaning the values of variables in records. For example, variables mapping service122may perform functions for synonymic discovery, indexing or mapping variables in records, or mapping disparate record systems' ontologies, such as determining that a particular credit condition of a first record system is the same as another credit condition on a second record system. In an embodiment mapping service122provides service that facilitates retrieving frequent item sets, extracting database records, and cleaning values of variables in records. In an embodiment, these services may invoke software services126. 
Software services126perform statistical software operations, and include statistical calculation packages such as, in an embodiment, the R system (the R-project for Statistical Computing, which supports R-packages or modules tailored for specific statistical operations, and which is accessible through the Comprehensive R Archive Network (CRAN) at http://cran.r-project.org); R-system modules or packages including tsDyn or similar services for facilitating implementation of nonlinear autoregressive time series models, pracma for performing practical numerical mathematical functions, bspec for performing operations related to Baysian inferences on a discrete power spectrum time series, copula for multivariate dependence analysis with Copulas, CopulaRegression for Bivariate Copula based regression modeling, MASS for support functions and datasets for Venables and Ripley's mass, mvtnorm for multivariate normal and t distributions, VineCopula for statistical inference of vine copulas, scatterplot3d for 3D scatter plots, multinbmod for regression analysis of overdispersed correlated count data, zoo for S3 Infrastructure for regular and irregular time series (z's ordered observations), psd for estimating the power spectral density, wavelets for computing wavelets, strucchange for testing monitoring and dating structural change, tseriesChaos for nonlinear time series operations, arulesSequences or similar services for facilitating operations such as K-nearest neighbor distance calculations, SIGNAL or similar services such as MATLAB, for performing signal processing functions such as performing digital synthesis of digital filters such as butterworth, chebyshev, elliptical, finite impulse response filter, infinite impulse response, and savitzky-golay filters and quantreg for computing quantile regression and related methods such as kuantile and quantile. Software packages126are associated with services128, which include IBM infosphere stream processing services, Apache Hadoop and Hbase framework, or similar frameworks operable for providing a distributed file system, and which in some embodiments facilitate or provide access to cloud-based services such as those provided by Cerner Healthe Intent®. Example operating environment100also includes storage (or data store)121, which in some embodiments includes patient data for a candidate patient and information for multiple patients; variables associated with patient recommendations; recommendation knowledge base; recommendation rules; recommendations; recommendation update statistics; an operational data store, which stores events, frequent itemsets (such as “X often happens with Y”, for example), and item sets index information; association rulebases; agent libraries, solvers and solver libraries, and other similar information including data and computer-usable instructions; patient-derived data; and health-care provider information, for example. It is contemplated that the term data includes any information that can be stored in a computer-storage device or system, such as user-derived data, computer usable instructions, software applications, or other information. In an embodiment, data store121comprises the data stores associated with the one or more EIR systems, such as160and computer system140. Further, although depicted as a single storage data store, data store121may comprise one or more data stores, or may be in the cloud. 
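As a hedged illustration relating to the statistical packages named above, the short R sketch below simply checks which of them are available in the current session. It makes no assumptions about any package's individual interface and reports absent packages rather than presuming their presence.

```r
# Availability check for the R packages named in the text. Nothing here depends
# on package internals; packages that are not installed are simply reported FALSE.
pkgs <- c("tsDyn", "pracma", "bspec", "copula", "CopulaRegression", "MASS",
          "mvtnorm", "VineCopula", "scatterplot3d", "multinbmod", "zoo", "psd",
          "wavelets", "strucchange", "tseriesChaos", "arulesSequences", "quantreg")
available <- vapply(pkgs, requireNamespace, logical(1), quietly = TRUE)
sort(available, decreasing = TRUE)       # TRUE entries are installed and loadable
```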
Turning briefly toFIG.2, there is shown one example embodiment of computing system200that has software instructions for storage of data and programs in computer-readable media. Computing system200is representative of a system architecture that is suitable for computer systems such as computing system120. One or more CPUs such as201, have internal memory for storage and couple to the north bridge device202, allowing CPU201to store instructions and data elements in system memory215, or memory associated with graphics card210, which is coupled to display211. Bios flash ROM240couples to north bridge device202. South bridge device203connects to north bridge device202allowing CPU201to store instructions and data elements in disk storage231such as a fixed disk or USB disk, or to make use of network233for remote storage. User I/O device232such as a communication device, a mouse, a touch screen, a joystick, a touch stick, a trackball, or keyboard, couples to CPU201through south bridge203as well. The system architecture depicted inFIG.2is provided as one example of any number of suitable computer architectures, such as computing architectures that support local, distributed, or cloud-based software platforms, and are suitable for supporting computing system120. Returning toFIG.1, in an embodiment, computer system120is a computing system made up of one or more computing devices. In an embodiment, computer system120includes an adaptive multi-agent operating system, but it will be appreciated that computer system120may also take the form of an adaptive single agent system or a non-agent system. Computer system120may be a distributed computing system, a data processing system, a centralized computing system, a single computer such as a desktop or laptop computer or a networked computing system. In an embodiment, computer system120is a multi-agent computer system with agents. A multi-agent system may be used to address the issues of distributed intelligence and interaction by providing the capability to design and implement complex applications using formal modeling to solve complex problems and divide and conquer these problem spaces. Whereas object-oriented systems comprise objects communicating with other objects using procedural messaging, agent-oriented systems use agents based on beliefs, capabilities and choices that communicate via declarative messaging and use abstractions to allow for future adaptations and flexibility. An agent has its own thread of control which promotes the concept of autonomy. In an embodiment, a corporate benefits analyst operates system140, for a uniformly priced company health service plan, which is available to all employees offering plans at the same price. In an embodiment the analyst obtains voluntary permission from employees at enrollment time for the employee to participate in an incentive program to receive a rebate, or discount on the premium for participation in a health risk assessment and/or reduction program. Analyst system140queries the credit rating agency systems191,190and142periodically obtaining soft-pull data for each enrolled employee, and the raw data is stored in storage121. As a result of processing the credit time history, as disclosed further herein, the analyst system140determines a risk category for an incentive program enrollee, such as, likely to increase in debt-load, or erratic debt-load, or likely to enter a high risk of financial stress. 
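A minimal sketch of how such a risk category might be assigned from the accumulated credit time history is given below. The category labels come from the passage above; the summary statistics and the cutoff values (2, 8, 70) are illustrative assumptions, not values taken from the disclosure, and the embodiments described herein derive their categories from the spectral analysis discussed later rather than from these simple rules.

```r
# Hedged sketch: mapping a monthly CUR history to the example risk categories.
categorize_enrollee <- function(cur) {
  trend      <- unname(coef(lm(cur ~ seq_along(cur)))[2])  # average monthly change in CUR
  volatility <- mean(abs(diff(cur)))                       # mean month-to-month swing
  if (trend > 2)           "likely to increase in debt-load"
  else if (volatility > 8) "erratic debt-load"
  else if (mean(cur) > 70) "likely to enter a high risk of financial stress"
  else                     "no elevated-risk category"
}

categorize_enrollee(c(10, 12, 15, 19, 24, 30, 37, 45, 52, 60, 67, 75))  # rising balances
```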
In an embodiment, the category is selected based on a predictor of an increased frequency of healthcare visits at some point in the future, based on a current history of personal financial records. In an embodiment, the category is correlated with claim amount. In an embodiment the category is correlated with a quantifier that incorporates through a mathematical equation both claim frequency and claim amount into a composite score. In an embodiment the category of enrollee that has been identified is communicated to the analyst in the form of a text alert, e.g. “enrollee John Doe is likely to enter a high risk financial stress region in one year, recommend incentive X, for contact at [email protected].” Here, X may be an incentive consisting of one or more of: providing discounted financial education service, providing discounted stress management service, providing reward incentives such as a greater discount on healthcare insurance, free meals, drinks, coupons, etc. if John Doe completes a company-provided mitigation service such as visiting a personal finance coach. In an embodiment a message to the analyst indicates the frequency of high stress present in an applicant pool, while keeping the identities confidential, so that the analyst is able to make recommendations for funding mitigating services. For example, on the basis of such information, a free course is offered to all employees for reducing financial stress, and free follow-up is given anonymously, without the analyst knowing the particular details for any individual of underlying financial data or category. In an embodiment, a plan offers a three-tier price level, with a first, highest level available to all who wish to keep their financial records private, a second discounted level available to those who allow their records to be accessed, but who have poor financial performance, and a third, most discounted level available to those who allow their records to be accessed and demonstrate a low-fiscal-stress lifestyle through the testing described herein. Out of necessity, astrophysicists who study gravitational waves have developed techniques that extract the maximum amount of information from short time series that arise from brief events. The same mathematical methods that are used in empirical identification of time series associated with gravitational waves can be fruitfully applied to the problem of identifying other short time series, including time series that arise in health and health care contexts. The existence of gravitational waves has been inferred from changes in the orbital periods of several binary pulsars, such as PSR 1913+16. However, gravitational waves have not yet been directly detected on Earth because of their extremely small effect on matter. ‘Orbital lifetime’ is a characteristic property of celestial objects that are gravitational radiation sources. Orbital lifetime determines the average number of binary stars in the universe whose gravitational waves are likely to be detectable. Short-lifetime binaries produce strong, readily-detectable gravitational radiation but are rare. Long-lifetime binaries are more numerous but emit gravitational waves that are weak and hard to detect. The ground-based instrument called LIGO (the Laser Interferometer Gravitational-Wave Observatory; two observatories about 3,000 km apart) is most sensitive in the frequency band (30 Hz to 7 kHz) where two neutron stars are about to merge. 
The time frame for merger or coalescence lasts only a few seconds. LIGO or similar instruments must detect this “blink” of gravitational waves emitted over a few seconds out of a million-year orbital lifetime. It is calculated that only about once per decade or so does a coalescence of two neutron stars happen in a manner that could be detected by LIGO. The Laser Interferometer Space Antenna (LISA; three spacecraft 5 million km apart, flying in a triangle formation) is a planned collaboration between the U.S. space agency, NASA, and the European Space Agency, ESA. If completed, LISA would be most sensitive in the frequency band between 0.1 mHz and 100 mHz, where coalescence of massive black holes or galactic binaries would be detected in the final months leading up to merger. In astrophysics, binary systems of objects that radiate gravitational waves may, over time, experience a decrease in the distance between the objects. This causes the emitted waves' frequency and amplitude to increase over time. The swept-frequency pattern is known as a ‘chirp’. Other types of objects that radiate gravitational waves include spinning neutron stars, whose waves' frequencies and amplitudes follow a recurrent, periodic cycle. In the case of the gravitational collapse of massive stars, resulting in supernovae, the patterns of gravity wave emission are far more complex and burst-like, with chirp-up and chirp-down motifs with frequencies ranging over 2 or 3 or more orders of magnitude in the frequency domain. As noted above, gravitational wave bursts can have a very short duration, so current GW detector design has to take this into account. There are approximately 3×10^10 msec per year, so even a fluctuation that has a probability of 10^-10 of occurring is likely to occur in one year of data. In order to eliminate most false-positive signals, a signal-to-noise ratio threshold is often used or, in some cases, multi-detector coincidence discrimination. But in insurance underwriting, there may be no need for coincidence discrimination by multiple events synchronously incident upon two or more ‘detectors’. Ordinarily, each event is incident upon only one insured. An embodiment, therefore, utilizes a gravitational wave analytic method that does not depend on multi-detector coincidence detection. Furthermore, traditional time-series analysis and forecasting methods are highly sensitive to the sequence in which events occur. Van den Berg described an example where the frequency-domain power spectrum of a time series s(t) can accurately establish the probability of the identity of an object when ordinary human and time-series methods fail to identify the object correctly. The power spectrum of a classical symphony or other musical work reveals in each time segment the dominating key, through the pattern of spectral intensities at frequencies associated with fundamentals and harmonics. If the sections of the musical work are played in a different order, the power spectrum would not change, but the ear and the mind, which perform time-frequency analysis, perceive very different content compared to how the original symphony is perceived. To avoid excessive sensitivity to arbitrary differences in the sequencing of events, an embodiment relies on a frequency-domain power spectrum analysis method to detect predominant frequencies and motifs. On a finite segment of length delta-t, the resolution in frequency is 1/delta-t. 
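The reordering property invoked above can be checked directly: a segment-averaged (Welch-style) periodogram computed over fixed, non-overlapping sections is unchanged when those sections are permuted, even though the time-domain signal is very different. The base-R sketch below is a minimal illustration under assumed choices (an arbitrary autoregressive test series, rectangular windows, segment length 32); it is not the Bayesian spectrum estimator used elsewhere in this disclosure.

```r
# Demonstration that a segment-averaged periodogram ignores the order of sections.
set.seed(7)
x <- as.numeric(arima.sim(model = list(ar = 0.7), n = 256))   # arbitrary test series

segment_avg_periodogram <- function(x, seg_len = 32) {
  segs  <- split(x, ceiling(seq_along(x) / seg_len))          # non-overlapping sections
  specs <- sapply(segs, function(s)
    spec.pgram(s, taper = 0, demean = TRUE, detrend = FALSE, plot = FALSE)$spec)
  rowMeans(specs)                                             # average across sections
}

sections  <- split(x, ceiling(seq_along(x) / 32))
reordered <- unlist(sections[sample(length(sections))], use.names = FALSE)

all.equal(segment_avg_periodogram(x), segment_avg_periodogram(reordered))  # TRUE
```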
We can give up fine resolution in frequency-space but, by so doing, gain information about when an event happened. Therefore, in one embodiment, rather than working in frequency-space with arbitrarily good resolution, we operate in the time-frequency plane, achieving a good compromise between the accuracy in frequency and the accuracy in time. This has advantages when we aim to detect transient phenomena, such as gravitational wave bursts or irregular alternations of patterns of credit utilization ratio changes (CUR motifs). In this regard, it is a commonplace that people naturally experience ‘epochs’ in their personal financial history. Each epoch is associated with characteristic patterns and rates of spending and, often, health services utilization. The temporal event motifs of chronic conditions like FICO score<600 or FICO score>800 are distinct and different from motifs associated with conditions such as arise with financial shocks that accompany major family events, like undertaking or retiring major mortgage or installment debt, birth of a child, children's entry into college, divorce, death of a member of the immediate family, retirement from employment, and so forth. The motifs associated with declining liquidity are punctuated by ‘ups-and-downs’, but the epochs' durations and successors are not, in general, as predictable as for the conditions noted for ‘exacerbations-and-remissions’. Through power spectrum analysis methods the offset of one epoch and the onset of a new epoch can often be detected from time series, within a span of 3 or 4 events or measurement periods. An embodiment treats the median power spectrum likelihood ascertained by Bayesian Markov Chain Monte Carlo simulation as one marker or ‘weight’ that measures instability of credit utilization ratio time series and, optionally, may measure the similarity of the record associated with the current entity to records from putative matching entities stored in the target database. In an embodiment, a distance between a reference spectrum ref1 and the present spectrum estimate is found, and the distance d1 is compared to a distance threshold Td to determine whether or not the likelihood measure d1 is below a Td. When the likelihood measure d1 is below Td then an adverse loss ratio or excess claim condition is predicted. When the likelihood measure d1 is above Td then an acceptable loss ratio or an acceptable claim frequency is predicted. Turning now toFIG.3, there is depicted in300a representative flow diagram of insurance risk decision processing. In an embodiment, a risk estimate is formed as a predictor of loss. In an embodiment the loss ratio experienced for an individual in the claims database is computed as a ratio of cost to the difference of revenues and cost. At310the current entity of interest is bound to the data to form data.frame (attributes and current data). The person being studied for risk assessment is associated with a data frame for analysis. At320the Groups are determined via Basis Characteristics. For example, there may be different groups of insured individuals that are separately grouped for analysis purposes. A family plan insurance enrollee, for example, is studied as a member of the family group, who, no doubt will have a higher number of claims associated, on average, than an individual enrollee, all other things being equal. Another example of determining group is high, medium, or low deductible plans. 
The high deductible plan enrollee is likely to have fewer claims than a low deductible enrollee; thus, in an embodiment, the model is formed for each group, and predictions are made with a knowledge of the group modeled. At330a “soft pull” credit information time series is formed. In an embodiment data is originally drawn from a credit agency such as190, on a periodic basis, e.g., daily, weekly, bi-weekly, monthly, quarterly, etc. and stored in operational data store325. In an embodiment a sliding window of data is formed from the raw data forming a minimum analysis window. In an embodiment a 24 month window is used for input time series. In an embodiment, a window length of 24 samples is used. At340the raw credit utilization ratio values are scaled to put all data on the same interval range and meaning. Different reporting agencies may have different periods or conditions for reporting soft-pull data, and so this step, when used, mitigates any potential agency bias. Beginning at350and continuing through360, and370to380, a method of determining a normalized likelihood weight from time series data is provided. Additional information about determining a normalized likelihood weight from a time series is provided by U.S. patent application Ser. No. 13/874,961 titled “System and Method for Record Linkage,” filed on May 1, 2013, which is herein incorporated by reference in its entirety. At350the time series is cast as a time series datatype. In an embodiment, the time series is projected beyond the observed time using a linear trend extension of the last six months of samples to project a trend into the next six or eight months of samples. In an embodiment, the linear projection is capped, so that the projection does not extend above 100% credit utilization. In an embodiment the most recent samples are mirrored to project behavior for future months. In an embodiment, the record is extended into the future to form a power-of-two sample size such as 32 samples. In an embodiment, the time series is created at a high sampling rate such as a daily basis or a weekly basis, and the data is reduced to a monthly value by taking the peak credit utilization over a monthly window to form the time series. In an embodiment, a windowing method is applied to minimize a discontinuity at the edge of the sample window. In an embodiment, records are overlapped and windowed to form two parallel time series records, and two resultant power spectrum estimates, and the resulting power spectra are added to form a power spectrum estimate. In an embodiment, the overlapped records only differ by a single month of data. At360the Bayesian power spectrum is computed for the time series. In an embodiment, the R-System package bspec is used. In an embodiment, a power spectrum estimate is formed using one of a wavelet transform, a discrete cosine transform, a discrete Fourier transform, a periodogram method, a Bartlett method, a Welch method, and an autoregressive moving average estimate. In an embodiment, the low frequency terms are used, and the high frequency terms are discarded. In an embodiment only the lowest eighth of the frequency terms are kept. In an embodiment the likelihood (probability) of each spectrum is calculated by iteratively permuting the spectrum and sampling the resulting permutations by Bayesian Markov Chain Monte Carlo simulation. In an embodiment 500 iterations are computed, and the median likelihood for each entity is retained. 
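A minimal R sketch of steps 330 through 360 follows. It assembles a 24-month CUR window, applies the capped linear-trend projection to a power-of-two length of 32 samples, and keeps only the lowest eighth of the spectral terms, as described above. The classical periodogram (spec.pgram) is used here only as a stand-in for the Bayesian spectrum of the bspec package named in the text, and the simulated input values are illustrative.

```r
# Hedged sketch of steps 330-360: window, scale, project, and estimate a
# low-frequency spectrum for a monthly credit utilization ratio (CUR) series.
prepare_cur_window <- function(cur_raw, target_len = 32) {
  cur <- pmin(pmax(cur_raw, 0), 100)                    # put values on a common 0-100 scale
  n   <- length(cur)                                    # e.g. a 24-sample monthly window
  fit <- lm(y ~ t, data = data.frame(t = (n - 5):n,     # linear trend of the last 6 months
                                     y = cur[(n - 5):n]))
  future <- predict(fit, newdata = data.frame(t = (n + 1):target_len))
  c(cur, pmin(pmax(future, 0), 100))                    # projection capped at 0-100% utilization
}

low_freq_spectrum <- function(cur_window) {
  sp   <- spec.pgram(ts(cur_window, frequency = 12), taper = 0,
                     detrend = TRUE, plot = FALSE)
  keep <- seq_len(ceiling(length(sp$spec) / 8))         # retain only the lowest eighth
  list(freq = sp$freq[keep], spec = sp$spec[keep])
}

set.seed(1)
cur24 <- pmin(100, pmax(0, 20 + cumsum(rnorm(24, mean = 1.5, sd = 4))))
low_freq_spectrum(prepare_cur_window(cur24))
```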
In an embodiment the entropy is computed by one of the Shannon entropy, symbol entropy, approximate entropy or Chao-Shen entropy. In an embodiment the disorder in the spectrum is quantified and used as a measure of disorder in a financial time series. In an embodiment, a variability statistic such as entropy is calculated. In an embodiment the entropy is computed by one of the Shannon entropy, approximate entropy, or Shannon Renyi entropy. In an embodiment a variability statistic over the raw CUR series is calculated. In an embodiment, a variability statistic and/or entropy is calculated from a series as provided in U.S. Provisional Patent Application 61/879,792 titled “Personal Analysis and Chronotherapy,” filed on Sep. 19, 2013, which is herein incorporated by reference in its entirety. In an embodiment a variability statistic is calculated iteratively as each observation sample, such as a monthly sample, is added. At370the likelihood values for all power spectra are optionally sorted and rank-ordered. At380the resulting series is scaled to a range such as (0,1), to calculate a normalized power spectrum likelihood weight. At390a distance is optionally calculated between the resultant spectrum and one or more reference spectra. A number of reference spectra may be chosen according to classification criteria, such as identifying clusters for choosing a threshold that is commensurate with the underlying pattern. For example, ref1 typifies cluster 1, ref2 typifies cluster 2, ref3 typifies cluster 3. In an exemplary embodiment a distance is calculated from the resultant spectrum to each of ref1, ref2, and ref3, and if the distance is small between ref3 and the resultant spectrum, then the cluster 3 threshold is used at393rather than the default threshold. Other reasons for calculating distances might include tracking the spectrum change month to month, or adapting the underlying reference model over time. Other reasons for calculating a distance include looking for aberrant patterns from the past that have been associated with very poor individual claims performance. In such a use case, an aberrant pattern is identified by checking each case of bad individual claim performance, and testing the distance of the use case from other, non-aberrant cases across the spectrum of users. In an embodiment machine learning is used to identify an aberrant pattern worthy of looking for in the future. In an embodiment distance is a vector norm formed over the difference vector. In an embodiment the norm is the 2-norm or Euclidean distance. In an embodiment, the distance is the p-norm. In an embodiment the distance is the 1-norm or sum of absolute values of elements. In an embodiment the norm is the infinity norm, or effectively the maximum absolute value over the set of elements. In an embodiment, variability is taken as an indication of stress. In an embodiment percentiles are calculated over the interval from the CUR data, including in an embodiment, variability or entropy. In an embodiment, a stress statistic is formed to represent the stress of the applicant for incorporation into an actuarial model of risk. In an embodiment a stress statistic is formed over a time series representing variability of the CUR time series. 
In an embodiment the stress statistic is formed by computing one or more of mean, median, mode, standard deviation, variance, skewness, kurtosis, mean absolute difference, median absolute difference, a rank order statistic, an absolute difference, a peak value, a coefficient of variation, and a peak difference. In an embodiment, adjacent values in a series are compared by forming a first adjacent absolute difference statistic and a second adjacent absolute difference statistic, and so on until a kth adjacent absolute difference statistic is calculated. In an embodiment disorder is quantified as the sum of the averages of the first k absolute difference statistics. In an embodiment k=3. In an embodiment k=5. In an embodiment a risk category such as high risk, moderate risk, or low risk is computed from the stress statistic. In an embodiment a percentile of a statistic is identified for the applicant. In an embodiment the applicants in a pool that are among the top X% of variability or entropy are identified as high stress. In an embodiment the applicants in a pool that are among the bottom Y% are identified as being low stress. For example, the highest 10% of entropy are determined to be in a high variability regime with increased stress, and the bottom 20% are deemed to be in a regime with decreased stress. In an embodiment, percentages identified in an insurance coverage sense are calculated. In an embodiment one or more stress statistics are used as an input into an actuarial model that calculates one or more of insurance risk score, predicted insurance loss ratio, predicted annualized claim number, likelihood of excess claims, and other indices. In an embodiment the amount of disorder is taken as a reflection of stress. At393a decision is formed, e.g. by comparing the median posterior probability to a threshold. If the probability is greater than the chosen threshold, the method proceeds to395where a favorable claims condition is predicted, such as an acceptable loss ratio or a claims frequency below population norms. In an embodiment a claim risk category is stored, e.g. in operational data store325. In an embodiment, at395a favorable claim condition is predicted and displayed as shown, e.g. inFIGS.10A-10C. If the probability is less than the threshold, the method proceeds to397where an unfavorable claims condition is predicted, such as an adverse loss ratio or an excessive number of claims. In an embodiment, at397a claim risk category is stored, e.g. in operational data store325. In an embodiment, at397an unfavorable risk condition is displayed as shown in one or more ofFIGS.7A-7C. An embodiment selects the threshold by weighing the relative financial costs of an estimated false positive rate against the costs of an estimated false negative rate. An embodiment selects the threshold to identify a certain fraction of the population as determined to be of higher risk. For example, in an embodiment in which the financial coaching services are provided free of charge, the top N employees could be identified as most likely in need of stress-reducing financial coaching. In an embodiment, a low-risk pool, such as the bottom 5% of risk, is identified as a pool of desirable clients to attract, or as meriting a lower-cost group. 
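One reading of the adjacent-difference construction above is sketched below in R: the j-th adjacent absolute difference statistic is taken as the mean of |x(i+j) - x(i)| over the series, disorder is the sum of the first k of these averages (k = 3 here), and pool members in the top 10% and bottom 20% of the statistic are labeled high and low stress, as in the example. The simulated pool and this particular reading of the statistic are assumptions for illustration.

```r
# Hedged sketch: a disorder/stress statistic from adjacent absolute differences,
# followed by percentile-based labeling of a pool of applicants.
stress_statistic <- function(cur, k = 3) {
  sum(vapply(1:k, function(j) mean(abs(diff(cur, lag = j))), numeric(1)))
}

classify_pool <- function(stats, high_pct = 0.10, low_pct = 0.20) {
  hi_cut <- quantile(stats, 1 - high_pct)
  lo_cut <- quantile(stats, low_pct)
  ifelse(stats >= hi_cut, "high stress",
         ifelse(stats <= lo_cut, "low stress", "intermediate"))
}

set.seed(3)
pool_stats <- replicate(50,
  stress_statistic(pmin(100, pmax(0, 30 + cumsum(rnorm(24, sd = 5))))))
table(classify_pool(pool_stats))
```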
In an embodiment, at393a decision is formed by comparing a number of distance measures to a threshold, so that the method proceeds to395when all of the distances compared are greater than the corresponding thresholds for each test, and predicts that the loss ratio is acceptable or that the claims frequency is in accord with population norms. When at least one of the distance measures is less than a threshold, the method proceeds to397where an adverse loss ratio is predicted or an excessive claims frequency is predicted. In an embodiment, at393a likelihood measure is chosen to be near zero when a calculated distance is within a tolerance of zero, and otherwise the likelihood is determined to be a reciprocal of the distance measure. In an embodiment a likelihood measure is chosen to be near zero when the sum of the distance measures is within a tolerance of zero, and otherwise the likelihood measure is determined to be a reciprocal of the sum of the distance measures. Continuing withFIG.3, and approaching the algorithm performance from another vantage point, a flow diagram is provided which illustrates an embodiment of a system and method for generating a list of claim performance predictions. An embodiment includes the following steps:
1. Bind the record of an entity for which it is desired to find any and all matching entities in the target system.
2. Optionally, determine the group to which the entity belongs, based on policy type or conventional basis characteristics.
3. Perform a “soft pull” inquiry for preferably not less than 24 months of credit utilization ratio data, from one or a plurality of credit rating agency records.
4. Scale the raw CUR values if necessary (for example, to a unified scale from 0.0 to 1.0, or to a unified scale from 0 to 100).
5. Take the x.
6. Scale the credit utilization values to a standardized scale (for example, 0 to 1 or 0 to 100 floating-point).
7. Calculate power spectra for each time series from Step 6.
8. Calculate the likelihood (probability) of each spectrum by iteratively permuting the spectrum and sampling the resulting permutations by Bayesian Markov Chain Monte Carlo simulation, preferably executed for not less than 500 iterations, retaining the median likelihood for each entity.
9. Sort and rank-order the median likelihood values.
10. Normalize the likelihood values from Step 9 to lie within the range (0,1) to form a power spectrum weight (PS_wt) for each entity.
11. Determine for each entity whether the power spectrum weight of Step 10 exceeds a heuristic threshold, or utilize the power spectrum weight as an Insurance Risk Score independently from other actuarial models and methods.
12. Optionally, enter the value of the power spectrum weight or a transformed variable derived from the power spectrum weight into an actuarial model in combination with a plurality of other basis characteristics variables.
An embodiment of the flow diagram ofFIG.3is shown in greater detail in the computer program routine shown inFIG.15. The total loss may be estimated using a regression model as demonstrated in the program routine shown inFIGS.16A-C. Turning now toFIGS.4A-4C, there is shown therein a representative analysis for a first individual over a 24 month interval as depicted in the originating time series (FIG.4A), the resultant autocorrelation function (FIG.4B) and the resultant Bayesian Power Spectrum with error bars (FIG.4C). Representative data for the first individual shows a car purchase which happened at some time after the 14th month. 
This car purchase resulted in a hard pull of the credit information of the applicant from a credit agency. For this case, the underlying model produced a median posterior Bayesian likelihood near zero and therefore predicted excess claims in the subsequent 12 month interval, since a probability threshold of about 10^-5 is used for the illustrated embodiment. There were no excess claims experienced in this case for the subsequent 12 month interval. Turning now toFIGS.5A-5C, there is presented a case analogous toFIGS.4A-4Cfor a second individual with a completely flat CUR time series of zero ratio. CUR is ordinarily defined as the amount of all outstanding balances on all credit cards divided by the sum of the limits of the credit cards, and is typically expressed as a percentage.FIG.5Adepicts a person with no balance and no activity over the interval. Since the probability does not fall below the threshold, the embodiment depicted does not flag the second individual as likely to have excess claims in the subsequent 12-month interval. The second individual is living debt-free. Notice that the results would be the same for a person who was not allowed to carry any debt balance, because he had not been issued any credit cards with allowed balances. Turning now toFIGS.6A-6C, there is presented a case analogous toFIGS.4A-4Cfor a third individual with a completely flat CUR time series but a small (3%) balance that remains for the interval. The decision for the third individual is the same as for the second individual. Thus the illustrated embodiment makes a similar decision even if an individual carries a stable load of debt, as opposed to living debt-free. Turning now toFIGS.7A-7C, there is presented a case analogous toFIGS.4A-4Cfor a fourth individual who has been building a debt balance for two years. The illustrated embodiment decides based on the power spectrum that excess claims are likely in the subsequent 12 month interval. Turning now toFIGS.8A-8C, there is presented a case analogous toFIGS.4A-4Cfor a fifth individual who has an erratic balance over the two years. The illustrated embodiment decides based on the power spectrum that excess claims are likely in the subsequent 12 month interval. Turning now toFIGS.9A-9C, there is presented a case analogous toFIGS.4A-4Cfor a sixth individual who has an erratic balance over the two years, with a recent trend toward increasing balance and instability. The illustrated embodiment decides based on the power spectrum that excess claims are likely in the subsequent 12 month interval. Turning now toFIGS.10A-10C, there is presented a case analogous toFIGS.4A-4Cfor a seventh simulated individual who has had a small budgeting variation and has curtailed it over the two years. The illustrated embodiment decides based on the power spectrum that excess claims are not likely in the subsequent 12 month interval. In an embodiment, a small amount of stable variation like the present case is used to select a threshold, for example, choosing the threshold as a factor smaller than the resultant value, e.g. a factor of 2 or a factor of 5, or a factor of 10. Turning now toFIGS.11A-11C, there is presented a case analogous toFIGS.4A-4Cfor an eighth simulated individual who has had a small budgeting variation but has curtailed similar problems in the past over the two years. The illustrated embodiment decides based on the power spectrum that excess claims are not likely in the subsequent 12 month interval. 
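For intuition, the representative profiles of FIGS. 5 through 10 can be imitated with simulated data and compared on a simple scale-free disorder summary, such as the Shannon entropy of the normalized periodogram. This is only an illustration: the values below are invented, and the illustrated embodiment's actual decisions rest on the Bayesian median posterior likelihood and threshold described above, not on this entropy summary.

```r
# Simulated CUR profiles loosely patterned on the cases above (invented data),
# compared with the Shannon entropy of each profile's normalized periodogram.
set.seed(21)
months <- 1:24
profiles <- list(
  flat_zero     = rep(0, 24),                                        # debt-free
  stable_small  = rep(3, 24),                                        # constant 3% balance
  building_debt = pmin(100, 5 + 3.5 * months + rnorm(24, sd = 2)),   # steady two-year growth
  erratic       = pmin(100, pmax(0, 35 + 25 * sin(months) + rnorm(24, sd = 10)))
)

spectral_entropy <- function(x) {
  if (sd(x) == 0) return(0)                # constant series: treated as zero disorder
  p <- spec.pgram(x, taper = 0, detrend = TRUE, plot = FALSE)$spec
  p <- p / sum(p)                          # normalize the periodogram to a "probability"
  -sum(p * log(p))
}

sapply(profiles, spectral_entropy)
```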
Turning now toFIGS.12A-12C, there is presented a case analogous toFIGS.4A-4Cfor a ninth individual who is carrying a large and variable balance over the two years. The illustrated embodiment decides based on the power spectrum that excess claims are likely in the subsequent 12 month interval. Turning now toFIGS.13A-13C, there is presented a case analogous toFIGS.4A-4Cfor a tenth individual who is carrying a small and variable balance over the two years. The illustrated embodiment decides based on the power spectrum that excess claims are likely in the subsequent 12 month interval. This example thus presents a false-positive result. FIG.14illustrates the performance of personalized insurance risk scoring using credit utilization time series. Informed consent was obtained from a series of 1,002 subjects in accordance with applicable U.S. law and regulations. Measurements of subjects' credit were collected via monthly “soft pull” inquiries to credit rating agencies for each subject for a period of 24 months. Records were randomly selected from a health plan records data warehouse (analogous to an EIR) containing 100% of claims that were incident upon the plan during the year subsequent to the measurement period. The personally-identifiable information was removed in conformance with U.S. HIPAA law and regulations, and the de-identified data were stored in a separate, secure database. We recast the data in the form of time series, and analyzed the sequences using the open-source R statistical package bspec. The results shown inFIG.14indicate that there were about 21 false positives in the pool, for a false positive rate of about 2%. Accurate loss ratio estimation is vital to the financial performance of insurance products and health plans. In an embodiment, an application service enables improvements in loss ratio estimation in insurance underwriting and pricing, particularly in health insurance. Stress affects claims because it is a gateway to serious health conditions, to less healthful lifestyle choices and behavior, and to worsening existing health conditions, resulting in a deteriorating insurance loss ratio and claims experience. Stress is widespread in society and in the workplace. Hundreds of research studies have examined how aspects of jobs, organizational behavior, and activities of daily living can create stress for consumers and can contribute to mental health conditions and other physical health problems. Events in one's family can be a major source of stress that can manifest itself in the workplace. Many persons in the prime of their working years are stressed by caring both for young children and for an aging parent. Many caregivers experience significant employment-related consequences from having to balance greater amounts of time devoted to providing family support with time at work. Insurance Risk Scores must be based exclusively on objective, factual information, including consumer accounts such as credit cards, retail store cards, mortgages, and auto loans. Public record information, including bankruptcies, liens and judgments, and collection accounts, is also permitted. All of this factual credit information is received by credit rating agencies such as Equifax, TransUnion, Experian, and FICO from tens of thousands of financial institutions, retailers, and courthouses on a monthly basis.
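The evaluation summarized forFIG.14can be tallied with a few lines of code. In the sketch below, the predicted and observed arrays are hypothetical boolean vectors marking, respectively, subjects flagged by the model and subjects who actually experienced excess claims in the subsequent year; roughly 21 false positives in a pool of 1,002 subjects works out near the reported 2%.

```python
# Illustrative tally only; the arrays are assumptions, not the study data.
import numpy as np

def false_positive_rate(predicted, observed):
    predicted = np.asarray(predicted, dtype=bool)   # model flags
    observed = np.asarray(observed, dtype=bool)     # actual excess claims
    false_pos = int(np.sum(predicted & ~observed))
    true_neg = int(np.sum(~predicted & ~observed))
    return false_pos / (false_pos + true_neg)

# Example: about 21 false positives among roughly 1,000 subjects without
# excess claims gives a rate of about 0.02, i.e., about 2%.
```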
To date, no basis characteristic related to patterns of credit utilization ratio or other information in credit reports has been known to be prohibited by the Comptroller of the Currency for use in insurance underwriting. The present technology solves the challenge of discovering how such information relates to health claims experience and services utilization. An application service performs periodic, ongoing “soft-pull” retrievals of an insured individual's credit utilization ratio or CUR. The CUR is the percentage of the total lines of credit that are currently being used (currently unpaid balances). Bi-weekly or monthly values are assembled into a time series for each insured, and a Bayesian power spectrum is calculated. A mathematical model calculates the amount of irregular or chaotic variability (entropy), and the Bayesian probability (spectral likelihood) is also computed. The result is a measure that correlates with the number and size of insurance claims. An embodiment focuses on medical or health-related claims. In an embodiment, empirical financial stress variability is used to determine claim risk for other insurance types such as property, auto, life, casualty, etc. In other words, the amount of credit extended that is used by the insured has only a weak relationship to claims experience, but spectral-analytic features of the variability in the CUR are strongly and consistently related to claims. An application service provides a new and important measure of financial stress that is distinct from traditional actuarial measures and distinct from “macro” financial metrics like the CUR itself. The underlying metric provides an important new predictor of health-related financial risk that works hand-in-hand with conventional actuarial models. In an embodiment, an application service is embedded as a component in an existing model for plan and product management, premia-setting, cash-reserving, and other purposes. Financial stress is related to increased health insurance claims. An application service predicts the effect of financial stress on future health claims, which can arise in several ways. A number of studies suggest that the impact of debt on mental health may be mediated by personal attitudes towards debt or, more specifically, “debt worry.” It is possible, for example, that participants' attitudes towards debt as recorded in the studies also reflect other personal concerns or variables that may not be measured (for example, current income, expected future income, family financial situation). Where unmeasured or not controlled for, these variables may also impact the measures of a person's mental health or psychological wellbeing. Similarly, anxiety about debt might reflect a person's general anxiety or psychological outlook. People who score higher on measures of anxiety or depression might be more likely to have a negative view of their finances. Although studies indicate a correlation between actual debts and debt worries, there is also evidence that the relationship between the two is more complex, and may additionally be affected by other factors. Although the invention has been described with reference to the embodiments illustrated in the attached drawing figures, it is noted that substitutions may be made and equivalents employed herein without departing from the scope of the invention. For example, additional steps may be added and steps omitted without departing from the scope of the invention.
Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the spirit and scope of the present invention. Embodiments of the invention have been described with the intent to be illustrative rather than restrictive. Alternative embodiments that do not depart from its scope will become apparent to those skilled in the art. A skilled artisan may develop alternative means of implementing the aforementioned improvements without departing from the scope of the invention. It will be understood that certain features and subcombinations are of utility and may be employed without reference to other features and subcombinations and are contemplated within the scope of the claims. Not all steps listed in the various figures need be carried out in the specific order described.
71,446
11861732
DESCRIPTION The systems and methods of the present disclosure provide a machine learning based solution to the technical problem of identifying fraudulent or other criminal activity such as e.g., fraudulent merchants, fraudulent transactions, criminal monetary transactions, and fake invoices. FIG.1shows a flowchart for a computer-implemented method100for detecting fraud in accordance with an example embodiment of the present disclosure. Certain aspects of the method100are disclosed in U.S. patent application Ser. No. 16/710,973, which is incorporated herein by reference in its entirety. The steps of method100are exemplary, and elements may be added or removed from the method100without deviating from the inventive concepts of the present application. In one or more embodiments, the method100may include the following steps: a step110of obtaining financial data of a merchant, wherein the financial data includes a declared industry of the merchant; a step120of determining, via a machine learning model, a first prediction of the merchant's industry; a step130of generating a first probability matrix based on the first prediction and the declared information regarding the merchant's industry; a step140of determining, via the machine learning model, a second prediction of the merchant's industry; a step150of generating a second probability matrix based on the second prediction and the declared information regarding the merchant's industry; a step160of obtaining a declared industry of a subject merchant in a runtime environment; a step170of determining, via the machine learning model, a predicted industry for the subject merchant; a step180of obtaining, based on the declared industry and the predicted industry of the subject merchant, a first value from the first probability matrix and a second value from the second probability matrix, and a step190of labeling the subject merchant for further investigation based on a comparison of the first value to a first threshold and a comparison of the second value to a second threshold. In one or more embodiments, at step110, the financial data can be obtained from various sources including, but not limited to, data management systems such as small business data management systems, personal financial data management systems, transaction data management systems, and the like, that offer various financial document preparation and submission capabilities such as billing, bill payment, estimates, inventory, and other financial document creation and dissemination capabilities, to the users of these data management systems. In example embodiments, the financial data can be obtained from financial data documents that include, but are not limited to, invoices generated by the merchant; invoices received by the merchant; estimates provided by the merchant; inventory documents associated with the merchant; revenue documents associated with the merchant; accounting documents associated with the merchant; correspondence documents associated with the merchant; social media postings associated with the merchant; website postings associated with the merchant; domain names associated with the merchant; email addresses associated with the merchant; phone numbers associated with the merchant; addresses associated with the merchant; and any other document or business related document data associated with a merchant as discussed herein, known in the art at the time of filing, or as becomes known after the time of filing. 
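Step110obtains the merchant's financial data, including the merchant's declared industry, from such documents. As discussed further below, the document data may be converted into a JSON representation, from which the declared industry can be read directly. The following sketch is illustrative only; the field names shown are assumptions, not a schema used by any particular data management system.

```python
# Hypothetical field names ("merchant", "declared_industry"); real document
# data converted to JSON will follow its own schema.
import json
from typing import Optional

def extract_declared_industry(document_json: str) -> Optional[str]:
    """Read a merchant's self-declared industry (e.g., an NAICS/MCC/SIC label)
    out of JSON-formatted financial document data, if present."""
    record = json.loads(document_json)
    return record.get("merchant", {}).get("declared_industry")

invoice = '{"merchant": {"name": "Acme LLC", "declared_industry": "Mining"}}'
print(extract_declared_industry(invoice))  # -> Mining
```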
One or more of the aforementioned financial data documents may provide information regarding a self-declaration or self-reporting by a merchant of their industry (i.e., declared information regarding the merchant's industry), which may be based on a classification of industries by type of economic activity (process of production). Non-limiting examples include the North American Industry Classification System (NAICS) code, a Merchant Category Code system (MCC) code, the Standard Industrial Classification (SIC) system, etc. Well known examples of industries include educational services, accommodation and food services, mining, and real estate and rental and leasing, to name a few. Known techniques can be used to obtain or extract relevant financial data (e.g., a self-declaration or self-reporting by a merchant of their industry) from the financial data documents. For example, Optical Character Recognition (OCR) techniques and/or JSON formatting can be used to identify and extract the financial data associated with each of the financial documents. In one or more embodiments, the method100includes the step120of determining, via a machine learning model, a first prediction of the merchant's industry. The machine learning model can be trained as described in FIG. 1 of U.S. patent application Ser. No. 16/710,973 and the associated description. In an example embodiment, the training data can be generated using a subset (e.g., 20%) of merchants. The machine learning model can be a supervised learning model (e.g., neural networks, support vector machines, etc.) or an unsupervised learning model (e.g., regression, reinforcement learning, clustering, etc.). U.S. patent application Ser. No. 16/710,973 provides further details of the machine learning models. The predictions generated by the machine learning model include data indicating one or more industries associated with the merchant. The first prediction has the highest business segment probability score of all the predictions of the machine learning model. As described in U.S. patent application Ser. No. 16/710,973, this score indicates a probability or confidence in the model's prediction of the merchant's industry. In one or more embodiments, the method100includes the step130of generating a first probability matrix based on the first prediction (determined in step120) and the declared information (obtained in step110) regarding the merchant's industry. Known methods of generating a probability matrix may be used for this step. An example of such a method is described in U.S. Patent Publication No. U.S.20210256579A1, incorporated herein by reference. FIG.2shows an example first probability matrix200generated using step130. In this example, the predicted industry (rows of the matrix) and the declared industry (columns of the matrix) are educational services, accommodation and food services, mining, and real estate and rental and leasing. For the prediction “Educational Services”, the declared industry is 30% “Educational Services”; 30% “Accommodation and Food Services”; 30% “Mining”; and 10% “Real Estate and Rental and Leasing”. Therefore, for each of the predictions (predicted industry), the sum of the declared industry percentages is 100% (30%+30%+30%+10%). For the prediction “Accommodation and Food Services”, the declared industry is 5% “Educational Services”; 70% “Accommodation and Food Services”; 10% “Mining”; and 15% “Real Estate and Rental and Leasing”, with the sum being 100%.
For the prediction “Mining”, the declared industry is 1% “Educational Services”; 3% “Accommodation and Food Services”; 90% “Mining”; and 6% “Real Estate and Rental and Leasing”, with the sum being 100%. For the prediction “Real Estate and Rental and Leasing”, the declared industry is 0% “Educational Services”; 0% “Accommodation and Food Services”; 0% “Mining”; and 100% “Real Estate and Rental and Leasing”, with the sum being 100%. In one or more embodiments, the method100includes the step140of determining, via the machine learning model, a second prediction of the merchant's industry. Similar to the first prediction, the second prediction includes data indicating one or more industries associated with the merchant. The second prediction has the second highest business segment probability score of all the predictions of the machine learning model. As described in U.S. patent application Ser. No. 16/710,973, this score indicates a probability or confidence in the model's prediction of the merchant's industry. In one or more embodiments, the method100includes the step150of generating a second probability matrix based on the second prediction and the declared information regarding the merchant's industry. Similar to step130, known methods of generating a probability matrix may be used for step150. FIG.3shows an example second probability matrix300generated using step150. In this example, the predicted industry (rows of the matrix) and the declared industry (columns of the matrix) are educational services, accommodation and food services, mining, and real estate and rental and leasing. For the prediction “Educational Services”, the declared industry is 55% “Educational Services”; 23% “Accommodation and Food Services”; 11% “Mining”; and 11% “Real Estate and Rental and Leasing”. Therefore, for each of the predictions (predicted industry), the sum of the declared industry percentages is 100% (55%+23%+11%+11%). For the prediction “Accommodation and Food Services”, the declared industry is 1% “Educational Services”; 97% “Accommodation and Food Services”; 1% “Mining”; and 1% “Real Estate and Rental and Leasing”, with the sum being 100%. For the prediction “Mining”, the declared industry is 0% “Educational Services”; 0% “Accommodation and Food Services”; 91% “Mining”; and 9% “Real Estate and Rental and Leasing”, with the sum being 100%. For the prediction “Real Estate and Rental and Leasing”, the declared industry is 50% “Educational Services”; 24% “Accommodation and Food Services”; 12% “Mining”; and 14% “Real Estate and Rental and Leasing”, with the sum being 100%. The term “merchant” as used with respect to steps110-150is not limited to a single merchant but can also include multiple merchants. Similarly, the term “merchant's industry” can refer to the respective industry of each merchant if multiple merchants are involved. As discussed in more detail below, after the first and second probability matrices are generated as described with respect to steps130and150, they are deployed in a runtime environment to generate probable industry data for a merchant (i.e., subject merchant) based on that merchant's financial document data. In one or more embodiments, the method100includes the step160of obtaining a declared industry of a subject merchant in the runtime environment. The subject merchant can be a merchant that has been previously identified as conducting business in one or more industries and may have a unique merchant identifier.
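One plausible way to arrive at matrices like those ofFIG.2andFIG.3is to count (predicted industry, declared industry) pairs over a set of merchants and normalize each row to 100%; the sketch below illustrates that reading. The construction actually contemplated by the disclosure is described in the incorporated U.S. Patent Publication, so the code should be taken as an assumption-laden illustration rather than the method itself.

```python
# Sketch only: build a row-normalized matrix of declared-industry percentages
# for each predicted industry, as in the FIG. 2 / FIG. 3 examples.
from collections import Counter, defaultdict

def build_probability_matrix(pairs, industries):
    """pairs: iterable of (predicted_industry, declared_industry) tuples."""
    counts = defaultdict(Counter)
    for predicted, declared in pairs:
        counts[predicted][declared] += 1
    matrix = {}
    for predicted in industries:
        row_total = sum(counts[predicted].values())
        matrix[predicted] = {
            declared: 100.0 * counts[predicted][declared] / row_total if row_total else 0.0
            for declared in industries
        }
    return matrix  # matrix[predicted][declared] is a percentage; each row sums to 100%
```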
In example embodiments, the declared industry associated with the subject merchant can be obtained/extracted using known techniques from financial data documents of the subject merchant that include, but are not limited to, invoices generated by the merchant; invoices received by the merchant; estimates provided by the merchant; inventory documents associated with the merchant; revenue documents associated with the merchant; accounting documents associated with the merchant; correspondence documents associated with the merchant; social media postings associated with the merchant; website postings associated with the merchant; domain names associated with the merchant; email addresses associated with the merchant; phone numbers associated with the merchant; addresses associated with the merchant; and any other document or business related document data associated with a merchant as discussed herein, known in the art at the time of filing, or as becomes known after the time of filing. In one or more embodiments, OCR (optical character recognition) techniques are used to identify and extract the declared industry from financial documents associated with the subject merchant. Various OCR systems and techniques are well known to those of skill in the art. Consequently, a more detailed description of the operation of any specific OCR technique used to identify and extract the declared industry associated with each of the financial documents is omitted here to avoid detracting from the invention. In another example embodiment, JSON (JavaScript Object Notation) can be used as an open-standard file format that uses human readable text to transmit data objects consisting of attribute-value pairs and array data types. Importantly, when text is converted into JSON file format, each object in the text is described as an object at a very precise location in the text document. Consequently, when text data, such as a subject merchant's financial document data, is converted into JSON file format, the declared industry of the subject merchant can be indicated as the object, and the precise location of the object and data associated with that object in the vicinity of the object is indicated. Consequently, by converting merchant financial documents data into a JSON file format, the identification of the declared industry within the merchant financial document data is a relatively trivial task. JSON is well known to those of skill in the art; therefore, a more detailed discussion of JSON, and JSON file formatting, is omitted here to avoid detracting from the invention. In one or more embodiments, the method100includes the step170of determining, via the machine learning model, a predicted industry for the subject merchant. In various embodiments, the predicted industry represents one or more business codes determined to be associated with the subject merchant's financial documents data, such as a North American Industry Classification System (NAICS) code, a Merchant Category Code system (MCC) code, or any code used with any standardized business segment classification systems as discussed herein or known in the art at the time of filing, or as become known after the time of filing. In one or more embodiments, the method100includes the step180of obtaining, based on the declared industry (determined in step160) and the predicted industry (determined in step170) of the subject merchant, a first value from the first probability matrix and a second value from the second probability matrix.
As an example, if in step160, the declared industry of the subject merchant is “Accommodation and Food Services” and the predicted industry is “Educational Services”, the first value obtained in step180is 30% and the second value obtained in step180is 23%. As another example, if in step160, the declared industry of the subject merchant is “Educational Services” and the predicted industry is “Accommodation and Food Services”, the first value obtained in step180is 5% and the second value obtained in step180is 1%. In one or more embodiments, the method100includes the step190of labeling the subject merchant for further investigation based on a comparison of the first value to a first threshold and a comparison of the second value to a second threshold. For example, the subject merchant can be labeled for further investigation when the first value is lower than a first threshold and/or the second value is lower than a second threshold. In some embodiments, the first threshold and the second threshold can be the same. In other embodiments, the first and/or second thresholds can vary based on the industry of the prediction (e.g., educational services can have 25% as the threshold, but mining may have 10% as the threshold). In an example embodiment, the first and second thresholds can both be set to 25%. As noted in the above example, with the predicted industry as educational services, the first value is 30% (>25%) and the second value is 23% (<25%). Therefore, the subject merchant is labeled for further investigation because one of the two values, i.e., the second value of 23%, is lower than the threshold of 25%. In another example embodiment, with the same numbers provided in the previous example (the first and second thresholds both set to 25%, the predicted industry as educational services, the first value 30%, and the second value 23%), labeling may not occur because both values (30% and 23%) are not lower than the threshold (25%). This is because, in this embodiment, labeling the subject merchant for further investigation will only occur when both the first value (30%) is lower than a first threshold (25%) and the second value (23%) is lower than a second threshold (25%). The labeling for further investigation can be used to identify and prevent fraudulent or other criminal activity. The protective actions to prevent such activity can include, but are not limited to, contacting the merchant to clarify the discrepancy in industry assignment; suspending all merchant activity within a data management system used by the merchant until the discrepancy in the industry assignment is resolved; sending financial document data associated with the merchant to a fraud/criminal activity specialist for analysis; closing down any accounts within a data management system used by the merchant; or any other protective action as discussed herein, known at the time of filing, or that becomes known after the time of filing. Some or all of the aforementioned embodiments of the method100can be directed to various software/products/services such as catalog services, order services, subscription services, billing services, account services, entitlement services for tax preparation software product or software service, financial management software product or software service, payroll software product or software service, accounting software product or software service, etc.
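The comparisons of steps180and190can be expressed compactly. The sketch below reproduces the worked example above (first value 30%, second value 23%, both thresholds 25%) and shows both variants described: labeling when either value falls below its threshold, or only when both do. The function name and signature are illustrative, not part of the disclosure.

```python
# Sketch of steps 180-190 using the worked example above: predicted industry
# "Educational Services", declared industry "Accommodation and Food Services",
# first value 30%, second value 23%, both thresholds 25%.

def label_for_investigation(first_value, second_value,
                            first_threshold=25.0, second_threshold=25.0,
                            require_both=False):
    below_first = first_value < first_threshold
    below_second = second_value < second_threshold
    # require_both=False: flag when either value is below its threshold.
    # require_both=True: flag only when both values are below their thresholds.
    return (below_first and below_second) if require_both else (below_first or below_second)

print(label_for_investigation(30.0, 23.0))                     # True  (23% < 25%)
print(label_for_investigation(30.0, 23.0, require_both=True))  # False (30% is not < 25%)
```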
FIG.4is a block diagram illustrating an example computing system400for detecting fraud upon which any one or more of the methodologies (e.g., method100) herein discussed may be run according to an example described herein. Computer system400may be embodied as a computing device, providing operations of the components featured in the various figures, including components of the method100, or any other processing or computing platform or component described or referred to herein. In alternative embodiments, the computing system400can operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the computing system400may operate in the capacity of either a server or a client machine in server-client network environments, or it may act as a peer machine in peer-to-peer (or distributed) network environments. Example computer system400includes a processor402(e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory404and a static memory406, which communicate with each other via an interconnect408(e.g., a link, a bus, etc.). The computer system400may further include a video display unit410, an input device412(e.g., a keyboard) and a user interface (UI) navigation device414(e.g., a mouse). In one embodiment, the video display unit410, input device412and UI navigation device414are a touch screen display. The computer system400may additionally include a storage device416(e.g., a drive unit), a signal generation device418(e.g., a speaker), an output controller432, a network interface device420(which may include or operably communicate with one or more antennas430, transceivers, or other wireless communications hardware), and one or more sensors428. The storage device416includes a machine-readable medium422on which is stored one or more sets of data structures and instructions424(e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions424may also reside, completely or at least partially, within the main memory404, static memory406, and/or within the processor402during execution thereof by the computer system400, with the main memory404, static memory406, and the processor402constituting machine-readable media. While the machine-readable medium422(or computer-readable medium) is illustrated in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions424. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, magnetic media or other non-transitory media.
Specific examples of machine-readable media include non-volatile memory, including, by way of example, semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The instructions424may further be transmitted or received over a communications network426using a transmission medium via the network interface device420utilizing any one of several well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), wide area network (WAN), the Internet, mobile telephone networks, Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi, 3G, and 4G LTE/LTE-A or WiMAX networks). The term “transmission medium” shall be taken to include any intangible medium that can store, encode, or carry instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. Other applicable network configurations may be included within the scope of the presently described communication networks. Although examples were provided with reference to a local area wireless network configuration and a wide area Internet network connection, it will be understood that communications may also be facilitated using any number of personal area networks, LANs, and WANs, using any combination of wired or wireless transmission media. The embodiments described above may be implemented in one or a combination of hardware, firmware, and software. For example, the features in the system architecture400of the processing system may be client-operated software or be embodied on a server running an operating system with software running thereon. While some embodiments described herein illustrate only a single machine or device, the terms “system”, “machine”, or “device” shall also be taken to include any collection of machines or devices that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Examples, as described herein, may include, or may operate on, logic or several components, modules, features, or mechanisms. Such items are tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module, component, or feature. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as an item that operates to perform specified operations. In an example, the software may reside on a machine readable medium. In an example, the software, when executed by underlying hardware, causes the hardware to perform the specified operations.
Accordingly, such modules, components, and features are understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all operations described herein. Considering examples in which modules, components, and features are temporarily configured, each of the items need not be instantiated at any one moment in time. For example, where the modules, components, and features comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different items at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular item at one instance of time and to constitute a different item at a different instance of time. Additional examples of the presently described method, system, and device embodiments are suggested according to the structures and techniques described herein. Other non-limiting examples may be configured to operate separately or can be combined in any permutation or combination with any one or more of the other examples provided above or throughout the present disclosure. It will be appreciated by those skilled in the art that the present disclosure can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The presently disclosed embodiments are therefore considered in all respects to be illustrative and not restrictive. The scope of the disclosure is indicated by the appended claims rather than the foregoing description, and all changes that come within the meaning and range of equivalency thereof are intended to be embraced therein.
25,738
11861733
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding. One or more embodiments may be practiced without these specific details. Features described in one embodiment may be combined with features described in a different embodiment. In some examples, well-known structures and devices are described with reference to a block diagram form in order to avoid unnecessarily obscuring the present invention.
1. GENERAL OVERVIEW
2. ARCHITECTURAL OVERVIEW
3. EXPENSE REPORT SUBMISSION
4. MACHINE LEARNING BASED QUERY PROCESSING
5. ILLUSTRATIVE EXAMPLES
6. HARDWARE OVERVIEW
7. COMPUTER NETWORKS AND CLOUD NETWORKS
8. MICROSERVICE APPLICATIONS
9. MISCELLANEOUS; EXTENSIONS
1. General Overview Embodiments described herein improve the expense reporting experience for employees by providing, in a graphical user interface, responses to user queries relating to expense reporting. Employees may request whether expenses are reimbursable. Alternatively or additionally, the system may generate expense descriptions, generate expense reports, and/or submit expense reports responsive to user queries, optionally in a user-independent mode. Alternatively or additionally, some embodiments provide feedback about the employee's expense reporting behavior, including one or more system performance metrics. A system performance metric may be associated with operations performed by the system in a user-independent mode, thus encouraging the employee to take advantage of features in the system that permit user-independent expense reporting operations. The expense reporting system may further leverage machine learning to facilitate and automate various aspects of processing expense reports and queries. In some embodiments, the expense reporting system learns how to classify and process expenses and activities based on a set of training examples. The expense reporting system may automatically learn what patterns are predictive of the likelihood that an activity incurs a reimbursable expense even though the patterns are not hard-coded into the expense reporting system. When a user submits a query about a new expense or activity, the expense reporting system may estimate an unknown label or classification for the newly queried example based on the learned patterns. The expense reporting system may generate a response to the user query based on the estimated label. In some embodiments, the techniques described herein are implemented by or interface with an intelligent agent, such as a virtual assistant persona. Users may submit natural language queries to the intelligent agent about whether expenses are reimbursable before the user incurs the expense. The intelligent agent may leverage natural language processing to map the query to intents and determine how to respond. The intelligent agent may further leverage the machine learning techniques described herein to predict whether an expense is reimbursable and formulate the natural language response. Additionally or alternatively, the intelligent agent may proactively provide suggestions to a user on expenses that may be reimbursed. The intelligent agent may automatically generate electronic expense report files and add entries to the files in a user-independent mode. Thus, expense reports may be generated with little to no user input, allowing for a “hands-free” expense reporting experience by the user.
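As a rough illustration of the query flow just described, the sketch below maps a natural language query to an intent with a trivial keyword check (a stand-in for the natural language processing the intelligent agent would actually use), asks an already-trained model for a prediction, and formulates a response. The model interface and all names are assumptions for illustration, not the disclosure's design.

```python
# Illustrative only: a keyword check stands in for NLP intent mapping, and
# `model` is assumed to expose predict(features) -> (is_reimbursable, confidence).

def handle_query(query: str, expense_features: dict, model) -> str:
    intent = "check_reimbursable" if "reimburs" in query.lower() else "other"
    if intent != "check_reimbursable":
        return "I can help with questions about expense reporting."
    is_reimbursable, confidence = model.predict(expense_features)
    if is_reimbursable:
        return f"That expense looks reimbursable (confidence {confidence:.0%})."
    return f"That expense may not be reimbursable (confidence {confidence:.0%})."
```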
One or more embodiments described in this Specification and/or recited in the claims may not be included in this General Overview section. 2. Architectural Overview FIG.1illustrates a system100in accordance with some embodiments. As illustrated inFIG.1, system100includes a submitter interface102, an approver interface106, an auditor interface108, an administrator interface110, an expense reporting service112, a data repository128, an external data source146, a reimbursement service148, and various components thereof. In some embodiments, the system100may include more or fewer components than the components illustrated inFIG.1. The components illustrated inFIG.1may be local to or remote from each other. The components illustrated inFIG.1may be implemented in software and/or hardware. Each component may be distributed over multiple applications and/or machines. Multiple components may be combined into one application and/or machine. Operations described with respect to one component may instead be performed by another component. Additional embodiments and/or examples relating to computer networks are described below. In some embodiments, each of submitter interface102, approver interface106, auditor interface108, and administrator interface110refers to hardware and/or software configured to facilitate communications between a user and an expense reporting service112. A submitter interface102may be used by a user, such as an employee, who is responsible for preparing and submitting expense descriptions and/or expense reports. The submitter interface102may be associated with one or more devices for obtaining visual media that represents a receipt for an expense, such as a scanner104, a camera, a video device, or any other kind of device configured to capture visual media. An approver interface106may be used by a user, such as an employee in a managerial role, who is responsible for approving expense reports prior to submission for reimbursement. In some embodiments, expense reports are not subject to managerial approval prior to submission for reimbursement. An auditor interface108may be used by a user, such as an employee in an auditor role, who is responsible for auditing expense reports. An administrator interface110may be used by a user, such as an employee in an administrative role, who is responsible for determining and/or configuring parameters, rules, etc., that are used by an expense reporting service112. One or more of a submitter interface102, approver interface106, auditor interface108, and administrator interface110may be the same interface. A user may have multiple roles corresponding to submitter, approver, auditor, and/or administrator. For example, an employee who audits expense reports may also submit their own expense reports. In some embodiments, a user interface (e.g., submitter interface102, approver interface106, auditor interface108, and/or administrator interface110) renders user interface elements and receives input via user interface elements. Examples of interfaces include a graphical user interface (GUI), a command line interface (CLI), a haptic interface, and a voice command interface. Examples of user interface elements include checkboxes, radio buttons, dropdown lists, list boxes, buttons, toggles, text fields, date and time selectors, command lines, sliders, pages, and forms. 
In some embodiments, different components of a user interface (e.g., submitter interface102, approver interface106, auditor interface108, and/or administrator interface110) are specified in different languages. The behavior of user interface elements is specified in a dynamic programming language, such as JavaScript. The content of user interface elements is specified in a markup language, such as hypertext markup language (HTML) or XML User Interface Language (XUL). The layout of user interface elements is specified in a style sheet language, such as Cascading Style Sheets (CSS). Alternatively, a user interface may be specified in one or more other languages, such as Java, C, or C++. In some embodiments, an expense reporting service112includes an expense report generation engine114. An expense report generation engine114refers to hardware and/or software configured to perform operations described herein (including such operations as may be incorporated by reference) for generating expense reports. In some embodiments, an expense reporting service112includes an expense recommendation engine116. An expense recommendation engine116refers to hardware and/or software configured to perform operations described herein (including such operations as may be incorporated by reference) for recommending expenses. In some embodiments, an expense reporting service112includes an expense report auditing engine118. An expense report auditing engine118refers to hardware and/or software configured to perform operations described herein (including such operations as may be incorporated by reference) for auditing expense descriptions and/or expense reports. In some embodiments, an expense reporting service112includes a receipt processing engine120. A receipt processing engine120refers to hardware and/or software configured to perform operations described herein (including such operations as may be incorporated by reference) for processing expense receipts. In some embodiments, an expense reporting service112includes a user support engine122. A user support engine122refers to hardware and/or software configured to perform operations described herein (including such operations as may be incorporated by reference) for processing and responding to user queries submitted to the expense reporting service112. In some embodiments, one or more components of the expense reporting service use a machine learning engine124. Machine learning includes various techniques in the field of artificial intelligence that deal with computer-implemented, user-independent processes for solving problems that have variable inputs. In an embodiment, the machine learning engine124trains a machine learning model126to perform one or more operations. Training a machine learning model126uses training data to generate a function that, given one or more inputs to the machine learning model126, computes a corresponding output. The output may correspond to a prediction based on prior machine learning. In some embodiments, the output includes a label, classification, and/or categorization assigned to the provided input(s). The machine learning model126corresponds to a learned model for performing the desired operation(s) (e.g., labeling, classifying, and/or categorizing inputs). An expense reporting service112may use multiple machine learning engines124and/or multiple machine learning models126for different purposes.
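As a concrete, simplified illustration of training a machine learning model126, the sketch below fits a scikit-learn logistic regression on a handful of labeled expense examples and then scores a new expense, also reporting a confidence value. The feature layout, the toy data, and the choice of logistic regression are assumptions for illustration, not the disclosure's design.

```python
# Minimal supervised-learning sketch: label an expense as reimbursable or not.
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors: [amount, is_travel_day, is_weekend, category_code]
X_train = [
    [45.0, 1, 0, 2],   # meal during business travel -> reimbursable
    [300.0, 1, 0, 5],  # hotel night during travel   -> reimbursable
    [80.0, 0, 1, 7],   # weekend personal purchase   -> not reimbursable
    [12.0, 0, 0, 7],   # personal purchase           -> not reimbursable
]
y_train = [1, 1, 0, 0]  # supervisory signals (1 = reimbursable)

model = LogisticRegression().fit(X_train, y_train)

new_expense = [[60.0, 1, 0, 2]]
label = model.predict(new_expense)[0]
confidence = model.predict_proba(new_expense)[0][label]  # confidence indicator
```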
In some embodiments, the machine learning engine124may use supervised learning, semi-supervised learning, unsupervised learning, reinforcement learning, and/or another training method or combination thereof. In supervised learning, labeled training data includes input/output pairs in which each input is labeled with a desired output (e.g., a label, classification, and/or categorization), also referred to as a supervisory signal. In semi-supervised learning, some inputs are associated with supervisory signals and other inputs are not associated with supervisory signals. In unsupervised learning, the training data does not include supervisory signals. Reinforcement learning uses a feedback system in which the machine learning engine124receives positive and/or negative reinforcement in the process of attempting to solve a particular problem (e.g., to optimize performance in a particular scenario, according to one or more predefined performance criteria). In some embodiments, the machine learning engine124initially uses supervised learning to train the machine learning model126and then uses unsupervised learning to update the machine learning model126on an ongoing basis. In some embodiments, a machine learning engine124may use many different techniques to label, classify, and/or categorize inputs. A machine learning engine124may transform inputs into feature vectors that describe one or more properties (“features”) of the inputs. The machine learning engine124may label, classify, and/or categorize the inputs based on the feature vectors. Alternatively or additionally, a machine learning engine124may use clustering (also referred to as cluster analysis) to identify commonalities in the inputs. The machine learning engine124may group (i.e., cluster) the inputs based on those commonalities. The machine learning engine124may use hierarchical clustering, k-means clustering, and/or another clustering method or combination thereof. In some embodiments, a machine learning engine124includes an artificial neural network. An artificial neural network includes multiple nodes (also referred to as artificial neurons) and edges between nodes. Edges may be associated with corresponding weights that represent the strengths of connections between nodes, which the machine learning engine124adjusts as machine learning proceeds. Alternatively or additionally, a machine learning engine124may include a support vector machine. A support vector machine represents inputs as vectors. The machine learning engine124may label, classify, and/or categorize inputs based on the vectors. Alternatively or additionally, the machine learning engine124may use a naïve Bayes classifier to label, classify, and/or categorize inputs. Alternatively or additionally, given a particular input, a machine learning model may apply a decision tree to predict an output for the given input. Alternatively or additionally, a machine learning engine124may apply fuzzy logic in situations where labeling, classifying, and/or categorizing an input among a fixed set of mutually exclusive options is impossible or impractical. The aforementioned machine learning model126and techniques are discussed for exemplary purposes only and should not be construed as limiting the embodiments described herein. In some embodiments, as a machine learning engine124applies different inputs to a machine learning model126, the corresponding outputs are not always accurate. As an example, the machine learning engine124may use supervised learning to train a machine learning model126.
After training the machine learning model126, if a subsequent input is identical to an input that was included in labeled training data and the output is identical to the supervisory signal in the training data, then the output is certain to be accurate. If an input is different from inputs that were included in labeled training data, then the machine learning engine124may generate a corresponding output that is inaccurate or of uncertain accuracy. In addition to producing a particular output for a given input, the machine learning engine124may be configured to produce an indicator representing a confidence (or lack thereof) in the accuracy of the output. A confidence indicator may include a numeric score, a Boolean value, and/or any other kind of indicator that corresponds to a confidence (or lack thereof) in the accuracy of the output. In some embodiments, a data repository128is any type of storage unit and/or device (e.g., a file system, database, collection of tables, or any other storage mechanism) for storing data. Further, a data repository128may include multiple different storage units and/or devices. The multiple different storage units and/or devices may or may not be of the same type or located at the same physical site. Further, a data repository128may be implemented or may execute on the same computing system as one or more other components of the system100. Alternatively or additionally, a data repository128may be implemented or executed on a computing system separate from one or more other components of the system100. A data repository128may be communicatively coupled to one or more other components of the system100via a direct connection or via a network. In some embodiments, a data repository128is configured to store historical expense data130. Historical expense data130may include any kind of data that the expense reporting service112has previously received and/or generated in association with expenses. Specifically, the historical expense data130may include expense reports, expense descriptions, metadata associated with expenses (e.g., geotags, dates and times, explanatory notes, and/or another kind of metadata or combination thereof), and/or any other kind of data or combination thereof associated with expenses. Historical expense data130may include data that is associated with one or more employees' target activity, which may also be associated (directly or indirectly) with one or more expenses. For example, historical expense data130may include one or more itineraries, location check-ins, phone records, emails, social media messages, calendar appointments, and/or any other kind of data or combination thereof associated with business-related activity. In some embodiments, a data repository128is configured to store one or more expense preferences131. An expense preference131includes one or more values that indicate one or more employees' preferences related to expenses that the employee(s) may incur during target activity. For example, an expense preference131may indicate that an employee prefers ride sharing over public transportation. As another example, an expense preference131may indicate that an employee has a dietary restriction (e.g., vegetarian, vegan, kosher, etc.). As another example, an expense preference131may indicate that an employee likes or dislikes a particular restaurant, hotel, or other establishment. In an embodiment, an expense reporting service112uses a machine learning engine124to infer one or more employee preferences131from historical expense data130.
One or more triggers described herein may be based, at least in part, on one or more expense preferences131. In some embodiments, a data repository128is configured to store one or more expense policies132. An expense policy132may be a set of one or more codified rules corresponding to criteria for reimbursable expenses. For example, an expense policy132may define one or more expense categories that are used to categorize reimbursable expenses (e.g., meals, transportation, incidentals, equipment, etc.). As another example, an expense policy132may define an expense limit that is applicable to one or more employees and/or one or more expense categories for a particular unit of time (e.g., day, week, month, year, etc.). As another example, an expense policy132may identify one or more kinds of expenses and/or establishments (e.g., particular stores or restaurants) for which expenses are not reimbursable. Many different kinds of expense policy132may be defined. An expense policy132may apply at the level of an entire organization, a business unit, a team, an individual, or any other set of one or more employees or combination thereof. In some embodiments, a data repository128is configured to store one or more expense guidelines134. An expense guideline134may be a set of one or more codified rules corresponding to best practices for expenses and/or responsible spending guidelines. An expense guideline134may be more restrictive than an expense policy132. For example, a particular expense that satisfies an expense policy132may fail to satisfy an expense guideline134because, even though the expense is within an allowable limit under the expense policy132, the expense is inconsistent with responsible spending guidelines. An expense guideline134may apply at the level of an entire organization, a business unit, a team, an individual, or any other set of one or more employees or combination thereof. In some embodiments, a data repository128is configured to store one or more expense patterns136. An expense pattern136identifies a typical and/or expected arrangement of expenses associated with target activity. An expense pattern136may be associated with target activity having one or more shared characteristics (e.g., a certain kind of business trip, business-related activity for a particular category of employees, or any other kind of shared characteristic or combination thereof). An expense pattern136may identify expenses that are typical for target activity having the shared characteristic(s). In one example, an expense pattern136identifies that international business travel typically includes: (1) airfare to and from the destination; (2) a rental car, public transportation, and/or ride sharing at the destination; (3) a hotel for the duration of the trip; (4) an international data roaming plan; and (5) three meals per day at the destination. An expense reporting system112may use an expense pattern136to identify reimbursable expenses for which an employee may have neglected to submit an expense report (e.g., based on a gap or difference between reported expenses and the expense pattern136), and/or recommended reimbursable expenses that an employee might otherwise overlook. In some embodiments, an expense reporting service112uses a machine learning engine124to infer one or more expense patterns136, based at least in part on historical expense data130. In some embodiments, a data repository128is configured to store one or more expense triggers138.
An expense trigger138is a codified set of rules and/or a set of automatically learned patterns that capture one or more conditions for identifying expenses associated with one or more employees' business-related activity. An expense identified by an expense trigger may be an expense for which an employee has not yet prepared and/or submitted an expense report. In some embodiments, an expense trigger138is based, at least in part, on data corresponding to business-related activity of an employee and/or historical expense data130. As one example, an expense trigger138identifies that a transportation expense may be available when an employee travels from one location to another (e.g., from the employee's home or office to the airport). As another example, an expense trigger138identifies that a hotel expense may be available when geolocation data (e.g., from a global positioning system (GPS), a social media check-in, and/or any other kind of data source that supplies geolocation data) indicates that the user has arrived at a hotel or is leaving a hotel. As another example, an expense trigger138identifies that a meal expense may be available when geolocation data (e.g., from a global positioning system (GPS), a social media check-in, and/or any other kind of data source that supplies geolocation data) indicates that the user has visited a restaurant. In some embodiments, when an expense trigger138identifies an expense for travel to a location, where return travel is also expected, the expense trigger138identifies an expense for the return travel. For example, if an employee prepares an expense description for a taxi to an airport, an expense trigger138may identify (e.g., based on an expense pattern136for international business travel) a corresponding expense for return travel from the airport. In some embodiments, an expense trigger138is based, at least in part, on one or more expense descriptions prepared by one or more other employees who are traveling with the employee in question. In one example, three employees are participating in the same business trip and two of the employees prepare expense descriptions for a business meal at a particular restaurant. In this example, an expense trigger138may identify that a corresponding expense at the same restaurant may also apply to the third employee. In some embodiments, an expense trigger138is based, at least in part, on one or more credit card statements for one or more employees. The expense trigger138may determine that a particular credit card charge is associated (e.g., corresponds in time and/or geographic location) with an employee's business-related activity. Based on the association between the credit card charge and the employee's business-related activity, the expense trigger138may identify the credit card charge as a potentially reimbursable expense. In some embodiments, an expense trigger138is based, at least in part, on a typical and/or expected pairing between two or more different kinds of expenses. In one example, an employee purchases gas at a gas station. However, the employee has not entered an expense description corresponding to a car rental. Based on a typical and expected pairing between gasoline and car rental, an expense trigger138may identify a car rental as an available expense for the employee. In some embodiments, an expense trigger138identifies similar expenses over time and identifies an opportunity to enter a recurring expense.
As one example, an employee who travels frequently for business submits expense reports each month that include expense descriptions corresponding to an international data roaming plan. An expense trigger138may identify the international data roaming plan as a recurring expense. Based on identifying the international data roaming plan as a recurring expense, the expense reporting service112may present a message to the employee offering to make the charge a recurring expense, so that the employee does not need to enter the expense description each month. Many different kinds of expense triggers138may be defined. In some embodiments, an expense reporting service112uses a machine learning engine124to determine an expense trigger138as part of a machine learning model126. Machine learning engine124may automatically infer expense triggers even though the exact pattern may not have been seen before. Further, machine learning engine124may learn different patterns of behavior that qualify as an expense trigger138depending on context. For example, expense triggers may differ depending on employee attributes, such as employee title, clearance level, and job responsibilities. Additionally or alternatively, expense triggers may vary between different groups of employees, such as between different companies or organizational departments within the same company. Additionally or alternatively, expense triggers may vary for different temporal patterns and/or geographic patterns of incurred expenses. In some embodiments, a data repository128is configured to store one or more expense recommendation triggers139. An expense recommendation trigger139is a codified set of rules and/or a set of automatically learned patterns that capture one or more conditions for identifying recommended expenses that are known or expected to be reimbursable. A recommended expense may be an expense that the employee has not yet incurred. In some embodiments, an expense reporting service112uses a machine learning engine124to determine an expense recommendation trigger139as part of a machine learning model126. In some embodiments, an expense recommendation trigger139is based, at least in part, on data corresponding to business-related activity of an employee and/or historical expense data130. For example, an expense recommendation trigger139may recommend less expensive spending options to an employee who has a tendency to spend above expense limits and/or above expense guidelines. As another example, an expense recommendation trigger139may recommend expenses that are popular among similarly situated employees, such as a particular restaurant that other employees have frequented and for which expenses tended to be reimbursed. As another example, an expense recommendation trigger139may recommend against frequenting a particular establishment for which expenses tended to be declined. In some embodiments, an expense recommendation trigger139is based, at least in part, on one or more expense preferences131. For example, an expense recommendation trigger139may identify a recommended restaurant for an employee who is vegan or who is meeting with a client who is vegan. As another example, an expense recommendation trigger139may identify a recommended restaurant or mode of transportation for an employee who prefers healthy options. In some embodiments, an expense recommendation trigger139is based, at least in part, on an expense policy132and/or an expense guideline134.
For example, an expense recommendation trigger139may identify recommended expenses that increase responsible spending behavior, for example by reducing spending, taking advantage of deals, earning rewards, etc. In some embodiments, an expense recommendation trigger139is based, at least in part, on a determination that one expense is less expensive and/or more likely to be reimbursable than another expense. Recommending less expensive options may reduce expenses for an organization and decrease the incidence of expenses that need to be audited and/or are declined for reimbursement. In some embodiments, an expense recommendation trigger139is based, at least in part, on an employee's spending score. An employee's spending score may be based, at least in part, on historical expense data130associated with the employee. For example, the employee spending score may be based on one or more of: whether the employee tends to be below spending limits; an average time that the employee takes to prepare expense descriptions for expenses that have already been incurred; an audit history of the employee (e.g., a history of allowed and/or rejected expense descriptions, which may be expressed as a ratio or some other metric); a comparison of the employee's past spending with an expense policy (e.g., a spending limit); and/or any other kind of data or combination thereof associated with the employee's spending. In some embodiments, employees with 'better' spending scores are at lower risk of audits than employees with 'worse' spending scores. An expense recommendation trigger139may identify less expensive options for employees with 'worse' spending scores than for employees with 'better' spending scores. In some embodiments, an expense recommendation trigger139is based on one or more attributes of past, present, and/or planned business-related activity of an employee (e.g., a business trip or another kind of business-related activity). For example, trips of at least a threshold duration may qualify for certain reimbursable expenses (e.g., dry cleaning). As another example, flights of at least a threshold duration may qualify for a reimbursable seat upgrade. As another example, travel to international destinations may qualify for reimbursable international data roaming charges. In some embodiments, an expense recommendation trigger139is based, at least in part, on an expense limit for a trip compared with an amount of expenses already incurred for the trip. For example, an expense recommendation trigger139may identify recommended expenses that are less expensive than other options, for an employee who is running out of expense budget on a trip. The expense recommendation trigger139may compare a remaining budget with a remaining time on the trip and recommend expenses that allocate the remaining budget across the remaining time. In some embodiments, an expense recommendation trigger139is based, at least in part, on information about employees who are participating in the same business-related activities. For example, an expense recommendation trigger139may identify ride-sharing and/or other expense sharing opportunities for employees traveling to the same destination. The system100may present the recommended expense to one or more of those employees, to help encourage savings available by sharing expenses. In some embodiments, a data repository128is configured to store one or more approval triggers140.
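As a hedged sketch of the spending score discussed above, the example below combines the factors mentioned (limit compliance, reporting delay, audit history) into a single 0-100 value. The input schema, weights, and scale are assumptions chosen for illustration, not values taken from the description.

```python
# Hypothetical employee spending score derived from historical expense data.
def spending_score(history: list[dict]) -> float:
    """history: one dict per past expense with assumed keys
    'amount', 'limit', 'days_to_report', 'audit_rejected'."""
    if not history:
        return 100.0

    within_limit = sum(1 for e in history if e["amount"] <= e["limit"]) / len(history)
    avg_delay = sum(e["days_to_report"] for e in history) / len(history)
    timeliness = max(0.0, 1.0 - avg_delay / 30.0)          # full credit if reported immediately
    audit_pass = 1.0 - sum(e["audit_rejected"] for e in history) / len(history)

    # Weighted blend on a 0-100 scale; an expense recommendation trigger might surface
    # less expensive options to employees with 'worse' scores.
    return round(100.0 * (0.5 * within_limit + 0.2 * timeliness + 0.3 * audit_pass), 1)

history = [
    {"amount": 42.0, "limit": 50.0, "days_to_report": 3, "audit_rejected": 0},
    {"amount": 95.0, "limit": 50.0, "days_to_report": 20, "audit_rejected": 1},
]
print(spending_score(history))  # prints a mid-range score for this mixed history
```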
An approval trigger140is a codified set of rules and/or a set of one or more automatically learned patterns that capture one or more conditions for requiring approval of an expense description and/or expense report before submitting the expense description and/or expense report for reimbursement. An approval trigger140may be based, at least in part, on data corresponding to business-related activity of an employee and/or historical expense data130. For example, an approval trigger140may indicate that an expense description requires approval if the expense exceeds or is within a certain amount of an expense limit. As another example, an approval trigger140may indicate that all expense descriptions in a particular category, and/or all expense descriptions prepared for a particular employee, require approval. As another example, expense descriptions that violate an expense policy132and/or an expense guideline134may require approval. As another example, employees themselves may be required to approve expense descriptions that are generated by the expense reporting service112in a user-independent mode (e.g., based on an expense trigger138). Many different kinds of approval triggers140may be defined. In some embodiments, an expense reporting service112uses a machine learning engine124to determine an approval trigger140as part of a machine learning model126. In some embodiments, a data repository128is configured to store one or more audit triggers142. An audit trigger142is a codified set of rules and/or a set of automatically learned patterns that capture one or more conditions for requiring auditing of an expense report, and/or for determining that an expense report or description is at risk of being audited. An audit trigger142may be based, at least in part, on data corresponding to business-related activity of an employee and/or historical expense data130. In some embodiments, an audit trigger142is based, at least in part, on an audit risk score associated with a particular expense description. An audit trigger142may be satisfied when an audit risk score satisfies one or more threshold criteria (e.g., the audit risk score may be above or below a threshold number, or any other kind of threshold criteria or combination thereof). In some embodiments, an expense reporting service112uses a machine learning engine124to determine an audit trigger142as part of a machine learning model126. In some embodiments, a data repository128is configured to store one or more user credentials144. An expense reporting service112may use a user credential144to access an external data source146and obtain data from the external data source146. A user credential144may include a username, user identifier (ID), password, private key, public key, and/or any other kind of credential or combination thereof. In some embodiments, an employee supplies a user credential144to an expense reporting system122via a graphical user interface. For example, the expense reporting service112may use three-party authentication to obtain a user credential144from an employee. In some embodiments, user data that is input into machine learning engine124is anonymized. Personal identifying information (PII) and other sensitive information may be replaced with an anonymous identifier, such as a cryptographic hash of the user data.
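The following sketch illustrates an audit trigger of the kind described above, using an audit risk score compared against a threshold criterion. The scoring heuristic, field names, and threshold value are assumptions made for the example; a learned model could produce the score instead.

```python
# Illustrative audit trigger: compare a simple audit risk score against a threshold.
def audit_risk_score(expense: dict, employee_spending_score: float) -> float:
    """Combine simple signals into a 0-1 risk score (hypothetical heuristic)."""
    risk = 0.0
    if expense["amount"] > expense.get("limit", float("inf")):
        risk += 0.5                                        # over the expense limit
    if expense.get("missing_receipt", False):
        risk += 0.3                                        # no receipt attached
    risk += 0.2 * (1.0 - employee_spending_score / 100.0)  # weaker spending history
    return min(risk, 1.0)

AUDIT_THRESHOLD = 0.6  # assumed threshold criterion

def audit_trigger(expense: dict, employee_spending_score: float) -> bool:
    """Returns True when the expense description is at risk of being audited."""
    return audit_risk_score(expense, employee_spending_score) >= AUDIT_THRESHOLD

expense = {"amount": 180.0, "limit": 75.0, "missing_receipt": True}
print(audit_trigger(expense, employee_spending_score=62.0))  # True: flagged for audit
```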
Machine learning engine124may use the anonymized data to learn patterns and make predictions for different employees, within the same or different organizations, having similar attributes without compromising or revealing sensitive employee data. Information describing one or more components that are illustrated here within a data repository128may be implemented across any of the components within the system100. However, this information is illustrated within the data repository128for purposes of clarity and explanation. In some embodiments, an expense reporting service112is configured to receive data from one or more external data sources146. An external data source146refers to hardware and/or software operating independently of the expense reporting service112, i.e., under control of a different entity (e.g., a different company or other kind of organization) than an entity that controls the expense reporting service112. An external data source146may supply data associated with an employee's business-related activity, such as travel, dining, meals, itineraries, appointments, emails, phone data, social media messages, credit card statements (e.g., for a business-provided credit card), and/or any other kind of target activity or combination thereof. The data may include information associated with an employee's expenses, which may or may not be reimbursable. Some examples of an external data source146supplying data to an expense reporting service112include, but are not limited to: an airline or travel agency supplying data associated with an itinerary and/or ticket purchase; a food ordering application supplying data associated with a food order; a ride sharing service (e.g., Uber™, Lyft™, or another ride sharing service) supplying data associated with an instance of ride sharing; and a social media application (e.g., Facebook™, Foursquare™, or another social media application) supplying data corresponding to a check-in at a location (e.g., a restaurant, hotel, entertainment venue, or other location). Many different kinds of external data sources146may supply many different kinds of data. In some embodiments, an expense reporting service112is configured to retrieve data from an external data source146by 'pulling' the data via an application programming interface (API) of the external data source146, using user credentials144that a user has provided for that particular external data source146. Alternatively or additionally, an external data source146may be configured to 'push' data to the expense reporting service112via an API of the expense reporting service, using an access key, password, and/or other kind of credential that a user has supplied to the external data source146. An expense reporting service112may be configured to receive data from an external data source146in many different ways. In some embodiments, a reimbursement service148refers to hardware and/or software configured to perform operations for reimbursing approved expenses. For example, the reimbursement service148may be part of an accounting service that applies reimbursements for approved expenses to employees' paychecks and/or separate reimbursement checks, which may be mailed to employees and/or direct-deposited into employees' bank accounts. Many different techniques for reimbursing approved expenses exist. In some embodiments, an expense reporting service112includes or interfaces with an intelligent agent.
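The sketch below shows one way the 'pull' pattern described above might look in practice, using a stored user credential to request activity data over HTTP. The endpoint URL, parameters, and response shape are entirely hypothetical; a real integration would follow the particular provider's published API.

```python
# Hedged sketch of 'pulling' activity data from an external data source with a user credential.
import requests  # third-party HTTP client, assumed available

def pull_ride_history(credential: dict, since: str) -> list[dict]:
    """Fetch ride-sharing activity that may correspond to reimbursable transportation."""
    response = requests.get(
        "https://api.example-rideshare.com/v1/rides",      # placeholder URL, not a real API
        headers={"Authorization": f"Bearer {credential['access_token']}"},
        params={"since": since},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("rides", [])

# Example usage with a credential previously supplied by the employee
# (e.g., obtained via three-party authentication):
# rides = pull_ride_history({"access_token": "..."}, since="2024-03-01")
# Each returned ride could then be evaluated against the expense triggers described earlier.
```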
An intelligent agent may comprise an autonomous virtual persona that interacts via natural language with one or more users. For example, users may provide natural language queries by speaking, which may be captured through the microphone of a smart speaker or other microphone-enabled network device. As another example, users may type and submit natural language queries via a chatbot application or web interface. The intelligent agent may use natural language processing and machine learning techniques described further herein to process the queries and provide relevant responses. The responses may be output via a speaker, display, or other user interface. In some embodiments, one or more components of the system100are implemented on one or more digital devices. The term "digital device" generally refers to any hardware device that includes a processor. A digital device may refer to a physical device executing an application or a virtual machine. Examples of digital devices include a computer, a tablet, a laptop, a desktop, a netbook, a server, a web server, a network policy server, a proxy server, a generic machine, a function-specific hardware device, a hardware router, a hardware switch, a hardware firewall, a hardware network address translator (NAT), a hardware load balancer, a mainframe, a television, a content receiver, a set-top box, a printer, a mobile handset, a smartphone, a personal digital assistant ("PDA"), a wireless receiver and/or transmitter, a base station, a communication management device, a router, a switch, a controller, an access point, and/or a client device. 3. Expense Report Submission FIG.2illustrates an example set of operations for expense report submission in accordance with some embodiments. One or more operations illustrated inFIG.2may be modified, rearranged, or omitted altogether. Accordingly, the particular sequence of operations illustrated inFIG.2should not be construed as limiting the scope of one or more embodiments. In some embodiments, a system (e.g., one or more components of system100illustrated inFIG.1) trains a machine learning model to evaluate data associated with expenses against one or more expense reporting rules (Operation202). The expense reporting rule(s) may include one or more expense policies. Alternatively or additionally, the expense reporting rule(s) may include one or more expense guidelines. The system may train the machine learning model using labeled training data, which may include expense-related data that is labeled to indicate whether or not the expense-related data satisfies the expense reporting rule(s). In some embodiments, the system receives a user query, from an employee, requesting whether an expense is allowed (Operation204). The system may receive the user query as text data, voice data, or any other kind of user query. The expense may be an expense that the employee has already incurred. Alternatively, the expense may be a planned or anticipated expense that the employee has not yet incurred. The user query may be in a natural language format. For example, the user may submit a query asking, "Can I expense international data charges?" In an embodiment, the system applies the user query (which may optionally be subjected to semantic analysis) to the machine learning model (Operation206). Based at least in part on output of the machine learning model, the system determines whether the user query satisfies the expense reporting rule(s) (Operation208).
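A minimal sketch of Operations 204-208 follows: receive a natural language query, extract a coarse expense category, and decide whether the queried expense satisfies the expense reporting rules. Both the keyword-based parsing and the rule check are stand-ins (the actual system would use the trained machine learning model described here and in the next section); all names and the trip_context structure are assumptions.

```python
# Simplified stand-in for the query-evaluation flow (Operations 204-208).
def parse_query(query: str) -> dict:
    """Very rough semantic analysis: pull out a candidate expense category (assumption)."""
    categories = {"data": "telecom", "meal": "meals", "taxi": "transportation", "hotel": "lodging"}
    for keyword, category in categories.items():
        if keyword in query.lower():
            return {"category": category, "text": query}
    return {"category": "other", "text": query}

def satisfies_reporting_rules(features: dict, trip_context: dict) -> bool:
    """Stand-in for the trained model's decision (Operation 208)."""
    reimbursable_categories = trip_context.get("allowed_categories", set())
    return features["category"] in reimbursable_categories

query = "Can I expense international data charges?"
trip_context = {"allowed_categories": {"telecom", "meals", "lodging"}, "destination": "London"}

features = parse_query(query)                               # Operation 206 input
if satisfies_reporting_rules(features, trip_context):       # Operation 208
    print("Yes, that expense appears to be allowed.")       # affirmative response (Operation 210)
else:
    print("No, that expense does not satisfy the policy.")  # negative response (Operation 218)
```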
Alternatively or additionally, the system may apply data other than the user query to the machine learning model. In one example, the user query corresponds to a question about reimbursable expenses associated with a business trip. The system may apply data associated with the business trip (e.g., dates of travel, origin and/or destination(s), parties in attendance, purpose of the trip, and/or any other kind of data or combination thereof associated with the business trip) to the machine learning model. In an embodiment, an expense is reimbursable if it satisfies the expense reporting rule(s) and is not reimbursable if it does not satisfy the expense reporting rule(s). In an embodiment, if the user query does not satisfy the expense reporting rule(s), the system generates a negative response to the user query (Operation218). The negative response indicates that the expense is not allowed. In addition, the negative response may include an explanation of why the expense is not allowed, based on the relevant reporting rule(s). The negative response may include information about expenses that are reimbursable, even if the expense indicated by the user query is not reimbursable. In one example, a user query asks, "Can I expense a meal at The French Laundry?" In this example, the system responds, "No, but you can expense a meal at Panera," because any meal the employee might purchase at The French Laundry would exceed the employee's expense limit, while a typical meal at Panera would not exceed the employee's expense limit. In an embodiment, if the user query satisfies the expense reporting rule(s), the system generates an affirmative response to the user query (Operation210). The affirmative response indicates that the expense indicated by the user query is reimbursable. Even if the expense is allowed, the affirmative response may include a warning indicating that the expense is likely to trigger an audit and/or may otherwise not be approved, despite satisfying the expense reporting rule(s). The response may include information to assist the employee in avoiding an audit. In one example, a user query asks, "Can I expense a meal at Panera?" In this example, the system responds, "Yes, as long as you do not spend more than $25." In an embodiment, based at least in part on information in a user query, the system generates an expense description (Operation212). The system may receive an initial user query requesting whether an expense is allowed, and a subsequent user query instructing the system to generate the expense description that the system indicated was allowed. In general, the system may generate the expense description responsive to a series of user queries and not only a single user query. The system may generate the expense description in user-independent mode, i.e., without requiring or requesting any user input corresponding to an instruction to generate the expense description. In one example, a user query asks, "Can I expense data roaming for my London trip?" In this example, the system responds, "Yes. I've gone ahead and added data roaming to your expense report for your London trip," even though the employee did not explicitly instruct the system to generate an expense description for data roaming. The system may generate the expense description based on historical expense data and/or other data that supplies information for populating the fields of the corresponding expense description. In an embodiment, the system generates an expense report that includes the expense description (Operation214).
The system may generate the expense report in user-independent mode, i.e., without requiring or requesting any user input corresponding to an instruction to generate the expense report. In one example, a user query asks, "Can I expense my meals from my last trip?" In this example, the system responds, "Yes. I've gone ahead and prepared an expense report for all the meals from your last trip," even though the employee did not explicitly instruct the system to generate an expense report for the meals. In an embodiment, the system submits the expense report (Operation216). The system may submit the expense report in user-independent mode, i.e., without requiring or requesting any user input corresponding to an instruction to submit the expense report. In an embodiment, the system presents the response to the user query (Operation220) in a graphical user interface (GUI). As discussed above, the response may be positive or negative. The response may be in a natural language format. The response may prompt the user for additional information to be supplied in a subsequent user query. As noted above, one or more operations performed by the system may be performed responsive to a series of user queries. A user and the system may engage in a series of queries and responses resembling a conversation. In an embodiment, the system presents the response as coming from a virtual assistant persona, such that the user query and response have the appearance of a natural language conversation between the employee and the virtual assistant persona. In an embodiment, the system presents a system performance metric (Operation222) in a graphical user interface (GUI). The system performance metric may indicate how long it took for the system to complete one or more expense reporting operations, responsive to user input and/or in a user-independent mode. For example, the system performance metric may indicate how long the system took to generate an expense description (optionally in a user-independent mode), and/or generate an expense report (optionally in a user-independent mode), and/or submit an expense report (optionally in a user-independent mode). The system may present many different kinds of system performance metrics. 4. Machine Learning Based Query Processing As previously indicated, the expense reporting service112may leverage machine learning to respond (e.g., via an intelligent agent interface) to expense queries. Machine learning allows expense reporting service112to perform tasks and capture patterns that are not hard-coded or otherwise explicitly programmed into the system. Machine learning further allows expense reporting service112to adapt to different application use-cases and evolve over time without requiring complex reprogramming or other changes in the underlying application code. FIG.3illustrates an example set of operations for training a machine learning model to estimate unknown labels for expense patterns in accordance with some embodiments. One or more operations illustrated inFIG.3may be modified, rearranged, or omitted altogether. Accordingly, the particular sequence of operations illustrated inFIG.3should not be construed as limiting the scope of one or more embodiments. In some embodiments, a system (e.g., one or more components of system100illustrated inFIG.1) receives a set of labeled examples of target activity and/or expenses for training a machine learning model (Operation302).
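As a concrete illustration of the labeled examples received in Operation 302, the sketch below encodes a single business trip as a set of related expenses with observed labels. The dictionary schema, field names, and label values are assumptions for illustration only, not a required data format.

```python
# Hypothetical labeled training examples for Operation 302. Each example groups the related
# expenses of one business trip and carries observed labels (e.g., whether each expense
# was ultimately treated as reimbursable).
training_examples = [
    {
        "employee": {"title": "senior director", "department": "sales"},
        "trip": {"destination": "London", "start": "2024-03-04", "end": "2024-03-08"},
        "expenses": [
            {"category": "airfare", "amount": 820.0, "label": "reimbursable"},
            {"category": "lodging", "amount": 640.0, "label": "reimbursable"},
            {"category": "meals",   "amount": 180.0, "label": "reimbursable"},
            {"category": "minibar", "amount": 45.0,  "label": "non-reimbursable"},
        ],
    },
    # ...additional examples drawn from historical expense data...
]
```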
An example in the training dataset may include one or more labels, where a label corresponds to a classification for one or more activities and/or one or more expenses. For example, a label may indicate whether an activity or set of activities incurred reimbursable expenses or not. As another example, a label may indicate whether or not an expense required approval from another user before reimbursement. As yet another example, a label may indicate how an expense was categorized. In some embodiments, examples in the training set include multiple expenses and/or activities that are related. For instance, a single example may include a set of expenses and/or activities that were incurred by an employee on a single business trip. In this instance, the expenses and activities may be related (a) temporally since the expenses are likely to have occurred within a relatively short timeframe of the trip; (b) geographically since the trip was likely constrained to a limited number of locations; and (c) by entity since the expenses were incurred by the same employee. In some embodiments, the system generates a set of feature vectors for the labeled examples (Operation304). A feature vector for an example may be n-dimensional, where n represents the number of features in the vector. The number of features that are selected may vary depending on the particular implementation. The features may be curated in a supervised approach or automatically selected from extracted attributes during model training and/or tuning. Example features include information about the employee that incurred an expense (e.g., employee job title, clearance level, department), geographic information about where an expense or activity occurred (e.g., continent, country, state, city), temporal information about when an expense or activity occurred (e.g., date and time), categorical information about what type of an expense was incurred or activity performed (e.g., vendor identifier, vendor category, product identifier, product category, activity name, activity patterns), and the expense amount. Additionally or alternatively, the feature vector may include values associated with an expense policy of an organization, such as rules about what types of expenses are not permissible and/or the conditions under which an expense is reimbursable. In some embodiments, a feature within a feature vector is represented numerically by one or more bits. The system may convert categorical attributes to numerical representations using an encoding scheme, such as one-hot encoding. In some embodiments, the system generates one or more candidate machine learning models that apply weights as a function of extracted features (Operation306). In some cases, the system may generate and train a candidate recurrent neural network model, such as a long short-term memory (LSTM) model. With recurrent neural networks, one or more network nodes or "cells" may include a memory. A memory allows individual nodes in the neural network to capture dependencies based on the order in which feature vectors are fed through the model. The weights applied to a feature vector representing one expense or activity may depend on its position within a sequence of feature vector representations. Thus, the nodes may have a memory to remember relevant temporal dependencies between different expenses and/or activities. For example, a dinner expense in isolation may have a first set of weights applied by nodes as a function of the respective feature vector for the expense.
However, if the dinner expense is immediately preceded by an earlier dinner expense, then a different set of weights may be applied by one or more nodes based on the memory of the preceding expense. In this case, whether the second dinner expense is reimbursable or not may be affected by the first dinner expense. As another example, one or more nodes may apply different weights if an expense is unique or a duplicate of another expense on the same day. In this case, the trained machine learning model may automatically filter out and reject duplicate expenses made on the same day while recurring expenses (e.g., monthly subscriptions) may be permitted. Additionally or alternatively, the system may generate and train other candidate models, such as support vector machines, decision trees, Bayes classifiers, and/or fuzzy logic models, as previously described. In some embodiments, the system compares the labels estimated through the one or more candidate models with observed labels to determine an estimation error (Operation308). The system may perform this comparison for a test set of examples, which may be a subset of examples in the training dataset that were not used to generate and fit the candidate models. The total estimation error for a candidate may be computed as a function of the magnitude of the difference and/or the number of examples for which the estimated label was wrongly predicted. In some embodiments, the system determines whether to adjust the weights and/or other model parameters based on the estimation error (Operation310). Adjustments may be made until a candidate model that minimizes the estimation error or otherwise achieves a threshold level of estimation error is identified. The process may return to Operation308to make adjustments and continue training the machine learning model. In some embodiments, the system selects a candidate machine learning model based on the estimation error (Operation312). For example, the system may select a machine learning model having weights and other model parameters (e.g., selected feature combinations used to form the feature vectors) that yield the lowest estimation error for the test dataset. In some embodiments, the system trains a neural network using backpropagation. Backpropagation is a process of updating cell states in the neural network based on gradients determined as a function of the estimation error. With backpropagation, nodes are assigned a fraction of the estimated error based on the contribution to the output and adjusted based on the fraction. In recurrent neural networks, time is also factored into the backpropagation process. As previously mentioned, a given example may include a sequence of related expenses and/or activities incurred on a trip. Each expense or activity may be processed as a separate discrete instance of time. For instance, an example may include expenses e1, e2, and e3corresponding to times t, t+1, and t+2, respectively. Backpropagation through time may perform adjustments through gradient descent starting at time t+2 and moving backward in time to t+1 and then to t. Further, the backpropagation process may adjust the memory parameters of a cell such that a cell remembers contributions from previous expenses in the sequence of expenses. For example, a cell computing a contribution for e3may have a memory of the contribution of e2, which has a memory of e1.
The memory may serve as a feedback connection such that the output of a cell at one time (e.g., t) is used as an input to the next time in the sequence (e.g., t+1). The gradient descent techniques may account for these feedback connections such that the contribution of one expense or activity to a cell's output may affect the contribution of the next expense or activity in the cell's output. Thus, the contribution of e1may affect the contribution of e2, etc. Additionally or alternatively, the system may train other types of machine learning models. For example, the system may adjust the boundaries of a hyperplane in a support vector machine or node weights within a decision tree model to minimize estimation error. Once trained, the machine learning model may be used to estimate labels for new examples of expenses. FIG.4illustrates an example set of operations for applying queries to a trained machine learning model in accordance with some embodiments. One or more operations illustrated inFIG.4may be modified, rearranged, or omitted altogether. Accordingly, the particular sequence of operations illustrated inFIG.4should not be construed as limiting the scope of one or more embodiments. In some embodiments, the system (e.g., one or more components of system100illustrated inFIG.1) receives a new query about an expense (Operation402). The query may be written in a natural language or conform to a query language syntax. The query may be submitted via an intelligent agent interface, such as via a smart speaker or chatbot application. In some embodiments, the system extracts expense attributes based on the new query (Operation404). An intelligent agent may use natural language processing to extract expense attributes, such as the expense amount, expense category, and expense location. One or more of the expense attributes may be extracted from metadata associated with the query. For example, an expense location may be extracted from a geo-tag provided via a mobile application. Additionally or alternatively, an intelligent agent may map the query to one or more intents, where an intent represents an available action that the querying entity intends to be executed. The intelligent agent may determine what attributes to extract based on the one or more intents. In some embodiments, the system extracts contextual attributes associated with the query (Operation406). Contextual attributes may include attributes about the user that submitted the query, such as the employee job title, spending score, and audit risk. Additionally or alternatively, contextual attributes may include attributes about other expenses that have been incurred by the user, such as information about the expense amounts, categories, and geographic locations of expenses incurred within a threshold timeframe (e.g., within a given week, month, or year, or on a particular trip). Additionally or alternatively, contextual attributes may include attributes about expense policies defined by the employer or other organization which employs the user. In some embodiments, the system generates a set of one or more feature vectors based on the query and contextual attributes (Operation408). In some embodiments, the system uses the same combinations of features used to train the machine learning model. The system may generate a set of feature vectors where one or more feature vectors represent expenses incurred by the employee within a threshold timeframe and another feature vector represents a proposed expense queried about by the user.
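The sketch below illustrates the feature vector assembly just described: one-hot encoding the categorical attributes (Operation 304 style) and ordering prior expenses before the proposed expense extracted from the query (Operation 408). The category vocabulary and amount scaling are illustrative assumptions.

```python
# Sketch of assembling the feature vector sequence: context expenses first, queried expense last.
CATEGORIES = ["airfare", "lodging", "meals", "transportation", "telecom", "other"]

def one_hot(category: str) -> list[float]:
    return [1.0 if category == c else 0.0 for c in CATEGORIES]

def to_feature_vector(expense: dict) -> list[float]:
    # Concatenate a one-hot category encoding with a scaled amount feature.
    return one_hot(expense.get("category", "other")) + [expense["amount"] / 1000.0]

prior_expenses = [
    {"category": "airfare", "amount": 820.0},
    {"category": "lodging", "amount": 640.0},
]
proposed_expense = {"category": "meals", "amount": 60.0}  # extracted from the user query

# Ordered sequence ready to feed to a sequence model such as the recurrent network above.
feature_sequence = [to_feature_vector(e) for e in prior_expenses + [proposed_expense]]
print(feature_sequence)
```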
In other embodiments, the proposed expense may be applied to the machine learning model in isolation from any expenses previously incurred by the employee. The one or more feature vectors may be a unique example such that the combination of feature values and/or sequence of feature vectors was not included in the training dataset. In some embodiments, the system inputs the set of one or more feature vectors to the trained machine learning model to estimate a label for the expense that is the subject of the query (Operation410). In the case of a recurrent neural network, for example, the system may perform forward propagation using a sequence of feature vectors representing different expenses and/or activities in the order that the expenses and/or activities occurred. As another example, in the case of a support vector machine, the system may compute a location in the hyperplane for the feature vector relative to the hyperplane boundaries. As another example, the system may follow a decision tree as a function of the input set of one or more feature vectors. In some embodiments, the estimated label corresponds to a classification for an expense or activity queried about by the user. The estimated label may be output by the machine learning model as a function of the one or more input feature vectors and the patterns learned from the training dataset. For example, the trained machine learning model may classify an expense as "reimbursable" or "non-reimbursable". As another example, the trained machine learning model may classify an activity, queried about by the user, as an expense trigger or not an expense trigger. Additionally or alternatively, the trained machine learning model may map an activity or expense to a category, such as travel, dining, continuing education, office supplies, software licenses, promotional material, etc. Additionally or alternatively, the trained machine learning model may output other classifications depending on the labels that are input. In some embodiments, a label includes a numerical value. For example, a machine learning model may be trained to estimate a percentage or amount of a given expense that is reimbursable. The corresponding feature vector may be fed as input to the trained model, which may output an estimated percentage or amount based on patterns learned from the training dataset. In some embodiments, the system generates and presents a query response based on the estimated label (Operation412). For example, if the estimated label indicates that a queried about expense is reimbursable or not reimbursable, then an intelligent agent may notify the user, such as via a smart speaker or chatbot interface. The intelligent agent may further provide reasons why the expense was classified as reimbursable or not reimbursable based on the learned patterns. For example, the intelligent agent may indicate that the expense is not reimbursable by employees having a score lower than a threshold amount, with a particular job title, or with a recent pattern of spending, depending on the application of the query to the machine learning model. Additionally or alternatively, the system may perform other automated actions based on the estimated label. For example, the system may automatically add the expense to an expense report, such as previously described, if the estimated label indicates that the expense is reimbursable.
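The following sketch ties Operations 410-412 together: estimate a label for the queried expense and turn it into a natural language response from the intelligent agent. The predict_label function is a placeholder stand-in for the trained recurrent network (or other selected model), and the decision boundary shown is an assumed value, not a learned one.

```python
# Stand-in for inference (Operation 410) and response generation (Operation 412).
def predict_label(feature_sequence: list[list[float]]) -> tuple[str, str]:
    """Placeholder inference returning (label, reason)."""
    amount_feature = feature_sequence[-1][-1]      # scaled amount of the queried expense
    if amount_feature > 0.075:                     # assumed learned boundary (~$75 with /1000 scaling)
        return "non-reimbursable", "the amount exceeds the typical meal limit"
    return "reimbursable", "it falls within the learned spending pattern for this trip"

def build_response(label: str, reason: str) -> str:
    if label == "reimbursable":
        return f"Yes, this expense looks reimbursable because {reason}. I can add it to your expense report."
    return f"No, this expense was classified as not reimbursable because {reason}."

label, reason = predict_label(feature_sequence=[[0, 0, 1, 0, 0, 0, 0.06]])
print(build_response(label, reason))   # presented via the chatbot or smart speaker interface
```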
If the estimated label indicates that the expense is not reimbursable, then the system may prevent the expense from being added to the electronic expense report. Additionally or alternatively, the system may present alternative expense options with similar feature vectors that would be classified as reimbursable. 5. Illustrative Examples A detailed example is described below for purposes of clarity. Components and/or operations described below should be understood as one specific example which may not be applicable to certain embodiments. Accordingly, components and/or operations described below should not be construed as limiting the scope of any of the claims. FIGS.5A-5Billustrate examples in accordance with some embodiments.FIG.5Aillustrates a dialogue between an employee and a virtual assistant persona. The employee enters user input corresponding to user queries, and the system generates and presents responses to each user query. In this example, the employee learns that they can expense international data charges and instructs the system to add one or more expense descriptions corresponding to $10 per day. The system detects that the employee is going on a trip to London, infers that the charges are for that trip, and generates one or more expense descriptions corresponding to $10 per day in international data charges for the duration of the trip. In addition, responsive to a user query, the system informs the employee that the expense policy for their organizational role (i.e., senior director) allows them to expense an upgrade to economy plus. While not shown inFIG.5A, the system may generate an expense description corresponding to the upgrade to economy plus, based on flight data available to the system. The system may also request the upgrade on behalf of the employee. In addition, the system may add one or more expense descriptions to an expense report for the employee (optionally without further input from the user), and may submit the expense report (optionally without further input from the user). FIG.5Billustrates examples of system performance metrics. In addition, the example illustrated inFIG.5Billustrates an employee's spending score. In this example, the employee's spending score is 72, which the system classifies as “fair.” The system presents multiple spending scores, corresponding to different types of expense reporting behavior: “in policy,” referring to the employee's tendency to submit expense descriptions that satisfy expense policies; “reasonable,” referring to the employee's tendency to submit descriptions that satisfy expense guidelines (in this example, the employee has submitted some number of expense descriptions that satisfy a policy but exceed a guideline); and “reporting,” corresponding to the timeliness and/or accuracy of the employee's expense reporting. In this example, the system performance metric is for “auto-processing performance,” referring to expense reporting operations completed by the system in a user-independent mode. The system performance metric is 70%, which the system classifies as “good.” In addition the system presents information about how the employee can improve the system performance metric, in this example by adding information about payment methods and/or merchants to permit the system to perform more expense reporting operations in a user-independent mode. 6. 
Hardware Overview According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices (i.e., computing devices specially configured to perform certain functionality). The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or network processing units (NPUs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, FPGAs, or NPUs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example,FIG.6is a block diagram that illustrates a computer system600upon which an embodiment of the invention may be implemented. Computer system600includes a bus602or other communication mechanism for communicating information, and a hardware processor604coupled with bus602for processing information. Hardware processor604may be, for example, a general purpose microprocessor. Computer system600also includes a main memory606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus602for storing information and instructions to be executed by processor604. Main memory606also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor604. Such instructions, when stored in non-transitory storage media accessible to processor604, render computer system600into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system600further includes a read only memory (ROM)608or other static storage device coupled to bus602for storing static information and instructions for processor604. A storage device610, such as a magnetic disk or optical disk, is provided and coupled to bus602for storing information and instructions. Computer system600may be coupled via bus602to a display612, such as a liquid crystal display (LCD), plasma display, electronic ink display, cathode ray tube (CRT) monitor, or any other kind of device for displaying information to a computer user. An input device614, including alphanumeric and other keys, may be coupled to bus602for communicating information and command selections to processor604. Alternatively or in addition, the computer system600may receive user input via a cursor control616, such as a mouse, a trackball, a trackpad, a touchscreen, or cursor direction keys for communicating direction information and command selections to processor604and for controlling cursor movement on display612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. The display612may be configured to receive user input via one or more pressure-sensitive sensors, multi-touch sensors, and/or gesture sensors. 
Alternatively or in addition, the computer system600may receive user input via a microphone, video camera, and/or some other kind of user input device (not shown). Computer system600may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system600to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system600in response to processor604executing one or more sequences of one or more instructions contained in main memory606. Such instructions may be read into main memory606from another storage medium, such as storage device610. Execution of the sequences of instructions contained in main memory606causes processor604to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device610. Volatile media includes dynamic memory, such as main memory606. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a programmable read-only memory (PROM), and erasable PROM (EPROM), a FLASH-EPROM, non-volatile random-access memory (NVRAM), any other memory chip or cartridge, content-addressable memory (CAM), and ternary content-addressable memory (TCAM). Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor604for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a network, via a network interface controller (NIC), such as an Ethernet controller or Wi-Fi controller. A NIC local to computer system600can receive the data from the network and place the data on bus602. Bus602carries the data to main memory606, from which processor604retrieves and executes the instructions. The instructions received by main memory606may optionally be stored on storage device610either before or after execution by processor604. Computer system600also includes a communication interface618coupled to bus602. Communication interface618provides a two-way data communication coupling to a network link620that is connected to a local network622. 
For example, communication interface618may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface618may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface618sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link620typically provides data communication through one or more networks to other data devices. For example, network link620may provide a connection through local network622to a host computer624or to data equipment operated by an Internet Service Provider (ISP)626. ISP626in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”628. Local network622and Internet628both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link620and through communication interface618, which carry the digital data to and from computer system600, are example forms of transmission media. Computer system600can send messages and receive data, including program code, through the network(s), network link620and communication interface618. In the Internet example, a server630might transmit a requested code for an application program through Internet628, ISP626, local network622and communication interface618. The received code may be executed by processor604as it is received, and/or stored in storage device610, or other non-volatile storage for later execution. 7. Computer Networks and Cloud Networks In some embodiments, a computer network provides connectivity among a set of nodes running software that utilizes techniques as described herein. The nodes may be local to and/or remote from each other. The nodes are connected by a set of links. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, an optical fiber, and a virtual link. A subset of nodes implements the computer network. Examples of such nodes include a switch, a router, a firewall, and a network address translator (NAT). Another subset of nodes uses the computer network. Such nodes (also referred to as “hosts”) may execute a client process and/or a server process. A client process makes a request for a computing service (such as, execution of a particular application, and/or storage of a particular amount of data). A server process responds by executing the requested service and/or returning corresponding data. A computer network may be a physical network, including physical nodes connected by physical links. A physical node is any digital device. A physical node may be a function-specific hardware device, such as a hardware switch, a hardware router, a hardware firewall, and a hardware NAT. Additionally or alternatively, a physical node may be any physical resource that provides compute power to perform a task, such as one that is configured to execute various virtual machines and/or applications performing respective functions. A physical link is a physical medium connecting two or more physical nodes. Examples of links include a coaxial cable, an unshielded twisted cable, a copper cable, and an optical fiber. 
A computer network may be an overlay network. An overlay network is a logical network implemented on top of another network (such as, a physical network). Each node in an overlay network corresponds to a respective node in the underlying network. Hence, each node in an overlay network is associated with both an overlay address (to address the overlay node) and an underlay address (to address the underlay node that implements the overlay node). An overlay node may be a digital device and/or a software process (such as, a virtual machine, an application instance, or a thread). A link that connects overlay nodes is implemented as a tunnel through the underlying network. The overlay nodes at either end of the tunnel treat the underlying multi-hop path between them as a single logical link. Tunneling is performed through encapsulation and decapsulation. In an embodiment, a client may be local to and/or remote from a computer network. The client may access the computer network over other computer networks, such as a private network or the Internet. The client may communicate requests to the computer network using a communications protocol, such as Hypertext Transfer Protocol (HTTP). The requests are communicated through an interface, such as a client interface (such as a web browser), a program interface, or an application programming interface (API). In an embodiment, a computer network provides connectivity between clients and network resources. Network resources include hardware and/or software configured to execute server processes. Examples of network resources include a processor, a data storage, a virtual machine, a container, and/or a software application. Network resources are shared amongst multiple clients. Clients request computing services from a computer network independently of each other. Network resources are dynamically assigned to the requests and/or clients on an on-demand basis. Network resources assigned to each request and/or client may be scaled up or down based on, for example, (a) the computing services requested by a particular client, (b) the aggregated computing services requested by a particular tenant, and/or (c) the aggregated computing services requested of the computer network. Such a computer network may be referred to as a "cloud network." In an embodiment, a service provider provides a cloud network to one or more end users. Various service models may be implemented by the cloud network, including but not limited to Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). In SaaS, a service provider provides end users the capability to use the service provider's applications, which are executing on the network resources. In PaaS, the service provider provides end users the capability to deploy custom applications onto the network resources. The custom applications may be created using programming languages, libraries, services, and tools supported by the service provider. In IaaS, the service provider provides end users the capability to provision processing, storage, networks, and other fundamental computing resources provided by the network resources. Any applications, including an operating system, may be deployed on the network resources. In an embodiment, various deployment models may be implemented by a computer network, including but not limited to a private cloud, a public cloud, and a hybrid cloud.
In a private cloud, network resources are provisioned for exclusive use by a particular group of one or more entities (the term “entity” as used herein refers to a corporation, organization, person, or other entity). The network resources may be local to and/or remote from the premises of the particular group of entities. In a public cloud, cloud resources are provisioned for multiple entities that are independent from each other (also referred to as “tenants” or “customers”). The computer network and the network resources thereof are accessed by clients corresponding to different tenants. Such a computer network may be referred to as a “multi-tenant computer network.” Several tenants may use a same particular network resource at different times and/or at the same time. The network resources may be local to and/or remote from the premises of the tenants. In a hybrid cloud, a computer network comprises a private cloud and a public cloud. An interface between the private cloud and the public cloud allows for data and application portability. Data stored at the private cloud and data stored at the public cloud may be exchanged through the interface. Applications implemented at the private cloud and applications implemented at the public cloud may have dependencies on each other. A call from an application at the private cloud to an application at the public cloud (and vice versa) may be executed through the interface. In an embodiment, tenants of a multi-tenant computer network are independent of each other. For example, one tenant (through operation, tenant-specific practices, employees, and/or identification to the external world) may be separate from another tenant. Different tenants may demand different network requirements for the computer network. Examples of network requirements include processing speed, amount of data storage, security requirements, performance requirements, throughput requirements, latency requirements, resiliency requirements, Quality of Service (QoS) requirements, tenant isolation, and/or consistency. The same computer network may need to implement different network requirements demanded by different tenants. In some embodiments, in a multi-tenant computer network, tenant isolation is implemented to ensure that the applications and/or data of different tenants are not shared with each other. Various tenant isolation approaches may be used. In an embodiment, each tenant is associated with a tenant ID. Each network resource of the multi-tenant computer network is tagged with a tenant ID. A tenant is permitted access to a particular network resource only if the tenant and the particular network resources are associated with a same tenant ID. In an embodiment, each tenant is associated with a tenant ID. Each application, implemented by the computer network, is tagged with a tenant ID. Additionally or alternatively, each data structure and/or dataset, stored by the computer network, is tagged with a tenant ID. A tenant is permitted access to a particular application, data structure, and/or dataset only if the tenant and the particular application, data structure, and/or dataset are associated with a same tenant ID. As an example, each database implemented by a multi-tenant computer network may be tagged with a tenant ID. Only a tenant associated with the corresponding tenant ID may access data of a particular database. As another example, each entry in a database implemented by a multi-tenant computer network may be tagged with a tenant ID. 
Only a tenant associated with the corresponding tenant ID may access data of a particular entry. However, the database may be shared by multiple tenants. In an embodiment, a subscription list indicates which tenants have authorization to access which applications. For each application, a list of tenant IDs of tenants authorized to access the application is stored. A tenant is permitted access to a particular application only if the tenant ID of the tenant is included in the subscription list corresponding to the particular application. In an embodiment, network resources (such as digital devices, virtual machines, application instances, and threads) corresponding to different tenants are isolated to tenant-specific overlay networks maintained by the multi-tenant computer network. As an example, packets from any source device in a tenant overlay network may only be transmitted to other devices within the same tenant overlay network. Encapsulation tunnels are used to prohibit any transmissions from a source device on a tenant overlay network to devices in other tenant overlay networks. Specifically, the packets, received from the source device, are encapsulated within an outer packet. The outer packet is transmitted from a first encapsulation tunnel endpoint (in communication with the source device in the tenant overlay network) to a second encapsulation tunnel endpoint (in communication with the destination device in the tenant overlay network). The second encapsulation tunnel endpoint decapsulates the outer packet to obtain the original packet transmitted by the source device. The original packet is transmitted from the second encapsulation tunnel endpoint to the destination device in the same particular overlay network. 8. Microservice Applications According to some embodiments, the techniques described herein are implemented in a microservice architecture. A microservice in this context refers to software logic designed to be independently deployable, having endpoints that may be logically coupled to other microservices to build a variety of applications. Applications built using microservices are distinct from monolithic applications, which are designed as a single fixed unit and generally comprise a single logical executable. With microservice applications, different microservices are independently deployable as separate executables. Microservices may communicate using Hypertext Transfer Protocol (HTTP) messages and/or according to other communication protocols via API endpoints. Microservices may be managed and updated separately, written in different languages, and be executed independently from other microservices. Microservices provide flexibility in managing and building applications. Different applications may be built by connecting different sets of microservices without changing the source code of the microservices. Thus, the microservices act as logical building blocks that may be arranged in a variety of ways to build different applications. Microservices may provide monitoring services that notify a microservices manager (such as If-This-Then-That (IFTTT), Zapier, or Oracle Self-Service Automation (OSSA)) when trigger events from a set of trigger events exposed to the microservices manager occur. 
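By way of a non-limiting illustration, the following Python sketch shows the monitoring pattern just described: a microservice watches a value and emits a trigger notification to a microservices manager when a configured threshold is crossed. The TriggerEvent fields, the monitor() function, and the notify callback are hypothetical names introduced only for this sketch; they are not the API of IFTTT, Zapier, OSSA, or any particular manager, which in practice would typically be reached over an HTTP webhook.

# A minimal, hypothetical sketch of a monitoring microservice that fires a
# trigger notification to a microservices manager when a threshold is crossed.
# Field names and the notify() callback are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TriggerEvent:
    name: str          # trigger identifier exposed to the manager
    value: float       # observed value that caused the trigger
    threshold: float   # threshold that was crossed
    context: str       # name of the field or other context information

def monitor(value: float, threshold: float, field: str,
            notify: Callable[[TriggerEvent], None]) -> Optional[TriggerEvent]:
    """Fire a trigger into the microservices manager when value crosses threshold."""
    if value >= threshold:
        event = TriggerEvent(name="threshold_crossed", value=value,
                             threshold=threshold, context=field)
        notify(event)  # in practice, e.g., an HTTP POST to the manager's webhook
        return event
    return None

if __name__ == "__main__":
    # The "manager" here is just a print callback standing in for a webhook.
    monitor(value=87.5, threshold=80.0, field="cpu_utilization", notify=print)

In a deployment, the notify callback would post the event payload to the manager's endpoint, where it can be connected to the action services exposed by other microservices as described next.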
Microservices exposed for an application may alternatively or additionally provide action services that perform an action in the application (controllable and configurable via the microservices manager by passing in values, connecting the actions to other triggers and/or data passed along from other actions in the microservices manager) based on data received from the microservices manager. The microservice triggers and/or actions may be chained together to form recipes of actions that occur in optionally different applications that are otherwise unaware of or have no control or dependency on each other. These managed applications may be authenticated or plugged in to the microservices manager, for example, with user-supplied application credentials to the manager, without requiring reauthentication each time the managed application is used alone or in combination with other applications. In some embodiments, microservices may be connected via a GUI. For example, microservices may be displayed as logical blocks within a window, frame, or other element of a GUI. A user may drag and drop microservices into an area of the GUI used to build an application. The user may connect the output of one microservice into the input of another microservice using directed arrows or any other GUI element. The application builder may run verification tests to confirm that the output and inputs are compatible (e.g., by checking the datatypes, size restrictions, etc.). Triggers The techniques described above may be encapsulated into a microservice, according to some embodiments. In other words, a microservice may trigger a notification (into the microservices manager for optional use by other plugged in applications, herein referred to as the "target" microservice) based on the above techniques and/or may be represented as a GUI block and connected to one or more other microservices. The trigger condition may include absolute or relative thresholds for values, and/or absolute or relative thresholds for the amount or duration of data to analyze, such that the trigger to the microservices manager occurs whenever a plugged-in microservice application detects that a threshold is crossed. For example, a user may request a trigger into the microservices manager when the microservice application detects a value has crossed a triggering threshold. In one embodiment, the trigger, when satisfied, might output data for consumption by the target microservice. In another embodiment, the trigger, when satisfied, outputs a binary value indicating the trigger has been satisfied, or outputs the name of the field or other context information for which the trigger condition was satisfied. Additionally or alternatively, the target microservice may be connected to one or more other microservices such that an alert is input to the other microservices. Other microservices may perform responsive actions based on the above techniques, including, but not limited to, deploying additional resources, adjusting system configurations, and/or generating GUIs. Actions In some embodiments, a plugged-in microservice application may expose actions to the microservices manager. The exposed actions may receive, as input, data or an identification of a data object or location of data, that causes data to be moved into a data cloud. In some embodiments, the exposed actions may receive, as input, a request to increase or decrease existing alert thresholds.
The input might identify existing in-application alert thresholds and whether to increase, decrease, or delete the threshold. Additionally or alternatively, the input might request the microservice application to create new in-application alert thresholds. The in-application alerts may trigger alerts to the user while logged into the application, or may trigger alerts to the user using default or user-selected alert mechanisms available within the microservice application itself, rather than through other applications plugged into the microservices manager. In some embodiments, the microservice application may generate and provide an output based on input that identifies, locates, or provides historical data, and defines the extent or scope of the requested output. The action, when triggered, causes the microservice application to provide, store, or display the output, for example, as a data model or as aggregate data that describes a data model. 9. Miscellaneous; Extensions Embodiments are directed to a system with one or more devices that include a hardware processor and that are configured to perform any of the operations described herein and/or recited in any of the claims below. In an embodiment, a non-transitory computer readable storage medium comprises instructions which, when executed by one or more hardware processors, cause performance of any of the operations described herein and/or recited in any of the claims. Any combination of the features and functionalities described herein may be used in accordance with one or more embodiments. In the foregoing specification, embodiments have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
92,203
11861734
DETAILED DESCRIPTION OF ILLUSTRATED EMBODIMENTS Embodiments of the present invention are directed to methods, systems and articles of manufacture for efficiently calculating an electronic tax return, such as within a tax return preparation system. In general, a computerized tax return preparation system accesses taxpayer-specific tax data from a shared data store configured to store therein taxpayer-specific tax data for a taxpayer. The system then executes a tax calculation engine configured to read the taxpayer-specific tax data and write calculated tax data to the shared data store, and also configured to perform a plurality of tax calculations based on a tax calculation graph. As explained below in more detail, the tax calculation graph semantically represents the tax legislation/tax rules for the tax return and the data structures that capture the conditions necessary to complete the computations that are required to calculate an electronic tax return. The complete tax calculation graph may have hundreds or even thousands of calculations depending on the complexity of the tax code for the tax return and the tax situation of the taxpayer. In addition, the tax preparation applications typically perform the tax calculations periodically, such as when new taxpayer-specific tax data has been received, or after receiving complete information for a tax topic, or after a certain time period has elapsed during preparation of the tax return, or at any other suitable point in the process of preparing a tax return. Accordingly, in order to efficiently perform the tax calculations, which also efficiently utilizes the computing power of the system, the system is configured to perform only the calculations which are changed by the new taxpayer-specific tax data received since the previous tax calculation executed by the tax calculation engine. Thus, if a calculation in the tax calculation graph is not changed by the new taxpayer-specific tax data received since the previous tax calculation, that calculation is not performed again. The system may also determine whether the new taxpayer-specific tax data does, or does not, change the calculated tax return and the reason why. This information may be of interest to a user for various reasons, such as tax planning, error checking and/or other reasons. Tax preparation is a time-consuming and laborious process. It is estimated that individuals and businesses spend around 6.1 billion hours per year complying with the filing requirements of the Internal Revenue Code. Tax return preparation software has been commercially available to assist taxpayers in preparing their tax returns. Tax return preparation software is typically run on a computing device such as a computer, laptop, tablet, or mobile computing device such as a Smartphone. Traditionally, a user has walked through a set of rigidly defined user interface interview screens that selectively ask questions that are relevant to a particular tax topic or data field needed to calculate a taxpayer's tax liability. In contrast to the rigidly defined user interface screens used in prior iterations of tax preparation software, the current invention provides tax preparation software100that may run on computing devices102that operate on a new construct in which tax rules and the calculations based thereon are established in declarative data-structures, namely, completeness graph(s) and tax calculation graph(s).
Use of these data-structures permits the user interface to be loosely connected or even divorced from the tax calculation engine and the data used in the tax calculations. Tax calculations are dynamically calculated based on tax data derived from sourced data, estimates, or user input. A smart tax logic agent running on a set of rules can review current run time data and evaluate missing data fields and propose suggested questions to be asked to a user to fill in missing blanks. This process can be continued until completeness of all tax topics has occurred. An electronic return can then be prepared and filed with respect to the relevant taxing jurisdictions. FIG.1illustrates graphically how tax legislation/tax rules10are broken down into a completeness graph12and a tax calculation graph14. In one aspect of the invention, tax legislation or rules10are parsed or broken into various topics. For example, there may be nearly one hundred topics that need to be covered for completing a federal tax return. When one considers both federal and state tax returns, there can be well over one hundred tax topics that need to be covered. When tax legislation or tax rules10are broken into various topics or sub-topics, in one embodiment of the invention, each particular topic (e.g., topics A, B) may have its own dedicated completeness graph12A,12B and tax calculation graph14A,14B as seen inFIG.1. Note that inFIG.1, the completeness graph12and the tax calculation graph14are interdependent as illustrated by dashed line16. That is to say, some elements contained within the completeness graph12are needed to perform actual tax calculations using the tax calculation graph14. Likewise, aspects within the tax calculation graph14may be needed as part of the completion graph12. Taken collectively, the completeness graph12and the tax calculation graph14represent data structures that capture all the conditions necessary to complete the computations that are required to complete a tax return that can be filed. Individual combinations of completeness graphs12and tax calculation graphs14that relate to one or more topics can be used to complete the computations required for some sub-calculation. In the context of a tax setting, for example, a sub-selection of topical completeness graphs12and tax calculation graphs14can be used for intermediate tax results such as Adjusted Gross Income (AGI) or Taxable Income (TI). The completeness graph12and the tax calculation graph14represent data structures that can be constructed in the form of a tree.FIG.2illustrates a completeness graph12in the form of a tree with nodes20and arcs22representing a basic or general version of a completeness graph12for the topic of determining whether a child qualifies as a dependent for federal income tax purposes. A more complete flow chart-based representation of questions related to determining a "qualified child" may be found in U.S. patent application Ser. No. 14/097,057, which is incorporated by reference herein. Each node20contains a condition that in this example is expressed as a Boolean expression that can be answered in the affirmative or negative. The arcs22that connect each node20illustrate the dependencies between nodes20. The combination of arcs22in the completeness graph12illustrates the various pathways to completion. A single arc22or combination of arcs22that results in a determination of "Done" represents a pathway to completion. As seen inFIG.2, there are several pathways to completion.
For example, one pathway to completion is where an affirmative (True) answer is given to the question of whether you or a spouse can be claimed on someone else's tax return. If such a condition is true, your child is not a qualifying dependent because under IRS rules you cannot claim any dependents if someone else can claim you as a dependent. In another example, if you had a child and that child did not live with you for more than 6 months of the year, then your child is not a qualifying dependent. Again, this is a separate IRS requirement for a qualified dependent. As one can imagine given the complexities and nuances of the tax code, many tax topics may contain completeness graphs12that have many nodes with a large number of pathways to completion. However, many branches or lines within the completeness graph12can be ignored, for example, when certain questions internal to the completeness graph12are answered that eliminate other nodes20and arcs22within the completeness graph12. The dependent logic expressed by the completeness graph12allows one to minimize subsequent questions based on answers given to prior questions. This allows a minimum question set to be generated that can be presented to a user as explained herein. FIG.3illustrates another example of a completeness graph12that includes a beginning node20a(Node A), intermediate nodes20b-g(Nodes B-G) and a termination node20y(Node "Yes" or "Done"). Each of the beginning node20aand intermediate nodes20b-grepresents a question. Inter-node connections or arcs22represent response options. In the illustrated embodiment, each inter-node connection22represents an answer or response option in binary form (Y/N), for instance, a response to a Boolean expression. It will be understood, however, that embodiments are not so limited, and that a binary response form is provided as a non-limiting example. In the illustrated example, certain nodes, such as nodes A, B and E, have two response options22, whereas other nodes, such as nodes D, G and F, have one response option22. As explained herein, the directed graph or completion graph12that is illustrated inFIG.3can be traversed through all possible paths from the start node20ato the termination node20y. By navigating various paths through the completion graph12in a recursive manner, each path from the beginning node20ato the termination node20ycan be determined. The completion graph12along with the pathways to completion through the graph can be converted into a different data structure or format. In the illustrated embodiment shown inFIG.4, this different data structure or format is in the form of a decision table30. In the illustrated example, the decision table30includes rows32(five rows32a-eare illustrated) based on the paths through the completion graph12. In the illustrated embodiment, the columns34a-gof the decision table30represent expressions for each of the questions (represented as nodes A-G inFIG.3) and answers derived from completion paths through the completion graph12and column34hindicates a conclusion, determination, result or goal34hconcerning a tax topic or situation, e.g., "Yes—your child is a qualifying child" or "No—your child is not a qualifying child." Referring toFIG.4, each row32of the decision table30represents a tax rule. The decision table30, for example, may be associated with a federal tax rule or a state tax rule. In some instances, for example, a state tax rule may include the same decision table30as the federal tax rule.
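As a minimal illustration of the conversion just described, the following Python sketch recursively traverses a small completeness graph from its beginning node to its termination results and records each complete path as a decision-table row, with "?" marking questions that are irrelevant to a given path. The three-question graph below is a hypothetical example rather than the actual topology of FIG.3.

# Hypothetical sketch: enumerate every path through a small completeness graph
# and emit one decision-table row per path ("?" = question not needed on that path).
GRAPH = {
    # question node -> {answer: next question node or terminal result}
    "A": {"Y": "No",  "N": "B"},
    "B": {"Y": "C",   "N": "No"},
    "C": {"Y": "Yes", "N": "No"},
}
QUESTIONS = ["A", "B", "C"]
TERMINALS = {"Yes", "No"}

def enumerate_paths(node="A", answers=None, rows=None):
    """Recursively walk every path; each complete path becomes one row."""
    answers = dict(answers or {})
    rows = rows if rows is not None else []
    if node in TERMINALS:
        row = {q: answers.get(q, "?") for q in QUESTIONS}
        row["goal"] = node
        rows.append(row)
        return rows
    for answer, nxt in GRAPH[node].items():
        enumerate_paths(nxt, {**answers, node: answer}, rows)
    return rows

for i, row in enumerate(enumerate_paths(), start=1):
    print(f"R{i}: {row}")

Each emitted row corresponds to one rule, mirroring the rows32of the decision table30.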
The decision table30can be used, as explained herein, to drive a personalized interview process for the user of tax preparation software100. In particular, the decision table30is used to select a question or questions to present to a user during an interview process. In this particular example, in the context of the completion graph fromFIG.3converted into the decision table30ofFIG.4, if the first question presented to the user during an interview process is question "A" and the user answers "Yes," rows32c-emay be eliminated from consideration given that no pathway to completion is possible. The tax rules associated with these rows cannot be satisfied given the input of "Yes" in question "A." Note that those cell entries denoted by "?" represent those answers to a particular question in a node that is irrelevant to the particular pathway to completion. Thus, for example, referring to row32a, when an answer to QA is "Y" and a path is completed through the completion graph12by answering Question C as "N," then answers to the other questions in Nodes B and D-F are "?" since they are not needed to be answered given that particular path. After an initial question has been presented and rows are eliminated as a result of the selection, a collection of candidate questions from the remaining available rows32aand32bis determined. From this universe of candidate questions from the remaining rows, a candidate question is selected. In this case, the candidate questions are questions QC and QG in columns34c,34g, respectively. One of these questions is selected and the process repeats until either the goal34his reached or there is an empty candidate list. FIG.5illustrates another embodiment of a decision table30. In this embodiment, the decision table30includes additional statistical data36associated with each rule (e.g., rules R1-R6). For example, the statistical data36may represent a percentage or the like in which a particular demographic or category of user(s) satisfies this particular path to completion. The statistical data36may be mined from existing or current year tax filings. The statistical data36may be obtained from a proprietary source of data such as tax filing data owned by Intuit, Inc. The statistical data36may be third party data that can be purchased or leased for use. For example, the statistical data36may be obtained from a government taxing authority or the like (e.g., IRS). In one aspect, the statistical data36does not necessarily relate specifically to the individual or individuals preparing the particular tax return. For example, the statistical data36may be obtained based on a number of tax filers which is then classified into one or more classifications. For example, statistical data36can be organized with respect to age, type of tax filing (e.g., joint, separate, married filing separately), income range (gross, AGI, or TI), deduction type, geographic location, and the like. FIG.5illustrates two such columns38a,38bin the decision table30that contain statistical data36in the form of percentages. For example, column38a(STAT1) may contain a percentage value that indicates taxpayers under the age of thirty-five where Rule1is satisfied. Column38b(STAT2) may contain a percentage value that indicates taxpayers over the age of thirty-five where Rule1is satisfied. Any number of additional columns38could be added to the decision table30and the statistics do not have to relate to an age threshold or grouping.
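The row-elimination step described above can be sketched in a few lines of Python. The small rule table below is hypothetical (loosely patterned on FIG.4): an answer eliminates every rule whose entry for that question contradicts it, missing entries are treated as don't-cares ("?"), and the surviving rules yield the next set of candidate questions. The statistical weighting described next can then be applied to choose which surviving candidate to present first.

# Hypothetical sketch of decision-table row elimination and candidate selection.
RULES = {
    "R1": {"A": "Y", "C": "N", "goal": "No"},
    "R2": {"A": "Y", "C": "Y", "G": "Y", "goal": "Yes"},
    "R3": {"A": "N", "B": "Y", "goal": "Yes"},
    "R4": {"A": "N", "B": "N", "goal": "No"},
}

def eliminate(rules, question, answer):
    """Keep only rules whose entry for `question` matches the answer or is a don't-care."""
    return {rid: row for rid, row in rules.items()
            if row.get(question, "?") in ("?", answer)}

def candidate_questions(rules, answered):
    """Questions that still appear in some remaining rule and are not yet answered."""
    return sorted({q for row in rules.values() for q in row
                   if q != "goal" and q not in answered})

remaining = eliminate(RULES, "A", "Y")        # the user answered question A with "Yes"
print(remaining)                              # -> only R1 and R2 survive
print(candidate_questions(remaining, {"A"}))  # -> ["C", "G"], the next candidate questions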
The statistical data36may be used, as explained in more detail below, by the tax preparation software100to determine which of the candidate questions (QA-QG) should be asked to a taxpayer. The statistical data36may be compared to one or more known taxpayer data fields (e.g., age, income level, tax filing status, geographic location, or the like) such that the question that is presented to the user is most likely to lead to a path to completion. Candidate questions may also be excluded or grouped together and then presented to the user to efficiently minimize tax interview questions during the data acquisition process. For example, questions that are likely to be answered in the negative can be grouped together and presented to the user in a grouping and asked in the negative—for example, "we think these questions do not apply to you, please confirm that this is correct." This enables the elimination of many pathways to completion, which can optimize additional data requests of the taxpayer. FIG.6illustrates an example of a tax calculation graph14. The tax calculation graph semantically describes the tax legislation/tax rules10. InFIG.6, various nodes24are leaf or input nodes. Examples of leaf nodes24in this particular example include data obtained from W-2 forms, data obtained from 1099-INT forms, data obtained from other investment income, filing status, and number of dependents. Typically, though not exclusively, leaf nodes24are populated with user inputs. That is to say the user or taxpayer will enter this information from a user interface. In other embodiments, however, the leaf nodes24may be populated with information that is automatically obtained by the tax preparation software100. For example, in some embodiments, tax documents may be imaged or scanned with relevant data being automatically extracted using Optical Character Recognition (OCR) techniques. In other embodiments, prior tax returns may be used by the tax preparation software100to extract information (e.g., name, potential dependents, address, and social security number) which can then be used to populate the leaf nodes24. Online resources such as financial services websites or other user-specific websites can be crawled and scanned to scrape or otherwise download tax related information that can be automatically populated into leaf nodes24. Additional third party information sources such as credit bureaus, government databases, and the like can also be used by the tax preparation software100to obtain information that can then be populated into respective leaf nodes24. In still other embodiments, values for leaf nodes24may be derived or otherwise calculated. For example, while the number of dependents may be manually entered by a taxpayer, those dependents may not all be "qualifying" dependents for tax purposes. In such instances, the actual number of "qualified" dependents may be derived or calculated by the tax preparation software100. In still other embodiments, values for leaf nodes24may be estimated as described herein. Still other internal nodes26semantically represent a tax concept and may be calculated using a function node28. Some or all of these internal nodes26may be labeled as "tax concepts." Interconnected nodes26containing tax concepts may be connected via "gist" functions that can be tagged and later be used or called upon to explain to the user the reasoning behind why a particular result was calculated or determined by the tax preparation software100as explained in more detail below.
Gists are well-defined functions to capture domain specific patterns and semantic abstractions used in tax calculations. Gists can be de-coupled from a specific narrow definition and instead be associated with one or more explanations. Examples of common "gists" found in tax legislation/rules include the concepts of "caps" or "exceptions" that are found in various portions of the tax code. The function node28may include any number of mathematical or other operations. Examples of functions28include summation, subtraction, multiplication, division, and look-ups of tables or values from a database30or library as is illustrated inFIG.6. It should be understood that nodes within completion graph12and the tax calculation graph14may be shared in some instances. For example, AGI is a recurring tax concept that occurs in many places in the tax code. AGI is used not only for the mathematical computation of taxes but is also used, for example, to determine eligibility for certain tax deductions and credits. Thus, the AGI node is common to both the completion graph12and the tax calculation graph14. The calculation graph14also has a plurality of calculation paths connecting the nodes24,26and28, which define data dependencies between the nodes. A second node is considered to be dependent on a first node if a calculation (calculation includes any determination within the calculation graph, such as functions, decisions, etc.) at the second node depends on a value of the first node. A second node has a direct dependency on the first node if it is directly dependent on the first node without any intervening nodes. A second node has an indirect dependency on the first node if it is dependent on a node which is directly dependent on the first node or an intervening node along a calculation path to the first node. Although there are many more calculation paths in the calculation graph14ofFIG.6,FIG.6shows two exemplary calculation paths27aand27b, which interconnect nodes having data dependencies. Some or all of the data dependencies may be gists, as described above. The two calculation paths27aand27bintersect at the "accumulator"28a, and are thereafter coincident as calculation path27c. FIG.7schematically illustrates a tax return preparation system40for calculating taxes using rules and calculations based on declarative data structures according to one embodiment. The system40includes a shared data store42that contains therein a schema44or canonical model representative of the data fields utilized or otherwise required to complete a tax return. The shared data store42may be a repository, file, or database that is used to contain the tax-related data fields. The shared data store42is accessible by a computing device102,103as described herein. The shared data store42may be located on the computing device102,103running the tax preparation software100or it may be located remotely, for example, in a cloud environment on another, remotely located computer. The schema44may include, for example, a schema based on the Modernized e-File (MeF) system developed by the Internal Revenue Service. The MeF is a web-based system that allows electronic filing of tax returns through the Internet. MeF uses an extensible markup language (XML) format for identifying, storing, and transmitting data. For example, each line or data element on a tax return is given an XML name tag as well as every instance of supporting data.
Tax preparation software100uses XML schemas and business rules to electronically prepare and transmit tax returns to tax reporting agencies. Transmitters use the Internet to transmit electronic tax return data to the IRS MeF system. The IRS validates the transmitted files against the XML schemas and Business Rules in the MeF schema44. The schema44may be a modified version of the MeF schema used by the IRS. For example, the schema44may be an extended or expanded version (designated MeF++) of the MeF model established by government authorities. While the particular MeF schema44is discussed herein, the invention is not so limited. There may be many different schemas44depending on the different tax jurisdictions. For example, Country A may have a tax schema44that varies from Country B. Different regions or states within a single country may even have different schemas44. The systems and methods described herein are not limited to a particular schema44implementation. The schema44may contain all the data fields required to prepare and file a tax return with a government taxing authority. This may include, for example, all fields required for any tax forms, schedules, and the like. Data may include text, numbers, and a response to a Boolean expression (e.g., True/False or Yes/No). As explained in more detail below, the shared data store42may have a particular instance46of the MeF schema44(or MeF++ schema) stored therein at any particular time. For example,FIG.7illustrates several instances46of the MeF schema44(labeled as MeF1, MeF2, MeFN). These instances46may be updated as additional data is input into the shared data store42. As seen inFIG.7, the shared data store42may import data from one or more data sources48. A number of data sources48may be used to import or otherwise transfer tax related data to the shared data store42. The tax related data may include personal identification data such as a name, address, or taxpayer ID. Tax data may also relate to, for example, details regarding a taxpayer's employer(s) during a preceding tax year. This may include employer name, employer federal ID, dates of employment, and the like. Tax related data may include residential history data (e.g., location of residence(s) in tax reporting period (state, county, city, etc.) as well as type of housing (e.g., rental unit or purchased home)). Tax related information may also include dependent-related information such as the number of family members in a household including children. Tax related information may pertain to sources of income, including both earned and unearned income as well. Tax related information also includes information that pertains to tax deductions or tax credits. For example, user input48ais one type of data source48. User input48amay take a number of different forms. For example, user input48amay be generated by a user using, for example, an input device such as a keyboard, mouse, touchscreen display, voice input (e.g., voice to text feature) or the like to enter information manually into the tax preparation software100. For example, as illustrated inFIG.7, user interface manager82contains an import module89that may be used to select what data sources48are automatically searched for tax related data. Import module89may be used as a permission manager that includes, for example, user account numbers and related passwords. The UI control80enables what sources48of data are searched or otherwise analyzed for tax related data.
For example, a user may select prior year tax returns48bto be searched but not online resources48c. The tax data may flow through the UI control80directly as illustrated inFIG.7or, alternatively, the tax data may be routed directly to the shared data store42. The import module89may also present prompts or questions to the user via a user interface presentation84generated by the user interface manager82. For example, a question may ask the user to confirm the accuracy of the data. The user may also be given the option of whether or not to import the data from the data sources48. User input48amay also include some form of automatic data gathering. For example, a user may scan or take a photographic image of a tax document (e.g., W-2 or 1099) that is then processed by the tax preparation software100to extract relevant data fields that are then automatically transferred and stored within the data store42. OCR techniques along with pre-stored templates of tax reporting forms may be called upon to extract relevant data from the scanned or photographic images whereupon the data is then transferred to the shared data store42. Another example of a data source48is a prior year tax return48b. A prior year tax return48bthat is stored electronically can be searched and data is copied and transferred to the shared data store42. The prior year tax return48bmay be in a proprietary format (e.g., .txf, .pdf) or an open source format. The prior year tax return48bmay also be in a paper or hardcopy format that can be scanned or imaged whereby data is extracted and transferred to the shared data store42. In another embodiment, a prior year tax return48bmay be obtained by accessing a government database (e.g., IRS records). An additional example of a data source48is an online resource48c. An online resource48cmay include, for example, websites for the taxpayer(s) that contain tax-related information. For example, financial service providers such as banks, credit unions, brokerages, and investment advisors typically provide online access for their customers to view holdings, balances, and transactions. Financial service providers also typically provide year-end tax documents to their customers such as, for instance, 1099-INT (interest income), 1099-DIV (dividend income), 1099-B (brokerage proceeds), and 1098 (mortgage interest) forms. The data contained on these tax forms may be captured and transferred electronically to the shared data store42. Of course, there are additional examples of online resources48cbeyond financial service providers. For example, many taxpayers may have social media or similar accounts. These include, by way of illustration and not limitation, Facebook, Linked-In, Twitter, and the like. Users may post or store personal information on these properties that may have tax implications. For example, a user's Linked-In account may indicate that a person changed jobs during a tax year. Likewise, a posting on Facebook about a new home may suggest that a person has purchased a home, moved to a new location, or changed jobs; all of which may have possible tax ramifications. This information is then acquired and transferred to the shared data store42, which can be used to drive or shape the interview process described herein. For instance, using the example above, a person may be asked a question whether or not she changed jobs during the year (e.g., "It looks like you changed jobs during the past year, is this correct?"). Additional follow-up questions can then be presented to the user.
Still referring toFIG.7, another data source48includes sources of third party information48dthat may be accessed and retrieved. For example, credit reporting bureaus contain a rich source of data that may implicate one or more tax items. For example, credit reporting bureaus may show that a taxpayer has taken out a student loan or home mortgage loan that may be the source of possible tax deductions for the taxpayer. Other examples of sources of third party information48dinclude government databases. For example, the state department of motor vehicles may contain information relevant to the tax portion of vehicle registration fees, which can be deductible in some instances. Other government databases that may be accessed include the IRS (e.g., IRS tax return transcripts), and state taxing authorities. Still referring toFIG.7, the tax return preparation software100executed by the computing device102,103includes a tax calculation engine50that computes one or more tax calculations based on the tax calculation graph(s)14and the available data at any given instance within the schema44in the shared data store42. The tax calculation engine50may calculate a final tax due amount, a final refund amount, or one or more intermediary calculations (e.g., taxable income, AGI, earned income, un-earned income, total deductions, total credits, alternative minimum tax (AMT) and the like). The tax calculation engine50utilizes the one or more calculation graphs14as described previously in the context ofFIGS.1and6. In one embodiment, a series of different calculation graphs14are used for respective tax topics. These different calculation graphs14may be coupled together or otherwise compiled as a composite calculation graph14to obtain an amount of taxes due or a refund amount based on the information contained in the shared data store42. The tax calculation engine50reads the most current or up to date information contained within the shared data store42and then performs tax calculations. Updated tax calculation values are then written back to the shared data store42. As the updated tax calculation values are written back, new instances46of the canonical model46are created. The tax calculations performed by the tax calculation engine50may include the calculation of an overall tax liability or refund due. The tax calculations may also include intermediate calculations used to determine an overall tax liability or refund due (e.g., AGI calculation). Referring now toFIG.16, a method1210for efficiently performing the tax calculations using the tax calculation engine50and the tax calculation graph(s)14is shown. At step1212, the tax return preparation system40accesses the taxpayer-specific tax data from the shared data store, as described above. Upon receiving a certain amount of user-specific tax data, at step1214, the tax return preparation system performs a preliminary (i.e. not final) tax calculation, which may be referred to as a first tax calculation. The system is configured to perform one or more preliminary tax calculations at various stages of the tax return preparation process. For example, a preliminary tax calculation may be performed in order to give the user an estimate or preliminary indication of their final tax liability, tax refund and/or taxes owed. Accordingly, the system executes the tax calculation engine50to perform the first tax calculation based on the tax calculation graph(s)14and the taxpayer-specific tax data read from the shared data store.
As explained above, executing the tax calculation engine50based on the tax calculation graph(s)14results in calculated tax data, referred to as "first calculated tax data," which may include the calculated values for the complete tax calculation graph for which data is available (including estimated tax data as described herein), or any subset of the tax calculation graph. Thus, the first calculated tax data may include an overall tax liability, a tax refund (if any), a tax owed (if any), final tax due amount, a final refund amount, or one or more intermediary calculations used to determine any of the foregoing (e.g., taxable income, AGI, earned income, un-earned income, total deductions, total credits, alternative minimum tax (AMT) and the like). As an example using the sample calculation graph14ofFIG.6, let's say the user has input W2 information for the nodes24labeled W2, but has not yet input data for 1099 INT for the nodes24labeled 1099 INT or Investment Income data for the nodes24labeled Inv. Then, the first calculation would perform all of the calculations in the calculation graph14, or a subset of the calculation graph, for example, at least those calculations needed to obtain a desired amount of calculated tax data (e.g. an overall tax liability, a tax refund (if any), a tax owed (if any), final tax due amount, a final refund amount). Alternatively, the first calculation may only include those calculations for which the shared data store has taxpayer-specific tax data (and estimated tax data as explained below), which includes the calculation paths27aand27c(i.e. the nodes including the f=Accumulator below the nodes24labeled W2, i.e. each of the nodes having a direct or indirect data dependency on the nodes24labeled W2). The first calculation may not include any calculations along calculation paths for which there is no taxpayer-specific tax data in the shared data store (and for which there is no estimated tax data, as described below). For instance, in this example, there is no taxpayer-specific tax data in the shared data store for the nodes24labeled 1099INT or Inv. and therefore those calculation paths, including calculation path27b, may not be calculated during the first tax calculation. At step1216, the system40then receives additional taxpayer-specific tax data (referred to as "new taxpayer-specific tax data") which was not utilized in the first tax calculation, by any of the processes described herein, such as responses received in response to the questions presented to the user by the user interface manager, which questions are based on the suggestions output by the tax logic agent. After receiving some amount of new taxpayer-specific tax data, at step1218, the tax return preparation system executes the tax calculation engine to perform a second tax calculation only for those calculations in the tax calculation graph(s)14which are changed by the new taxpayer-specific tax data. For instance, the system can perform the second tax calculation after receiving each response or additional tax-payer specific data, or after receiving complete information for a tax topic, or after a certain elapsed time period during preparation of the tax return, or at any other suitable point in the process of preparing a tax return. In the second tax calculation, the tax calculation engine ONLY performs those calculations in the tax calculation graph(s)14which are changed by the new taxpayer-specific tax data from the calculation performed in the first tax calculation.
Thus, this includes only those calculations at nodes along calculation paths27which have a direct or indirect data dependency on a node utilizing the new taxpayer-specific tax data and for which the value of the node from which they depend is changed from its value obtained by the first tax calculation. Accordingly, if a calculation at a node in the tax calculation graph14is not changed by the new taxpayer-specific tax data, the calculation is not performed, because it does not need to be performed since it will have no effect on the tax calculations. For instance, if the new taxpayer-specific tax data is related to a particular tax topic, then the calculations in only that tax topic part of the tax calculation graph may need to be calculated, as well as any other parts of the calculation graph changed by a change in that tax topic part of the tax calculation graph. Moreover, if the second calculation of the tax topic part of the tax calculation graph does not result in a change in any value(s) utilized by other parts of the calculation graph, then no other part of the tax calculation graph14needs to be calculated during the second tax calculation. The second calculation results in a second calculated tax data, which may include the same types of calculated data as described above for the first calculated tax data. Describing the second calculation in terms of the calculation paths27and nodes24,26and28, the tax calculation engine50only performs those calculations at nodes in the tax calculation graph having a direct or indirect data dependency on a node utilizing the new taxpayer-specific tax data, and for which the value of the node from which it directly depends is changed from its value obtained by the first tax calculation. Continuing the example usingFIG.6started above, let's say the new taxpayer-specific tax data includes 1099 INT information for one or more of the nodes24labeled 1099 INT. The second tax calculation will then include the calculations at nodes along the calculation path27b, because the nodes24labeled 1099 INT utilize the new taxpayer-specific tax data, and the calculations at nodes along the calculation path27bbelow the nodes24labeled 1099 INT are directly or indirectly dependent on the nodes24labeled 1099 INT. The second tax calculation will NOT include the calculations at nodes along the calculation path27abecause none of the nodes along the calculation path27aare directly or indirectly dependent on the nodes24labeled 1099 INT. In addition, if a value of a node along the calculation path27bis not changed by the second tax calculation from its value from the first tax calculation, then the calculations along the calculation path stop, because there will be no change in any nodes after that point. Similarly, if the value of the node at the end of the calculation path27b(the node26labeled totalInterest) is not changed from its value from the first tax calculation, then the tax calculation engine50does not execute any calculations for calculation paths depending from the node at the end of the calculation path27b, in this case, calculation path27c. On the other hand, if the value of the node at the end of the calculation path27bis changed from its value from the first tax calculation, then the tax calculation engine50continues and executes the calculation paths depending from such node, including calculation path27cbeginning with node28a.
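The incremental behavior described above can be summarized in the following Python sketch, which uses a hypothetical three-node graph loosely modeled on the W-2, 1099-INT, and accumulator nodes of FIG.6; the node names, functions, and values are illustrative assumptions and not the actual calculation graph14. Only nodes having a direct or indirect data dependency on the changed inputs are recomputed, and recomputation along a path stops as soon as a node's new value equals its value from the first calculation.

# Hypothetical sketch of the incremental (second) calculation: recompute only
# nodes downstream of changed inputs, stopping when a value does not change.
GRAPH = {
    # node: (function over parent values, list of parent nodes); inputs have no entry
    "total_wages":    (sum, ["w2_1", "w2_2"]),
    "total_interest": (sum, ["int_1099_1", "int_1099_2"]),
    "agi":            (sum, ["total_wages", "total_interest"]),  # the "accumulator"
}

def recalculate(values, changed_inputs):
    """Return updated values, recomputing only nodes affected by changed inputs."""
    values = dict(values)
    dirty = set(changed_inputs)
    # GRAPH is declared in dependency order (parents before children).
    for node, (fn, parents) in GRAPH.items():
        if not dirty.intersection(parents):
            continue                        # no parent changed: skip this calculation
        new_value = fn(values.get(p, 0) for p in parents)
        if new_value != values.get(node):   # value changed: downstream nodes become dirty
            values[node] = new_value
            dirty.add(node)
    return values, dirty

first = {"w2_1": 50000, "w2_2": 0, "int_1099_1": 0, "int_1099_2": 0}
first, _ = recalculate(first, changed_inputs=first.keys())       # first tax calculation
second = {**first, "int_1099_1": 400}                            # new 1099-INT data arrives
second, recomputed = recalculate(second, changed_inputs={"int_1099_1"})
print(recomputed)  # the changed input plus total_interest and agi; total_wages is not recomputed

The set of recomputed nodes also identifies where the first and second calculated tax data differ, which feeds the reason determination described next.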
At step1220of method1210, the tax return preparation system40determines whether there is a difference between the first tax calculation data and the second tax calculation data. This can also be considered as a determination whether the new taxpayer-specific tax data causes a change in a result of the tax return, such as the total tax liability, tax refund or tax owed. For instance, the first tax calculation data and second tax calculation data may include only the final tax result, such as an overall tax liability, and a tax refund or tax owed. It may be of interest to the user to know whether the new taxpayer-specific tax data changed the final tax result. At step1222, the tax return preparation system40identifies a reason why there is, or is not, a difference between the first and second tax calculation data, as determined at step1220. This identification is equivalent to identifying a reason why the new taxpayer-specific tax data causes, or does not cause, a change in the tax return. The system40may make this determination by any suitable method, such as by identifying and analyzing the node(s) and/or gist(s) at which there is a change or no change in the value between the first tax calculation and the second tax calculation. Each of these node(s) or gist(s) is referred to as a "reason node" or "reason gist", respectively. As explained above, a node may be interconnected to another node by a "gist" function. The tax return preparation system may use the reason node or reason gist to determine the reason why there is, or is not, a difference between the first and second tax calculation data. For instance, in the example started above, assume there is a gist function interconnecting the node labeled totalInterest to the node28ain which the gist function requires the totalInterest to be more than a minimum value in order to be included in the taxpayer's AGI (adjusted gross income) (note that this example is hypothetical). Thus, if the totalInterest is less than the minimum value, then the new taxpayer-specific tax data comprising 1099 INT data will not be included in the AGI, and there will effectively be no change in the second calculated tax data comprising the total tax liability and tax refund or tax owed, as the case may be. In this case, the system40will identify that the reason there was no change in the second calculated data is because the 1099 INT is less than the minimum taxable 1099 INT amount. Conversely, if the totalInterest from the new 1099 INT data exceeds the minimum value, then there will most likely be a change in the second calculated tax data for the total tax liability and the tax refund or tax owed, because the AGI will increase. In this case, the system40will identify that the reason for the change in the second calculated tax data is because the 1099 INT exceeds the minimum taxable 1099 INT amount. At step1224, the tax return preparation system40presents the reason why there is, or is not, a difference between the first and second tax calculation data to the user. Referring again to the example started above, if the new 1099 INT data changes the overall tax liability, this information may be utilized for tax planning purposes, such as the taxpayer investing less in investments producing 1099 INT income and investing in more tax efficient investments.
Conversely, if the new 1099 INT data did not change the overall tax liability, the taxpayer may want to invest more in investments earning 1099 INT income, such that the 1099 INT income is expected to be at or near the minimum taxable amount. Referring back toFIG.7, the system40includes a tax logic agent (TLA)60. The TLA60operates in conjunction with the shared data store42whereby updated tax data represented by instances46are read by the TLA60. The TLA60contains run time data62that is read from the shared data store42. The run time data62represents the instantiated representation of the canonical tax schema44at runtime. The TLA60may contain therein a rule engine64that utilizes a fact cache to generate either non-binding suggestions66for additional question(s) to present to a user or "Done" instructions68which indicate that completeness has occurred and additional input is not needed. The rule engine64may operate in the form of a Drools expert engine. Other declarative rules engines64may be utilized and a Drools expert rule engine64is provided as one example of how embodiments may be implemented. The TLA60may be implemented as a dedicated module contained within the tax preparation software100. As seen inFIG.7, the TLA60uses the decision tables30to analyze the run time data62and determine whether a tax return is complete. Each decision table30created for each topic or sub-topic is scanned or otherwise analyzed to determine completeness for each particular topic or sub-topic. In the event that completeness has been determined with respect to each decision table30, then the rule engine64outputs a "done" instruction68to the UI control80. If the rule engine64does not output a "done" instruction68, that means there are one or more topics or sub-topics that are not complete, in which case, as explained in more detail below, the UI control80presents interview questions to a user for answer. The TLA60identifies a decision table30corresponding to one of the non-complete topics or sub-topics and, using the rule engine64, identifies one or more non-binding suggestions66to present to the UI control80. The non-binding suggestions66may include a listing or compilation of one or more questions (e.g., Q1-Q5as seen inFIG.7) from the decision table30. In some instances, the listing or compilation of questions may be ranked in order. The ranking or listing may be weighted in order of importance, relevancy, confidence level, or the like. For example, a top ranked question may be a question that, based on the remaining rows (e.g., R1-R5) in a decision table30, will most likely lead to a path to completion. As part of this ranking process, statistical information such as the STAT1, STAT2 percentages as illustrated inFIG.5may be used to augment or aid the ranking. Questions may also be presented that are most likely to increase the confidence level of the calculated tax liability or refund amount. In this regard, for example, those questions that resolve data fields associated with low confidence values may, in some embodiments, be ranked higher. The following pseudo code generally expresses how a rule engine64functions utilizing a fact cache based on the runtime canonical data62or the instantiated representation of the canonical tax schema46at runtime and generating non-binding suggestions66provided as an input to a UI control80. As described in U.S. application Ser. No.
14/097,057 previously incorporated herein by reference, data such as required inputs can be stored to a fact cache so that the needed inputs can be recalled at a later time, and to determine what is already known about variables, factors or requirements of various rules:
Rule engine (64)/Tax Logic Agent (TLA) (60)
// initialization process
Load_Tax_Knowledge_Base;
Create_Fact_Cache;
While (new_data_from_application)
    Insert_data_into_fact_cache;
    collection = Execute_Tax_Rules; // collection is all the fired rules and corresponding conditions
    suggestions = Generate_suggestions (collection);
    send_to_application(suggestions);
The TLA60may also receive or otherwise incorporate information from a statistical/life knowledge module70. The statistical/life knowledge module70contains statistical or probabilistic data related to the taxpayer. For example, the statistical/life knowledge module70may indicate that taxpayers residing within a particular zip code are more likely to be homeowners than renters. The TLA60may use this knowledge to weight particular topics or questions related to these topics. For example, in the example given above, questions about home mortgage interest may be promoted or otherwise given a higher weight. The statistical knowledge may apply in other ways as well. For example, tax forms often require a taxpayer to list his or her profession. These professions may be associated with transactions that may affect tax liability. For instance, a taxpayer may list his or her occupation as "teacher." The statistic/life knowledge module70may contain data that shows that a large percentage of teachers have retirement accounts and in particular 403(b) retirement accounts. This information may then be used by the TLA60when generating its suggestions66. For example, rather than asking generically about retirement accounts, the suggestion66can be tailored directly to a question about 403(b) retirement accounts. The data that is contained within the statistic/life knowledge module70may be obtained by analyzing aggregate tax data of a large body of taxpayers. For example, entities having access to tax filings may be able to mine their own proprietary data to establish connections and links between various taxpayer characteristics and tax topics. This information may be contained in a database or other repository that is accessed by the statistic/life knowledge module70. This information may be periodically refreshed or updated to reflect the most up-to-date relationships. Generally, the data contained in the statistic/life knowledge module70is not specific to a particular taxpayer but is rather generalized to characteristics shared across a number of taxpayers, although in other embodiments, the data may be more specific to an individual taxpayer. Still referring toFIG.7, the UI controller80encompasses a user interface manager82and a user interface presentation or user interface84. The user interface presentation84, which is controlled by the interface manager82, may manifest itself, typically, on a visual screen or display104that is presented on a computing device102(seen, for example, inFIG.13). The computing device102may include the display of a computer, laptop, tablet, mobile phone (e.g., Smartphone), or the like. Different user interface presentations84may be invoked using a UI generator85depending, for example, on the type of display or screen104that is utilized by the computing device.
For example, an interview screen with many questions or a significant amount of text may be appropriate for a computer, laptop, or tablet screen, but such a presentation may be inappropriate for a mobile computing device such as a mobile phone or Smartphone. In this regard, different interface presentations84may be prepared for different types of computing devices102. The nature of the interface presentation84may not only be tied to a particular computing device102but different users may be given different interface presentations84. For example, a taxpayer that is over the age of 60 may be presented with an interview screen that has larger text or different visual cues than a younger user. The user interface manager82, as explained previously, receives non-binding suggestions from the TLA60. The non-binding suggestions may include a single question or multiple questions that are suggested to be displayed to the taxpayer via the user interface presentation84. The user interface manager82, in one aspect of the invention, contains a suggestion resolution element88, which is responsible for resolving how to respond to the incoming non-binding suggestions66. For this purpose, the suggestion resolution element88may be programmed or configured internally. Alternatively, the suggestion resolution element88may access external interaction configuration files. Additional details regarding configuration files and their use may be found in U.S. patent application Ser. No. 14/206,834, which is incorporated by reference herein. Configuration files specify whether, when, and/or how non-binding suggestions are processed. For example, a configuration file may specify a particular priority or sequence for processing non-binding suggestions66, such as now or immediate, in the current user interface presentation84(e.g., interview screen), in the next user interface presentation84, in a subsequent user interface presentation84, or in a random sequence (e.g., as determined by a random number or sequence generator). As another example, a configuration file may classify certain non-binding suggestions as being ignored. A configuration file may also specify content (e.g., text) of the user interface presentation84that is to be generated based at least in part upon a non-binding suggestion66. A user interface presentation84may comprise pre-programmed interview screens that can be selected and provided to the generator element85for providing the resulting user interface presentation84or content or sequence of user interface presentations84to the user. User interface presentations84may also include interview screen templates, which are blank or partially completed interview screens that can be utilized by the generation element85to construct a final user interface presentation84on the fly during runtime. As seen inFIG.7, the UI controller80interfaces with the shared data store42such that data that is entered by a user in response to the user interface presentation84can then be transferred or copied to the shared data store42. The new or updated data is then reflected in the updated instantiated representation of the schema44. Typically, although not exclusively, in response to a user interface presentation84that is generated (e.g., interview screen), a user inputs data to the tax preparation software100using an input device that is associated with the computing device. For example, a taxpayer may use a mouse, finger tap, keyboard, stylus, voice entry, or the like to respond to questions.
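The suggestion resolution element88and its configuration files can likewise be sketched. The following Python fragment is a hypothetical, simplified dispatcher rather than the actual configuration format referenced above; the topic names and the Disposition labels are invented for illustration. It merely shows how a per-topic configuration entry might determine whether a non-binding suggestion66is processed in the current screen, deferred, or ignored. The kinds of responses a presentation may solicit from the taxpayer are described next.

# Hypothetical sketch of configuration-driven resolution of non-binding suggestions.
from enum import Enum
from typing import Dict, List, Tuple


class Disposition(Enum):
    NOW = "current_screen"       # insert into the current interview screen
    NEXT = "next_screen"         # defer to the next user interface presentation
    LATER = "subsequent_screen"  # queue for some subsequent presentation
    IGNORE = "ignore"            # classify the suggestion as ignored


# A stand-in for an external interaction configuration file (topic -> disposition).
CONFIG: Dict[str, Disposition] = {
    "earned_income": Disposition.NOW,
    "retirement": Disposition.NEXT,
    "hobby_income": Disposition.IGNORE,
}


def resolve(suggestions: List[Tuple[str, str]]) -> Dict[Disposition, List[str]]:
    """Group suggested questions by how the configuration says to process them."""
    queues: Dict[Disposition, List[str]] = {d: [] for d in Disposition}
    for topic, question in suggestions:
        disposition = CONFIG.get(topic, Disposition.LATER)   # default: defer
        queues[disposition].append(question)
    return queues


if __name__ == "__main__":
    incoming = [("earned_income", "What was your W-2 wage amount?"),
                ("hobby_income", "Did you sell any crafts this year?")]
    for disposition, questions in resolve(incoming).items():
        print(disposition.value, questions)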
The taxpayer may also be asked not only to respond to questions but also to include dollar amounts, check or un-check boxes, select one or more options from a pull down menu, select radio buttons, or the like. Free form text entry may also be requested of the taxpayer. For example, with regard to donated goods, the taxpayer may be prompted to explain what the donated goods are and describe the same in sufficient detail to satisfy requirements set by a particular taxing authority. Still referring toFIG.7, the TLA60is operatively coupled to a services engine90that is configured to perform a number of tasks or services for the taxpayer. For example, the services engine90can include a printing option92. The printing option92may be used to print a copy of a tax return, tax return data, summaries of tax data, reports, tax forms and schedules, and the like. The services engine90may also electronically file94or e-file a tax return with a tax authority (e.g., federal or state tax authority). Whether a paper or electronic return is filed, data from the shared data store42required for particular tax forms, schedules, and the like is transferred over into the desired format. With respect to e-filed tax returns, the tax return may be filed using the MeF web-based system that allows electronic filing of tax returns through the Internet. Of course, other e-filing systems may also be used other than those that rely on the MeF standard. The services engine90may also make one or more recommendations96based on the run-time data62contained in the TLA60. For instance, the services engine90may identify that a taxpayer has incurred penalties for underpayment of estimated taxes and may recommend that the taxpayer increase his or her withholdings or estimated tax payments for the following tax year. As another example, the services engine90may find that a person did not contribute to a retirement plan and may recommend96that a taxpayer open an Individual Retirement Account (IRA) or look into contributions to an employer-sponsored retirement plan. The services engine90may also include a calculator98that can be used to calculate various intermediate calculations used as part of the overall tax calculation algorithm. For example, the calculator98can isolate earned income, investment income, deductions, credits, and the like. The calculator98can also be used to estimate tax liability based on certain changed assumptions (e.g., how would my taxes change if I was married and filed a joint return?). The calculator98may also be used to compare and analyze differences between tax years. FIG.8is another schematic illustration of a system40′ for calculating taxes using rules and calculations based on declarative data structures. Those elements equivalent to the embodiment ofFIG.7are labeled with the same element numbers. In this alternative embodiment, the system40′ includes an estimation module110that writes to the shared data store42with estimates112or guesses of one or more data fields contained within the shared data store42. The estimates112or guesses may pertain to any number of tax topics and may include alphanumeric characters, a response to a Boolean operation, text, and the like. In this particular embodiment, the estimation module110assigns an estimated value to one or more data fields of the schema44contained in the shared data store42. The estimated value may be obtained in a number of ways. In one aspect, user input114is used to generate the estimated value.
For example, the user may be prompted by UI control80with a prompt84to enter a guess or estimate on a particular data field. In another aspect, a prior tax return or multiple tax returns116can be used to generate an estimated value. For example, taxpayer A may have a history of the past three years of tax return data (e.g., stored as proprietary or standardized files) stored or otherwise made available to tax preparation software100that shows yearly dividend income of $1,200, $1,350, and $1,400. The estimation module110may generate an average of $1,317 to be used as an estimate for a current year return. Alternatively, the estimation module110may employ more robust analytics than merely computing an average or mean value. In the context of this example, the estimation module110, seeing that dividends appear to be increasing in value each year, may attempt to find a function (e.g., linear or non-linear function) that fits the observable data and can be used to better estimate current year tax data. For example, in the above example, a curve fitting function may estimate the current year dividend at $1,525 rather than the average value of $1,317. Online resources118may also be used by the estimation module110to provide estimated values. Online resources118include, for example, financial services accounts for a taxpayer that can be accessed to estimate certain values. For example, a taxpayer may have one or more accounts at a bank, credit union, or stock brokerage. These online resources118can be accessed by the tax preparation software100to scrape, copy, or otherwise obtain tax relevant data. For example, online resources118may be accessed to estimate the value of interest income earned. A user's linked accounts may be accessed to find all of the interest income transactions that have occurred in the past year. This information may be used as the basis to estimate total interest income for the taxpayer. In another example, online resources118may be accessed to estimate the amount of mortgage interest that has been paid by a taxpayer, instead of waiting for a Form 1098 from the mortgage service provider. Still referring toFIG.8, third party information120may be used by the estimation module110to arrive at an estimated value for one or more data fields. Third party information120may include credit bureaus, government databases, and the like. For example, credit bureaus may include information on student loans taken out by a taxpayer. This information may be used by the estimation module110to determine the amount of interest paid on such loans, which may be qualified student loan interest. It should also be understood that the estimation module110may rely on one or more inputs to arrive at an estimated value. For example, the estimation module110may rely on a combination of prior tax return data116in addition to online resources118to estimate a value. This may result in more accurate estimations by relying on multiple, independent sources of information. The UI control80may be used in conjunction with the estimation module110to select those sources of data to be used by the estimation module110. For example, user input114requires the user to enter data using a user interface presentation84. The UI control80may also be used to identify and select prior tax returns116. Likewise, user names and passwords may be needed for online resources118and third party information120, in which case UI control80will be needed to obtain this information from the user.
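The dividend example above can be made concrete. The following Python sketch is only illustrative and is not the estimation module110itself; it contrasts a plain mean of prior-year values with a least-squares linear fit extrapolated one year forward. The exact extrapolated figure naturally depends on which curve-fitting function an embodiment chooses. The attributes that may accompany such estimates are described next.

# Illustrative comparison of two estimation strategies for a prior-year series
# (hypothetical helper; not the estimation module itself).
from statistics import mean
from typing import List


def average_estimate(history: List[float]) -> float:
    """Plain mean of prior-year values."""
    return mean(history)


def linear_fit_estimate(history: List[float]) -> float:
    """Least-squares straight line through (year index, value), extrapolated one year."""
    n = len(history)
    xs = list(range(1, n + 1))
    x_bar, y_bar = mean(xs), mean(history)
    numerator = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history))
    denominator = sum((x - x_bar) ** 2 for x in xs)
    slope = numerator / denominator
    intercept = y_bar - slope * x_bar
    return intercept + slope * (n + 1)          # next (current) year


if __name__ == "__main__":
    dividends = [1200.0, 1350.0, 1400.0]         # three prior tax years
    print(round(average_estimate(dividends)))     # 1317
    print(round(linear_fit_estimate(dividends)))  # about 1517 with this particular fit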
In one embodiment of the invention, the estimated values or other estimated data provided by the estimation module110may be associated with one or more attributes122as illustrated inFIG.9. The attributes122may indicate a label such as a source124or provenance of the estimated value (e.g., user input114, prior tax return116, etc.). In the example ofFIG.9, a source ID124indicates the particular source of the data that is used for the field. For example, source ID01may correspond to user input114. Source ID03may correspond to a prior year tax return116. Source ID05may correspond to online resources118while source ID06corresponds to third party information120. The attributes122may also include a confidence level126associated with each estimated field. The confidence level126is indicative of the level of trustworthiness of the estimated user-specific tax data and may be expressed in a number of different ways. For example, confidence level126may be broken down into intervals (e.g., low, medium, high) with each estimated value given an associated label (e.g., L—low, M—medium, H—high). Alternatively, confidence levels126may be described along a continuum without specific ranges (e.g., a range from 0.0 to 1.0 with 0.0 being no confidence and 1.0 being 100% confidence). The confidence level126may be assigned based on the source of the estimated user-specific tax data (e.g., source #1 is nearly always correct so estimated data obtained from this source will be automatically assigned a high confidence level). In some embodiments, the estimation module110may acquire a plurality of estimates from different sources (e.g., user input114, prior year tax returns116, online resources118, third party information120) and only write the “best” estimate to the shared data store42(e.g., the source with the highest confidence level126). Alternatively, the estimation module110may be configured to ignore data (e.g., sources) that have confidence levels126below a pre-determined threshold. For example, all “low” level data from a source may be ignored. Alternatively, all the data may be stored in the shared data store42including, for example, the attribute122of the confidence level126with each entry. The tax calculation engine50may ignore data entries having a confidence level below a pre-determined threshold. The estimation module110may generate a number of different estimates from a variety of different sources and then write a composite estimate based on all the information from all the different sources. For example, sources having higher confidence levels126may be weighted more than other sources having lower confidence levels126. Still referring toFIG.9, another attribute122may include a confirmation flag128that indicates that a taxpayer or user of the tax preparation software100has confirmed a particular entry. For example, confirmed entries may be given an automatic “high” confidence value as these are finalized by the taxpayer. Another attribute122may include a range of values130that expresses a normal or expected range of values for the particular data field. The range of values130may be used to identify erroneous estimates or data entries that appear to be incorrect because they fall outside an intended range of expected values. Some estimates, such as responses to Boolean expressions, do not have a range of values130.
In this example, if the number of estimated dependents is more than five (5), the tax logic agent60may incorporate into the rules engine64attribute range information that can be used to provide non-binding suggestions to the UI control80recommending a question to ask the taxpayer about the high number of dependents (prompting the user with “are you sure you have 7 dependents”). Statistical data may also be used instead of specific value ranges to identify suspect data. For example, standard deviation may be used instead of a specific range. When a data field exhibits statistical deviation beyond a threshold level, the rules engine64may suggest a prompt or suggestion66to determine whether the entry is legitimate or not. Additional details regarding methods and systems that are used to identify suspect electronic tax data may be found in U.S. Pat. No. 8,346,635 which is incorporated by reference herein. Referring back toFIG.8, in this embodiment, the tax logic agent60includes, within or as part of the rules engine64, attribute rules130that are incorporated and used to generate the non-binding suggestion. For example, as explained above, when an estimated value is input or otherwise transferred to the shared data store42, this estimated value may fall outside a generally accepted range of values. This may prompt the TLA60to suggest a confirmatory question to the UI control80to confirm the accuracy of the estimated value that has been obtained. Likewise, various data fields may be associated with a low level of confidence as seen inFIG.9. Questions relating to tax topics that incorporate these low confidence fields may be promoted or otherwise ranked higher so that accurate values may be obtained from the taxpayer. Conversely, if a particular estimated tax field is associated with a high level of confidence, questions concerning this field may be demoted to a lower importance using the attribute rules130. For example, multiple fields with a high level of confidence could be presented to the user in a single interview screen to confirm the accuracy of this information without the need to walk through individual questions. In some embodiments, each estimated value produced by the estimation module110will need to be confirmed by the user using the UI control80. For example, the user interface manager82may present estimated data fields to the user for confirmation or verification using a user interface presentation84. In other embodiments, however, the user may override data using the user interface presentation84. Some estimated data, for example, data having a high confidence level126, may not need to be confirmed but can be assumed to be accurate. FIG.10illustrates a user interface presentation84on a computing device102that incorporates the attribute rules130to arrive at a confidence level for tax calculations. The user interface presentation84appears on a screen104of the computing device102. As seen inFIG.10, the dollar amount of the calculated federal refund is listed along with the refund amount of the calculated state refund. The user interface presentation84includes a confidence level indicator132. The confidence level indicator132indicates the overall or aggregate confidence level in the tax calculation. The tax calculation could include a refund amount as illustrated inFIG.10but it may also include a taxes due amount. In the example given inFIG.10, the confidence level indicator132is expressed as a bar134in a bar meter type implementation.
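Whatever visual form the indicator takes, the aggregate value behind it can be illustrated with a small sketch. The following Python function is a hypothetical simplification, not the actual aggregation used by the tax preparation software100; the thresholds and field names are invented for illustration. It maps per-field confidence values onto a low, medium, or high label, along with the low-confidence topics that drive the result.

# Hypothetical aggregation of per-field confidence values into an overall indicator.
from typing import Dict, List, Tuple


def aggregate_confidence(fields: Dict[str, float]) -> Tuple[str, float, List[str]]:
    """Return (label, overall score, topics driving low confidence).

    Confidence values are assumed to lie on a 0.0-1.0 continuum, one of the
    representations described above.
    """
    overall = sum(fields.values()) / len(fields)
    if overall < 0.4:
        label = "LOW"
    elif overall < 0.7:
        label = "MEDIUM"
    else:
        label = "HIGH"
    # Topics that drag the aggregate down; candidates for hyperlinked drill-down.
    drivers = [name for name, conf in fields.items() if conf < 0.4]
    return label, overall, drivers


if __name__ == "__main__":
    per_field = {"earned income": 0.3, "deductions": 0.35, "interest income": 0.9}
    label, score, drivers = aggregate_confidence(per_field)
    print(label, round(score, 2), drivers)   # MEDIUM 0.52 ['earned income', 'deductions']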
The confidence level indicator132may take a number of different forms, however. For example, the confidence level indicator132may be in the form of a gauge or the like, such as that illustrated inFIG.11. In the example ofFIG.11, the confidence level indicator132is indicated as being “low.” Of course, the confidence level indicator132may also appear as a percentage (e.g., 0% being low confidence, 100% being high confidence) or as a text response (e.g., “low,” “medium,” and “high” or the like). Other graphic indicia may also be used for the confidence level indicator132. For example, the color of a graphic may change or the size of the graphic may change as a function of the level of confidence. Referring toFIG.11, in this instance, the user interface presentation84may also include hyperlinked tax topics136that are the primary sources for the low confidence in the resulting tax calculation. For example, the reason that low confidence is given is that there is low confidence in the amount listed on the taxpayer's W-2 form that has been automatically imported into the shared data store42. This is indicated by the “LOW” designation that is associated with the “earned income” tax topic. In addition, in this example, there is low confidence in the amount of itemized deductions being claimed by a taxpayer. This is seen with the “LOW” designation next to the “deductions” tax topic. Hyperlinks136are provided on the screen so that the user can quickly be taken to and address the key drivers of the uncertainty in the calculated tax liability. FIG.12illustrates the operations of one illustrative method for calculating tax liability according to an embodiment of the invention. In operation1000, a user initiates the tax preparation software100on a computing device102as seen, for example, inFIG.13. The tax preparation software100may reside on the actual computing device102that the user interfaces with or, alternatively, the tax preparation software100may reside on a remote computing device103such as a server or the like as illustrated. In such an instance, the computing device102that is utilized by the user or taxpayer communicates with the remote computing device103using an application105contained on the computing device102. The tax preparation software100may also be run using conventional Internet browser software. Communication between the computing device102and the remote computing device103may occur over a wide area network such as the Internet. Communication may also occur over a private communication network (e.g., mobile phone network). Referring back toFIG.12, after initiating the tax preparation software100, the tax preparation software100, in operation1100, gathers or imports tax related data from the one or more data sources48as illustrated inFIGS.7and8. Note that the gathering of tax related data from the one or more data sources48may occur at the time the tax preparation software100is run. Alternatively, the gathering of tax related data from the one or more data sources48may occur over a period of time. For example, data sources48may be periodically queried over time (e.g., during a tax reporting year) whereby updated information is stored in a database (not shown) or the like that is then accessed by the tax preparation software100. This option may improve the efficiency and speed of tax return preparation as the information is already available.
In one embodiment, the gathering or importation of data sources such as prior tax returns48b, online resources48c, and third party information48dis optional. For example, a taxpayer may want to start the process from scratch without pulling information from other sources. However, in order to streamline and more efficiently complete a tax return, other users may desire to obtain tax related information automatically. This would reduce the number of interview or prompt screens that are presented to the user if such information were obtained automatically by the tax preparation software100. A user may be given the opportunity to select which data sources48they want accessed and searched for relevant tax related data that will be imported into the shared data store42. A user may be asked to submit his or her account and password information for some data sources48using the UI control80. Other data sources48such as some third party data sources48dmay be accessed without such information. Next, as seen in operation1200, after the schema44is populated with the various imported or entered data fields from the data sources48, the tax calculation engine50, using the calculation graphs14, reads data from the shared data store42, performs tax calculations, and writes back data to the shared data store42. The schema44may also be populated with estimates or educated guesses as explained herein using the estimation module110as described in the context of the embodiment ofFIG.8. Operation1200may utilize the method1210, as described above, to efficiently perform the tax calculations using the tax calculation engine50and the calculation graph(s)14. In operation1300, the tax logic agent60reads the run time data62which represents the instantiated representation of the canonical tax schema44at runtime. The tax logic agent60then utilizes the decision tables30to generate and send non-binding suggestions66to the UI control80as seen in operation1400. Alternatively, the tax logic agent60may determine that completeness has been achieved across the tax topics, in which case a done instruction may be delivered to the UI control as seen in operation1500. If not done, the process continues whereby the user interface manager82will then process the suggestion(s)66using the suggestion resolution element88for resolving how to respond to the incoming non-binding suggestions66as seen in operation1600. The user interface manager82then generates a user interface presentation84to the user as seen in operation1700whereby the user is presented with one or more prompts. The prompts may include questions, affirmations, confirmations, declaratory statements, and the like. The prompts are displayed on a screen104of the computing device102whereby the user can then respond to the same by using one or more input devices associated with the computing device102(e.g., keyboard, mouse, finger, stylus, voice recognition, etc.). Still referring toFIG.12, as seen in operation1800, the response or responses that are given by the user of the tax preparation software100are then written back to the shared data store42to thereby update all appropriate fields of the schema44. The process then continues with operation1200and proceeds as explained above until a completeness state has been reached and a done instruction is sent to the UI control80. FIG.14illustrates a schematic representation of one preferred embodiment of the invention in which user input via the user interface presentation84is minimized.
As seen inFIG.14, tax calculations2000are performed based on a number of inputs including user inputs2100that are input using the user interface presentation84that appears on the computing device102. It should be noted that tax calculations2000can be made even though there may be some missing data entry that is not incorporated into the tax calculation2000. While the tax return may not be in a condition to be filed, the tax liability or a sub-component thereof (e.g., total itemized deductions, or gross income) can be calculated. These user inputs2100are combined with data sources2200as well as estimates2300. Data sources2200are obtained, for example, as described previously with respect to data sources48. Estimates2300are obtained, as explained previously, using the estimation module110. In one aspect of the invention, a large portion of the data needed for the calculation and preparation of taxes is obtained either from data sources2200, estimates2300, or both. The user input2100aspect may be minimized by first populating relevant fields using data sources2200and/or estimates2300. The user input2100may be used to input missing data that was not otherwise obtained using data sources2200or estimates2300. User input2100, however, may also be used to verify estimates or verify sourced data. For example, prior to being incorporated into tax calculations (e.g., stored within the shared data store42), the user may be prompted to accept, reject, or alter the values of sourced data2200or estimates2300. User input2100may also be used to resolve conflicts. For example, sourced data2200and estimates2300may conflict with one another and user input2100may be required to resolve the conflict. User input2100may also be used to accept or reject sourced data2200or estimates2300. For example, a user may know that a particular estimate2300is incorrect and plan to input this particular value manually. The user may be given the option to override the importation and utilization of sourced data2200and estimates2300. FIG.15generally illustrates components of a computing device102,103that may be utilized to execute the software for automatically calculating or determining tax liability and preparing an electronic or paper return based thereon. The components of the computing device102include a memory300, program instructions302, a processor or controller304to execute program instructions302, a network or communications interface306(e.g., for communications with a network), and an interconnect308between such components. The computing device102,103may include a server, a personal computer, laptop, tablet, mobile phone, or other portable electronic device. The memory300may be or include one or more of cache, RAM, ROM, SRAM, DRAM, RDRAM, EEPROM and other types of volatile or non-volatile memory capable of storing data. The processor unit304may be or include multiple processors, a single threaded processor, a multi-threaded processor, a multi-core processor, or other type of processor capable of processing data. Depending on the particular system component (e.g., whether the component is a computer or a hand held mobile communications device), the interconnect308may include a system bus, LDT, PCI, ISA, or other types of buses, and the communications or network interface may, for example, be an Ethernet interface, a Frame Relay interface, or other interface. The interface306may be configured to enable a system component to communicate with other system components across a network which may be a wireless or various other networks.
It should be noted that one or more components of the computing device102,103may be located remotely and accessed via a network. Accordingly, the system configuration illustrated inFIG.15is provided to generally illustrate how embodiments may be configured and implemented. Method embodiments may also be embodied in, or readable from, a computer-readable medium or carrier, e.g., one or more of the fixed and/or removable data storage devices and/or data communications devices connected to a computer. Carriers may be, for example, magnetic storage media, optical storage media, and magneto-optical storage media. Examples of carriers include, but are not limited to, a floppy diskette, a memory stick or a flash drive, CD-R, CD-RW, CD-ROM, DVD-R, DVD-RW, or other carrier now known or later developed capable of storing data. The processor304performs steps or executes program instructions302within memory300and/or embodied on the carrier to implement method embodiments. Embodiments, however, are not so limited and implementation of embodiments may vary depending on the platform utilized. Accordingly, embodiments are intended to exemplify alternatives, modifications, and equivalents that may fall within the scope of the claims.
77,688
11861735
DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE INVENTION Referring now toFIGS.1through5, exemplary inventive practice conceives of financial data as encompassing typical data, project structure, and OM&S costs. As shown inFIG.1, the three kinds of OM&S costs are commitments, obligations, and actual costs. All three kinds of OM&S costs—viz., those designated as “commitments,” “obligations,” and “actual costs”—are contained in the billing element (BE) work breakdown structure (WBS). In addition, the kind of OM&S cost designated as “other actual costs” is contained in the network activities (NWA). As shown inFIG.2, ERP transaction code S_ALR_87013542 is capable of delimiting OM&S costs in terms of four cost elements as contained in three cost element categories, viz., 1511.2000, 6790.3000, and 6790.5000. It is notable that the four cost elements for ERP transaction code S_ALR_87013542 somewhat parallel the five OM&S categories defined by Secretary of the Navy Instruction 5200.44 (“Operating Materials and Supplies—Accountability and Management”), 29 Mar. 2019, which are listed inFIG.3. See also DoD Financial Regulation, Volume 11B, Chapter 56 (“Operating Materials and Supplies”), December 1994. For purposes of evaluating accuracy, convenience, and processing time, the present inventors tested their methodology with respect to sixty-four funding documents. Generally speaking, accuracy was considered the most important factor, while time and convenience pertained to ease of accomplishment by project managers and project business financial managers. The financial data were inventively obtained by using transaction codes S_ALR_87013542, ZRQIS0003, and ZRQIS0002. Typical financial data were taken from ERP transaction code ZRQIS0003. Project structure data (the lower level WBS) were taken from ZRQIS0002. OM&S costs were taken from transaction code S_ALR_87013542. Four reports were derived from S_ALR_87013542, one report was derived from ZRQIS0003, and one report was derived from ZRQIS0002. As illustrated inFIG.4, according to an exemplary inventive embodiment six ERP reports were data-filtered in order to attain a balance sheet that was to include OM&S costs. Six reports in total were exported from ERP and imported, respectively, into six Excel worksheets. Macros were executed to produce the balance sheet. In particular, the macros were executed to produce a composite report (synonymously referred to herein as a “master worksheet”) and pivot tables. The master worksheet displayed the following: the original costs from the ZRQIS0003 report (left column ofFIG.5); the OM&S costs from the S_ALR_87013542 reports (middle column ofFIG.5); and total costs (right column ofFIG.5). The total costs were obtained by adding the OM&S costs to the original costs. The execution time for the macros was less than one minute. The overall time to retrieve the six ERP reports in the demonstration was about 30 minutes. With some approximation, it may be expected that this time will increase proportionately for a practitioner of the present invention, as the number of funding documents increases. The accuracy of the inventive embodiment was evaluated by comparing the available budget calculated by the inventive method to the funds remaining from the ERP transaction code ZRQIS0001 report, on the assumption that the ZRQIS0001 report provides the correct balance by sale orders. The calculated available budget, considered along with OM&S costs, equals the funds remaining as described by the ZRQIS0001 report. 
That is: [Available Budget (ZRQIS0003)]−[OM&S Costs]=[Funds Remaining (ZRQIS0001)] A premise of this comparison to the ZRQIS0001 report was that, in order to be accurate, the inventive method must account for all OM&S costs so that the calculated available budget agrees with the value from the ZRQIS0001 report. When the available budget calculated by an embodiment of the inventive method was compared to the funds remaining from an ERP ZRQIS0001 report, good agreement was found for funding documents that do not have OM&S costs. There were sixteen funding documents with OM&S costs, and in this case good agreement was found for eight funding documents. For the other eight funding documents, the difference between the calculated available budget and the funds remaining was a few hundred dollars, except for one. In the absence of further investigation, among the possible explanations for these differences is that not all OM&S costs were assigned to the correct cost elements or to the correct funding documents. Exemplary practice of the present invention reports expenses including OM&S costs, Budget, and Available Budget. The inventive method demonstrated that it could match the funds remaining from the ZRQIS0001 report when there were no OM&S costs. The correct reporting of OM&S costs is still problematic insofar as it causes disagreement between the calculated available budget and funds remaining for a small number of funding documents. The present inventors also compared five different ERP transaction codes with regard to determining OM&S costs. Each transaction code was tested for its ability to determine OM&S costs, and was rated in terms of accuracy, time, and convenience for the user. A transaction code was rated as accurate if it found all of the OM&S costs. The five ERP transaction codes that were evaluated were: ERP transaction code ZRQIS0002 (Budget Hierarchy Report); ERP transaction code ZRQIS0003 (Project Hierarchy Report); ERP transaction code CJI3 (Actual Line Items Report); ERP transaction code CJI5 (Commitment Line Items Report); and ERP transaction code S_ALR_87013542 (Project Plan/Actual/Variance). Many project managers and project business financial managers were familiar with ERP transaction codes ZRQIS0002, ZRQIS0003, CJI3, and CJI5. The ERP transaction code S_ALR_87013542 was not as well-known as the other four transaction codes. In the comparative study testing of the five transaction codes, ERP transaction code ZRQIS0002 was found to be accurate but rated low on time and convenience. The OM&S costs were found by manually drilling down on the commitments, obligations, and actual costs at the Billing Element (BE) Work Breakdown Structure (WBS). As the number of projects would increase, the number of times the user had to manually drill down would increase by a factor of three. The transaction code ZRQIS0003 was able to organize the financial data in an abbreviated project structure showing the Billing Element Work Breakdown Structure (BE WBS) and the Network Activities (NWA) by funding documents. This attribute was found only in the ZRQIS0003 report, while other ERP reports such as the ZRQIS0002 report lacked this desired detail. The ZRQIS0003 report displayed the OM&S costs by drilling down at the three different costs (commitments, obligations, and actual costs) in much the same way as described hereinabove for the ZRQIS0002. The ERP transaction codes CJI3 and CJI5 were the standard ERP vehicles in the industry to determine the OM&S costs.
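Before describing how each of these transaction codes fared in the study, it may be noted that the accuracy criterion above reduces to a simple per-document check. The following Python sketch is offered only for illustration; the values would in practice come from the exported ERP reports, and the funding document identifiers and names here are invented.

# Hypothetical reconciliation check: Available Budget (ZRQIS0003) minus OM&S costs
# should equal Funds Remaining (ZRQIS0001) for each funding document.
from typing import Dict, List


def reconcile(available_budget: Dict[str, float],
              oms_costs: Dict[str, float],
              funds_remaining: Dict[str, float],
              tolerance: float = 0.01) -> List[str]:
    """Return the funding documents whose balances do not agree."""
    mismatched = []
    for doc, budget in available_budget.items():
        calculated = budget - oms_costs.get(doc, 0.0)
        reported = funds_remaining.get(doc, 0.0)
        if abs(calculated - reported) > tolerance:
            mismatched.append(doc)
    return mismatched


if __name__ == "__main__":
    budget = {"DOC-A": 50_000.00, "DOC-B": 20_000.00}
    oms = {"DOC-A": 4_250.00}
    remaining = {"DOC-A": 45_750.00, "DOC-B": 19_600.00}
    print(reconcile(budget, oms, remaining))   # ['DOC-B']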
In the study, the CJI3 transaction code was found to be accurate and easier to use than the ZRQIS0002 transaction code and the ZRQIS0003 transaction code. The CJI5 transaction code was found to be inaccurate. Several commitments and obligations did not show up in the CJI5 report, but were itemized in the ZRQIS0002 report and the S_ALR_87013542 report. The S_ALR_87013542 transaction code was found to be a good alternative for both the CJI3 transaction code and the CJI5 transaction code. Unlike the four other transaction codes being tested, the ERP transaction code S_ALR_87013542 was capable of accepting inputs as projects, work breakdown structure (WBS) elements, and networks. The OM&S costs were determined by entering the BE WBS to represent multiple funding documents. As shown inFIG.2, the OM&S costs were found at three cost elements: 1511.2000 (OM&S Held for Use); 6790.3000 (Other Expense and Budget); and 6790.5000 (Other Expenses Not Requiring Budgetary Resources). The present inventors inventively availed themselves of the unique capabilities of the S_ALR_87013542 transaction code. The three cost elements were readily displayed in the S_ALR_87013542 report. The OM&S costs were acquired by drilling down on the three types of costs (commitments, obligations, and actual costs) at the three cost elements (1511.2000; 6790.3000; 6790.5000). Generally speaking, commitment costs and obligation costs are related to each other, and together are clearly distinguishable from actual costs. Accordingly, it may be convenient to consider the three types of costs—viz., commitments, obligations, and actual costs—as constituting two categories of costs, viz., (i) commitments and obligations, and (ii) actual costs. For the cost element 1511.2000 there were two reports. One report corresponding to cost element 1511.2000 showed the OM&S costs at the WBS as actual costs. The second report corresponding to cost element 1511.2000 showed the OM&S costs at the WBS as commitments and obligations. For the cost element 6790.3000, the corresponding report showed OM&S costs at both the WBS and the NWA as actual costs. For the cost element 6790.5000, the corresponding report showed OM&S costs at the NWA as actual costs. The “drill down” technique—specifically, drilling down on (i) commitments and obligations and (ii) actual costs—was used four times, and was effectuated independently of the number of funding documents or projects. The S_ALR_87013542 transaction code was found to be accurate, and most efficient as compared to the four other reports (ZRQIS0002; ZRQIS0003; CJI3; CJI5), for retrieving OM&S costs. In an example of inventive practice, the present inventors started with the data from a ZRQIS0003 report in order to generate a composite report (master worksheet). Typically, OM&S costs appeared at the billing element work breakdown structure (BE WBS), the two lower level (levels 3 and 4) WBSs, and at the network activities (NWAs). The ZRQIS0003 report would sometimes show the OM&S costs at the BE WBS, but did not display them at the NWAs. The ZRQIS0003 report also would not display the two lower level WBSs below the BE WBS. The lower level WBSs and their titles were obtained from the ERP ZRQIS0002 report and inserted into the ZRQIS0003 data to create a more detailed project structure. In this example, the ERP transaction code S_ALR_87013542 was used to obtain the OM&S costs at the BE WBS, lower level WBSs, and NWAs.
On a few occasions OM&S costs would show up for the same BE WBS in both the ZRQIS0003 and S_ALR_87013542 reports. The present inventors decided to include the OM&S costs from the ZRQIS0003 in the composite report, but exclude the OM&S costs from the S_ALR_87013542 in the composite report. This decision was based on a comparison between the available budget calculated in this example, versus the available budget calculated by the ERP transaction code ZRQIS0001 (Project Funding Report). Agreement between the two calculations was found by not including the OM&S costs for the WBS at the BE as indicated in the S_ALR_87013542 report. A non-inventive approach to creating a composite report (master worksheet) in Excel would represent a tedious and time-consuming proposition especially prone to human error, such as involving manual gathering and assembling of items of information—e.g., inserting formulas, cutting, copying, and pasting among multiple worksheets having millions of pieces of data. In contrast, exemplary inventive practice provides for at least one data-filtering macro in Excel, such as written in the computer language known as Visual Basic for Applications. The one or more inventive macros afford an automation that reduces the time it takes to organize and generate a composite report. Generally speaking, a typical Microsoft Excel document (file) displays one or more tabbed worksheets, with the number of tabbed worksheets at the option of the Excel user. For instance, if the Excel document displays three tabbed worksheets, then Excel labels these worksheets as “Sheet1,” “Sheet2,” and “Sheet3,” respectively. In general, a pivot table is a data summarization tool that can automatically sort and sum data, such as data contained in a more extensive table. By “pivoting” (arranging and rearranging) statistics, a pivot table serves to set forth or emphasize useful or particularly useful information. Microsoft Excel software has a general built-in capability of displaying pivot tables in separate worksheets. Exemplary inventive practice creates, by means of at least one macro, a visually and informatively enhanced presentation of a balance sheet in the form of a pivot table, such as shown by way of example inFIGS.11and12. The pivot-tabular balance sheet shown inFIG.12displays the budgeted amount, assigned costs with and without OM&S costs, OM&S costs, and available budget with OM&S costs by funding document (standard document number). The user is able to customize the pivot-tabular balance sheet with simple clicks of the computer mouse. For instance, in accordance with a customizable pivot table's many features, filters may be applied to display specific funding documents, resource sponsors, or contract end dates. The user may choose to change the design style of a pivot table, and/or display various other data on a master worksheet or other pivot table. 
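Although the automation described herein is implemented as Excel macros written in VBA, the kind of pivot-table summarization just described, rolling ERP cost lines up by network activity and presenting a balance sheet keyed by funding document, can be sketched for illustration in Python with pandas; the column names below are hypothetical stand-ins for the exported ERP fields, and this fragment is not the inventors' implementation. The columnar designations that a composite report may contain are enumerated next.

# Illustrative pandas analogue of the intermediate and summary pivot tables
# (hypothetical column names; the patent's implementation uses Excel VBA macros).
import pandas as pd

# Rows as they might be exported from an ERP cost report (cf. the raw data of FIG. 10).
raw = pd.DataFrame({
    "funding_document": ["DOC-A", "DOC-A", "DOC-A", "DOC-B"],
    "network_activity": ["NWA-100", "NWA-100", "NWA-200", "NWA-300"],
    "cost": [1200.00, 300.00, 450.00, 980.00],
    "material_description": ["fasteners", "cable", "sensor", "bracket"],
})

# Intermediate pivot: total cost per network activity (cf. FIG. 11).
intermediate = raw.pivot_table(index="network_activity", values="cost", aggfunc="sum")

# Summary pivot: costs rolled up by funding document (cf. the balance sheet of FIG. 12).
summary = raw.pivot_table(index="funding_document", values="cost", aggfunc="sum")
summary = summary.rename(columns={"cost": "oms_costs"})

print(intermediate)
print(summary)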
Depending on the inventive embodiment, a composite report (master worksheet) may contain at least some of the following columnar designations (and/or other columnar designations not enumerated among the following): Object Number (PS); Short Text; Assignment; Funded Program; Control Key; Actual Work; Contract End Date; Standard Document; Appropriation; Resource Sponsor; ACRN (Accounting Classification Reference Number); Work Center; Budgeted; Planned; Commitments; Obligations; Actual Costs; Assigned Costs; Available Budget; Remaining Planned; *Commitments; *Obligations; *Actual Costs; *Assigned Costs; **Commitments; **Obligations; **Actual Costs; **Assigned Costs; **Available Budget; **Remaining Planned; Owner. An exemplary inventive method uses ERP transaction code S_ALR_87013542 and an Excel macro to data-filter the data from six ERP reports and to present the resulting balance sheet data in the form of one or more pivot tables. The transaction code S_ALR_87013542 was shown to be accurate and more convenient in determining the OM&S costs than other ERP methods such as CJI3 and CJI5. As uniquely featured by a report inventively obtained via ERP transaction code S_ALR_87013542, desired OM&S costs are segregated and displayed in three cost element categories (1511.2000, 6790.3000, and 6790.5000). This feature is available in other ERP reports only by manual customization, for instance of the CJI3 and CJI5 reports. The generation of the balance sheet to show expenses, including the OM&S costs, budget, and available budget, requires the manipulation of hundreds to millions of pieces of data, depending on the number of funding documents. Previous to the present invention, this task has been accomplished in Excel by hand, e.g., by inserting formulas, cutting, copying, and pasting among multiple worksheets. In contrast, exemplary practice of the present invention takes advantage of the capabilities of Excel so as to implement one or more macros to perform data-filtering and to generate one or more pivot-tabular visual presentations of a balance sheet, thereby facilitating the generation of balance sheets. Alternative methods to retrieve the OM&S costs are the ZRQIS0002, ZRQIS0003, CJI3, and CJI5 reports. The ZRQIS0002 and ZRQIS0003 reports are accurate but more time-consuming to use than the S_ALR_87013542 report. It was found that OM&S costs from the CJI3 report are in agreement with the S_ALR_87013542 report. The CJI5 report was found to be incorrect and should not be used. The S_ALR_87013542 report was found to be accurate and most convenient for the user. An alternative way to generate the balance sheet is to insert formulas and cut and paste the data by hand, but this is time-consuming and tedious. The mentioned alternatives may lead to incorrect reporting of the OM&S costs and considerable time spent arriving at the wrong results. With reference toFIGS.6through12, exemplary practice of the present invention uses data in ERP and pivot tables in Microsoft Excel. According to exemplary inventive practice, several large sets of financial data, such as shown inFIG.10, are exported from ERP and saved on the user's computer. The alpha-numeric data, consisting of a mixture of text and numbers, are imported into multiple Excel worksheets to be processed. The present inventors developed macros using the computer language Visual Basic for Applications (VBA) in Excel. As illustrated by way of example inFIG.9, the present invention's implementation of one or more macros automates the process of generating a balance sheet.
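As a companion to the pivot-table sketch above, the assembly of a composite report, in which original costs are joined with OM&S costs and summed into total costs and an available budget, can likewise be illustrated in pandas. Again, the column names are hypothetical and this fragment is not the inventors' VBA implementation.

# Illustrative assembly of a composite report (master worksheet): original costs
# joined with OM&S costs and summed into total costs (hypothetical column names).
import pandas as pd

original = pd.DataFrame({
    "funding_document": ["DOC-A", "DOC-B"],
    "budgeted": [60_000.00, 25_000.00],
    "original_costs": [41_500.00, 19_600.00],      # as reported by the ZRQIS0003 data
})

oms = pd.DataFrame({
    "funding_document": ["DOC-A"],
    "oms_costs": [4_250.00],                        # as retrieved via S_ALR_87013542
})

composite = original.merge(oms, on="funding_document", how="left").fillna({"oms_costs": 0.0})
composite["total_costs"] = composite["original_costs"] + composite["oms_costs"]
composite["available_budget"] = composite["budgeted"] - composite["total_costs"]

print(composite)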
Other embodiments of the present invention are shown respectively inFIGS.6and7and inFIG.8.FIGS.6through8are each schematically illustrative of exemplary inventive practice, wherein one or more Excel macros are produced to retrieve financial data from ERP, and one or more pivot tables are created to organize, categorize, and manipulate the financial data. Particularly with reference toFIG.9, an exemplary inventive embodiment includes three macros. A first macro is used to import the data, from the ERP reports, to separate “raw” worksheets in Excel. A second macro creates pivot tables, which are utilized to speed up the data processing by organizing data for finding and extracting the desired data. The data are further organized, categorized, and manipulated by the second macro to create the composite report (master worksheet) and one or more summary pivot tables, such as a “Standard 1” or an “Available Budget” pivot table. A third macro is used to extract data from the summary pivot table(s) to create customized worksheets for specific funding documents. In comparison to writing VBA code, creating pivot tables was found by the present inventors to be a more efficient way for accomplishing same or similar tasks of manipulating data. The present invention's pivot tables served two functions. First, at least one intermediate presentation pivot table was implemented as “intermediate” means to summarize particular sets of data for further processing by the inventive macro(s). Second, at least one summary presentation pivot table was implemented as “summary” means to display financial data in a dynamically customizable fashion. Many inventive embodiments create several intermediate pivot tables, an example of which is the intermediate pivot table shown inFIG.11(which is an abbreviated version of a larger table). As shown inFIG.10, the data from ERP are presented in columns setting forth the date, network activity, costs, and material description. The same Network Activity appears in several instances with material purchases. An intermediate pivot table, such as that shown inFIG.11, succeeds in calculating the total cost for each NWA. A master worksheet (composite report), which itself is a pivot table, contains data from ERP reports and intermediate pivot tables. A summary pivot table allows the user to customize the data to be finally presented. The summary pivot table draws its data from a larger data set, viz., the master worksheet. U.S. Pat. No. 10,282,407 B1 to inventor Gilbert F. Lee, issue date 7 May 2019, is pertinent to the instant disclosure and is hereby incorporated herein by reference. The present invention, which is disclosed herein, is not to be limited by the embodiments described or illustrated herein, which are given by way of example and not of limitation. Other embodiments of the present invention will be apparent to those skilled in the art from a consideration of the instant disclosure, or from practice of the present invention. Various omissions, modifications, and changes to the principles disclosed herein may be made by one skilled in the art without departing from the true scope and spirit of the present invention, which is indicated by the following claims.
20,229
11861736
DESCRIPTION OF EXAMPLE EMBODIMENTS
Social network operators strive to accommodate a diverse set of users. As discussed above, one way to accomplish this is to provide internationalization and localization strategies that facilitate communication between users in hundreds of different languages throughout the world. This allows users from diverse backgrounds to utilize the social networking platform to communicate with others that speak different languages and fosters the spread of new ideas and collaboration. In fact, automatic language translation engines, site customizations, and other initiatives that increase international adoption have become a large focus of investment for operators. However, despite its near universal appeal and intuitive nature, music is often neglected as a communication medium. For example, music can be used to communicate a host of human emotions, reactions, announcements, time periods, and senses. For example, a melody performed on an electronic synthesizer over drums nearly instantly conjures images of the 1980s to the listener. As another example, low stacked chords instill a sense of fear in the listener. Upbeat fast rhythms conjure images of happiness to the listener. As yet another example, the happy birthday melody proclaims happy birthday to every listener that hears it. Well-known melodies can express feelings of celebration, party, congratulations, sorrow, fright, or even express condolences to the listener, in addition to a nearly unlimited set of additional expressions and feelings. These musical associations are understood by a wide variety of cultures and do not require translation to convey the intended message. In addition, music can be used to accentuate other communications. For example, dark dreary low chords accompanying an opinion article can express that a user disagrees with the opinion expressed in the article, even without sending a single message to that effect. A happy light melody can express happy sentiments over a communication. Moreover, music can enhance the receiving user's experience of the communication. For example, music can increase suspense or introduce feelings of celebration or fear in the listener, especially when timed with or otherwise coordinated with a visual message. Whether as a standalone communication, or accompanied with other visual communications or expressions, mechanisms for harnessing the powerful communication medium of music in social-networking environments remain largely untapped. The teachings of the present disclosure aim to harness music as a communication medium by enabling users to draft musical compositions tailored to each user's musical sophistication. A music composition interface enables recording or drafting of original musical works and enables reproduction and performance of drafted compositions. These compositions can then be embedded in musical messages just as written words are. In addition, compositions can be associated with posts or stand alone as posts on their own without accompaniment. In certain embodiments, a user interface is provided in a social-networking application or website that enables users to compose, record, edit, and share music with others. For example, a typical social-networking application may contain a news feed that contains stories, photographs, videos, and other communications posted by related users. The social-network may also have a messaging service that allows users to communicate with a more limited set of users.
The news feed may have an interface that initiates a composition process that provides for the creation and sharing of content with other users. The messaging application provides a similar interface to allow a user to compose messages including text, video, audio, animations, stickers, emojis, drawings, and the like with other users in a messaging session. In certain embodiments, the composition interface includes an interface for composing an original music composition. For example, selection of a button associated with composing a musical message displays a piano keyboard that is scaled to accommodate a suitable number of keys within a given screen. For example, the user interfaces can be scaled up or down depending on whether the user is using a mobile phone, iPad, desktop computer, or any other computing device that is used as an interface to the social networking application or website. For example, on a mobile phone held in a portrait orientation, a single octave row of piano or keyboard keys starting at middle C is displayed, but when the user turns the device to the landscape orientation, additional display area is provided for the composition interface to display additional keys such as, for example, a second octave of keys corresponding to the bass clef. As another example, several rows of piano keys can be displayed in a single composition interface. As another example, interface buttons are provided to switch between octaves, such as by toggling between treble and bass clefs. In certain embodiments, the music composition interface allows users to input gestures corresponding to musical outputs such as sound and musical notes. The musical notes and sound can be recorded and displayed on a composition clef. For example, as the user gestures by touching display areas corresponding to particular keys on a piano interface, the corresponding note is recorded on a displayed composition clef. In certain embodiments, the clef can record timing, force, and other attributes of the input gesture. The composition clef can denote these different presses with the appropriate notes and modifiers, such as by indicating half or whole notes, slides, staccato, allegro, andante, presto, crescendo, and various other composition syntax elements. In certain embodiments, other instruments can be displayed in the music composition interface. For example, a virtual guitar, virtual drum set, virtual harpsichord, or any other conceivable instrument interface can be provided. For example, one guitar interface may provide a virtual neck and wires that allow a user to strum and hold chords on a neck. The achieved notes may be played aloud and recorded on the composition clef. In certain embodiments, the music composition interface may allow users to layer in various instruments, external tracks, and voices to create a layered composition. With reference toFIG.1, a flow-chart100describing a method for composing and communicating with composed music in a social-networking system is illustrated according to a non-limiting embodiment of the present disclosure. At step110, a social-networking system receives a series of coordinated user gestures that are input into a composition control interface. The social-networking system also receives a post identifier that identifies a post in a social-networking system. For example, the post identifier may identify an existing post or a new post that is being created in the music composition interface.
For example, when the user creates a new post, a new post identifier can be created by the social-networking system and associated with the new musical composition and other media elements from the post. In describing embodiments associated with the flow-chart100inFIG.1, reference is now made to accompanyingFIGS.2-5. With reference now toFIG.2, an example user interface is displayed that allows a user of a social-networking system to create content for sharing in the social-networking system. This particular interface is designed for interacting with a mobile device, though the teachings of the present disclosure fully contemplate use at any variety of interfaces, including virtual reality, desktop computers, mobile phones, tablets, and the like. For example, the user may type a message to create a post by selecting area210with an input gesture. The user may compose a multimedia message by selecting any one of the selectable target areas illustrated for exemplary purposes in selectable target areas220a-e. For example, the user may share or post a GIF, sticker, emoji, or photograph by selecting areas220bor220e. The user may check in to a location and share the user's location with a selectable set of friends by selecting the check in area220c. As another example, the user can tag people in the content being composed by selecting the area220d. In certain embodiments, a user may wish to compose a message or post that consists in whole or in part of a piece of original composed music. In this case, the user would select the target selectable area220awith an input gesture. In certain embodiments, such a selection may result in display of the interface shown inFIG.3.FIG.3shows an example user interface control for composing music as referenced in step110ofFIG.1. The user may create an original work of music that is recorded in musical notation and composed by the social-networking system. For example, the user may use the selectable target area310, which is illustrated in the present example as individual keys in one octave of a digital keyboard. The user may interact with the interface by gesturing in the form of tapping, touching, swiping, or otherwise interacting with the selectable target areas shown in310. For example, the user may tap individual keys310a-n. In certain embodiments, the interface may output a corresponding sound for each note input by the user. For example, the sound may mimic the acoustic properties of the selected instrument. While the user is inputting gestures and interfacing with the selectable target areas shown in310, a system records the corresponding notes and other information regarding the user input. For example, the system may record the length or duration of each input gesture associated with each key. If the interface is equipped with the means to do so, the system may also determine the strength of the user input gesture. For example, if the user inputs a forceful input gesture, the system may note that. Likewise, the system may determine a soft or delicate input gesture and store information regarding that input. In certain embodiments, a practice mode is initialized by default when composing a music message. For example, when the user selects the music composer interface button220afrom the composition screen inFIG.2, the interface inFIG.3is displayed but no recording is enabled. The practice mode can be denoted by displaying the record button330in the staff interface320. 
For example, the user can practice the music he or she wishes to compose by gesturing in the relevant areas of area310. Corresponding sounds are produced as they are input by the user, but the system does not begin composing the music until the user selects the recording button330. In certain embodiments, the user can select a wide variety of instruments to “play” by gesturing or interacting with selectable target areas displayed in the interface. For example, with reference to buttons340,350,360, and370, the user can change from a default “piano” interface to various other interfaces such as drums, record or digital turntables, or a guitar. The input target areas310may change with each target instrument selected. For example, when the drums button350is selected, the target interface area310changes to display a series of drum pads. The user can interact with each drum pad to output a sound. Recording and composition as described below proceed in much the same manner based on the user's desired instrument selection. While the teachings of the present disclosure refer to notes, the embodiments described herein are equally applicable to percussive melodies, which can also be described as notes with timing information. Sound information can also be stored to indicate which type of drum the user interfaced with, for reproduction that mimics that particular type of drum. In certain embodiments, the user can select a wide variety of attributes of the desired instrument. For example, with reference to the piano keys310illustrated inFIG.3, the user can select the octave associated with the set of keys. In certain embodiments, the user can rotate the interface device to display a wider array of keys. For example, the treble and bass clef octaves of keys can be displayed if the user rotates the display device to a horizontal orientation. The user can swipe the interface keys310left or right to display the next higher or lower octave of keys. In certain embodiments, the clef displayed in staff320changes based on the octave selected by the user. In certain embodiments, when the user wishes to end practice mode and start composing a musical message, the user presses the record button330and begins gesturing into the selectable target areas associated with one or more selected instruments. When the user selects the record button, the system begins recording the music in the form of one or more of the input gestures, the notes, and/or the sounds produced therefrom, including any absence of sound or notes. In certain embodiments, a musical note is displayed in the staff320that corresponds to each key selected by the user. The duration of each note is indicated in the staff based on the length of time that the user holds each key (by holding the input gesture). For example, a flagged note or linked notes can express long input key depressions. Those of ordinary skill in the art will appreciate that while “keys” are discussed in the context of the present disclosure for ease of reference and consistency with the illustrated figures, the embodiments referenced herein apply equally to the wide variety of other instruments. In certain embodiments, the notes displayed in the staff are responsive to the user's input gestures. For example, this may allow the music composer interface to teach users how to compose music themselves. As the user sees the corresponding notes appearing in the staff, the user associates the key with that note and learns to draft or compose music.
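One way the duration of a held key could be mapped onto a notated value is sketched below, assuming a fixed tempo. The tempo value, candidate note values, and rounding rule are illustrative assumptions rather than part of the disclosure.

```python
# Hypothetical sketch: quantizing a key hold time into a notated duration.
def quantize_duration(hold_ms: int, tempo_bpm: int = 120) -> str:
    """Map how long a key was held to the nearest common note value."""
    beat_ms = 60_000 / tempo_bpm            # one quarter note at the given tempo
    beats = hold_ms / beat_ms
    candidates = {                           # note value -> length in beats
        "sixteenth": 0.25, "eighth": 0.5, "quarter": 1.0,
        "half": 2.0, "whole": 4.0,
    }
    return min(candidates, key=lambda name: abs(candidates[name] - beats))

print(quantize_duration(480))   # roughly one beat at 120 bpm -> "quarter"
print(quantize_duration(1900))  # held much longer -> "whole"
```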
This operation is discussed more fully below with reference to the music composition tutoring mode of operation. In certain embodiments, a user can edit various aspects of a recorded musical composition. For example, the user can modify the notes, the length of time that each note is played, the duration of the composition (e.g., by cropping the beginning or end of the composition), and other aspects of the composition with the composition interface. For example, the user can tap in the staff320and swipe left or right to access the notes for a particular piece of the composition. In certain embodiments, the user can pinch to zoom in or out of the staff to expand or contract the scale by which the viewer views the composition. For example, the user can zoom out of the staff320to view an entire 3-minute composition in one interface. The user can zoom into a particular part of the staff to enlarge the musical notation associated with a particular part of the composition. Returning toFIG.1, at step120the system translates the input gestures into a music composition. In certain embodiments, the system translates each individual input gesture into a corresponding expression in musical notation. The timing information is also derived from the input gesture information to denote the timing with respect to each musical note recorded on a staff. The system may also translate the other associated information into appropriate musical notation. For example, the system may determine that the user gestures are becoming progressively more forceful, and may interpret this as a crescendo in volume. The system may denote a crescendo symbol in the music composition. Similarly, the system may interpret gestures to have been input in a fast, light, repeating manner of short duration. These gestures may be associated with a staccato musical notation, and the appropriate notation can be applied or denoted in the composition. In addition to the translation and composition of the written musical notation, the system may also record the resulting sound created by “playing” the recorded music. For example, the interface may record the sound produced by the user's live input gestures as they are input into the interface. Additionally or alternatively, the composed music composition can be replayed to produce the same and/or a similar resulting sound. For example, like the rolls of a player piano, the recorded music composition can be digitally replayed by digitally reproducing each note of the recorded composition at the intensity and with the style that the user input for each gesture while recording. For example, this “style” information (e.g., the vigor or strength, duration, and other input information) can be recorded along with the associated note for each user input gesture. This information can be used to guide the digital reproduction of each recorded note. In certain embodiments, this information is stored with the musical composition to enable realistic playback that reflects the composer's unique style. Returning now toFIG.1, at step130the system associates the musical composition with a post item. The post item may be new or existing. For example, in the example shown inFIG.2, the system creates a new post item with a new post identifier for the composition. In certain embodiments, the system presents an opportunity for a user to compose an original music composition in response to or for an existing post item.
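A minimal sketch of the step120translation follows, assuming simple thresholds for detecting staccato (short, light presses) and a crescendo (steadily increasing force). The Note type and the threshold values are assumptions for illustration, not the disclosed translation logic.

```python
# Sketch of translating gesture attributes into notation marks; thresholds
# and names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Note:
    pitch: str
    duration_ms: int
    force: float
    marks: List[str] = field(default_factory=list)

def translate_gestures(gestures):
    """gestures: list of (pitch, duration_ms, force) in playback order."""
    notes = [Note(p, d, f) for p, d, f in gestures]
    # Short, light presses are marked staccato.
    for n in notes:
        if n.duration_ms < 150 and n.force < 0.4:
            n.marks.append("staccato")
    # A run of steadily increasing force is marked as a crescendo.
    for i in range(2, len(notes)):
        if notes[i - 2].force < notes[i - 1].force < notes[i].force:
            notes[i].marks.append("crescendo")
    return notes

for n in translate_gestures([("C4", 100, 0.2), ("D4", 100, 0.5), ("E4", 400, 0.9)]):
    print(n.pitch, n.marks)
```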
For example, in the example shown inFIG.4a post is displayed to a user, such as in a traditional news feed or other post aggregation. However, in this interface, an interface button420provides the user with an option to compose a musical message that can be in response to or part of the original post item410. In certain embodiments, when the user selects interface button420, an input control interface for composing a music composition is displayed, such as the example interface shown inFIG.5. This interface may be similar to or the same as the new composition interface illustrated inFIG.3, but is now displayed in relation to the post item (410fromFIG.4). This interface allows a user to compose a message that is inspired by, relates to, or enhances an underlying post item. In addition, the interface allows the user to express an opinion, or enhance a viewer's experience of a post item by composing an original music composition for display with the post item. In certain embodiments, the interface allows a user to comment on a particular post. For example, a musical composition displayed as a comment is not overlaid with or displayed with the original post, but is instead displayed below the original post. For example, the comment does not suggest an affiliation between the agent posting the original post and the agent posting the musical comment. In certain embodiments, the musical composition can be a “reaction” to the original post. For example, similar to “liking” or reacting with an “emoji” to a particular post, a musical composition can express the composer's reaction to a given post. “Reactions”, like comments, also do not convey an affiliation with the original poster but instead clearly denote that the message conveys a summary of the viewer or recipient's response to the posted material. In certain embodiments, user interfaces are created to display posts in a news-feed style aggregation in which “reactions” are played live as they are posted to a particular post. For example, while a user is viewing post410fromFIG.4, original music compositions that other users compose and post in reaction to the post410can be played. In addition to other emoji or “liking” style emojis, the musical compositions can also be displayed. For example, as people react with musical notes input into their interface composition keyboards, the musical notes may pop up over the post. For example, the live reactions may appear as a series of notes that appear to float over the post. By way of example, as the user inFIG.5reacts to the post, he or she inputs gestures corresponding to keys310a-n. Not only are the gestures recorded by the system, but in certain embodiments, the notes are individually live streamed to other users who are viewing the post. The individual notes input by the user are displayed as floating on top of the post or on a staff underneath the post. The notes may scroll across the staff for some or all of the viewers viewing a post as they are played by the composing user. Returning now toFIG.1, at step140a request for a post item is received and at step150the post item is formatted for display with a graphical representation of the associated musical composition in a user interface control. For example, with reference toFIG.4, a post410is displayed in a graphical user interface. The post410is associated with a musical composition as indicated by the sound icon in the lower right corner of the first image. 
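One possible realization of the live musical reactions described above is a small publish/subscribe broker, sketched below under the assumption that each viewer of a post registers a callback that receives each note as it is played. The class and method names are hypothetical.

```python
# Illustrative sketch of live musical "reactions": each note a reacting user
# plays is pushed to everyone currently viewing the post.
from collections import defaultdict

class LiveReactionBroker:
    def __init__(self):
        self.viewers = defaultdict(list)   # post_id -> list of callbacks

    def subscribe(self, post_id, on_note):
        self.viewers[post_id].append(on_note)

    def publish_note(self, post_id, user, pitch):
        for on_note in self.viewers[post_id]:
            on_note(user, pitch)           # e.g. float the note over the post

broker = LiveReactionBroker()
broker.subscribe("post410", lambda user, pitch: print(f"{user} reacted with {pitch}"))
broker.publish_note("post410", "user_C", "G4")
```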
In certain embodiments, the user may click the sound icon to modify the post display to play the associated composition and/or display the musical notation associated with the composition. For example, the staff with notes620a-cfromFIG.6can be displayed beneath the original post410while the musical composition is played. In certain embodiments, the notes620a-cmay scroll across the staff as time progresses and as each note is played. For example, each note may be played as it crosses some axis in the screen, such as the center point. This indicates to the user which note is being played and further assists in teaching the user to associate given notes with the corresponding musical notation. In certain embodiments, the music composition interface allows the user to share his or her composition with control over the format of the output and audience. For example, the user may wish to make the composition private to only his or her friends. The user may wish to only share the sheet music version of the composition, or only share the audio version. The user may wish to overlay the composition over a related post. The user may wish to allow friends to contribute to the musical composition post by adding additional layers or editing the content of the composition. Each of these variables can be configured by the user at post time or in privacy settings. In certain embodiments, the music composition interface provides for customizing traditional communication mechanisms, such as by overlaying an original piece of composed music over, for example, an emoji. The system may store compositions for particular characters or emojis and replay the composition during transmission of the character with virtually limitless personalization possibilities for fixed characters. For example, a set of frequently used compositions can be stored in a user's “music roll” that is similar to a camera roll of recent photos. For example, with reference again toFIG.6, an interface for typing a message and associating a musical note with each typed character in the message is illustrated in accordance with a non-limiting embodiment of the present disclosure. In the illustrated embodiment, the word “HEYY” is being associated with a musical composition. In certain embodiments, each character in the alphanumeric keyboard can be associated with an individual musical note. As the user types the message, the musical note for each character pops up out of the interface displaying the key in live response to the user typing the message. Additionally or alternatively, each note620a-ccorresponding to each typed key630a-cis added to the staff with the timing indicated by the timing of the alphanumeric key press. For example, if the user waits a long time between pressing “H” and “E”, the timing on the staff may reflect this pause in displaying associated notes620aandb. In certain embodiments, the letters610a-ccan be color coded to correspond to color coded notes620a-c. For example, a green note630apops out of the alphanumeric keyboard interface while a green note620ais displayed on the staff and a green “H” character610is displayed in the message composition interface. In certain embodiments, an interface is provided for associating each character in a message or each character, icon (emoji), sticker, or image, with a unique musical note or message. For example, the user can assign notes, melodies, or compositions to a particular character or emoji. 
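By way of illustration, the following sketch shows how a typed message could be translated into notes whose spacing is taken from the gaps between key presses. The character-to-note assignment is an arbitrary assumption; only the idea of assigning a note per character and reflecting pauses on the staff comes from the description above.

```python
# Sketch of a "text song": each typed character maps to an assumed fixed note,
# and the gap between key presses becomes the spacing on the staff.
CHAR_TO_NOTE = {"H": "G4", "E": "E4", "Y": "C5"}   # illustrative assignment only

def text_to_notes(keypresses):
    """keypresses: list of (character, timestamp_ms) in typing order."""
    notes = []
    for i, (char, t) in enumerate(keypresses):
        gap_ms = t - keypresses[i - 1][1] if i > 0 else 0
        note = CHAR_TO_NOTE.get(char.upper())
        if note is not None:
            notes.append({"char": char, "note": note, "gap_ms": gap_ms})
    return notes

# "HEYY" with a long pause between H and E, reflected in the staff spacing.
print(text_to_notes([("H", 0), ("E", 1200), ("Y", 1500), ("Y", 1700)]))
```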
This enables users to fully customize a fixed set of characters with original music compositions. In certain embodiments, an enhanced interface for displaying the associated musical compositions can be provided by synchronizing display of the musical notes being played with animations that appear to show the notes popping out of the screen as each note is being played. In certain embodiments, this may be referred to as “text songs” to reflect the idea that musical compositions can be created by merely texting a friend when each key has an assigned musical note. In certain embodiments, instrument localization defaults to a native virtual instrument for a new geographic area. For example, if a user is traveling to Berlin, a virtual accordion can be displayed as the default virtual instrument in the music composer interface (the accordion was invented in Berlin). In certain embodiments, the location services provided by the user's device or user preferences or profile can be consulted in determining a location for the user. When the user indicates a desire to compose a music message, the default instrument or set of instruments for that location can be displayed. Thus, unlike with custom language keyboards, which require knowledge and understanding of the target language before any communications can be created, the music interface allows people from different backgrounds to communicate with the instruments associated with a foreign culture and create music compositions. In certain embodiments, the system provides an interface for a “music tutor” mode or “learning” mode that instructs users to play and compose music using a similar composition interface. One example interface for a tutor mode is illustrated inFIG.7. For example, users can import purchased popular songs to see a scaled-down rendition of the song and learn how to play the song using virtual instruments. In certain embodiments, individual keys710a-cof the tutor interface light up to teach the user how to play a particular song. For example, in the music composition interface, such as that described in connection withFIG.3above, a tutor mode button720(fromFIG.7) is provided that enables a user to toggle tutor mode on or off. The tutor mode enables a user to select a pre-recorded song or composition and instructs the user how to play the song by lighting up a series of keys710a-c. The user can choose to record the user's rendition of the song, and share the created composition as a post, comment, reaction, message, or other communication. This enables even new musicians to add custom flair or emotion to any song and share it with their friends. In certain embodiments, an interface is provided for selecting a song or musical composition for use in tutor mode by typing in the first letters of the song in a normal message composition screen. As the system recognizes that the characters being input spell a song title, the auto-complete area of the interface can indicate that the song is available for use in tutor mode. When the user selects the auto-complete target, the tutor mode is engaged for that song. FIG.8illustrates an example network environment800associated with a social-networking system. Network environment800includes a client system830, a social-networking system860, and a third-party system870connected to each other by a network810.
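A minimal sketch of the instrument localization described above is given below, assuming a simple lookup from a region code (as might be derived from device location services or the user's profile) to a default virtual instrument. Apart from the Berlin/accordion example taken from the description, the mapping entries and the fallback are hypothetical.

```python
# Sketch of location-based instrument defaults; the mapping is an assumption.
DEFAULT_INSTRUMENT_BY_REGION = {
    "DE-BE": "accordion",     # Berlin, per the example above
    "US-TN": "guitar",        # hypothetical entries
    "JP-13": "koto",
}

def default_instrument(region_code: str, fallback: str = "piano") -> str:
    """Choose the composer interface's starting instrument from device location."""
    return DEFAULT_INSTRUMENT_BY_REGION.get(region_code, fallback)

print(default_instrument("DE-BE"))   # accordion
print(default_instrument("FR-75"))   # falls back to piano
```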
AlthoughFIG.8illustrates a particular arrangement of client system830, social-networking system860, third-party system870, and network810, this disclosure contemplates any suitable arrangement of client system830, social-networking system860, third-party system870, and network810. As an example and not by way of limitation, two or more of client system830, social-networking system860, and third-party system870may be connected to each other directly, bypassing network810. As another example, two or more of client system830, social-networking system860, and third-party system870may be physically or logically co-located with each other in whole or in part. Moreover, althoughFIG.8illustrates a particular number of client systems830, social-networking systems860, third-party systems870, and networks810, this disclosure contemplates any suitable number of client systems830, social-networking systems860, third-party systems870, and networks810. As an example and not by way of limitation, network environment800may include multiple client system830, social-networking systems860, third-party systems870, and networks810. This disclosure contemplates any suitable network810. As an example and not by way of limitation, one or more portions of network810may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. Network810may include one or more networks810. Links850may connect client system830, social-networking system860, and third-party system870to communication network810or to each other. This disclosure contemplates any suitable links850. In particular embodiments, one or more links850include one or more wireline (such as for example Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as for example Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as for example Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links850each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link850, or a combination of two or more such links850. Links850need not necessarily be the same throughout network environment800. One or more first links850may differ in one or more respects from one or more second links850. In particular embodiments, client system830may be an electronic device including hardware, software, or embedded logic components or a combination of two or more such components and capable of carrying out the appropriate functionalities implemented or supported by client system830. As an example and not by way of limitation, a client system830may include a computer system such as a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, augmented/virtual reality device, other suitable electronic device, or any suitable combination thereof. 
This disclosure contemplates any suitable client systems830. A client system830may enable a network user at client system830to access network810. A client system830may enable its user to communicate with other users at other client systems830. In particular embodiments, client system830may include a web browser832, and may have one or more add-ons, plug-ins, or other extensions. A user at client system830may enter a Uniform Resource Locator (URL) or other address directing the web browser832to a particular server (such as server862, or a server associated with a third-party system870), and the web browser832may generate a Hyper Text Transfer Protocol (HTTP) request and communicate the HTTP request to server. The server may accept the HTTP request and communicate to client system830one or more Hyper Text Markup Language (HTML) files responsive to the HTTP request. Client system830may render a webpage based on the HTML files from the server for presentation to the user. This disclosure contemplates any suitable webpage files. As an example and not by way of limitation, webpages may render from HTML files, Extensible Hyper Text Markup Language (XHTML) files, or Extensible Markup Language (XML) files, according to particular needs. Such pages may also execute scripts such as, for example and without limitation, those written in JAVASCRIPT, JAVA, MICROSOFT SILVERLIGHT, combinations of markup language and scripts such as AJAX (Asynchronous JAVASCRIPT and XML), and the like. Herein, reference to a webpage encompasses one or more corresponding webpage files (which a browser may use to render the webpage) and vice versa, where appropriate. In particular embodiments, social-networking system860may be a network-addressable computing system that can host an online social network. Social-networking system860may generate, store, receive, and send social-networking data, such as, for example, user-profile data, concept-profile data, social-graph information, or other suitable data related to the online social network. Social-networking system860may be accessed by the other components of network environment800either directly or via network810. As an example and not by way of limitation, client system830may access social-networking system860using a web browser832, or a native application associated with social-networking system860(e.g., a mobile social-networking application, a messaging application, another suitable application, or any combination thereof) either directly or via network810. In particular embodiments, social-networking system860may include one or more servers862. Each server862may be a unitary server or a distributed server spanning multiple computers or multiple datacenters. Servers862may be of various types, such as, for example and without limitation, web server, news server, mail server, message server, advertising server, file server, application server, exchange server, database server, proxy server, another server suitable for performing functions or processes described herein, or any combination thereof. In particular embodiments, each server862may include hardware, software, or embedded logic components or a combination of two or more such components for carrying out the appropriate functionalities implemented or supported by server862. In particular embodiments, social-networking system860may include one or more data stores864. Data stores864may be used to store various types of information. 
In particular embodiments, the information stored in data stores864may be organized according to specific data structures. In particular embodiments, each data store864may be a relational, columnar, correlation, or other suitable database. Although this disclosure describes or illustrates particular types of databases, this disclosure contemplates any suitable types of databases. Particular embodiments may provide interfaces that enable a client system830, a social-networking system860, or a third-party system870to manage, retrieve, modify, add, or delete the information stored in data store864. In particular embodiments, social-networking system860may store one or more social graphs in one or more data stores864. In particular embodiments, a social graph may include multiple nodes, which may include multiple user nodes (each corresponding to a particular user) or multiple concept nodes (each corresponding to a particular concept), and multiple edges connecting the nodes. Social-networking system860may provide users of the online social network the ability to communicate and interact with other users. In particular embodiments, users may join the online social network via social-networking system860and then add connections (e.g., relationships) to a number of other users of social-networking system860to whom they want to be connected. Herein, the term “friend” may refer to any other user of social-networking system860with whom a user has formed a connection, association, or relationship via social-networking system860. In particular embodiments, social-networking system860may provide users with the ability to take actions on various types of items or objects supported by social-networking system860. As an example and not by way of limitation, the items and objects may include groups or social networks to which users of social-networking system860may belong, events or calendar entries in which a user might be interested, computer-based applications that a user may use, transactions that allow users to buy or sell items via the service, interactions with advertisements that a user may perform, or other suitable items or objects. A user may interact with anything that is capable of being represented in social-networking system860or by an external system of third-party system870, which is separate from social-networking system860and coupled to social-networking system860via a network810. In particular embodiments, social-networking system860may be capable of linking a variety of entities. As an example and not by way of limitation, social-networking system860may enable users to interact with each other as well as receive content from third-party systems870or other entities, or to allow users to interact with these entities through an application programming interface (API) or other communication channels. In particular embodiments, a third-party system870may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A third-party system870may be operated by a different entity from an entity operating social-networking system860. In particular embodiments, however, social-networking system860and third-party systems870may operate in conjunction with each other to provide social-networking services to users of social-networking system860or third-party systems870.
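For illustration only, a social graph of user nodes, concept nodes, and typed edges might be represented as sketched below. The class names, fields, and storage layout are assumptions and not the disclosed data structures.

```python
# Minimal sketch of a social graph of user/concept nodes and typed edges.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    node_id: str
    kind: str          # "user" or "concept"

@dataclass(frozen=True)
class Edge:
    source: str
    target: str
    edge_type: str     # e.g. "friend", "like", "listened"

class SocialGraph:
    def __init__(self):
        self.nodes = {}
        self.edges = []

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def add_edge(self, edge):
        self.edges.append(edge)

    def neighbors(self, node_id, edge_type=None):
        return [e.target for e in self.edges
                if e.source == node_id and (edge_type is None or e.edge_type == edge_type)]

g = SocialGraph()
g.add_node(Node("user_A", "user"))
g.add_node(Node("user_B", "user"))
g.add_edge(Edge("user_A", "user_B", "friend"))
print(g.neighbors("user_A", "friend"))
```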
In this sense, social-networking system860may provide a platform, or backbone, which other systems, such as third-party systems870, may use to provide social-networking services and functionality to users across the Internet. In particular embodiments, a third-party system870may include a third-party content object provider. A third-party content object provider may include one or more sources of content objects, which may be communicated to a client system830. As an example and not by way of limitation, content objects may include information regarding things or activities of interest to the user, such as, for example, movie show times, movie reviews, restaurant reviews, restaurant menus, product information and reviews, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects. In particular embodiments, social-networking system860also includes user-generated content objects, which may enhance a user's interactions with social-networking system860. User-generated content may include anything a user can add, upload, send, or “post” to social-networking system860. As an example and not by way of limitation, a user communicates posts to social-networking system860from a client system830. Posts may include data such as status updates or other textual data, location information, photos, videos, links, music or other similar data or media. Content may also be added to social-networking system860by a third-party through a “communication channel,” such as a newsfeed or stream. In particular embodiments, social-networking system860may include a variety of servers, sub-systems, programs, modules, logs, and data stores. In particular embodiments, social-networking system860may include one or more of the following: a web server, action logger, API-request server, relevance-and-ranking engine, content-object classifier, notification controller, action log, third-party-content-object-exposure log, inference module, authorization/privacy server, search module, advertisement-targeting module, user-interface module, user-profile store, connection store, third-party content store, or location store. Social-networking system860may also include suitable components such as network interfaces, security mechanisms, load balancers, failover servers, management-and-network-operations consoles, other suitable components, or any suitable combination thereof. In particular embodiments, social-networking system860may include one or more user-profile stores for storing user profiles. A user profile may include, for example, biographic information, demographic information, behavioral information, social information, or other types of descriptive information, such as work experience, educational history, hobbies or preferences, interests, affinities, or location. Interest information may include interests related to one or more categories. Categories may be general or specific. As an example and not by way of limitation, if a user “likes” an article about a brand of shoes the category may be the brand, or the general category of “shoes” or “clothing.” A connection store may be used for storing connection information about users. The connection information may indicate users who have similar or common work experience, group memberships, hobbies, educational history, or are in any way related or share common attributes. 
The connection information may also include user-defined connections between different users and content (both internal and external). A web server may be used for linking social-networking system860to one or more client systems830or one or more third-party systems870via network810. The web server may include a mail server or other messaging functionality for receiving and routing messages between social-networking system860and one or more client systems830. An API-request server may allow a third-party system870to access information from social-networking system860by calling one or more APIs. An action logger may be used to receive communications from a web server about a user's actions on or off social-networking system860. In conjunction with the action log, a third-party-content-object log may be maintained of user exposures to third-party-content objects. A notification controller may provide information regarding content objects to a client system830. Information may be pushed to a client system830as notifications, or information may be pulled from client system830responsive to a request received from client system830. Authorization servers may be used to enforce one or more privacy settings of the users of social-networking system860. A privacy setting of a user determines how particular information associated with a user can be shared. The authorization server may allow users to opt in to or opt out of having their actions logged by social-networking system860or shared with other systems (e.g., third-party system870), such as, for example, by setting appropriate privacy settings. Third-party-content-object stores may be used to store content objects received from third parties, such as a third-party system870. Location stores may be used for storing location information received from client systems830associated with users. Advertisement-pricing modules may combine social information, the current time, location information, or other suitable information to provide relevant advertisements, in the form of notifications, to a user. FIG.9illustrates example social graph900. In particular embodiments, social-networking system860may store one or more social graphs900in one or more data stores. In particular embodiments, social graph900may include multiple nodes, which may include multiple user nodes902or multiple concept nodes904, and multiple edges906connecting the nodes. Example social graph900illustrated inFIG.9is shown, for didactic purposes, in a two-dimensional visual map representation. In particular embodiments, a social-networking system860, client system830, or third-party system870may access social graph900and related social-graph information for suitable applications. The nodes and edges of social graph900may be stored as data objects, for example, in a data store (such as a social-graph database). Such a data store may include one or more searchable or queryable indexes of nodes or edges of social graph900.
In particular embodiments, when a user registers for an account with social-networking system860, social-networking system860may create a user node902corresponding to the user, and store the user node902in one or more data stores. Users and user nodes902described herein may, where appropriate, refer to registered users and user nodes902associated with registered users. In addition or as an alternative, users and user nodes902described herein may, where appropriate, refer to users that have not registered with social-networking system860. In particular embodiments, a user node902may be associated with information provided by a user or information gathered by various systems, including social-networking system860. As an example and not by way of limitation, a user may provide his or her name, profile picture, contact information, birth date, sex, marital status, family status, employment, education background, preferences, interests, or other demographic information. In particular embodiments, a user node902may be associated with one or more data objects corresponding to information associated with a user. In particular embodiments, a user node902may correspond to one or more webpages. In particular embodiments, a concept node904may correspond to a concept. As an example and not by way of limitation, a concept may correspond to a place (such as, for example, a movie theater, restaurant, landmark, or city); a website (such as, for example, a website associated with social-networking system860or a third-party website associated with a web-application server); an entity (such as, for example, a person, business, group, sports team, or celebrity); a resource (such as, for example, an audio file, video file, digital photo, text file, structured document, or application) which may be located within social-networking system860or on an external server, such as a web-application server; real or intellectual property (such as, for example, a sculpture, painting, movie, game, song, idea, photograph, or written work); a game; an activity; an idea or theory; an object in an augmented/virtual reality environment; another suitable concept; or two or more such concepts. A concept node904may be associated with information of a concept provided by a user or information gathered by various systems, including social-networking system860. As an example and not by way of limitation, information of a concept may include a name or a title; one or more images (e.g., an image of the cover page of a book); a location (e.g., an address or a geographical location); a website (which may be associated with a URL); contact information (e.g., a phone number or an email address); other suitable concept information; or any suitable combination of such information. In particular embodiments, a concept node904may be associated with one or more data objects corresponding to information associated with concept node904. In particular embodiments, a concept node904may correspond to one or more webpages. In particular embodiments, a node in social graph900may represent or be represented by a webpage (which may be referred to as a “profile page”). Profile pages may be hosted by or accessible to social-networking system860. Profile pages may also be hosted on third-party websites associated with a third-party system870. As an example and not by way of limitation, a profile page corresponding to a particular external webpage may be the particular external webpage and the profile page may correspond to a particular concept node904.
Profile pages may be viewable by all or a selected subset of other users. As an example and not by way of limitation, a user node902may have a corresponding user-profile page in which the corresponding user may add content, make declarations, or otherwise express himself or herself. As another example and not by way of limitation, a concept node904may have a corresponding concept-profile page in which one or more users may add content, make declarations, or express themselves, particularly in relation to the concept corresponding to concept node904. In particular embodiments, a concept node904may represent a third-party webpage or resource hosted by a third-party system870. The third-party webpage or resource may include, among other elements, content, a selectable or other icon, or other inter-actable object (which may be implemented, for example, in JavaScript, AJAX, or PHP codes) representing an action or activity. As an example and not by way of limitation, a third-party webpage may include a selectable icon such as “like,” “check-in,” “eat,” “recommend,” or another suitable action or activity. A user viewing the third-party webpage may perform an action by selecting one of the icons (e.g., “check-in”), causing a client system830to send to social-networking system860a message indicating the user's action. In response to the message, social-networking system860may create an edge (e.g., a check-in-type edge) between a user node902corresponding to the user and a concept node904corresponding to the third-party webpage or resource and store edge906in one or more data stores. In particular embodiments, a pair of nodes in social graph900may be connected to each other by one or more edges906. An edge906connecting a pair of nodes may represent a relationship between the pair of nodes. In particular embodiments, an edge906may include or represent one or more data objects or attributes corresponding to the relationship between a pair of nodes. As an example and not by way of limitation, a first user may indicate that a second user is a “friend” of the first user. In response to this indication, social-networking system860may send a “friend request” to the second user. If the second user confirms the “friend request,” social-networking system860may create an edge906connecting the first user's user node902to the second user's user node902in social graph900and store edge906as social-graph information in one or more of data stores864. In the example ofFIG.9, social graph900includes an edge906indicating a friend relation between user nodes902of user “A” and user “B” and an edge indicating a friend relation between user nodes902of user “C” and user “B.” Although this disclosure describes or illustrates particular edges906with particular attributes connecting particular user nodes902, this disclosure contemplates any suitable edges906with any suitable attributes connecting user nodes902. As an example and not by way of limitation, an edge906may represent a friendship, family relationship, business or employment relationship, fan relationship (including, e.g., liking, etc.), follower relationship, visitor relationship (including, e.g., accessing, viewing, checking-in, sharing, etc.), subscriber relationship, superior/subordinate relationship, reciprocal relationship, non-reciprocal relationship, another suitable type of relationship, or two or more such relationships. 
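The friend-request flow described above could be sketched as follows, under the assumption that a confirmed request results in a single stored friend edge shared by the two user nodes. The class, method, and storage names are hypothetical.

```python
# Sketch of friend-request handling: a confirmed request creates a friend edge
# between the two user nodes.
class FriendRequests:
    def __init__(self):
        self.pending = set()        # (from_user, to_user) pairs awaiting confirmation
        self.edges = []             # stored friend edges

    def send(self, from_user, to_user):
        self.pending.add((from_user, to_user))

    def confirm(self, from_user, to_user):
        if (from_user, to_user) in self.pending:
            self.pending.remove((from_user, to_user))
            # Friendship is mutual, so record the edge once for both directions.
            self.edges.append(frozenset((from_user, to_user)))

requests = FriendRequests()
requests.send("user_A", "user_B")
requests.confirm("user_A", "user_B")
print(requests.edges)    # [frozenset({'user_A', 'user_B'})]
```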
Moreover, although this disclosure generally describes nodes as being connected, this disclosure also describes users or concepts as being connected. Herein, references to users or concepts being connected may, where appropriate, refer to the nodes corresponding to those users or concepts being connected in social graph900by one or more edges906. In particular embodiments, an edge906between a user node902and a concept node904may represent a particular action or activity performed by a user associated with user node902toward a concept associated with a concept node904. As an example and not by way of limitation, as illustrated inFIG.9, a user may “like,” “attended,” “played,” “listened,” “cooked,” “worked at,” or “watched” a concept, each of which may correspond to an edge type or subtype. A concept-profile page corresponding to a concept node904may include, for example, a selectable “check in” icon (such as, for example, a clickable “check in” icon) or a selectable “add to favorites” icon. Similarly, after a user clicks these icons, social-networking system860may create a “favorite” edge or a “check in” edge in response to the user's action corresponding to the respective icon. As another example and not by way of limitation, a user (user “C”) may listen to a particular song (“Imagine”) using a particular application (an online music application). In this case, social-networking system860may create a “listened” edge906and a “used” edge (as illustrated inFIG.9) between user nodes902corresponding to the user and concept nodes904corresponding to the song and application to indicate that the user listened to the song and used the application. Moreover, social-networking system860may create a “played” edge906(as illustrated inFIG.9) between concept nodes904corresponding to the song and the application to indicate that the particular song was played by the particular application. In this case, “played” edge906corresponds to an action performed by an external application on an external audio file (the song “Imagine”). Although this disclosure describes particular edges906with particular attributes connecting user nodes902and concept nodes904, this disclosure contemplates any suitable edges906with any suitable attributes connecting user nodes902and concept nodes904. Moreover, although this disclosure describes edges between a user node902and a concept node904representing a single relationship, this disclosure contemplates edges between a user node902and a concept node904representing one or more relationships. As an example and not by way of limitation, an edge906may represent both that a user likes and has used a particular concept. Alternatively, another edge906may represent each type of relationship (or multiples of a single relationship) between a user node902and a concept node904(as illustrated inFIG.9between user node902for user “E” and concept node904). In particular embodiments, social-networking system860may create an edge906between a user node902and a concept node904in social graph900. As an example and not by way of limitation, a user viewing a concept-profile page (such as, for example, by using a web browser or a special-purpose application hosted by the user's client system830) may indicate that he or she likes the concept represented by the concept node904by clicking or selecting a “Like” icon, which may cause the user's client system830to send to social-networking system860a message indicating the user's liking of the concept associated with the concept-profile page.
In response to the message, social-networking system860may create an edge906between user node902associated with the user and concept node904, as illustrated by “like” edge906between the user and concept node904. In particular embodiments, social-networking system860may store an edge906in one or more data stores. In particular embodiments, an edge906may be automatically formed by social-networking system860in response to a particular user action. As an example and not by way of limitation, if a first user uploads a picture, watches a movie, or listens to a song, an edge906may be formed between user node902corresponding to the first user and concept nodes904corresponding to those concepts. Although this disclosure describes forming particular edges906in particular manners, this disclosure contemplates forming any suitable edges906in any suitable manner. In particular embodiments, social-networking system860may determine the social-graph affinity (which may be referred to herein as “affinity”) of various social-graph entities for each other. Affinity may represent the strength of a relationship or level of interest between particular objects associated with the online social network, such as users, concepts, content, actions, advertisements, other objects associated with the online social network, or any suitable combination thereof. Affinity may also be determined with respect to objects associated with third-party systems870or other suitable systems. An overall affinity for a social-graph entity for each user, subject matter, or type of content may be established. The overall affinity may change based on continued monitoring of the actions or relationships associated with the social-graph entity. Although this disclosure describes determining particular affinities in a particular manner, this disclosure contemplates determining any suitable affinities in any suitable manner. In particular embodiments, social-networking system860may measure or quantify social-graph affinity using an affinity coefficient (which may be referred to herein as “coefficient”). The coefficient may represent or quantify the strength of a relationship between particular objects associated with the online social network. The coefficient may also represent a probability or function that measures a predicted probability that a user will perform a particular action based on the user's interest in the action. In this way, a user's future actions may be predicted based on the user's prior actions, where the coefficient may be calculated at least in part on the history of the user's actions. Coefficients may be used to predict any number of actions, which may be within or outside of the online social network. As an example and not by way of limitation, these actions may include various types of communications, such as sending messages, posting content, or commenting on content; various types of observation actions, such as accessing or viewing profile pages, media, or other suitable content; various types of coincidence information about two or more social-graph entities, such as being in the same group, tagged in the same photograph, checked-in at the same location, or attending the same event; or other suitable actions. Although this disclosure describes measuring affinity in a particular manner, this disclosure contemplates measuring affinity in any suitable manner. In particular embodiments, social-networking system860may use a variety of factors to calculate a coefficient. 
These factors may include, for example, user actions, types of relationships between objects, location information, other suitable factors, or any combination thereof. In particular embodiments, different factors may be weighted differently when calculating the coefficient. The weights for each factor may be static or the weights may change according to, for example, the user, the type of relationship, the type of action, the user's location, and so forth. Ratings for the factors may be combined according to their weights to determine an overall coefficient for the user. As an example and not by way of limitation, particular user actions may be assigned both a rating and a weight while a relationship associated with the particular user action is assigned a rating and a correlating weight (e.g., so the weights total 100%). To calculate the coefficient of a user towards a particular object, the rating assigned to the user's actions may comprise, for example, 60% of the overall coefficient, while the relationship between the user and the object may comprise 40% of the overall coefficient. In particular embodiments, the social-networking system860may consider a variety of variables when determining weights for various factors used to calculate a coefficient, such as, for example, the time since information was accessed, decay factors, frequency of access, relationship to information or relationship to the object about which information was accessed, relationship to social-graph entities connected to the object, short- or long-term averages of user actions, user feedback, other suitable variables, or any combination thereof. As an example and not by way of limitation, a coefficient may include a decay factor that causes the strength of the signal provided by particular actions to decay with time, such that more recent actions are more relevant when calculating the coefficient. The ratings and weights may be continuously updated based on continued tracking of the actions upon which the coefficient is based. Any type of process or algorithm may be employed for assigning, combining, averaging, and so forth the ratings for each factor and the weights assigned to the factors. In particular embodiments, social-networking system860may determine coefficients using machine-learning algorithms trained on historical actions and past user responses, or data farmed from users by exposing them to various options and measuring responses. Although this disclosure describes calculating coefficients in a particular manner, this disclosure contemplates calculating coefficients in any suitable manner. In particular embodiments, social-networking system860may calculate a coefficient based on a user's actions. Social-networking system860may monitor such actions on the online social network, on a third-party system870, on other suitable systems, or any combination thereof. Any suitable type of user actions may be tracked or monitored. Typical user actions include viewing profile pages, creating or posting content, interacting with content, tagging or being tagged in images, joining groups, listing and confirming attendance at events, checking-in at locations, liking particular pages, creating pages, and performing other tasks that facilitate social action. In particular embodiments, social-networking system860may calculate a coefficient based on the user's actions with particular types of content. The content may be associated with the online social network, a third-party system870, or another suitable system. 
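As a worked sketch of the weighted calculation described above, the following code combines a decayed action rating (60% of the coefficient) with a relationship rating (40%). The exponential decay half-life and the example numbers are illustrative assumptions, not the disclosed algorithm.

```python
# Sketch of a weighted affinity coefficient: factor ratings are combined by
# weights that total 100%, and older actions are discounted by a decay factor.
import math

def action_rating(actions, half_life_days=30.0):
    """Average action ratings, decayed so recent actions count more."""
    if not actions:
        return 0.0
    total, weight_sum = 0.0, 0.0
    for rating, age_days in actions:          # rating in [0, 1]
        decay = math.exp(-math.log(2) * age_days / half_life_days)
        total += rating * decay
        weight_sum += decay
    return total / weight_sum

def affinity_coefficient(actions, relationship_rating,
                         action_weight=0.6, relationship_weight=0.4):
    """E.g. 60% of the coefficient from actions, 40% from the relationship."""
    return (action_weight * action_rating(actions)
            + relationship_weight * relationship_rating)

recent_actions = [(0.9, 1), (0.7, 10), (0.4, 90)]   # (rating, days ago)
print(round(affinity_coefficient(recent_actions, relationship_rating=0.8), 3))
```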
The content may include users, profile pages, posts, news stories, headlines, instant messages, chat room conversations, emails, advertisements, pictures, video, music, other suitable objects, or any combination thereof. Social-networking system860may analyze a user's actions to determine whether one or more of the actions indicate an affinity for subject matter, content, other users, and so forth. As an example and not by way of limitation, if a user frequently posts content related to “coffee” or variants thereof, social-networking system860may determine the user has a high coefficient with respect to the concept “coffee”. Particular actions or types of actions may be assigned a higher weight and/or rating than other actions, which may affect the overall calculated coefficient. As an example and not by way of limitation, if a first user emails a second user, the weight or the rating for the action may be higher than if the first user simply views the user-profile page for the second user. In particular embodiments, social-networking system860may calculate a coefficient based on the type of relationship between particular objects. Referencing the social graph900, social-networking system860may analyze the number and/or type of edges906connecting particular user nodes902and concept nodes904when calculating a coefficient. As an example and not by way of limitation, user nodes902that are connected by a spouse-type edge (representing that the two users are married) may be assigned a higher coefficient than user nodes902that are connected by a friend-type edge. In other words, depending upon the weights assigned to the actions and relationships for the particular user, the overall affinity may be determined to be higher for content about the user's spouse than for content about the user's friend. In particular embodiments, the relationships a user has with another object may affect the weights and/or the ratings of the user's actions with respect to calculating the coefficient for that object. As an example and not by way of limitation, if a user is tagged in a first photo, but merely likes a second photo, social-networking system860may determine that the user has a higher coefficient with respect to the first photo than the second photo because having a tagged-in-type relationship with content may be assigned a higher weight and/or rating than having a like-type relationship with content. In particular embodiments, social-networking system860may calculate a coefficient for a first user based on the relationship one or more second users have with a particular object. In other words, the connections and coefficients other users have with an object may affect the first user's coefficient for the object. As an example and not by way of limitation, if a first user is connected to or has a high coefficient for one or more second users, and those second users are connected to or have a high coefficient for a particular object, social-networking system860may determine that the first user should also have a relatively high coefficient for the particular object. In particular embodiments, the coefficient may be based on the degree of separation between particular objects. A lower coefficient may represent the decreasing likelihood that the first user will share an interest in content objects of a user that is indirectly connected to the first user in the social graph900.
As an example and not by way of limitation, social-graph entities that are closer in the social graph900(i.e., fewer degrees of separation) may have a higher coefficient than entities that are further apart in the social graph900. In particular embodiments, social-networking system860may calculate a coefficient based on location information. Objects that are geographically closer to each other may be considered to be more related or of more interest to each other than more distant objects. In particular embodiments, the coefficient of a user towards a particular object may be based on the proximity of the object's location to a current location associated with the user (or the location of a client system830of the user). A first user may be more interested in other users or concepts that are closer to the first user. As an example and not by way of limitation, if a user is one mile from an airport and two miles from a gas station, social-networking system860may determine that the user has a higher coefficient for the airport than the gas station based on the proximity of the airport to the user. In particular embodiments, social-networking system860may perform particular actions with respect to a user based on coefficient information. Coefficients may be used to predict whether a user will perform a particular action based on the user's interest in the action. A coefficient may be used when generating or presenting any type of objects to a user, such as advertisements, search results, news stories, media, messages, notifications, or other suitable objects. The coefficient may also be utilized to rank and order such objects, as appropriate. In this way, social-networking system860may provide information that is relevant to user's interests and current circumstances, increasing the likelihood that they will find such information of interest. In particular embodiments, social-networking system860may generate content based on coefficient information. Content objects may be provided or selected based on coefficients specific to a user. As an example and not by way of limitation, the coefficient may be used to generate media for the user, where the user may be presented with media for which the user has a high overall coefficient with respect to the media object. As another example and not by way of limitation, the coefficient may be used to generate advertisements for the user, where the user may be presented with advertisements for which the user has a high overall coefficient with respect to the advertised object. In particular embodiments, social-networking system860may generate search results based on coefficient information. Search results for a particular user may be scored or ranked based on the coefficient associated with the search results with respect to the querying user. As an example and not by way of limitation, search results corresponding to objects with higher coefficients may be ranked higher on a search-results page than results corresponding to objects having lower coefficients. In particular embodiments, social-networking system860may calculate a coefficient in response to a request for a coefficient from a particular system or process. To predict the likely actions a user may take (or may be the subject of) in a given situation, any process may request a calculated coefficient for a user. The request may also include a set of weights to use for various factors used to calculate the coefficient. 
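One way coefficient information and degree of separation could be combined to rank candidate objects (such as search results or news stories) is sketched below. The separation discount factor and the example values are assumptions introduced purely for illustration.

```python
# Sketch of ranking candidate objects for a user by affinity coefficient,
# discounted by degree of separation in the social graph.
def rank_objects(candidates, separation_discount=0.5):
    """candidates: list of (object_id, coefficient, degrees_of_separation)."""
    def score(item):
        _, coefficient, degrees = item
        return coefficient * (separation_discount ** max(degrees - 1, 0))
    return [obj for obj, _, _ in sorted(candidates, key=score, reverse=True)]

results = rank_objects([
    ("news_story_1", 0.9, 3),   # high coefficient but distant in the graph
    ("news_story_2", 0.6, 1),   # directly connected
    ("advert_1",     0.4, 1),
])
print(results)
```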
This request may come from a process running on the online social network, from a third-party system870(e.g., via an API or other communication channel), or from another suitable system. In response to the request, social-networking system860may calculate the coefficient (or access the coefficient information if it has previously been calculated and stored). In particular embodiments, social-networking system860may measure an affinity with respect to a particular process. Different processes (both internal and external to the online social network) may request a coefficient for a particular object or set of objects. Social-networking system860may provide a measure of affinity that is relevant to the particular process that requested the measure of affinity. In this way, each process receives a measure of affinity that is tailored for the different context in which the process will use the measure of affinity. In connection with social-graph affinity and affinity coefficients, particular embodiments may utilize one or more systems, components, elements, functions, methods, operations, or steps disclosed in U.S. patent application Ser. No. 11/503,093, filed 11 Aug. 2006, U.S. patent application Ser. No. 12/977,027, filed 22 Dec. 2010, U.S. patent application Ser. No. 12/978,265, filed 23 Dec. 2010, and U.S. patent application Ser. No. 13/632,869, filed 1 Oct. 2012, each of which is incorporated by reference. In particular embodiments, one or more of the content objects of the online social network may be associated with a privacy setting. The privacy settings (or “access settings”) for an object may be stored in any suitable manner, such as, for example, in association with the object, in an index on an authorization server, in another suitable manner, or any combination thereof. A privacy setting of an object may specify how the object (or particular information associated with an object) can be accessed (e.g., viewed or shared) using the online social network. Where the privacy settings for an object allow a particular user to access that object, the object may be described as being “visible” with respect to that user. As an example and not by way of limitation, a user of the online social network may specify privacy settings for a user-profile page that identify a set of users that may access the work experience information on the user-profile page, thus excluding other users from accessing the information. In particular embodiments, the privacy settings may specify a “blocked list” of users that should not be allowed to access certain information associated with the object. In other words, the blocked list may specify one or more users or entities for which an object is not visible. As an example and not by way of limitation, a user may specify a set of users that may not access photos albums associated with the user, thus excluding those users from accessing the photo albums (while also possibly allowing certain users not within the set of users to access the photo albums). In particular embodiments, privacy settings may be associated with particular social-graph elements. Privacy settings of a social-graph element, such as a node or an edge, may specify how the social-graph element, information associated with the social-graph element, or content objects associated with the social-graph element can be accessed using the online social network. 
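The blocked-list behavior described above could be modeled roughly as in the following Python sketch; the record layout, field names, and the simple allow/deny rule are assumptions made for illustration only.

    # Hypothetical privacy-setting record for one object (e.g., a photo album).
    privacy_setting = {
        "visibility": "friends",            # e.g., "public", "friends", or "private"
        "blocked_users": {"user_17"},       # users for whom the object is never visible
        "allowed_exceptions": {"user_42"},  # users allowed even if outside "friends"
    }

    def is_visible(setting, viewer_id, viewer_is_friend):
        """Return True if the object should be visible to the viewer."""
        if viewer_id in setting["blocked_users"]:
            return False                    # the blocked list always wins
        if viewer_id in setting["allowed_exceptions"]:
            return True
        if setting["visibility"] == "public":
            return True
        if setting["visibility"] == "friends":
            return viewer_is_friend
        return False                        # "private": owner only

    print(is_visible(privacy_setting, "user_17", viewer_is_friend=True))   # False
    print(is_visible(privacy_setting, "user_42", viewer_is_friend=False))  # True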
As an example and not by way of limitation, a particular concept node904corresponding to a particular photo may have a privacy setting specifying that the photo may only be accessed by users tagged in the photo and their friends. In particular embodiments, privacy settings may allow users to opt in or opt out of having their actions logged by social-networking system860or shared with other systems (e.g., third-party system870). In particular embodiments, the privacy settings associated with an object may specify any suitable granularity of permitted access or denial of access. As an example and not by way of limitation, access or denial of access may be specified for particular users (e.g., only me, my roommates, and my boss), users within a particular degree of separation (e.g., friends, or friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of a particular university), all users (“public”), no users (“private”), users of third-party systems870, particular applications (e.g., third-party applications, external websites), other suitable users or entities, or any combination thereof. Although this disclosure describes using particular privacy settings in a particular manner, this disclosure contemplates using any suitable privacy settings in any suitable manner. In particular embodiments, one or more servers862may be authorization/privacy servers for enforcing privacy settings. In response to a request from a user (or other entity) for a particular object stored in a data store864, social-networking system860may send a request to the data store864for the object. The request may identify the user associated with the request, and the object may only be sent to the user (or a client system830of the user) if the authorization server determines that the user is authorized to access the object based on the privacy settings associated with the object. If the requesting user is not authorized to access the object, the authorization server may prevent the requested object from being retrieved from the data store864, or may prevent the requested object from being sent to the user. In the search query context, an object may only be generated as a search result if the querying user is authorized to access the object. In other words, the object must have a visibility setting that makes it visible to the querying user. If the object is not visible to the querying user, the object may be excluded from the search results. Although this disclosure describes enforcing privacy settings in a particular manner, this disclosure contemplates enforcing privacy settings in any suitable manner. FIG.10illustrates an example computer system1000. In particular embodiments, one or more computer systems1000perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems1000provide functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems1000performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems1000. Herein, reference to a computer system may encompass a computing device, and vice versa, where appropriate. Moreover, reference to a computer system may encompass one or more computer systems, where appropriate.
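As a self-contained illustration of enforcement at query time, search results could be filtered so that only objects visible to the querying user are returned, as in the sketch below; the object identifiers and the authorization map are hypothetical and do not reflect the system's actual data model.

    # Hypothetical map: object id -> set of user ids authorized to view it.
    authorized_viewers = {
        "photo_1": {"alice", "bob"},
        "photo_2": {"alice"},
        "post_9":  {"carol"},
    }

    def filter_search_results(ranked_object_ids, querying_user):
        """Return only the results the querying user is authorized to access."""
        return [obj for obj in ranked_object_ids
                if querying_user in authorized_viewers.get(obj, set())]

    print(filter_search_results(["photo_1", "post_9", "photo_2"], "alice"))
    # ['photo_1', 'photo_2'] -- post_9 is excluded because it is not visible to alice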
This disclosure contemplates any suitable number of computer systems1000. This disclosure contemplates computer system1000taking any suitable physical form. As example and not by way of limitation, computer system1000may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, an augmented/virtual reality device, or a combination of two or more of these. Where appropriate, computer system1000may include one or more computer systems1000; be unitary or distributed; span multiple locations; span multiple machines; span multiple data centers; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems1000may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems1000may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems1000may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate. In particular embodiments, computer system1000includes a processor1002, memory1004, storage1006, an input/output (I/O) interface1008, a communication interface1010, and a bus1012. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement. In particular embodiments, processor1002includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor1002may retrieve (or fetch) the instructions from an internal register, an internal cache, memory1004, or storage1006; decode and execute them; and then write one or more results to an internal register, an internal cache, memory1004, or storage1006. In particular embodiments, processor1002may include one or more internal caches for data, instructions, or addresses. This disclosure contemplates processor1002including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor1002may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory1004or storage1006, and the instruction caches may speed up retrieval of those instructions by processor1002. Data in the data caches may be copies of data in memory1004or storage1006for instructions executing at processor1002to operate on; the results of previous instructions executed at processor1002for access by subsequent instructions executing at processor1002or for writing to memory1004or storage1006; or other suitable data. The data caches may speed up read or write operations by processor1002. The TLBs may speed up virtual-address translation for processor1002. 
In particular embodiments, processor1002may include one or more internal registers for data, instructions, or addresses. This disclosure contemplates processor1002including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor1002may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors1002. Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor. In particular embodiments, memory1004includes main memory for storing instructions for processor1002to execute or data for processor1002to operate on. As an example and not by way of limitation, computer system1000may load instructions from storage1006or another source (such as, for example, another computer system1000) to memory1004. Processor1002may then load the instructions from memory1004to an internal register or internal cache. To execute the instructions, processor1002may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor1002may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor1002may then write one or more of those results to memory1004. In particular embodiments, processor1002executes only instructions in one or more internal registers or internal caches or in memory1004(as opposed to storage1006or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory1004(as opposed to storage1006or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor1002to memory1004. Bus1012may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor1002and memory1004and facilitate accesses to memory1004requested by processor1002. In particular embodiments, memory1004includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. This disclosure contemplates any suitable RAM. Memory1004may include one or more memories1004, where appropriate. Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory. In particular embodiments, storage1006includes mass storage for data or instructions. As an example and not by way of limitation, storage1006may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage1006may include removable or non-removable (or fixed) media, where appropriate. Storage1006may be internal or external to computer system1000, where appropriate. In particular embodiments, storage1006is non-volatile, solid-state memory. In particular embodiments, storage1006includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage1006taking any suitable physical form. 
Storage1006may include one or more storage control units facilitating communication between processor1002and storage1006, where appropriate. Where appropriate, storage1006may include one or more storages1006. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage. In particular embodiments, I/O interface1008includes hardware, software, or both, providing one or more interfaces for communication between computer system1000and one or more I/O devices. Computer system1000may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system1000. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touch screen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces1008for them. Where appropriate, I/O interface1008may include one or more device or software drivers enabling processor1002to drive one or more of these I/O devices. I/O interface1008may include one or more I/O interfaces1008, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface. In particular embodiments, communication interface1010includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system1000and one or more other computer systems1000or one or more networks. As an example and not by way of limitation, communication interface1010may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface1010for it. As an example and not by way of limitation, computer system1000may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system1000may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system1000may include any suitable communication interface1010for any of these networks, where appropriate. Communication interface1010may include one or more communication interfaces1010, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface. In particular embodiments, bus1012includes hardware, software, or both coupling components of computer system1000to each other. 
As an example and not by way of limitation, bus1012may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus1012may include one or more buses1012, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect. Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as, for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context. The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative.
Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, that embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present disclosure. Embodiments are disclosed in sections according to the following outline:
1. GENERAL OVERVIEW
2. EXAMPLE AGRICULTURAL INTELLIGENCE COMPUTER SYSTEM
2.1. STRUCTURAL OVERVIEW
2.2. APPLICATION PROGRAM OVERVIEW
2.3. DATA INGEST TO THE COMPUTER SYSTEM
2.4. PROCESS OVERVIEW—AGRONOMIC MODEL TRAINING
2.5. HYBRID SEED CLASSIFICATION SUBSYSTEM
2.6. HYBRID SEED RECOMMENDATION SUBSYSTEM
2.7. HYBRID SEED SUPPLY SUBSYSTEM
2.8 IMPLEMENTATION EXAMPLE—HARDWARE OVERVIEW
3. FUNCTIONAL OVERVIEW—GENERATE AND DISPLAY TARGET SUCCESS YIELD GROUP OF HYBRID SEEDS
3.1. DATA INPUT
3.2. AGRICULTURAL DATA PROCESSING
3.3. PRESENT TARGET SUCCESS YIELD GROUP
4. FUNCTIONAL OVERVIEW—GENERATE AND DISPLAY TARGET HYBRID SEEDS FOR PLANTING
4.1. DATA INPUT
4.2. HYBRID SEED SELECTION
4.3. GENERATE RISK VALUES FOR HYBRID SEEDS
4.4. GENERATE DATASET OF TARGET HYBRID SEEDS
4.5. SEED PORTFOLIO ANALYSIS
4.6. PRESENT SET OF TARGET HYBRID SEEDS
5. FUNCTIONAL OVERVIEW—DETERMINE SUPPLY BASED ON SEED PLACEMENT PRESCRIPTION
5.1 DEMAND FULFILLMENT THROUGH SUPPLY CHAIN
5.2 SAFETY STOCK OPTIMIZATION
5.3 EXAMPLE PROCESSES
6. EXTENSIONS AND ALTERNATIVES
1. General Overview A computer system and a computer-implemented method are disclosed herein for managing hybrid seed supply based on hybrid seed placement prescriptions. In an embodiment, the computer system is programmed to initially determine prescriptions of hybrid seeds for one or more target fields, which may help determine what growers of the one or more target fields will plant in the next growing season. More specifically, the computer system is programmed to initially select some hybrid seeds given probabilities of successful yields associated with hybrid seeds. The computer system is programmed to next estimate yields for the initially selected hybrid seeds and associated risks given historical agricultural data associated with the initially selected hybrid seeds. The computer system is programmed to finally select certain hybrid seeds from the initially selected hybrid seeds given the estimated yields, the associated risks, and properties of the one or more target fields. Alternatively, the computer system is programmed to initially compute yield values and environmental classifications for certain hybrid seeds to be planted at a certain location having one or more target fields given yield properties of the certain hybrid seeds planted at another location. The computer system is programmed to next generate probabilities of successful yields at the certain location for the hybrid seeds given the computed yield values and environmental classifications for the certain hybrid seeds. The computer system is programmed to finally select some hybrid seeds from the certain hybrid seeds given the probabilities of successful yields for the certain hybrid seeds. In some embodiments, the computer system is programmed to manage supply of hybrid seeds given the prescriptions of the hybrid seed placements. One general assumption is that growers tend to follow the prescriptions, and thus the prescribed hybrid seed volumes may be highly correlated to actual demands for the hybrid seeds.
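The two-stage selection flow summarized above, first screening hybrid seeds by probability of successful yield and then narrowing by estimated yield and risk, might be pictured roughly as in the following Python sketch. The thresholds, record layout, and example values are assumptions for illustration and are not parameters defined by the disclosure.

    # Hypothetical candidate records: probability of successful yield, estimated
    # yield (bushels/acre), and a risk value (e.g., yield variance) derived from
    # historical agricultural data.
    candidates = [
        {"hybrid": "H-101", "p_success": 0.82, "est_yield": 195.0, "risk": 12.0},
        {"hybrid": "H-202", "p_success": 0.74, "est_yield": 210.0, "risk": 30.0},
        {"hybrid": "H-303", "p_success": 0.55, "est_yield": 220.0, "risk": 18.0},
    ]

    def prescribe(candidates, min_p_success=0.7, max_risk=25.0):
        """Two-stage screen: keep hybrids likely to succeed, then rank the
        low-risk survivors by estimated yield."""
        stage_one = [c for c in candidates if c["p_success"] >= min_p_success]
        stage_two = [c for c in stage_one if c["risk"] <= max_risk]
        return sorted(stage_two, key=lambda c: c["est_yield"], reverse=True)

    print([c["hybrid"] for c in prescribe(candidates)])   # ['H-101']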
Besides uncertainties associated with the grower fields, additional uncertainties in actual demands may be introduced by various factors, such as the economy, the road conditions, or the operations of distribution facilities. The computer system is programmed to estimate safety stock that will assist a production site in meeting the actual demands. The deviations from prescribed hybrid seed placements may be captured from fluctuations in actual demands across multiple supply management periods or can be specifically modeled after the various potential risk factors. The computer system provides many technical benefits. Supply management can be challenging, especially as the size and structural complexity of the supply chain increases. By utilizing the prescriptions of hybrid seed placements, the computer system has a solid basis for estimating actual demands. The automatic estimates obviate the need to solicit information regarding planting strategies from the grower computers or other computational components of the supply chain, thus reducing computer network traffic. The accurate estimates avoid the need to repeatedly modify predictions of actual demands and communicate them to grower computers, thus reducing utilization of computational resources and further reducing computer network traffic. The accurate estimates also increase the efficiency of supply management. By optimizing safety stock with respect to the supply chain, the computer system is able to handle various uncertainties and risks and to further increase the efficiency of supply management. 2. Example Agricultural Intelligence Computer System 2.1 Structural Overview FIG.1illustrates an example computer system that is configured to perform the functions described herein, shown in a field environment with other apparatus with which the system may interoperate. In one embodiment, a user102owns, operates or possesses a field manager computing device104in a field location or associated with a field location such as a field intended for agricultural activities or a management location for one or more agricultural fields. The field manager computing device104is programmed or configured to provide field data106to an agricultural intelligence computer system130via one or more networks109.
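One conventional way to estimate the safety stock mentioned above, offered here only as a hedged illustration and not as the optimization actually performed by the hybrid seed supply subsystem, is the standard service-level formula based on demand variability over the replenishment lead time.

    import math
    from statistics import stdev

    def safety_stock(period_demands, lead_time_periods, z_score=1.65):
        """Classic safety-stock estimate: z * sigma_demand * sqrt(lead time).

        period_demands: observed (or prescription-derived) demand per period.
        z_score: service-level factor; 1.65 corresponds to roughly a 95% service level.
        """
        sigma = stdev(period_demands)
        return z_score * sigma * math.sqrt(lead_time_periods)

    # Example with made-up unit demands over six periods and a two-period lead time.
    print(round(safety_stock([120, 135, 110, 150, 128, 140], 2), 1))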
Examples of field data106include (a) identification data (for example, acreage, field name, field identifiers, geographic identifiers, boundary identifiers, crop identifiers, and any other suitable data that may be used to identify farm land, such as a common land unit (CLU), lot and block number, a parcel number, geographic coordinates and boundaries, Farm Serial Number (FSN), farm number, tract number, field number, section, township, and/or range), (b) harvest data (for example, crop type, crop variety, crop rotation, whether the crop is grown organically, harvest date, Actual Production History (APH), expected yield, yield, crop price, crop revenue, grain moisture, tillage practice, and previous growing season information), (c) soil data (for example, type, composition, pH, organic matter (OM), cation exchange capacity (CEC)), (d) planting data (for example, planting date, seed(s) type, relative maturity (RM) of planted seed(s), seed population), (e) fertilizer data (for example, nutrient type (Nitrogen, Phosphorous, Potassium), application type, application date, amount, source, method), (f) chemical application data (for example, pesticide, herbicide, fungicide, other substance or mixture of substances intended for use as a plant regulator, defoliant, or desiccant, application date, amount, source, method), (g) irrigation data (for example, application date, amount, source, method), (h) weather data (for example, precipitation, rainfall rate, predicted rainfall, water runoff rate region, temperature, wind, forecast, pressure, visibility, clouds, heat index, dew point, humidity, snow depth, air quality, sunrise, sunset), (i) imagery data (for example, imagery and light spectrum information from an agricultural apparatus sensor, camera, computer, smartphone, tablet, unmanned aerial vehicle, planes or satellite), (j) scouting observations (photos, videos, free form notes, voice recordings, voice transcriptions, weather conditions (temperature, precipitation (current and over time), soil moisture, crop growth stage, wind velocity, relative humidity, dew point, black layer)), and (k) soil, seed, crop phenology, pest and disease reporting, and predictions sources and databases. A data server computer108is communicatively coupled to agricultural intelligence computer system130and is programmed or configured to send external data110to agricultural intelligence computer system130via the network(s)109. The external data server computer108may be owned or operated by the same legal person or entity as the agricultural intelligence computer system130, or by a different person or entity such as a government agency, non-governmental organization (NGO), and/or a private data service provider. Examples of external data include weather data, imagery data, soil data, or statistical data relating to crop yields, among others. External data110may consist of the same type of information as field data106. In some embodiments, the external data110is provided by an external data server108owned by the same entity that owns and/or operates the agricultural intelligence computer system130. For example, the agricultural intelligence computer system130may include a data server focused exclusively on a type of data that might otherwise be obtained from third party sources, such as weather data. In some embodiments, an external data server108may actually be incorporated within the system130. 
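To make the breadth of field data106easier to picture, here is a minimal Python sketch of how a subset of those categories might be represented as a single record; the class name and fields are illustrative choices, not a schema defined by the disclosure.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class FieldDataRecord:
        # (a) identification data
        field_name: str
        geographic_boundary: List[tuple]          # e.g., list of (lat, lon) vertices
        # (b) harvest data
        crop_type: str = ""
        expected_yield_bu_per_ac: float = 0.0
        # (c) soil data
        soil_ph: float = 0.0
        organic_matter_pct: float = 0.0
        # (d) planting data
        planting_date: str = ""
        seed_population: int = 0
        # (h) weather data, keyed by observation date
        weather: Dict[str, dict] = field(default_factory=dict)

    record = FieldDataRecord(
        field_name="North 40",
        geographic_boundary=[(41.60, -93.60), (41.60, -93.59), (41.59, -93.59)],
        crop_type="corn",
        expected_yield_bu_per_ac=190.0,
    )
    print(record.field_name, record.crop_type)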
An agricultural apparatus111may have one or more remote sensors112fixed thereon, which sensors are communicatively coupled either directly or indirectly via agricultural apparatus111to the agricultural intelligence computer system130and are programmed or configured to send sensor data to agricultural intelligence computer system130. Examples of agricultural apparatus111include tractors, combines, harvesters, planters, trucks, fertilizer equipment, aerial vehicles including unmanned aerial vehicles, and any other item of physical machinery or hardware, typically mobile machinery, and which may be used in tasks associated with agriculture. In some embodiments, a single unit of apparatus111may comprise a plurality of sensors112that are coupled locally in a network on the apparatus; controller area network (CAN) is example of such a network that can be installed in combines, harvesters, sprayers, and cultivators. Application controller114is communicatively coupled to agricultural intelligence computer system130via the network(s)109and is programmed or configured to receive one or more scripts that are used to control an operating parameter of an agricultural vehicle or implement from the agricultural intelligence computer system130. For instance, a controller area network (CAN) bus interface may be used to enable communications from the agricultural intelligence computer system130to the agricultural apparatus111, such as how the CLIMATE FIELDVIEW DRIVE, available from The Climate Corporation, San Francisco, California, is used. Sensor data may consist of the same type of information as field data106. In some embodiments, remote sensors112may not be fixed to an agricultural apparatus111but may be remotely located in the field and may communicate with network109. The apparatus111may comprise a cab computer115that is programmed with a cab application, which may comprise a version or variant of the mobile application for device104that is further described in other sections herein. In an embodiment, cab computer115comprises a compact computer, often a tablet-sized computer or smartphone, with a graphical screen display, such as a color display, that is mounted within an operator's cab of the apparatus111. Cab computer115may implement some or all of the operations and functions that are described further herein for the mobile computer device104. The network(s)109broadly represent any combination of one or more data communication networks including local area networks, wide area networks, internetworks or internets, using any of wireline or wireless links, including terrestrial or satellite links. The network(s) may be implemented by any medium or mechanism that provides for the exchange of data between the various elements ofFIG.1. The various elements ofFIG.1may also have direct (wired or wireless) communications links. The sensors112, controller114, external data server computer108, and other elements of the system each comprise an interface compatible with the network(s)109and are programmed or configured to use standardized protocols for communication across the networks such as TCP/IP, Bluetooth, CAN protocol and higher-layer protocols such as HTTP, TLS, and the like. Agricultural intelligence computer system130is programmed or configured to receive field data106from field manager computing device104, external data110from external data server computer108, and sensor data from remote sensor112. 
Agricultural intelligence computer system130may be further configured to host, use or execute one or more computer programs, other software elements, digitally programmed logic such as FPGAs or ASICs, or any combination thereof to perform translation and storage of data values, construction of digital models of one or more crops on one or more fields, generation of recommendations and notifications, and generation and sending of scripts to application controller114, in the manner described further in other sections of this disclosure. In an embodiment, agricultural intelligence computer system130is programmed with or comprises a communication layer132, presentation layer134, data management layer140, hardware/virtualization layer150, and model and field data repository160. “Layer,” in this context, refers to any combination of electronic digital interface circuits, microcontrollers, firmware such as drivers, and/or computer programs or other software elements. Communication layer132may be programmed or configured to perform input/output interfacing functions including sending requests to field manager computing device104, external data server computer108, and remote sensor112for field data, external data, and sensor data respectively. Communication layer132may be programmed or configured to send the received data to model and field data repository160to be stored as field data106. Presentation layer134may be programmed or configured to generate a graphical user interface (GUI) to be displayed on field manager computing device104, cab computer115or other computers that are coupled to the system130through the network109. The GUI may comprise controls for inputting data to be sent to agricultural intelligence computer system130, generating requests for models and/or recommendations, and/or displaying recommendations, notifications, models, and other field data. Data management layer140may be programmed or configured to manage read operations and write operations involving the repository160and other functional elements of the system, including queries and result sets communicated between the functional elements of the system and the repository. Examples of data management layer140include JDBC, SQL server interface code, and/or HADOOP interface code, among others. Repository160may comprise a database. As used herein, the term “database” may refer to either a body of data, a relational database management system (RDBMS), or to both. As used herein, a database may comprise any collection of data including hierarchical databases, relational databases, flat file databases, object-relational databases, object oriented databases, distributed databases, and any other structured collection of records or data that is stored in a computer system. Examples of RDBMS's include, but are not limited to including, ORACLE®, MYSQL, IBM® DB2, MICROSOFT® SQL SERVER, SYBASE®, and POSTGRESQL databases. However, any database may be used that enables the systems and methods described herein. When field data106is not provided directly to the agricultural intelligence computer system via one or more agricultural machines or agricultural machine devices that interacts with the agricultural intelligence computer system, the user may be prompted via one or more user interfaces on the user device (served by the agricultural intelligence computer system) to input such information. 
In an example embodiment, the user may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user102may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers. In alternative embodiments, the user may specify identification data by accessing field identification data (provided as shape files or in a similar format) from the U. S. Department of Agriculture Farm Service Agency or other source via the user device and providing such field identification data to the agricultural intelligence computer system. In an example embodiment, the agricultural intelligence computer system130is programmed to generate and cause displaying a graphical user interface comprising a data manager for data input. After one or more fields have been identified using the methods described above, the data manager may provide one or more graphical user interface widgets which when selected can identify changes to the field, soil, crops, tillage, or nutrient practices. The data manager may include a timeline view, a spreadsheet view, and/or one or more editable programs. FIG.5depicts an example embodiment of a timeline view for data entry. Using the display depicted inFIG.5, a user computer can input a selection of a particular field and a particular date for the addition of event. Events depicted at the top of the timeline may include Nitrogen, Planting, Practices, and Soil. To add a nitrogen application event, a user computer may provide input to select the nitrogen tab. The user computer may then select a location on the timeline for a particular field in order to indicate an application of nitrogen on the selected field. In response to receiving a selection of a location on the timeline for a particular field, the data manager may display a data entry overlay, allowing the user computer to input data pertaining to nitrogen applications, planting procedures, soil application, tillage procedures, irrigation practices, or other information relating to the particular field. For example, if a user computer selects a portion of the timeline and indicates an application of nitrogen, then the data entry overlay may include fields for inputting an amount of nitrogen applied, a date of application, a type of fertilizer used, and any other information related to the application of nitrogen. In an embodiment, the data manager provides an interface for creating one or more programs. “Program,” in this context, refers to a set of data pertaining to nitrogen applications, planting procedures, soil application, tillage procedures, irrigation practices, or other information that may be related to one or more fields, and that can be stored in digital data storage for reuse as a set in other operations. After a program has been created, it may be conceptually applied to one or more fields and references to the program may be stored in digital storage in association with data identifying the fields. Thus, instead of manually entering identical data relating to the same nitrogen applications for multiple different fields, a user computer may create a program that indicates a particular application of nitrogen and then apply the program to multiple different fields. 
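The timeline-style data entry described above could be approximated by a structure like the following; the event dictionary keys mirror the kinds of values the data entry overlay collects, but the names and layout are assumptions for illustration.

    from datetime import date

    # Hypothetical in-memory timeline: field name -> list of event dicts.
    timelines = {"Field A": [], "Field B": []}

    def add_nitrogen_event(field_name, applied_on, pounds_per_acre, fertilizer_type):
        """Record a nitrogen application event on a field's timeline."""
        event = {
            "category": "Nitrogen",
            "date": applied_on,
            "amount_lbs_per_ac": pounds_per_acre,
            "fertilizer_type": fertilizer_type,
        }
        timelines[field_name].append(event)
        return event

    add_nitrogen_event("Field A", date(2024, 4, 5), 150, "anhydrous ammonia")
    print(len(timelines["Field A"]))   # 1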
For example, in the timeline view ofFIG.5, the top two timelines have the “Spring applied” program selected, which includes an application of 150 lbs. N/ac in early April. The data manager may provide an interface for editing a program. In an embodiment, when a particular program is edited, each field that has selected the particular program is edited. For example, inFIG.5, if the “Spring applied” program is edited to reduce the application of nitrogen to 130 lbs. N/ac, the top two fields may be updated with a reduced application of nitrogen based on the edited program. In an embodiment, in response to receiving edits to a field that has a program selected, the data manager removes the correspondence of the field to the selected program. For example, if a nitrogen application is added to the top field inFIG.5, the interface may update to indicate that the “Spring applied” program is no longer being applied to the top field. While the nitrogen application in early April may remain, updates to the “Spring applied” program would not alter the April application of nitrogen. FIG.6depicts an example embodiment of a spreadsheet view for data entry. Using the display depicted inFIG.6, a user can create and edit information for one or more fields. The data manager may include spreadsheets for inputting information with respect to Nitrogen, Planting, Practices, and Soil as depicted inFIG.6. To edit a particular entry, a user computer may select the particular entry in the spreadsheet and update the values. For example,FIG.6depicts an in-progress update to a target yield value for the second field. Additionally, a user computer may select one or more fields in order to apply one or more programs. In response to receiving a selection of a program for a particular field, the data manager may automatically complete the entries for the particular field based on the selected program. As with the timeline view, the data manager may update the entries for each field associated with a particular program in response to receiving an update to the program. Additionally, the data manager may remove the correspondence of the selected program to the field in response to receiving an edit to one of the entries for the field. In an embodiment, model and field data is stored in model and field data repository160. Model data comprises data models created for one or more fields. For example, a crop model may include a digitally constructed model of the development of a crop on the one or more fields. “Model,” in this context, refers to an electronic digitally stored set of executable instructions and data values, associated with one another, which are capable of receiving and responding to a programmatic or other digital call, invocation, or request for resolution based upon specified input values, to yield one or more stored or calculated output values that can serve as the basis of computer-implemented recommendations, output data displays, or machine control, among other things. Persons of skill in the field find it convenient to express models using mathematical equations, but that form of expression does not confine the models disclosed herein to abstract concepts; instead, each model herein has a practical application in a computer in the form of stored executable instructions and data that implement the model using the computer. 
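The program-application behavior described above, in which edits to a program propagate to every field that still uses it while a direct edit to a field removes the field's correspondence to the program, can be sketched as follows; the class and method names are hypothetical.

    class Program:
        """A named, reusable set of application data, e.g. 'Spring applied'."""
        def __init__(self, name, nitrogen_lbs_per_ac):
            self.name = name
            self.nitrogen_lbs_per_ac = nitrogen_lbs_per_ac

    class FieldRecord:
        def __init__(self, name, program=None):
            self.name = name
            self.program = program                    # None means no program selected
            self.nitrogen_lbs_per_ac = program.nitrogen_lbs_per_ac if program else 0

        def apply_program_edit(self):
            # Propagate a program edit to this field, but only if still linked.
            if self.program is not None:
                self.nitrogen_lbs_per_ac = self.program.nitrogen_lbs_per_ac

        def edit_directly(self, nitrogen_lbs_per_ac):
            # A direct edit removes the correspondence to the selected program.
            self.program = None
            self.nitrogen_lbs_per_ac = nitrogen_lbs_per_ac

    spring = Program("Spring applied", 150)
    top, second = FieldRecord("Top field", spring), FieldRecord("Second field", spring)
    spring.nitrogen_lbs_per_ac = 130                  # edit the program itself
    for f in (top, second):
        f.apply_program_edit()
    top.edit_directly(160)                            # the top field is now detached
    spring.nitrogen_lbs_per_ac = 120
    top.apply_program_edit()                          # no effect: program removed
    print(top.nitrogen_lbs_per_ac, second.nitrogen_lbs_per_ac)   # 160 130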
The model may include a model of past events on the one or more fields, a model of the current status of the one or more fields, and/or a model of predicted events on the one or more fields. Model and field data may be stored in data structures in memory, rows in a database table, in flat files or spreadsheets, or other forms of stored digital data. In an embodiment, a hybrid seed classification subsystem170contains specially configured logic, including, but not limited to, hybrid seed normalization instructions172, probability of success generation instructions174, and yield classification instructions176, and comprises a set of one or more pages of main memory, such as RAM, in the agricultural intelligence computer system130into which executable instructions have been loaded and which when executed cause the agricultural intelligence computing system to perform the functions or operations that are described herein with reference to those modules. For example, the hybrid seed normalization instructions172may comprise a set of pages in RAM that contain instructions which when executed cause performing the target identification functions that are described herein. In an embodiment, a hybrid seed recommendation subsystem180contains specially configured logic, including, but not limited to, hybrid seed filtering instructions182, risk generation instructions184, and optimization classification instructions186, and comprises a set of one or more pages of main memory, such as RAM, in the agricultural intelligence computer system130into which executable instructions have been loaded and which when executed cause the agricultural intelligence computing system to perform the functions or operations that are described herein with reference to those modules. In an embodiment, a hybrid seed supply subsystem190contains specially configured logic, including, but not limited to, demand management instructions192and safety stock optimization instructions194, and comprises a set of one or more pages of main memory, such as RAM, in the agricultural intelligence computer system130into which executable instructions have been loaded and which when executed cause the agricultural intelligence computing system to perform the functions or operations that are described herein with reference to those modules. The instructions may be in machine executable code in the instruction set of a CPU and may have been compiled based upon source code written in JAVA, C, C++, OBJECTIVE-C, or any other human-readable programming language or environment, alone or in combination with scripts in JAVASCRIPT, other scripting languages and other programming source text. The term “pages” is intended to refer broadly to any region within main memory and the specific terminology used in a system may vary depending on the memory architecture or processor architecture. 
In another embodiment, each of hybrid seed normalization instructions172, probability of success generation instructions174, yield classification instructions176, hybrid seed filtering instructions182, risk generation instructions184, optimization classification instructions186, demand management instructions192, and safety stock optimization instructions194also may represent one or more files or projects of source code that are digitally stored in a mass storage device such as non-volatile RAM or disk storage, in the agricultural intelligence computer system130or a separate repository system, which when compiled or interpreted cause generating executable instructions which when executed cause the agricultural intelligence computing system to perform the functions or operations that are described herein with reference to those modules. In other words, the drawing figure may represent the manner in which programmers or software developers organize and arrange source code for later compilation into an executable, or interpretation into bytecode or the equivalent, for execution by the agricultural intelligence computer system130. Hardware/virtualization layer150comprises one or more central processing units (CPUs), memory controllers, and other devices, components, or elements of a computer system such as volatile or non-volatile memory, non-volatile storage such as disk, and I/O devices or interfaces as illustrated and described, for example, in connection withFIG.4. The layer150also may comprise programmed instructions that are configured to support virtualization, containerization, or other technologies. For purposes of illustrating a clear example,FIG.1shows a limited number of instances of certain functional elements. However, in other embodiments, there may be any number of such elements. For example, embodiments may use thousands or millions of different mobile computing devices104associated with different users. Further, the system130and/or external data server computer108may be implemented using two or more processors, cores, clusters, or instances of physical machines or virtual machines, configured in a discrete location or co-located with other elements in a datacenter, shared computing facility or cloud computing facility. 2.2. Application Program Overview In an embodiment, the implementation of the functions described herein using one or more computer programs or other software elements that are loaded into and executed using one or more general-purpose computers will cause the general-purpose computers to be configured as a particular machine or as a computer that is specially adapted to perform the functions described herein. Further, each of the flow diagrams that are described further herein may serve, alone or in combination with the descriptions of processes and functions in prose herein, as algorithms, plans or directions that may be used to program a computer or logic to implement the functions that are described. In other words, all the prose text herein, and all the drawing figures, together are intended to provide disclosure of algorithms, plans or directions that are sufficient to permit a skilled person to program a computer to perform the functions that are described herein, in combination with the skill and knowledge of such a person given the level of skill that is appropriate for inventions and disclosures of this type. 
In an embodiment, user102interacts with agricultural intelligence computer system130using field manager computing device104configured with an operating system and one or more application programs or apps; the field manager computing device104also may interoperate with the agricultural intelligence computer system independently and automatically under program control or logical control and direct user interaction is not always required. Field manager computing device104broadly represents one or more of a smart phone, PDA, tablet computing device, laptop computer, desktop computer, workstation, or any other computing device capable of transmitting and receiving information and performing the functions described herein. Field manager computing device104may communicate via a network using a mobile application stored on field manager computing device104, and in some embodiments, the device may be coupled using a cable113or connector to the sensor112and/or controller114. A particular user102may own, operate or possess and use, in connection with system130, more than one field manager computing device104at a time. The mobile application may provide client-side functionality, via the network to one or more mobile computing devices. In an example embodiment, field manager computing device104may access the mobile application via a web browser or a local client application or app. Field manager computing device104may transmit data to, and receive data from, one or more front-end servers, using web-based protocols or formats such as HTTP, XML and/or JSON, or app-specific protocols. In an example embodiment, the data may take the form of requests and user information input, such as field data, into the mobile computing device. In some embodiments, the mobile application interacts with location tracking hardware and software on field manager computing device104which determines the location of field manager computing device104using standard tracking techniques such as multilateration of radio signals, the global positioning system (GPS), WiFi positioning systems, or other methods of mobile positioning. In some cases, location data or other data associated with the device104, user102, and/or user account(s) may be obtained by queries to an operating system of the device or by requesting an app on the device to obtain data from the operating system. In an embodiment, field manager computing device104sends field data106to agricultural intelligence computer system130comprising or including, but not limited to, data values representing one or more of: a geographical location of the one or more fields, tillage information for the one or more fields, crops planted in the one or more fields, and soil data extracted from the one or more fields. Field manager computing device104may send field data106in response to user input from user102specifying the data values for the one or more fields. Additionally, field manager computing device104may automatically send field data106when one or more of the data values becomes available to field manager computing device104. For example, field manager computing device104may be communicatively coupled to remote sensor112and/or application controller114which include an irrigation sensor and/or irrigation controller. 
In response to receiving data indicating that application controller114released water onto the one or more fields, field manager computing device104may send field data106to agricultural intelligence computer system130indicating that water was released on the one or more fields. Field data106identified in this disclosure may be input and communicated using electronic digital data that is communicated between computing devices using parameterized URLs over HTTP, or another suitable communication or messaging protocol. A commercial example of the mobile application is CLIMATE FIELDVIEW, commercially available from The Climate Corporation, San Francisco, California. The CLIMATE FIELDVIEW application, or other applications, may be modified, extended, or adapted to include features, functions, and programming that have not been disclosed earlier than the filing date of this disclosure. In one embodiment, the mobile application comprises an integrated software platform that allows a grower to make fact-based decisions for their operation because it combines historical data about the grower's fields with any other data that the grower wishes to compare. The combinations and comparisons may be performed in real time and are based upon scientific models that provide potential scenarios to permit the grower to make better, more informed decisions. FIG.2illustrates two views of an example logical organization of sets of instructions in main memory when an example mobile application is loaded for execution. InFIG.2, each named element represents a region of one or more pages of RAM or other main memory, or one or more blocks of disk storage or other non-volatile storage, and the programmed instructions within those regions. In one embodiment, in view (a), a mobile computer application200comprises account-fields-data ingestion-sharing instructions202, overview and alert instructions204, digital map book instructions206, seeds and planting instructions208, nitrogen instructions210, weather instructions212, field health instructions214, and performance instructions216. In one embodiment, a mobile computer application200comprises account, fields, data ingestion, sharing instructions202which are programmed to receive, translate, and ingest field data from third party systems via manual upload or APIs. Data types may include field boundaries, yield maps, as-planted maps, soil test results, as-applied maps, and/or management zones, among others. Data formats may include shape files, native data formats of third parties, and/or farm management information system (FMIS) exports, among others. Receiving data may occur via manual upload, e-mail with attachment, external APIs that push data to the mobile application, or instructions that call APIs of external systems to pull data into the mobile application. In one embodiment, mobile computer application200comprises a data inbox. In response to receiving a selection of the data inbox, the mobile computer application200may display a graphical user interface for manually uploading data files and importing uploaded files to a data manager. In one embodiment, digital map book instructions206comprise field map data layers stored in device memory and are programmed with data visualization tools and geospatial field notes. This provides growers with convenient information close at hand for reference, logging and visual insights into field performance. 
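As a hedged illustration of the HTTP/JSON exchange described above, a mobile client might post field data to a front-end server roughly as follows; the endpoint path, payload keys, and use of the standard urllib module are assumptions for the example and do not describe the CLIMATE FIELDVIEW protocol.

    import json
    import urllib.request

    def send_field_data(base_url, field_payload):
        """POST a field-data record as JSON to a hypothetical front-end endpoint."""
        body = json.dumps(field_payload).encode("utf-8")
        request = urllib.request.Request(
            url=base_url + "/api/field-data",          # hypothetical path
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return response.status

    payload = {
        "field_id": "north-40",
        "event": "irrigation",
        "released_gallons": 12000,
        "location": {"lat": 41.60, "lon": -93.60},
    }
    # send_field_data("https://example.invalid", payload)  # not executed here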
In one embodiment, overview and alert instructions204are programmed to provide an operation-wide view of what is important to the grower, and timely recommendations to take action or focus on particular issues. This permits the grower to focus time on what needs attention, to save time and preserve yield throughout the season. In one embodiment, seeds and planting instructions208are programmed to provide tools for seed selection, hybrid placement, and script creation, including variable rate (VR) script creation, based upon scientific models and empirical data. This enables growers to maximize yield or return on investment through optimized seed purchase, placement and population. In one embodiment, script generation instructions205are programmed to provide an interface for generating scripts, including variable rate (VR) fertility scripts. The interface enables growers to create scripts for field implements, such as nutrient applications, planting, and irrigation. For example, a planting script interface may comprise tools for identifying a type of seed for planting. Upon receiving a selection of the seed type, mobile computer application200may display one or more fields broken into management zones, such as the field map data layers created as part of digital map book instructions206. In one embodiment, the management zones comprise soil zones along with a panel identifying each soil zone and a soil name, texture, drainage for each zone, or other field data. Mobile computer application200may also display tools for editing or creating such, such as graphical tools for drawing management zones, such as soil zones, over a map of one or more fields. Planting procedures may be applied to all management zones or different planting procedures may be applied to different subsets of management zones. When a script is created, mobile computer application200may make the script available for download in a format readable by an application controller, such as an archived or compressed format. Additionally, and/or alternatively, a script may be sent directly to cab computer115from mobile computer application200and/or uploaded to one or more data servers and stored for further use. In one embodiment, nitrogen instructions210are programmed to provide tools to inform nitrogen decisions by visualizing the availability of nitrogen to crops. This enables growers to maximize yield or return on investment through optimized nitrogen application during the season. Example programmed functions include displaying images such as SSURGO images to enable drawing of fertilizer application zones and/or images generated from subfield soil data, such as data obtained from sensors, at a high spatial resolution (as fine as millimeters or smaller depending on sensor proximity and resolution); upload of existing grower-defined zones; providing a graph of plant nutrient availability and/or a map to enable tuning application(s) of nitrogen across multiple zones; output of scripts to drive machinery; tools for mass data entry and adjustment; and/or maps for data visualization, among others. “Mass data entry,” in this context, may mean entering data once and then applying the same data to multiple fields and/or zones that have been defined in the system; example data may include nitrogen application data that is the same for many fields and/or zones of the same grower, but such mass data entry applies to the entry of any type of field data into the mobile computer application200. 
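To make the "mass data entry" concept concrete, the sketch below applies a single user-entered application record to every zone of every selected field. The record fields and function names are illustrative assumptions, not the application's actual internal representation.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ApplicationRecord:
    """One user-entered application record (illustrative fields only)."""
    product: str           # e.g. "urea 46-0-0"
    rate_lbs_per_acre: float
    application_date: str  # ISO date, e.g. "2016-04-15"

def mass_apply(record: ApplicationRecord,
               zones_by_field: Dict[str, List[str]]) -> Dict[str, Dict[str, ApplicationRecord]]:
    """Apply the same record to every zone of every selected field."""
    applied = {}
    for field_id, zones in zones_by_field.items():
        applied[field_id] = {zone_id: record for zone_id in zones}
    return applied

# Example: one nitrogen entry applied across two fields and their zones.
entry = ApplicationRecord("urea 46-0-0", 150.0, "2016-04-15")
result = mass_apply(entry, {"field-A": ["zone-1", "zone-2"], "field-B": ["zone-1"]})
```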
For example, nitrogen instructions210may be programmed to accept definitions of nitrogen application and practices programs and to accept user input specifying to apply those programs across multiple fields. “Nitrogen application programs,” in this context, refer to stored, named sets of data that associate: a name, color code or other identifier, one or more dates of application, types of material or product for each of the dates and amounts, method of application or incorporation such as injected or broadcast, and/or amounts or rates of application for each of the dates, crop or hybrid that is the subject of the application, among others. “Nitrogen practices programs,” in this context, refer to stored, named sets of data that associate: a practices name; a previous crop; a tillage system; a date of primary tillage; one or more previous tillage systems that were used; one or more indicators of application type, such as manure, that were used. Nitrogen instructions210also may be programmed to generate and cause displaying a nitrogen graph, which indicates projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted; in some embodiments, different color indicators may signal a magnitude of surplus or magnitude of shortfall. In one embodiment, a nitrogen graph comprises a graphical display in a computer display device comprising a plurality of rows, each row associated with and identifying a field; data specifying what crop is planted in the field, the field size, the field location, and a graphic representation of the field perimeter; in each row, a timeline by month with graphic indicators specifying each nitrogen application and amount at points correlated to month names; and numeric and/or colored indicators of surplus or shortfall, in which color indicates magnitude. In one embodiment, the nitrogen graph may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen graph. The user may then use his optimized nitrogen graph and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts. Nitrogen instructions210also may be programmed to generate and cause displaying a nitrogen map, which indicates projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted; in some embodiments, different color indicators may signal a magnitude of surplus or magnitude of shortfall. The nitrogen map may display projections of plant use of the specified nitrogen and whether a surplus or shortfall is predicted for different times in the past and the future (such as daily, weekly, monthly or yearly) using numeric and/or colored indicators of surplus or shortfall, in which color indicates magnitude. In one embodiment, the nitrogen map may include one or more user input features, such as dials or slider bars, to dynamically change the nitrogen planting and practices programs so that a user may optimize his nitrogen map, such as to obtain a preferred amount of surplus to shortfall. The user may then use his optimized nitrogen map and the related nitrogen planting and practices programs to implement one or more scripts, including variable rate (VR) fertility scripts.
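The named data sets and the surplus/shortfall projection described above could be represented, in a highly simplified form, as follows. The attribute names and the naive balance calculation are assumptions made for illustration; they are not the actual stored schema or agronomic model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NitrogenApplication:
    date: str                  # date of application
    product: str               # type of material or product
    rate_lbs_per_acre: float   # amount or rate of application
    method: str                # e.g. "injected" or "broadcast"

@dataclass
class NitrogenApplicationProgram:
    name: str
    color_code: str
    crop: str                  # crop or hybrid that is the subject of the application
    applications: List[NitrogenApplication] = field(default_factory=list)

def projected_balance(program: NitrogenApplicationProgram,
                      projected_crop_use_lbs_per_acre: float) -> float:
    """Very rough surplus (+) or shortfall (-) versus projected plant use."""
    applied = sum(a.rate_lbs_per_acre for a in program.applications)
    return applied - projected_crop_use_lbs_per_acre
```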
In other embodiments, similar instructions to the nitrogen instructions210could be used for application of other nutrients (such as phosphorus and potassium), application of pesticide, and irrigation programs. In one embodiment, weather instructions212are programmed to provide field-specific recent weather data and forecasted weather information. This enables growers to save time and have an efficient integrated display with respect to daily operational decisions. In one embodiment, field health instructions214are programmed to provide timely remote sensing images highlighting in-season crop variation and potential concerns. Example programmed functions include cloud checking, to identify possible clouds or cloud shadows; determining nitrogen indices based on field images; graphical visualization of scouting layers, including, for example, those related to field health, and viewing and/or sharing of scouting notes; and/or downloading satellite images from multiple sources and prioritizing the images for the grower, among others. In one embodiment, performance instructions216are programmed to provide reports, analysis, and insight tools using on-farm data for evaluation, insights and decisions. This enables the grower to seek improved outcomes for the next year through fact-based conclusions about why return on investment was at prior levels, and insight into yield-limiting factors. The performance instructions216may be programmed to communicate via the network(s)109to back-end analytics programs executed at agricultural intelligence computer system130and/or external data server computer108and configured to analyze metrics such as yield, yield differential, hybrid, population, SSURGO zone, soil test properties, or elevation, among others. Programmed reports and analysis may include yield variability analysis, treatment effect estimation, benchmarking of yield and other metrics against other growers based on anonymized data collected from many growers, or data for seeds and planting, among others. Applications having instructions configured in this way may be implemented for different computing device platforms while retaining the same general user interface appearance. For example, the mobile application may be programmed for execution on tablets, smartphones, or server computers that are accessed using browsers at client computers. Further, the mobile application as configured for tablet computers or smartphones may provide a full app experience or a cab app experience that is suitable for the display and processing capabilities of cab computer115. For example, referring now to view (b) ofFIG.2, in one embodiment a cab computer application220may comprise maps-cab instructions222, remote view instructions224, data collect and transfer instructions226, machine alerts instructions228, script transfer instructions230, and scouting-cab instructions232. The code base for the instructions of view (b) may be the same as for view (a) and executables implementing the code may be programmed to detect the type of platform on which they are executing and to expose, through a graphical user interface, only those functions that are appropriate to a cab platform or full platform. This approach enables the system to recognize the distinctly different user experience that is appropriate for an in-cab environment and the different technology environment of the cab. The maps-cab instructions222may be programmed to provide map views of fields, farms or regions that are useful in directing machine operation. 
The remote view instructions224may be programmed to turn on, manage, and provide views of machine activity in real-time or near real-time to other computing devices connected to the system130via wireless networks, wired connectors or adapters, and the like. The data collect and transfer instructions226may be programmed to turn on, manage, and provide transfer of data collected at sensors and controllers to the system130via wireless networks, wired connectors or adapters, and the like. The machine alerts instructions228may be programmed to detect issues with operations of the machine or tools that are associated with the cab and generate operator alerts. The script transfer instructions230may be configured to transfer in scripts of instructions that are configured to direct machine operations or the collection of data. The scouting-cab instructions232may be programmed to display location-based alerts and information received from the system130based on the location of the field manager computing device104, agricultural apparatus111, or sensors112in the field and ingest, manage, and provide transfer of location-based scouting observations to the system130based on the location of the agricultural apparatus111or sensors112in the field. 2.3. Data Ingest to the Computer System In an embodiment, external data server computer108stores external data110, including soil data representing soil composition for the one or more fields and weather data representing temperature and precipitation on the one or more fields. The weather data may include past and present weather data as well as forecasts for future weather data. In an embodiment, external data server computer108comprises a plurality of servers hosted by different entities. For example, a first server may contain soil composition data while a second server may include weather data. Additionally, soil composition data may be stored in multiple servers. For example, one server may store data representing percentage of sand, silt, and clay in the soil while a second server may store data representing percentage of organic matter (OM) in the soil. In an embodiment, remote sensor112comprises one or more sensors that are programmed or configured to produce one or more observations. Remote sensor112may be aerial sensors, such as satellites, vehicle sensors, planting equipment sensors, tillage sensors, fertilizer or insecticide application sensors, harvester sensors, and any other implement capable of receiving data from the one or more fields. In an embodiment, application controller114is programmed or configured to receive instructions from agricultural intelligence computer system130. Application controller114may also be programmed or configured to control an operating parameter of an agricultural vehicle or implement. For example, an application controller may be programmed or configured to control an operating parameter of a vehicle, such as a tractor, planting equipment, tillage equipment, fertilizer or insecticide equipment, harvester equipment, or other farm implements such as a water valve. Other embodiments may use any combination of sensors and controllers, of which the following are merely selected examples. The system130may obtain or ingest data under user102control, on a mass basis from a large number of growers who have contributed data to a shared database system. 
This form of obtaining data may be termed “manual data ingest” as one or more user-controlled computer operations are requested or triggered to obtain data for use by the system130. As an example, the CLIMATE FIELDVIEW application, commercially available from The Climate Corporation, San Francisco, California, may be operated to export data to system130for storing in the repository160. For example, seed monitor systems can both control planter apparatus components and obtain planting data, including signals from seed sensors via a signal harness that comprises a CAN backbone and point-to-point connections for registration and/or diagnostics. Seed monitor systems can be programmed or configured to display seed spacing, population and other information to the user via the cab computer115or other devices within the system130. Examples are disclosed in U.S. Pat. No. 8,738,243 and US Pat. Pub. 20150094916, and the present disclosure assumes knowledge of those other patent disclosures. Likewise, yield monitor systems may contain yield sensors for harvester apparatus that send yield measurement data to the cab computer115or other devices within the system130. Yield monitor systems may utilize one or more remote sensors112to obtain grain moisture measurements in a combine or other harvester and transmit these measurements to the user via the cab computer115or other devices within the system130. In an embodiment, examples of sensors112that may be used with any moving vehicle or apparatus of the type described elsewhere herein include kinematic sensors and position sensors. Kinematic sensors may comprise any of speed sensors such as radar or wheel speed sensors, accelerometers, or gyros. Position sensors may comprise GPS receivers or transceivers, or WiFi-based position or mapping apps that are programmed to determine location based upon nearby WiFi hotspots, among others. In an embodiment, examples of sensors112that may be used with tractors or other moving vehicles include engine speed sensors, fuel consumption sensors, area counters or distance counters that interact with GPS or radar signals, PTO (power take-off) speed sensors, tractor hydraulics sensors configured to detect hydraulics parameters such as pressure or flow and/or hydraulic pump speed, and wheel speed sensors or wheel slippage sensors. In an embodiment, examples of controllers114that may be used with tractors include hydraulic directional controllers, pressure controllers, and/or flow controllers; hydraulic pump speed controllers; speed controllers or governors; hitch position controllers; or wheel position controllers that provide automatic steering. In an embodiment, examples of sensors112that may be used with seed planting equipment such as planters, drills, or air seeders include seed sensors, which may be optical, electromagnetic, or impact sensors; downforce sensors such as load pins, load cells, pressure sensors; soil property sensors such as reflectivity sensors, moisture sensors, electrical conductivity sensors, optical residue sensors, or temperature sensors; component operating criteria sensors such as planting depth sensors, downforce cylinder pressure sensors, seed disc speed sensors, seed drive motor encoders, seed conveyor system speed sensors, or vacuum level sensors; or pesticide application sensors such as optical or other electromagnetic sensors, or impact sensors.
In an embodiment, examples of controllers114that may be used with such seed planting equipment include: toolbar fold controllers, such as controllers for valves associated with hydraulic cylinders; downforce controllers, such as controllers for valves associated with pneumatic cylinders, airbags, or hydraulic cylinders, and programmed for applying downforce to individual row units or an entire planter frame; planting depth controllers, such as linear actuators; metering controllers, such as electric seed meter drive motors, hydraulic seed meter drive motors, or swath control clutches; hybrid selection controllers, such as seed meter drive motors, or other actuators programmed for selectively allowing or preventing seed or an air-seed mixture from delivering seed to or from seed meters or central bulk hoppers; metering controllers, such as electric seed meter drive motors, or hydraulic seed meter drive motors; seed conveyor system controllers, such as controllers for a belt seed delivery conveyor motor; marker controllers, such as a controller for a pneumatic or hydraulic actuator; or pesticide application rate controllers, such as metering drive controllers, orifice size or position controllers. In an embodiment, examples of sensors112that may be used with tillage equipment include position sensors for tools such as shanks or discs; tool position sensors for such tools that are configured to detect depth, gang angle, or lateral spacing; downforce sensors; or draft force sensors. In an embodiment, examples of controllers114that may be used with tillage equipment include downforce controllers or tool position controllers, such as controllers configured to control tool depth, gang angle, or lateral spacing. In an embodiment, examples of sensors112that may be used in relation to apparatus for applying fertilizer, insecticide, fungicide and the like, such as on-planter starter fertilizer systems, subsoil fertilizer applicators, or fertilizer sprayers, include: fluid system criteria sensors, such as flow sensors or pressure sensors; sensors indicating which spray head valves or fluid line valves are open; sensors associated with tanks, such as fill level sensors; sectional or system-wide supply line sensors, or row-specific supply line sensors; or kinematic sensors such as accelerometers disposed on sprayer booms. In an embodiment, examples of controllers114that may be used with such apparatus include pump speed controllers; valve controllers that are programmed to control pressure, flow, direction, PWM and the like; or position actuators, such as for boom height, subsoiler depth, or boom position. In an embodiment, examples of sensors112that may be used with harvesters include yield monitors, such as impact plate strain gauges or position sensors, capacitive flow sensors, load sensors, weight sensors, or torque sensors associated with elevators or augers, or optical or other electromagnetic grain height sensors; grain moisture sensors, such as capacitive sensors; grain loss sensors, including impact, optical, or capacitive sensors; header operating criteria sensors such as header height, header type, deck plate gap, feeder speed, and reel speed sensors; separator operating criteria sensors, such as concave clearance, rotor speed, shoe clearance, or chaffer clearance sensors; auger sensors for position, operation, or speed; or engine speed sensors. 
In an embodiment, examples of controllers114that may be used with harvesters include header operating criteria controllers for elements such as header height, header type, deck plate gap, feeder speed, or reel speed; separator operating criteria controllers for features such as concave clearance, rotor speed, shoe clearance, or chaffer clearance; or controllers for auger position, operation, or speed. In an embodiment, examples of sensors112that may be used with grain carts include weight sensors, or sensors for auger position, operation, or speed. In an embodiment, examples of controllers114that may be used with grain carts include controllers for auger position, operation, or speed. In an embodiment, examples of sensors112and controllers114may be installed in unmanned aerial vehicle (UAV) apparatus or “drones.” Such sensors may include cameras with detectors effective for any range of the electromagnetic spectrum including visible light, infrared, ultraviolet, near-infrared (NIR), and the like; accelerometers; altimeters; temperature sensors; humidity sensors; pitot tube sensors or other airspeed or wind velocity sensors; battery life sensors; or radar emitters and reflected radar energy detection apparatus; other electromagnetic radiation emitters and reflected electromagnetic radiation detection apparatus. Such controllers may include guidance or motor control apparatus, control surface controllers, camera controllers, or controllers programmed to turn on, operate, obtain data from, manage and configure any of the foregoing sensors. Examples are disclosed in U.S. patent application Ser. No. 14/831,165 and the present disclosure assumes knowledge of that other patent disclosure. In an embodiment, sensors112and controllers114may be affixed to soil sampling and measurement apparatus that is configured or programmed to sample soil and perform soil chemistry tests, soil moisture tests, and other tests pertaining to soil. For example, the apparatus disclosed in U.S. Pat. Nos. 8,767,194 and 8,712,148 may be used, and the present disclosure assumes knowledge of those patent disclosures. In an embodiment, sensors112and controllers114may comprise weather devices for monitoring weather conditions of fields. For example, the apparatus disclosed in U.S. Provisional Application No. 62/154,207, filed on Apr. 29, 2015, U.S. Provisional Application No. 62/175,160, filed on Jun. 12, 2015, U.S. Provisional Application No. 62/198,060, filed on Jul. 28, 2015, and U.S. Provisional Application No. 62/220,852, filed on Sep. 18, 2015, may be used, and the present disclosure assumes knowledge of those patent disclosures. 2.4. Process Overview-Agronomic Model Training In an embodiment, the agricultural intelligence computer system130is programmed or configured to create an agronomic model. In this context, an agronomic model is a data structure in memory of the agricultural intelligence computer system130that comprises field data106, such as identification data and harvest data for one or more fields. The agronomic model may also comprise calculated agronomic properties which describe either conditions which may affect the growth of one or more crops on a field, or properties of the one or more crops, or both. 
Additionally, an agronomic model may comprise recommendations based on agronomic factors such as crop recommendations, irrigation recommendations, planting recommendations, fertilizer recommendations, fungicide recommendations, pesticide recommendations, harvesting recommendations and other crop management recommendations. The agronomic factors may also be used to estimate one or more crop related results, such as agronomic yield. The agronomic yield of a crop is an estimate of quantity of the crop that is produced, or in some examples the revenue or profit obtained from the produced crop. In an embodiment, the agricultural intelligence computer system130may use a preconfigured agronomic model to calculate agronomic properties related to currently received location and crop information for one or more fields. The preconfigured agronomic model is based upon previously processed field data, including but not limited to, identification data, harvest data, fertilizer data, and weather data. The preconfigured agronomic model may have been cross validated to ensure accuracy of the model. Cross validation may include comparison to ground truthing that compares predicted results with actual results on a field, such as a comparison of precipitation estimate with a rain gauge or sensor providing weather data at the same or nearby location or an estimate of nitrogen content with a soil sample measurement. FIG.3illustrates a programmed process by which the agricultural intelligence computer system generates one or more preconfigured agronomic models using field data provided by one or more data sources.FIG.3may serve as an algorithm or instructions for programming the functional elements of the agricultural intelligence computer system130to perform the operations that are now described. At block305, the agricultural intelligence computer system130is configured or programmed to implement agronomic data preprocessing of field data received from one or more data sources. The field data received from one or more data sources may be preprocessed for the purpose of removing noise, distorting effects, and confounding factors within the agronomic data including measured outliers that could adversely affect received field data values. Embodiments of agronomic data preprocessing may include, but are not limited to, removing data values commonly associated with outlier data values, specific measured data points that are known to unnecessarily skew other data values, data smoothing, aggregation, or sampling techniques used to remove or reduce additive or multiplicative effects from noise, and other filtering or data derivation techniques used to provide clear distinctions between positive and negative data inputs. At block310, the agricultural intelligence computer system130is configured or programmed to perform data subset selection using the preprocessed field data in order to identify datasets useful for initial agronomic model generation. The agricultural intelligence computer system130may implement data subset selection techniques including, but not limited to, a genetic algorithm method, an all subset models' method, a sequential search method, a stepwise regression method, a particle swarm optimization method, and an ant colony optimization method. For example, a genetic algorithm selection technique uses an adaptive heuristic search algorithm, based on evolutionary principles of natural selection and genetics, to determine and evaluate datasets within the preprocessed agronomic data. 
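As one deliberately simple take on the agronomic data preprocessing at block305, the sketch below removes outliers with an interquartile-range rule and applies a moving-average smoother. The specific rule, window, and data layout are illustrative assumptions; real embodiments may use different filtering or data derivation techniques.

```python
from statistics import quantiles
from typing import List

def remove_outliers(values: List[float], k: float = 1.5) -> List[float]:
    """Drop values outside [Q1 - k*IQR, Q3 + k*IQR], a common outlier rule."""
    q1, _, q3 = quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if low <= v <= high]

def moving_average(values: List[float], window: int = 3) -> List[float]:
    """Simple smoothing to reduce additive noise."""
    smoothed = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw_yields = [180.0, 176.0, 9.0, 182.0, 179.0, 410.0, 181.0]  # bushels/acre
clean = moving_average(remove_outliers(raw_yields))
```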
At block315, the agricultural intelligence computer system130is configured or programmed to implement field dataset evaluation. In an embodiment, a specific field dataset is evaluated by creating an agronomic model and using specific quality thresholds for the created agronomic model. Agronomic models may be compared and/or validated using one or more comparison techniques, such as, but not limited to, root mean square error with leave-one-out cross validation (RMSECV), mean absolute error, and mean percentage error. For example, RMSECV can cross validate agronomic models by comparing predicted agronomic property values created by the agronomic model against historical agronomic property values collected and analyzed. In an embodiment, the agronomic dataset evaluation logic is used as a feedback loop where agronomic datasets that do not meet configured quality thresholds are used during future data subset selection steps (block310). At block320, the agricultural intelligence computer system130is configured or programmed to implement agronomic model creation based upon the cross validated agronomic datasets. In an embodiment, agronomic model creation may implement multivariate regression techniques to create preconfigured agronomic data models. At block325, the agricultural intelligence computer system130is configured or programmed to store the preconfigured agronomic data models for future field data evaluation. 2.5. Hybrid Seed Classification Subsystem In an embodiment, the agricultural intelligence computer system130, among other components, includes the hybrid seed classification subsystem170. The hybrid seed classification subsystem170is configured to generate a target success yield group of hybrid seeds specifically identified for optimal performance on target fields. As used herein the term “optimal” and related terms (e.g., “optimizing”, “optimization”, etc.) are broad terms that refer to the “best or most effective” with respect to any outcome, system, data etc. (“universal optimization”) as well as improvements that are “better or more effective (“relative optimization”). The target success yield group includes a subset of one or more hybrid seeds, an estimated yield forecast for each hybrid seed, and a probability of success of exceeding the average estimated yield forecast for similarly classified hybrid seeds. In an embodiment, identifying hybrid seeds that will optimally perform on target fields is based on input received by the agricultural intelligence computer system130including, but not limited to, agricultural data records for multiple different hybrid seeds and geo-location data related to the fields where the agricultural data records were collected. For example, if agricultural data records are received for one-hundred hybrid seeds, then the agricultural data records would include growth and yield data for the one-hundred hybrid seeds and geo-location data about the fields where the one-hundred hybrid seeds were planted. In an embodiment, the agricultural intelligence computer system130also receives geo-location and agricultural data for a second set of fields. The second set of fields are the target fields where the grower intends to plant selected hybrid seeds. Information about the target fields are particularly relevant for matching specific hybrid seeds to the environment of the target fields. 
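A minimal sketch of how the agricultural data records and target field information received by the hybrid seed classification subsystem170 might be organized in memory follows; the attribute names are assumptions made for illustration only.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class HarvestObservation:
    field_id: str
    geolocation: Tuple[float, float]   # (latitude, longitude)
    year: int
    yield_bu_per_acre: float
    relative_maturity_days: int

@dataclass
class AgriculturalDataRecord:
    hybrid_seed: str                   # e.g. "corn hybrid-001"
    observations: List[HarvestObservation]

@dataclass
class TargetField:
    field_id: str
    geolocation: Tuple[float, float]
    relative_maturity_days: int        # assigned from the field's region
```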
The hybrid seed normalization instructions172provide instructions to generate a dataset of hybrid seed properties that describe representative yield values and environmental classifications describing preferred environmental conditions for each of the hybrid seeds received by the agricultural intelligence computer system130. The probability of success generation instructions174provide instructions to generate a dataset of success probability scores associated with each of the hybrid seeds. The success probability scores describe the probability of a successful yield on the target fields. The yield classification instructions176provide instructions to generate a target success yield group of hybrid seeds that have been identified for optimal performance on target fields based on the success probability scores associated with each of the hybrid seeds. In an embodiment, the agricultural intelligence computer system130is configured to present, via the presentation layer134, the target success yield group of selected hybrid seeds and their normalized yield values and success probability scores. Hybrid seed classification subsystem170and related instructions are additionally described elsewhere herein. 2.6. Hybrid Seed Recommendation Subsystem In an embodiment, the agricultural intelligence computer system130, among other components, includes the hybrid seed recommendation subsystem180. The hybrid seed recommendation subsystem180is configured to generate a set of target hybrid seeds specifically selected for optimal performance on target fields with minimized risk. The set of target hybrid seeds includes a subset of one or more hybrid seeds that have estimated yield forecasts above a specific yield threshold and have an associated risk value that is below a specific risk target. In an embodiment, identifying a set of target hybrid seeds that will optimally perform on target fields is based on an input set of hybrid seeds that have been identified as having a specific probability of producing a successful yield on the target fields. The agricultural intelligence computer system130may be configured to receive a set of hybrid seeds as part of a target success yield group generated by the hybrid seed classification subsystem170. The target success yield group may also include agricultural data specifying the probability of success for each hybrid seed and other agricultural data such as yield value, relative maturity, and environmental observations from previously observed harvests. In an embodiment, the agricultural intelligence computer system130also receives geo-location and agricultural data for a set of target fields. The “target fields” are fields where the grower is considering or intends to plant target hybrid seeds. The hybrid seed filtering instructions182provide instructions to filter and identify a subset of hybrid seeds that have a probability of success value that is above a specified success yield threshold. The risk generation instructions184provide instructions to generate a dataset of risk values associated with each of the hybrid seeds. The risk values describe the amount of risk associated with each hybrid seed with respect to the estimated yield value for each hybrid seed. The optimization classification instructions186provide instructions to generate a dataset of target hybrid seeds that have average yield values above a target threshold for a range of risk values from the dataset of risk values.
In an embodiment, the agricultural intelligence computer system130is configured to present, via the presentation layer134, the set of target hybrid seeds and including their average yield values. Hybrid seed recommendation subsystem180and related instructions are additionally described elsewhere herein. 2.7. Hybrid Seed Supply Subsystem In an embodiment, the agricultural intelligence computer system130, among other components, includes the hybrid seed supply subsystem190. The hybrid seed supply subsystem190is configured to estimate actual demands and determine appropriate supply to meet the actual demands despite various uncertainties and risks. In some embodiments, the hybrid seed supply subsystem190is programmed or configured to receive prescriptions of hybrid seed placements from the hybrid seed recommendation subsystem180. The demand management instructions192provide instructions to estimate aggregate demand in a certain period for each hybrid seed with respect to a supply chain. Such estimation may be based on the prescriptions of hybrid seed placements. The safety stock optimization instructions194provide instructions to determine optimal safety stock for one or more production sites to meet the aggregate demand considering possible risks. In an embodiment, the agricultural intelligence computer system130is configured to present, via the presentation layer134, the values of optimal safety stock to other devices associated with the production of hybrid seeds. Hybrid seed supply subsystem190and related instructions are additionally described elsewhere herein. 2.8 Implementation Example-Hardware Overview According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, portable computer systems, handheld devices, networking devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. For example,FIG.4is a block diagram that illustrates a computer system400upon which an embodiment of the invention may be implemented. Computer system400includes a bus402or other communication mechanism for communicating information, and a hardware processor404coupled with bus402for processing information. Hardware processor404may be, for example, a general purpose microprocessor. Computer system400also includes a main memory406, such as a random access memory (RAM) or other dynamic storage device, coupled to bus402for storing information and instructions to be executed by processor404. Main memory406also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor404. 
Such instructions, when stored in non-transitory storage media accessible to processor404, render computer system400into a special-purpose machine that is customized to perform the operations specified in the instructions. Computer system400further includes a read only memory (ROM)408or other static storage device coupled to bus402for storing static information and instructions for processor404. A storage device410, such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to bus402for storing information and instructions. Computer system400may be coupled via bus402to a display412, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device414, including alphanumeric and other keys, is coupled to bus402for communicating information and command selections to processor404. Another type of user input device is cursor control416, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor404and for controlling cursor movement on display412. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. Computer system400may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system400to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system400in response to processor404executing one or more sequences of one or more instructions contained in main memory406. Such instructions may be read into main memory406from another storage medium, such as storage device410. Execution of the sequences of instructions contained in main memory406causes processor404to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as storage device410. Volatile media includes dynamic memory, such as main memory406. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus402. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications. Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor404for execution. 
For example, the instructions may initially be carried on a magnetic disk or solid-state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system400can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infrared signal and appropriate circuitry can place the data on bus402. Bus402carries the data to main memory406, from which processor404retrieves and executes the instructions. The instructions received by main memory406may optionally be stored on storage device410either before or after execution by processor404. Computer system400also includes a communication interface418coupled to bus402. Communication interface418provides a two-way data communication coupling to a network link420that is connected to a local network422. For example, communication interface418may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface418may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface418sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. Network link420typically provides data communication through one or more networks to other data devices. For example, network link420may provide a connection through local network422to a host computer424or to data equipment operated by an Internet Service Provider (ISP)426. ISP426in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet”428. Local network422and Internet428both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link420and through communication interface418, which carry the digital data to and from computer system400, are example forms of transmission media. Computer system400can send messages and receive data, including program code, through the network(s), network link420and communication interface418. In the Internet example, a server430might transmit a requested code for an application program through Internet428, ISP426, local network422and communication interface418. The received code may be executed by processor404as it is received, and/or stored in storage device410, or other non-volatile storage for later execution. 3. Functional Overview—Generate and Display Target Success Yield Group of Hybrid Seeds FIG.7depicts a detailed example of generating a target success yield group of hybrid seeds identified for optimal yield performance on target fields based on agricultural data records of the hybrid seeds and geo-location data associated with the target fields. 3.1. Data Input At step705, the agricultural intelligence computer system130receives agricultural data records from one or more fields for multiple different hybrid seeds. In an embodiment, the agricultural data records may include crop seed data for one or more hybrid seeds. 
Crop seed data can include historical agricultural data related to the planting, growing, and harvesting of specific hybrid seeds on one or more fields. Examples of crop seed data may include, but are not limited to, historical yield values, harvest time information, relative maturity of a hybrid seed, and any other observation data about the plant life cycle. For example, the agricultural data records may include hybrid seed data for two hundred (or more) different types of available corn hybrids. The crop seed data associated with each of the corn hybrids would include historical yield values associated with observed harvests, harvest time information relative to planting, and observed relative maturity for each of the corn hybrids on each of the observed fields. For instance, corn hybrid-001 may have agricultural data records that include historical yield data collected from twenty (or more) different fields over the past ten (or more) years. In an embodiment, the agricultural data records may include field specific data related to the fields where the crop seed data was observed. For example, field specific data may include, but is not limited to, geo-location information, observed relative maturity based on field geo-location, historical weather index data, observed soil properties, observed soil moisture and water levels, and any other environmental observations that may be specific to the fields where historical crop seed data is collected. Field specific data may be used to further quantify and classify crop seed data as it relates to each of the hybrid seeds. For example, different fields in different geo-locations may be better suited for different hybrid seeds based on relative maturity of the hybrid seeds and the length of the growing season. Fields within specific regions and sub-regions may have an assigned relative maturity for the growing season that is based on the climate associated with the specific geo-location and the amount of growing degree days (GDDs) available during the growing season. FIG.8depicts an example of different regions within a state that have different assigned relative maturity based on the growing season durations. State805is the state of Illinois and is divided into multiple different regions and sub-regions. Examples of sub-regions may include areas based on county, city, or town boundaries. Each of regions810,815,820,825, and830represent geo-location specific regions that have different growing season durations. For example, region810represents a region of fields that, based upon their geo-locations and the associated climate, have a shorter growing season because of cooler climates. As a result, region810may be classified as fields that are suited for hybrid seeds with a relative maturity of 100 days (shown as a legend of shades and respective GDD inFIG.8). Region815is located south of region810and as a result may have warmer overall climates. Fields in region815may be classified as fields suited for hybrid seeds with a relative maturity of 105 days. Similarly, regions820,825, and830are located further south than regions810and815, and as a result are classified with relative maturity classifications of 110, 115, and 120 days, respectively. Relative maturity classifications for different regions may be used with historical yield data for hybrid seeds to assess how well hybrid seeds perform on fields based on rated relative maturities. In an embodiment, specific field data within the agricultural data records may also include crop rotation data.
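The growing degree days referenced in the regional relative maturity classifications above are commonly computed by averaging the daily high and low temperatures and subtracting a base temperature. The sketch below uses the widely used corn convention of a 50 °F base with highs and lows capped at 86 °F and floored at 50 °F; this is background agronomy offered for illustration, not a computation recited by this disclosure.

```python
def daily_gdd(t_max_f: float, t_min_f: float,
              base_f: float = 50.0, cap_f: float = 86.0) -> float:
    """Modified growing degree days for one day (common corn convention)."""
    t_max = min(max(t_max_f, base_f), cap_f)
    t_min = min(max(t_min_f, base_f), cap_f)
    return (t_max + t_min) / 2.0 - base_f

def season_gdd(daily_highs_lows):
    """Accumulate GDDs over a growing season from (high, low) pairs."""
    return sum(daily_gdd(hi, lo) for hi, lo in daily_highs_lows)

# Example: three warm days accumulate about 65 GDDs.
print(season_gdd([(85, 60), (90, 65), (80, 55)]))
```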
Soil nutrient management for fields may depend on factors such as establishing diverse crop rotations and managing the amount of tillage of the soil. For example, some historical observations have shown that a “rotation effect” of rotating between different crops on a field may increase crop yield by 5 to 15% over planting the same crop year over year. As a result, crop rotation data within the agricultural data records may be used to help determine a more accurate yield estimation. In an embodiment, specific field data may include tillage data and management practices used during the crop season. Tillage data and management practices refer to the manner and schedule of tillage performed on a particular field. Soil quality and the amount of useful nutrients in the soil varies based upon the amount of topsoil. Soil erosion refers to the removal of topsoil, which is the richest layer of soil in both organic matter and nutrient value. One such practice that causes soil erosion is tillage. Tillage breaks down soil aggregates and increases soil aeration, which may accelerate organic matter decomposition. Therefore, tracking tillage management practices may account for understanding the amount of soil erosion that occurs which may affect the overall yield of planted crop. In an embodiment, the agricultural data records include historical crop seed data and field specific data from a set of test fields used to determine hybrid seed properties by manufacturers. For example, Monsanto Corporation produces several commercial hybrid seeds and tests their crop growth on multiple test fields. Monsanto Corp.'s test fields may serve as an example of a set of test fields where agricultural data records are collected and received by the agricultural intelligence computer system130. In another embodiment, the agricultural data records may include historical crop seed data and field specific data from sets of fields owned and operated by individual growers. These sets of fields where agricultural data records are collected may also be the same fields designated as target fields for planting newly selected crops. In yet other embodiments, sets of fields owned and operated by a grower may provide agricultural data records used by other growers when determining the target success yield group of hybrid seeds. Referring back toFIG.7, at step710, the agricultural intelligence computer system130receives geo-location information for one or more target fields. Target fields represent the fields where the grower is considering planting or planning to plant the set of hybrid seeds selected from the target success yield group. In an embodiment, the geo-location information for the one or more target fields may be used in conjunction with the agricultural data records of specific fields to determine which hybrid seeds, based on relative maturity and climate are best suited for the target fields. 3.2. Agricultural Data Processing At step715, the hybrid seed normalization instructions172provide instruction to generate a dataset of hybrid seed properties that describe representative yield values and environmental classifications for each hybrid seed received as part of the agricultural data records. In an embodiment, the agricultural data records associated with hybrid seeds are used to calculate a representative yield value and an environmental classification for each of the hybrid seeds. 
The representative yield value is an expected yield value for a specific hybrid seed if planted in a field based on the historical yield values and other agricultural data observed from past harvests. In an embodiment, the normalized yield value may be calculated by normalizing multiple different yield observations from different fields across different observed growth years. For example, fields where a specific hybrid seed was first planted may be used to calculate an average first-year growth cycle yield for a specific hybrid seed. The average first-year growth cycle yield for the specific hybrid seed may include combining observed yield values from different fields over different years. For instance, the specific hybrid seed may have been planted on fields tested during the product stage of Monsanto's commercial product cycle (PS3, PS4, MD1, and MD2) over a time span of 2009 through 2015. However, the first cycle of the specific hybrid seed may have been planted on each of the fields on different years. The following table illustrates one such example:

            2009   2010   2011   2012   2013   2014   2015
Cycle 1     PS3    PS4    MD1    MD2
Cycle 2            PS3    PS4    MD1    MD2
Cycle 3                   PS3    PS4    MD1    MD2
Cycle 4                          PS3    PS4    MD1    MD2

The columns of the table represent harvest years and the rows of the table represent Monsanto commercial product development cycles, where cycle 1 represents the first 4 years in which the hybrid seed was planted on various fields and cycle 2 represents the second cycle of 4 years for another set of hybrid seeds planted on the same field environments, and so on. In an embodiment, calculating normalized yield values may be based on similar cycles for the hybrid seed planted at the multiple fields. For instance, the normalized yield value for cycle 1 may be calculated as an average of the yield values observed on fields PS3 (2009), PS4 (2010), MD1 (2011), and MD2 (2012). By doing so, yield values may be averaged based upon the common feature of how many growth cycles have occurred on the particular fields. In other embodiments, calculating normalized yield values may be based on other agricultural properties from the agricultural data records such as same year or same region/field. In an embodiment, the environmental classification for each of the hybrid seeds may be calculated using a relative maturity field property associated with the agricultural data records of the hybrid seeds. For example, the specific hybrid seed may have been planted across several fields within region820. Each of the fields within region820are classified as having an observed growth season that aligns with the relative maturity of 110 days. Therefore, based on the fields associated with the specific hybrid seed, the environmental classification for the specific hybrid seed may be assigned a relative maturity that equals that of the region820, which is 110 days. In other embodiments, if the fields associated with historical observations of the specific hybrid seed contain fields classified within multiple regions, then the environmental classification may be calculated as an average of the different assigned relative maturity values. In an embodiment, the dataset of hybrid seed properties contains normalized yield values for each hybrid seed and an environmental classification that describes the relative maturity value associated with the normalized yield value.
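A minimal sketch of the cycle-based normalization described above: yield observations from different fields and years that share the same commercial product cycle are averaged into one normalized yield value per cycle. The data layout is an assumption made for illustration.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Each observation: (cycle, field_stage, year, yield in bushels/acre)
Observation = Tuple[int, str, int, float]

def normalized_yield_by_cycle(observations: List[Observation]) -> Dict[int, float]:
    """Average observed yields that belong to the same product cycle."""
    by_cycle: Dict[int, List[float]] = defaultdict(list)
    for cycle, _stage, _year, yield_value in observations:
        by_cycle[cycle].append(yield_value)
    return {cycle: sum(vals) / len(vals) for cycle, vals in by_cycle.items()}

# Cycle 1 of a hypothetical hybrid: PS3 (2009), PS4 (2010), MD1 (2011), MD2 (2012).
cycle_1 = [(1, "PS3", 2009, 172.0), (1, "PS4", 2010, 181.0),
           (1, "MD1", 2011, 169.0), (1, "MD2", 2012, 176.0)]
print(normalized_yield_by_cycle(cycle_1))  # {1: 174.5}
```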
In other embodiments, the dataset of hybrid seed properties may also include properties related to the hybrid seed growth cycle and field properties such as crop rotations, tillage, weather observations, soil composition, and any other agricultural observations. Referring back toFIG.7, at step720the probability of success generation instructions174provide instructions to generate a dataset of success probability scores for each of the hybrid seeds, which describe the probability of achieving a successful yield relative to the average yields of other hybrid seeds with the same relative maturity. In an embodiment, the success probability scores for the hybrid seeds are based upon the dataset of hybrid seed properties with respect to the geo-locations associated with the target fields. For example, relative maturity values associated with the geo-locations of the target fields are used in part to determine the set of hybrid seeds to evaluate against in order to calculate a success probability score for a particular hybrid seed. For instance, corn hybrid-002 may be a hybrid seed with a normalized yield calculated as 7.5 bushels per acre and an assigned relative maturity of 100 GDD. Corn hybrid-002 is then compared against other hybrid seeds that have similar relative maturity in order to determine whether corn hybrid-002 is a good candidate for planting based upon the normalized yield value of corn hybrid-002 and the other hybrid seeds. Machine learning techniques are implemented to determine probability of success scores for the hybrid seeds at the geo-locations associated with the target fields. In an embodiment, the normalized yield values and assigned relative maturity values are used as predictor variables for machine learning models. In other embodiments, additional hybrid seed properties such as crop rotations, tillage, weather observations, and soil composition may also be used as additional predictor variables for the machine learning models. The target variable of the machine learning models is a probabilistic value ranging from 0 to 1, where 0 equals a 0% probability of a successful yield and 1 equals a 100% probability of a successful yield. In other embodiments, the target variable may be a probabilistic value that may be scaled from 0 to 10, 1 to 10, or any other scale of measurement. A successful yield is described as the likelihood that the yield of a specific hybrid seed is a certain value above the mean yield for similarly classified hybrid seeds. For example, a successful yield may be defined as a yield that is 5 bushels per acre above the mean yield of hybrid seeds that have the same assigned relative maturity value. FIG.9depicts a sample graph describing the range of normalized yield values for hybrid seeds within a classified relative maturity. Mean value905represents the calculated mean yield value for hybrid seeds that have the same relative maturity, such as 110 GDD. In an embodiment, hybrid seeds having a normalized yield significantly above the mean value905may be identified by implementing a least significant difference calculation. The least significant difference is a value at a particular level of statistical probability. If the value is exceeded by the difference between two means, then the two means are said to be distinct.
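One hedged way to express this labeling step in code: a hybrid seed's yield is labeled successful when it exceeds the mean yield of its relative maturity group by more than a configured threshold, which here stands in for the least significant difference value. The threshold choice and data layout are illustrative assumptions.

```python
from typing import Dict

def label_successful_yields(yields_by_hybrid: Dict[str, float],
                            threshold_bu_per_acre: float = 5.0) -> Dict[str, int]:
    """Label each hybrid 1 (successful) if its normalized yield exceeds the
    group mean by more than the threshold, else 0."""
    mean_yield = sum(yields_by_hybrid.values()) / len(yields_by_hybrid)
    return {
        hybrid: int(value - mean_yield > threshold_bu_per_acre)
        for hybrid, value in yields_by_hybrid.items()
    }

# Hybrids sharing the same relative maturity classification (e.g. 110 GDD region).
group = {"hybrid-001": 182.0, "hybrid-002": 171.0, "hybrid-003": 176.0, "hybrid-004": 191.0}
print(label_successful_yields(group))  # only hybrid-004 is labeled 1
```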
For example, if the difference between yield values of a hybrid seed and the calculated mean yield exceeds the least significant difference value, then the yield for the hybrid seed is seen as distinct. In other embodiments, determining significant differences between yield values and the mean value905may be determined using any other statistical algorithm. Range910represents a range of yield values that are considered within the least significant difference value, and therefore are not significantly distinct. Threshold915represents the upper limit of the range910. Normalized yield values above threshold915are then considered to be significantly distinct from the mean value905. In an embodiment, range910and threshold915may be configured to represent a threshold for determining which hybrid seed yields are considered to be significantly higher than the mean value905and therefore a successful yield value. For example, threshold915may be configured to equal a value that is 5 bushels per acre above the mean value905. In an embodiment, threshold915may be configured as a yield value that is dependent on the mean value905, range910, and the overall range of yield values for the specific hybrid seeds that have the same relative maturity. Range920represents a range of yield values for hybrid seeds that are considered successful yields. Hybrid seed925represents a specific hybrid seed within the range920that has a normalized yield value above the threshold915. In an embodiment, machine learning models may be configured to use the range910and threshold915when calculating probability of success scores between 0 and 1. Different machine learning models may include, but are not limited to, logistic regression, random forest, support vector machine modelling, and gradient boost modelling. In an embodiment, logistic regression may be implemented as the machine learning technique to determine probability of success scores for each of the hybrid seeds for the target fields. For logistic regression, the input values for each hybrid seed are the normalized yield value and the environmental classification, which is specified as relative maturity. The functional form of the logistic regression is:

P(y = 1 | x1 = yld_i, x2 = RM_j) = e^(a + b*x1 + c*x2) / (1 + e^(a + b*x1 + c*x2)),

where P(y = 1 | x1 = yld_i, x2 = RM_j) is the probability of success (y = 1) for product i with normalized yield value yld_i in target field j with a given relative maturity RM_j, and the constants a, b, and c are the regression coefficients estimated from historical data. The output of the logistic regression is a set of probability scores between 0 and 1 for each hybrid seed specifying success at the target field based upon the relative maturity assigned to the geo-location associated with the target fields. In another embodiment, a random forest algorithm may be implemented as the machine learning technique to determine probability of success scores for each of the hybrid seeds for the target fields. The random forest algorithm is an ensemble machine learning method that operates by constructing multiple decision trees during a training period and then outputs the class that is the mean regression of the individual trees. The input values for each hybrid seed are the normalized yield value and the environmental classification as relative maturity. The output is a set of probability scores for each hybrid seed between 0 and 1.
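The logistic-regression embodiment above can be illustrated with scikit-learn; the library choice, the training values, and the 0/1 labels (derived from the successful-yield definition) are assumptions made only for this sketch, and a random-forest variant can be obtained by swapping in RandomForestClassifier.

```python
# Minimal sketch: fit P(y=1 | normalized yield, relative maturity) and
# score candidate hybrids; the fitted coefficients play the role of the
# regression constants a, b, and c in the functional form above.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: normalized yield (bu/ac), relative maturity (days).
# Label: 1 = yield significantly above the group mean, 0 = otherwise.
X_train = np.array([[182.0, 110], [176.5, 110], [191.2, 105],
                    [187.4, 105], [169.9, 115], [195.3, 110]])
y_train = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression()
model.fit(X_train, y_train)

X_candidates = np.array([[186.5, 110], [171.2, 110]])
prob_success = model.predict_proba(X_candidates)[:, 1]
print(dict(zip(["corn-hyb-002", "corn-hyb-003"], prob_success.round(3))))
```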
In another embodiment, support vector machine (SVM) modelling may be implemented as the machine learning technique to determine probability of success scores for each of the hybrid seeds for the target fields. Support vector machine modelling is a supervised learning model that analyzes input data using classification and regression analysis. Input values for the support vector machine model are the normalized yield values and the environmental classification relative maturity values for each hybrid seed. The output is a set of probability scores for each hybrid seed between 0 and 1. In yet another embodiment, gradient boost (GBM) modelling may be implemented as the machine learning technique, where the input values are the normalized yield values and the environmental classification relative maturity values for each hybrid seed. Gradient boost is a technique for regression and classification problems, which produces a prediction model in the form of an ensemble of weak prediction models, such as decision trees. Referring toFIG.7, at step725the yield classification instructions176generate a target success yield group made up of a subset of the hybrid seeds that have been identified as having a high probability to produce a yield that is significantly higher than the average yield for other hybrid seeds within the same relative maturity classification for the target fields. In an embodiment, the target success yield group contains hybrid seeds that have probability of success values that are above a specific success probability threshold. The success probability threshold may be a configured probability value that is associated with yields that are significantly higher than the mean yield of other hybrid seeds. For example, if at step720the yield threshold for successful yields is equal to five bushels per acre above the mean value, then the success probability threshold may be associated with a probability of success value equal to that of the yield threshold. For instance, if the yield threshold equals five bushels per acre above the mean yield and has a probability of success value of 0.80, then the success probability threshold may be assigned 0.80. In this example, the target success yield group would contain hybrid seeds that have probability of success values equal to or greater than 0.80. In other embodiments, the success probability threshold may be configured to be higher or lower depending on whether the grower desires a smaller or larger target success yield group respectively. 3.3. Present Target Success Yield Group In an embodiment, the target success yield group contains hybrid seeds that have an assigned relative maturity value that equals the relative maturity associated with the target fields. At step730, the presentation layer134of the agricultural intelligence computer system130is configured to display or cause display, on a display device on the field manager computing device104, of the target success yield group and normalized yield values for each hybrid seed within the target success yield group. In another embodiment, the presentation layer134may communicate the display of the target success yield group to any other display devices that may be communicatively coupled to the agricultural intelligence computer system130, such as remote computer devices, display devices within a cab, or any other connected mobile devices.
In yet another embodiment, the presentation layer134may communicate the target success yield group to other systems and subsystems within the agricultural intelligence computer system130for further processing and presentation. In an embodiment, the presentation layer134may display additional hybrid seed property data and other agricultural data that may be relevant to the grower. The presentation layer134may also sort the hybrid seeds in the target success yield group based on the probability of success values. For example, the display of hybrid seeds may be sorted in descending order of probability of success values such that the grower is able to view the most successful hybrid seeds for his target fields first. In some embodiments, after receiving the displayed information, a grower may act on the information and plant the suggested hybrid seeds. In some embodiments, the growers may operate as part of the organization that is determining the target success yield group, and/or may be separate. For example, the growers may be clients of the organization determining the target success yield group and may plant seed based on the target success yield group. 4. Functional Overview—Generating and Displaying Target Hybrid Seeds for Planting FIG.10depicts a detailed example of generating a set of target hybrid seeds identified for optimal yield performance and managed risk on target fields based on agricultural data records of the hybrid seeds and geo-location data associated with the target fields. 4.1. Data Input At step1005, the agricultural intelligence computer system130receives a dataset of candidate hybrid seeds including one or more hybrid seeds suited for planting on target fields, probability of success values associated with each hybrid seed, and historical agricultural data associated with each hybrid seed. In an embodiment, the dataset of candidate hybrid seeds may include a set of one or more hybrid seeds identified by the hybrid seed classification subsystem170as having a high probability to produce successful yield values on the target fields and historical agricultural data associated with each hybrid seed in the set of candidate hybrid seeds. The target success yield group generated at step725inFIG.7may represent the dataset of candidate hybrid seeds. In an embodiment, the historical agricultural data may include agricultural data related to the planting, growing, and harvesting of specific hybrid seeds on one or more fields. Examples of agricultural data may include, but are not limited to, historical yield values, harvest time information, relative maturity of a hybrid seed, and any other observation data about the plant lifecycle. For example, if the dataset of candidate hybrid seeds is the target success yield group from the hybrid seed classification subsystem170, then the agricultural data may include an average yield value and a relative maturity assigned to each hybrid seed. At step1010, the agricultural intelligence computer system130receives data about the target fields where the grower is planning to plant the set of target hybrid seeds. In an embodiment, the data about the target fields is property information that includes, but is not limited to, geo-location information for the target fields and dimension and size information for each of the target fields.
In an embodiment, the geo-location information for the target fields may be used in conjunction with the historical agricultural data to determine optimal set of target hybrid seeds and amount of each of the target hybrid seeds to plant on each of the target fields based on relative maturity and climate of the target fields. 4.2. Hybrid Seed Selection At step1015, the hybrid seed filtering instructions182provide instruction to select a subset of one or more hybrid seeds from the candidate set of hybrid seeds that have a probability of success value greater than or equal to a target probability filtering threshold. In an embodiment, the target probability filtering threshold is a configured threshold of the probability of success value associated with each of the hybrid seeds in the candidate set of hybrid seeds. The target probability filtering threshold may be used to further narrow the selection pool of hybrid seeds based upon only selecting the hybrid seeds that have a certain probability of success. In an embodiment, if the candidate set of hybrid seeds represents the target success yield group generated at step725, then it is likely that the set of hybrid seeds have already been filtered to only include hybrid seeds with a high probability of success value. In one example, the target probability filtering threshold may have the same threshold value as the successful yield threshold used to generate the target success yield group. If that is the case, then the subset of one or more hybrid seeds may include the entire set of hybrid seeds. In another example, the grower may desire a more narrowed list of hybrid seeds, which may be achieved by configuring a higher probability of success value for the target probability filtering threshold to filter out the hybrid seeds that have lower than desired probability of success values. At step1020, the seed normalization instructions172provide instruction to generate a representative yield value for each hybrid seed in the subset of one or more hybrid seeds based on yield values from the historical agricultural data for each of the hybrid seeds. In an embodiment, representative yield value is an expected yield value for a specific hybrid seed if planted in a field based on the historical yield values and other agricultural data observed from past harvests. In an embodiment, the representative yield value is a calculated average of yields from multiple different observed growth seasons on multiple fields. For example, the representative yield value may be calculated as an average of different observed growth cycle years, where an average first-year growth cycle yield for the specific hybrid seed may incorporate combining observed yield values from different fields over different years. After calculating average growth cycle yields for different growth cycle years, each of the averages may be combined to generate a representative average yield for each specific hybrid seed. In another embodiment, the representative yield value may be the normalized yield value calculated at step715. 4.3. Generate Risk Values for Hybrid Seeds At step1025, the risk generation instructions184provide instruction to generate a dataset of risk values for each hybrid seed in the subset of one or more hybrid seeds based upon historical agricultural data associated with each of the hybrid seeds. Risk values describe the amount of risk, in terms of yield variability, for each hybrid seed based upon the representative yield value. 
For example, if the representative yield for corn hybrid-002 is fifteen bushels per acre but the variability for corn hybrid-002 is high, such that the yield may range from five bushels per acre to twenty-five bushels per acre, then the representative yield for corn hybrid-002 is likely not a good representation of the actual yield. High risk values are associated with high variability on yield return, whereas low risk values are associated with low variability on yield return and yield outcomes that are more closely aligned to the representative yield. In an embodiment, risk values for hybrid seeds are based on the variability between year-to-year yield returns for a specific hybrid seed over two or more years. For example, calculating a risk value for corn hybrid-002 includes calculating the variability of yield values from multiple years of yield output from the historical agricultural data. The variance in yield output from 2015 and 2016 for corn hybrid-002 may be used to determine a risk value that may be associated with the representative yield value for corn hybrid-002. Determining the variance of yield output is not limited to using yield output from two previous years; variance may be calculated with yield output data from any number of years. In an embodiment, the calculated risk values may be represented in terms of a standard deviation in bushels per acre, where the standard deviation is calculated as the square root of the calculated yield variance. In an embodiment, risk values for hybrid seeds may be based on the variability of yield output from field-to-field observations for a specific year. For example, calculating a risk value associated with field variability may include determining the variability of yields from each field observed for a specific hybrid seed for a specific year. If for a specific hybrid seed the observed yield output across multiple fields ranges from five to fifty bushels per acre, then the specific hybrid seed may have high field variability. As a result, the specific hybrid seed may be assigned a high-risk factor based on field variability because expected output on any given field may vary between five and fifty bushels per acre instead of being closer to the representative yield value. In another embodiment, risk values for hybrid seeds may be based upon variability between year-to-year yield returns and variability between field-to-field observations. Both the year-to-year risk values and the field-to-field risk values may be combined to represent a risk value that incorporates variability of yield output across multiple observed fields and multiple observed seasons. In yet other embodiments, risk values may incorporate other observed crop seed data associated with historical crop growth and yield. 4.4. Generate Dataset of Target Hybrid Seeds At step1030, the optimization classification instructions186provide instruction to generate a dataset of target hybrid seeds for planting on the target fields based on the dataset of risk values, the representative yield values for the hybrid seeds, and the one or more properties for the target fields. In an embodiment, the target hybrid seeds in the dataset of target hybrid seeds are selected based upon their representative yield values and the associated risk values from the dataset of risk values.
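Before turning to how yields and risks are combined, the risk-value calculation described in section 4.3 can be sketched as follows. The data layout and the root-sum-square combination of the year-to-year and field-to-field components are assumptions made for illustration; the source only states that the two components may be combined.

```python
# Minimal sketch: risk as the standard deviation of observed yields,
# computed year-to-year and field-to-field, combined per hybrid.
import numpy as np

# observations[hybrid] -> list of (year, field, yield_bu_ac)
observations = {
    "corn-hyb-002": [(2015, "PS3", 165.0), (2015, "PS4", 172.0),
                     (2016, "PS3", 188.0), (2016, "PS4", 149.0)],
}

def risk_value(obs):
    yields = np.array([y for _, _, y in obs], dtype=float)
    years = np.array([yr for yr, _, _ in obs])
    # Year-to-year: deviation of the per-year mean yields.
    yearly_means = [yields[years == yr].mean() for yr in np.unique(years)]
    year_risk = np.std(yearly_means, ddof=1)
    # Field-to-field: average within-year deviation across fields.
    field_risk = np.mean([yields[years == yr].std(ddof=1)
                          for yr in np.unique(years)])
    # Combine the two components (root-sum-square, in bushels per acre).
    return float(np.hypot(year_risk, field_risk))

print({h: round(risk_value(o), 1) for h, o in observations.items()})
```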
Determining which combination of hybrid seeds to include in the dataset of target hybrid seeds involves determining a relationship between the representative yield for a specific hybrid seed and the risk value associated with the specific hybrid seed. Choosing hybrid seeds that have high representative yields may not result in an optimal set of hybrid seeds if the high yield hybrid seeds also carry a high level of risk. Conversely, choosing hybrid seeds that have low risk values may not have a high enough yield return on investment. In an embodiment, the hybrid seeds from the subset of one or more hybrid seeds may be graphed based on their respective representative yield values versus their associated risk values.FIG.11depicts an example graph1105of yield versus risk for the subset of one or more hybrid seeds. The y-axis1110represents the representative yield, as expected yield, for the hybrid seeds and the x-axis1115represents the risk values for the hybrid seeds expressed as standard deviation. By representing risk values as standard deviation, the unit of the risk values may be the same as the units for representative yield, which is bushels per acre. Dots on graph1105, represented by group1125and group1130represent each of the hybrid seeds from the subset of one or more hybrid seeds. For example, graph1105shows that hybrid seed1135has a representative yield value two hundred bushels per acre and a risk value having a standard deviation of one hundred ninety-one bushels per acre. In other embodiments, graph1105may be generated using different units such as profit per acre measured in dollars or any other derived unit of measurement. In an embodiment, determining which hybrid seeds belong in the dataset of target hybrid seeds involves determining an expected yield return for a specified amount of risk. To generate set of target hybrid seeds that will likely be resilient to various environmental and other factors, it is preferable to generate a diverse set of hybrid seeds that contains hybrid seeds with both lower and higher risk values as well as moderate to high yield output. Referring toFIG.10, step1032represents generating a target threshold of representative yield values for a range of risk values. In an embodiment, the optimization classification instructions186provide instruction to calculate an optimal frontier curve that represents a threshold of optimal yield output with a manageable amount of risk tolerance over the range of risk values. A frontier curve is a fitted curve that represents the optimal output with respect to the graphed input values considering optimal efficiency. For example, graph1105contains hybrid seeds based on representative yield versus risk value, where it may be inferred that a specific hybrid seed that has a higher yield is likely to also have higher risk. Conversely, hybrid seeds that have lower risk values are likely to have lower representative yield values. Frontier curve1120represents an optimal curve that tracks the optimal amount of yield based on a range of risk values. At step1034, the optimization classification instructions186provide instruction to select hybrid seeds that make up the set of target hybrid seeds by selecting the hybrid seeds that have a representative yield and risk value that meets the threshold defined by the frontier curve1120. Hybrid seeds that fall on or near the frontier curve1120provide the optimal level of yield at the desired level of risk. 
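One simple way to approximate the frontier selection just described is to keep the Pareto-efficient hybrids, those for which no other hybrid offers a higher expected yield at equal or lower risk, and then fit a curve through them if a smooth frontier is needed. The hybrid names and values below are illustrative assumptions, not data from the source.

```python
# Minimal sketch: Pareto-efficient subset of (risk, expected yield)
# points as a stand-in for points lying on or near the frontier curve.
def pareto_frontier(points):
    """points: dict of hybrid -> (risk_std_bu_ac, expected_yield_bu_ac)."""
    frontier = {}
    for name, (risk, yld) in points.items():
        dominated = any(r <= risk and y > yld
                        for n, (r, y) in points.items() if n != name)
        if not dominated:
            frontier[name] = (risk, yld)
    return frontier

candidates = {"CN-001": (12.0, 182.0), "CN-002": (18.5, 190.0),
              "CN-023": (25.0, 186.0), "SOY-005": (9.0, 171.0)}
print(pareto_frontier(candidates))
# CN-023 drops out: CN-002 yields more at lower risk.
```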
Target hybrid seeds1140represent the optimal set of hybrid seeds for the dataset of target hybrid seeds. Hybrid seeds that fall under the frontier curve1120have sub-optimal yield output for the level of risk or have higher than desired risk for the level of yield output produced. For example, hybrid seed1135is under the frontier curve1120and may be interpreted as having lower than optimal yield for its amount of risk, as shown by the placement of hybrid seed1135being vertically below the frontier curve1120. Also, hybrid seed1135may be interpreted as having higher than expected risk for its yield output, as shown by the placement of hybrid seed1135being horizontally to the right of the frontier curve1120for that amount of representative yield. Hybrid seeds1135that are not on or near the frontier curve1120have sub-optimal representative yield for their associated risk values and are therefore not included in the set of target hybrid seeds. Additionally, hybrid seeds1135represent hybrid seeds that have a higher than desired risk value and are therefore not included in the set of target hybrid seeds. In an embodiment, the optimization classification instructions186provide instruction to generate allocation instructions for each target hybrid seed in the set of target hybrid seeds. Allocation instructions describe an allocation quantity of seeds for each target hybrid seed in the set of target hybrid seeds that provide an optimal allocation strategy to a grower based upon the amount and location of the target fields. For example, allocation instructions for a set of target hybrid seeds that includes seeds (CN-001, CN-002, SOY-005, CN-023) may include an allocation of 75% of CN-001, 10% of CN-002, 13% of SOY-005, and 2% of CN-023. Embodiments of the allocation instructions may include, but are not limited to, number of bags of seeds, a percentage of the total seeds to be planted across the target fields, or an allotment number of acres for each target hybrid seed to be planted. In an embodiment, determining allocation amounts may be calculated using a third-party optimization solver product, such as CPLEX Optimizer by IBM. The CPLEX Optimizer is a mathematical programming solver for linear programming, mixed integer programming, and quadratic programming. Optimization solvers, such as CPLEX Optimizer, are configured to evaluate the representative yield values and risk values associated with the target hybrid seeds and determine a set of allocation instructions for allocating amounts of seeds for each of the target hybrid seeds in the set of target hybrid seeds. In an embodiment, the optimization solver may use the sum of the representative yield values of target hybrid seeds and a calculated sum of risk values of the target hybrid seeds to calculate a configured total risk threshold that may be used to determine the upper limits of allowed risk and yield output for the set of target hybrid seeds. In another embodiment, the optimization solver may also input target field data describing size, shape, and geo-location of each of the target fields, in order to determine allocation instructions that include placement instructions for each of the allotments of target hybrid seeds. For example, if a particular target field is shaped or sized in a particular way, the optimization solver may determine that allotment of one target hybrid seed is preferable on the particular field as opposed to planting multiple target hybrid seeds on the particular field. 
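As a hedged illustration of the allocation step, the sketch below uses SciPy's open-source linear-programming routine in place of a commercial solver such as CPLEX: it allocates acres among target hybrids to maximize total expected yield while keeping the acre-weighted risk under a budget. The risk budget, field size, and seed values are assumptions for the example.

```python
# Minimal sketch: acreage allocation as a linear program.
import numpy as np
from scipy.optimize import linprog

hybrids = ["CN-001", "CN-002", "SOY-005"]
exp_yield = np.array([182.0, 190.0, 171.0])   # bu/ac
risk = np.array([12.0, 18.5, 9.0])            # std dev, bu/ac
total_acres = 400.0
risk_budget = 14.0 * total_acres              # allowed acre-weighted risk

res = linprog(
    c=-exp_yield,                              # maximize yield -> minimize -yield
    A_ub=[risk], b_ub=[risk_budget],           # acre-weighted risk cap
    A_eq=[np.ones_like(exp_yield)], b_eq=[total_acres],
    bounds=[(0, total_acres)] * len(hybrids),
    method="highs",
)
allocation = dict(zip(hybrids, res.x.round(1)))
shares = {h: round(100 * a / total_acres, 1) for h, a in allocation.items()}
print(allocation, shares)                      # acres and percentage shares
```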
The optimization solver is not limited to the CPLEX Optimizer, other embodiments may implement other optimization solvers or other optimization algorithms to determine sets of allocation instructions for the set of target hybrid seeds. 4.5. Seed Portfolio Analysis Step1030described determining and generating the set of target hybrid seeds for a grower based on the target fields using the frontier curve to determine the optimal yield output for the desired level of risks. In an embodiment, the optimization classification instructions186provide instruction to configure the frontier curve to determine overall optimal performance for a grower's seed portfolio relative to other growers within the same region or sub-region. For example, representative yield output and overall risk values may be calculated for each grower within a specific region. For example, using historical agricultural data for multiple growers, the representative yield values and associated risk values for hybrid seeds planted by each grower may be aggregated to generate an aggregated yield output value and aggregated risk value associated with each grower. Then the aggregated values for each grower may be graphed on a seed portfolio graph, similar to graph1105, where the individual dots on the graph may represent a grower's aggregated hybrid seed yield output and aggregated risk. In an embodiment, the frontier curve may be generated to determine an optimal aggregated yield output and aggregated risk value for the growers in the specific region. Growers that are on or near the frontier curve may represent growers whose seed portfolio produces the optimal amount of yield with a managed amount of risk. Growers that are below the frontier curve represent growers that are not maximizing their output based on their risk. In an embodiment, the optimization classification instructions186provide instruction to generate an alert message for a particular grower if the aggregated yield output and aggregated risk for the grower's seed portfolio does not meet the optimal threshold for the seed portfolio as described by the frontier curve on a seed portfolio graph. The presentation layer134may be configured to present and send the alert message to the field manager computing device104for the grower. The grower may then have the option of requesting a set of target hybrid seeds that may provide optimal yield output for future growing seasons. 4.6. Present Set of Target Hybrid Seeds In an embodiment, the dataset of target hybrid seeds may contain the representative yield values and risk values, from the dataset of risk values, associated with each target hybrid seed in the dataset of target hybrid seeds for the target fields. Referring toFIG.10, at step1035the presentation layer134of the agricultural intelligence computer system130is configured to communicate a display, on a display device on the field manager computing device104, of the dataset of target hybrid seeds including the representative yield values and associated risk values for each target hybrid seed. In another embodiment, the presentation layer134may communicate the display of the dataset of target hybrid seeds to any other display devices that may be communicatively coupled to the agricultural intelligence computer system130, such as remote computer devices, display devices within a cab, or any other connected mobile devices. 
In yet another embodiment, the presentation layer134may communicate the dataset of target hybrid seeds to other systems and subsystems with the agricultural intelligence computer system130for further processing and presentation. In an embodiment, the presentation layer134may display allocation instructions, including seed allotments and placement information, for each target hybrid seed. The presentation layer134may also sort the target hybrid seeds based on allotment quantity or may present the target hybrid seeds based on placement strategy on the target fields. For example, the display of target hybrid seeds and allocation instructions may be superimposed onto a map of the target fields so that the grower may visualize planting strategy for the upcoming season. In some embodiments, growers can take in the information presented related to allocation instructions and plant seeds based on the allocation instructions. The growers may operate as part of the organization that is determining the allocation instructions, and/or may be separate. For example, the growers may be clients of the organization determining the allocation instructions and may plant seed based on the allocation instructions. 5. Functional Overview—Determine Supply Based on Seed Placement Prescription 5.1 Demand Fulfillment Through Supply Chain In some embodiments, the server is programmed to identify a supply chain for each of a plurality of hybrid seeds that may be produced. Generally, the server may represent a hybrid seed maker or a production site, although the server may also represent a distribution facility. The server may communicate with distribution computer systems that represent distribution agents or their facilities. The distribution computer systems may in turn communicate with grower computers that represent growers (seed planters) or their fields. The server and associated hybrid seed production site, the distribution computer systems and associated distribution facilities, and the grower computers and associated grower fields constitute the supply chain. Each distribution facility may have unique characteristics, such as the location, delivery or storage capability, store management efficiency, desired service level, or expected revenue. These unique characteristics may determine which grower fields submit hybrid seed demands to a distribution facility and how much demand the distribution facility receives overall for each growing season. For example, a grower field might buy hybrid seeds from a distribution facility that is nearby, offers reasonable prices, can deliver the hybrid seeds within a certain time frame, and has a straightforward billing system. The unique characteristics may also determine whether the hybrid seed production site sends hybrid seeds to a distribution facility for sale. For example, the hybrid seed production site may prefer a distribution facility that strives to maintain a high service level, willing to pay premium prices, and tends to receive a large volume of aggregate demand. Therefore, hybrid seeds may travel in the supply chain from the production sites to the grower fields through various routes. In some embodiments, the server is programmed or configured to determine one or more distribution facilities for each hybrid seed placement prescribed for a target field. For example, the server can be configured to select the distribution facility that is closest to a grower field and/or typically keeps the largest stock for the prescribed hybrid seed. 
For further example, the server can be configured to select the distribution facility with which the grower's field has had a relationship for a certain period of time. The server can be configured to further send information regarding the recommended one or more distribution facilities to the grower computer associated with the grower field to increase the chance that the demand of that grower field based on the prescription is met by a recommended distribution facility. In some embodiments, the supply chain comprises no distribution computer systems or associated distribution facilities. Consequently, the server is programmed or configured to receive the demands from all the grower computers directly and can determine the supply to satisfy those demands in a relatively straightforward manner. In other embodiments, the supply chain may have a hierarchical structure comprising multiple levels. For example, instead of a flat list of distributors, there could be retailers who sell to grower fields and wholesalers who sell to retailers. The server can be configured to then map out the flow of demands from one level to the next, starting with the grower fields level. 5.2 Safety Stock Optimization For the hybrid seed production site, the volume to produce for each hybrid seed is largely driven by demand. Each distribution facility receives an aggregate demand for a hybrid seed from one or more grower fields during an aggregation period, and the production site receives an aggregate demand from one or more distribution facilities. It would generally be one of the goals for the production site to completely fulfill the demand without producing extra supply. In reality, the production site would desire to estimate the demand for a future time as well as any risk associated with estimated demand. In some embodiments, the server is thus programmed to estimate the safety stock, which describes a level of extra stock that is maintained at the production site to mitigate the risk of stockouts (shortfall in raw material or packaging) and acts as a buffer stock in case sales are greater than planned and/or the seed producer is unable to deliver the additional units at the expected time. A common approach for calculating the safety stock is based on several factors, including demand, lead time, service level, and forecast error. The demand (aggregate demand received from the distribution facilities) is to be modeled as one or more random variables. For example, it may be assumed that demand during successive unit time periods are independent and identically distributed random variables drawn from a normal distribution. In some embodiments, the demand from each distribution facility and the probabilistic attributes thereof can be estimated from forecasted aggregate demands of hybrid seeds received by each distribution facility in the supply chain, as noted above, and captured into a demand profile, as further discussed below. The lead time is the delay between when the reorder point (when an order is triggered from a low inventory level) is reached at a distribution facility and the time of renewed stock availability at the distribution facility. In certain embodiments, the lead time may depend on how long it takes to make and package the hybrid seeds at the production site and transport the hybrid seeds to a distribution facility. The service level may be related specifically to the desired probability of meeting the demand during the estimated lead time without a stockout. 
The forecast error is an estimate of how far the actual demand may be from the forecasted demand. In other embodiments, the forecast error may result from the discrepancy between recommended purchasing options—both the type and quantity of hybrid seeds and the choice of distribution facility—and actual purchase orders. The forecast error may account for some of the probabilistic aspects of the demand and the characterization of the forecast error may rely on historical demand patterns at the distribution facility or the grower fields. FIG.12illustrates one example definition of a facility demand profile that characterizes aggregate customer demand during one or more aggregation periods at a distribution facility. This definition is in the form of a two-column table, where the first column1220includes the identifier of a type of data to be provided in the profile and the second column1222includes a description of the type of data. In some embodiments, the aggregation period can depend on how often the server is programmed to provide prescriptions of hybrid seed placements. For example, hybrid seed placements may be prescribed once every year for the growing season in each year. Thus, the aggregation period can be one year or a shorter period in which most of the prescription-triggered purchase orders are submitted following receipt of the prescriptions of hybrid seed placements. In some embodiments, the first group of rows1202concerns basic data regarding the distribution facility in a specific period comprising one or more aggregation periods. Specifically, the “product name” could identify a specific hybrid seed. The second group of rows1204(having only one row) corresponds to the estimated aggregate demand for this specific period. The value for this row can also be estimated as the sum of quantities of prescribed hybrid seeds recommended to this distribution facility. Each quantity demanded by a grower's field can in turn be estimated from the prescribed hybrid seed density and the size of the grower's field. The third group of rows1206concerns data related to individual demands, such as the mean and the standard deviation calculated over all individual demands received over those aggregation periods where the aggregate demand is non-zero. The value for each of these rows can be estimated from the set of quantities of prescribed hybrid seeds recommended to this distribution facility. The fourth group of rows1208concerns data regarding aggregate demands for each aggregation period, such as the mean and the standard deviation calculated over all the aggregation periods. Assuming only one aggregation period in the specific period, the value for the mean can be similarly estimated as the sum of quantities of prescribed hybrid seeds recommended to this distribution facility. The value for the standard deviation, however, might need to be estimated from demand data related to multiple periods. Alternatively, the server can be configured to recommend not only a distribution facility but also an aggregation period in which to submit purchase orders of hybrid seeds to the distribution facility. The value for the standard deviation might then be estimated from the quantities of prescribed hybrid seeds recommended to this distribution facility during multiple aggregation periods within the specific period. 
The fifth group of rows1210further concerns data regarding demands for each aggregation period, such as a minimum and a maximum over all aggregation periods where the aggregate demand is non-zero. Similarly, the value for these rows might need to be estimated from demand data related to multiple periods or from the quantities of prescribed hybrid seeds recommended to this distribution facility during multiple aggregation periods within the specific period. The group of rows1212concerns data derived from the second group of rows1204, including a total cost based on the total demand quantity. In some embodiments, the supply chain comprises no distribution computer systems or associated distribution facilities. The facility demand profile can then be reduced to a customer demand profile for each grower field. The server can be programmed to estimate the aggregate demand and perform safety stock optimization directly from the prescribed hybrid seed placements. In other embodiments, the supply chain may have a hierarchical structure comprising multiple levels. The server can be configured to estimate the aggregate demand from the top-level distribution facilities by propagating up the demands all the way from the grower fields based on the prescribed hybrid seed placements, the recommended customer-facing distribution facilities, higher-level distribution facilities, and so on. The server can be configured to then perform safety stock optimization based on such aggregate demands. In some embodiments, having built the facility demand profile or customer demand profile and determined values of certain other factors, such as lead time, service level, or forecast error, the server is programmed or configured to apply the common approach for safety stock optimization. The safety stock optimization can be performed according to a schedule, such as in November every year before the growing season in the following year. For the growing season in the first year, which is representative of having no stock for at least one hybrid seed at any of the distribution facility or the production site, the safety stock optimization can be performed mainly based on the prescribed hybrid seed placements for the first year and the corresponding facility demand profiles or customer demand profiles. The server is programmed to instruct enough production to satisfy the forecasted safety stock. In response to receiving prescriptions of hybrid seed placements and recommendations of distribution facilities, some grower computers might submit respective demands, such as purchase orders of specific quantities of specific hybrid seeds, to specific distribution facilities. The distribution computer systems might in turn submit respective demands corresponding to an aggregate demand submitted to the respective distribution computer systems. The server can be configured to thus determine how much the actual demands submitted by the distribution computer systems deviates from the forecasted demands and thus how much stock is left over at the production site after the growing season in the first year. In some embodiments, the server is configured to also receive data from the distribution computer systems regarding leftover stock at the distribution sites or from the grower computers regarding the leftover stock at the grower fields. A leftover stock at a grower's field might result from bad weather, for example. A leftover stock at a distribution facility might result from trying to maintain an over-estimated safety stock. 
Assuming that the hybrid seeds in the leftover stock are still in good condition for the growing season in the second year, they should simply be used for that growing season before new seeds are produced. In some embodiments, for the growing season in the second year then, the facility or customer demand profiles incorporate the impact of the leftover stock at the distribution facilities or the grower fields. The safety stock optimization can then be similarly performed based on the prescribed hybrid seed placements for the second year and the corresponding facility demand profiles or customer demand profiles. The server can then be programmed to instruct enough production to satisfy the forecasted safety stock minus the leftover stock at the production site. For the growing season in any subsequent year, the server is programmed or configured to follow the same process as described for the growing season in the second year. 5.3 Example Processes FIG.13illustrates an example process performed by the server of managing supply of hybrid seeds based on hybrid seed placement prescriptions.FIG.13is intended to disclose an algorithm, plan or outline that can be used to implement one or more computer programs or other software elements which when executed cause performing the functional improvements and technical advances that are described herein. Furthermore, the flow diagrams herein are described at the same level of detail that persons of ordinary skill in the art ordinarily use to communicate with one another about algorithms, plans, or specifications forming a basis of software programs that they plan to code or implement using their accumulated skill and knowledge. Steps1302through1312correspond to steps1005through1334illustrated inFIG.10. Steps1302through1312could also be replaced by steps corresponding to steps705through725illustrated inFIG.7. In some embodiments, in steps1302through1312, the server is programmed to provide a prescription of a list of target hybrid seed placement for a grower field in terms of a total quantity to be planted on the grower field regardless of the size of the grower field or a seeding density, which can be converted into a total quantity considering the size of the grower field. The prescription instructs how to plant the list of target hybrid seeds typically for the next growing season. The server may be programmed to further provide a recommendation of how to obtain the list of hybrid seeds to be planted, as further discussed below. In some embodiments, in step1314, the server is programmed or configured to generate a prescription related to planting the list of target hybrid seeds in the one or more target fields for a next period, as discussed in section 4.4 above. The prescription may indicate a total quantity to be planted in the one or more fields or a seeding density, which can be converted to a total quantity based on the size of the one or more fields. The server can be configured to further transmit the prescription to a grower device associated with the one or more fields. In some embodiments, in step1316, the server is programmed or configured to determine a safety stock for a next period with respect to a supply chain based on the list of target hybrid seeds for the one or more target fields, as discussed in sections 5.1 and 5.2 above. 
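The demand, safety stock, and leftover-stock adjustment described in this section can be sketched together as follows. The figures, the one-period lead time, and the use of the common z-score safety-stock rule are illustrative assumptions rather than the patented calculation.

```python
# Minimal sketch: per-period demand statistics from prescription-driven
# orders, safety stock via z * sigma * sqrt(lead time), and a production
# quantity net of leftover stock reported across the supply chain.
import math
import statistics
from collections import defaultdict
from scipy.stats import norm

# (aggregation_period, bags ordered) for one hybrid at the production site.
orders = [(2019, 7400), (2020, 8800), (2021, 9500), (2022, 8100)]

per_period = defaultdict(int)
for period, qty in orders:
    per_period[period] += qty
demand_mean = statistics.mean(per_period.values())
demand_std = statistics.stdev(per_period.values())

def safety_stock(service_level, demand_std, lead_time_periods):
    """z-score for the desired service level times lead-time-scaled
    demand variability (a forecast-error margin could widen demand_std)."""
    return norm.ppf(service_level) * demand_std * math.sqrt(lead_time_periods)

ss = safety_stock(0.95, demand_std, lead_time_periods=1.0)
leftover = 850 + 300 + 120    # production site + facilities + grower fields
production_qty = max(0.0, demand_mean + ss - leftover)
print(round(demand_mean), round(ss), round(production_qty))
```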
With the safety stock for each hybrid seed generally depending on a probability distribution of the demand for the hybrid seed during each period, the server can be configured to estimate different parameters of the probability distribution based on prescriptions of hybrid seed placements over one or more periods. For maintenance of the safety stock, the server can also be configured to consider lead time, service level, additional demand forecast error margin, or other factors. For a supply chain comprising the server representing a hybrid seed production site or a distribution facility having more distribution facilities as customers, the server is programmed to address the added complexity of each distribution facility receiving demands from one or more grower fields or other distribution facilities and processing such demands in respectively distinct ways. More specifically, the server can be configured to also select one or more distribution facilities for each grower field and send a recommendation of the selected distribution facilities to the grower device associated with the grower field. The server can be configured to then build or maintain a facility demand profile for each distribution facility that characterizes the receipt and processing of demands for hybrid seeds based on the prescriptions of hybrid seed placements for planting the hybrid seeds and recommendations of distribution facilities for obtaining the hybrid seeds to be planted. In some embodiments, in step1318, the server is programmed or configured to cause displaying values of the safety stock, thereby causing production of the safety stock, as also discussed in section 5.2 above. The server can be programmed to further convert the safety stock values into seed production instructions and transmit the safety stock values and/or the seed production instructions to agricultural or other implements at the seed production sites. The safety stock values may also include individual values for separate distribution facilities, and the server can be programmed to transmit recommendations of the individual values to the separate distribution facilities. In some embodiments, after the next growing season is over, that growing season becomes the current growing season, and all the steps illustrated inFIG.13can be repeated to determine the safety stock for the subsequent growing season. The server can be programmed to consider additional data that has become available in the current period in determining how to manage its stock for the next period. The additional data includes the actual demands in the current period, such as actual orders of hybrid seeds received by the server. For example, the server can be configured to produce instructions for production systems to produce a smaller quantity of a hybrid seed than the value of the safety stock based on an estimated aggregate demand for the hybrid seed to utilize any leftover stock. 6. Extensions and Alternatives In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction.
142,821
11861738
EXAMPLE EMBODIMENTS First Example Embodiment The following describes a cultivation-target crop selection assisting apparatus, a cultivation-target crop selection assisting method, and a program according to a first example embodiment of the invention with reference toFIGS.1to5. Apparatus Configuration First, an overall configuration of a cultivation-target crop selection assisting apparatus according to the first example embodiment will be described.FIG.1is a block diagram showing an overall configuration of a cultivation-target crop selection assisting apparatus according to the first example embodiment of the invention. A cultivation-target crop selection assisting apparatus10according to the present example embodiment, which is shown inFIG.1, is an apparatus that assists a user, who is an agricultural business operator, in selecting a cultivation-target crop in a specific cultivated land. As shown inFIG.1, the cultivation-target crop selection assisting apparatus10includes an information collection unit11, a prediction value calculation unit12, and a crop selection unit13. The information collection unit11collects information regarding a specific cultivated land. The prediction value calculation unit12calculates prediction values of actual performance of cultivation of one or a plurality of varieties of crops in a specific cultivated land by applying information collected by the information collection unit11to a prediction model14. The prediction model is a model created by performing machine learning on a relationship between information regarding sample cultivated lands and actual performance information regarding the crops produced in the sample cultivated lands. The crop selection unit13selects a crop that is suitable for a specific cultivated land based on the prediction values calculated for one or a plurality of varieties of crops. In this way, in the first example embodiment, prediction values for a case in which a crop is cultivated in the cultivated land can be obtained by simply applying information regarding the cultivated land to the prediction model14, and also a crop that is suitable for being cultivated is selected. Therefore, with the first example embodiment, the user can select a crop that is suitable for being cultivated in the cultivated land without specialized knowledge. Next, a specific configuration of the cultivation-target crop selection assisting apparatus according to the first example embodiment will be described with reference toFIG.2.FIG.2is a block diagram showing a specific configuration of the cultivation-target crop selection assisting apparatus according to the example embodiment of the invention. As shown inFIG.2, the cultivation-target crop selection assisting apparatus10according to the first example embodiment includes a prediction model creation unit15in addition to the information collection unit11, the prediction value calculation unit12, and the crop selection unit13described above. The prediction model creation unit15creates the prediction model14by performing machine learning, using information regarding sample cultivated lands as explanatory variables, and actual performance information regarding the crops produced in the sample cultivated lands as objective variables. Here, examples of “actual performance information regarding crops”, which serve as objective variables, include yield, quality, sales (profit), and so on. 
Examples of quality include pH, color, viscosity, sugar content (Brix), lycopene content, polyphenol content, sensory test results, and so on. The value of a quality may be any of an average value, a maximum value, and a minimum value. "Information regarding a cultivated land" collected by the information collection unit11and "information regarding cultivated lands" that serves as explanatory variables may include information regarding the variety of the cultivated crop, information regarding the properties of the soil in the cultivated land, and environmental information regarding the cultivated land. Examples of "information regarding the properties of the soil" include the physical properties of the soil, the chemical properties of the soil, the biological properties of the soil, the position coordinates (latitude and longitude) of the cultivated land, and so on. More specifically, examples of the physical properties of the soil include the content rates of specific components (clay, silt, sand, and so on), the mass of unit volume, the saturated hydraulic conductivity, the water retention curve, and so on. Examples of the chemical properties of the soil include the pH, the content rates of specific elements, the cation exchange capacity, the electrical conductivity, and so on. Examples of the biological properties of the soil include soil biota (the species and the volume of organisms present in the soil), and so on. Examples of "environmental information regarding the cultivated land" include weather information, farming information, and so on. More specifically, examples of weather information include the temperature, the humidity, the solar radiation amount, the wind speed, the wind direction, the rainfall amount, and so on, per unit time, for example. Farming information includes the amount of irrigation, the amount of fertilizer applied, the amount of chemical sprayed, the number of times plowing is performed, the number of times weeding is performed, and so on, per predetermined period, for example. The prediction model creation unit15can perform machine learning on the above-described explanatory variables and objective variables, using linear regression (multiple regression, Ridge regression, lasso, fused lasso, principal component regression, partial least square regression, or the like), or non-linear regression (a decision tree, a Gaussian process, neural networks, or the like). A method for training a prediction model may be determined in advance, or automatically selected from among a plurality of methods. In the latter case, the prediction model creation unit15applies a plurality of methods to the above-described explanatory variables and objective variables, and adopts the method for which indicators defined in advance (the prediction error, the calculation time, and so on) take optimal values. The following describes a specific example of the prediction model14. For example, the prediction model creation unit15generates a prediction model expressed by Math. 1 shown below for each of varieties 1, 2, . . . and n, where an objective variable y denotes an indicator to which attention is paid (yield, sugar content, sales, or the like) and x denotes an explanatory variable.
y = f1(x): Prediction Model for Variety 1
y = f2(x): Prediction Model for Variety 2
. . .
y = fn(x): Prediction Model for Variety n     (Math. 1)

In this case, the prediction value calculation unit12inputs, for example, the content rate of sand, the pH, the nitrogen content, the average temperature, and so on of a specific cultivated land, collected by the information collection unit11, to each prediction model, as an explanatory variable vector x, and calculates a prediction value y for each variety. The crop selection unit13selects the variety with the highest prediction value y as a crop suitable for being cultivated. If there are two indicators to which attention is paid, objective variables y1and y2and the explanatory variable vector x are used. In this case, the prediction model creation unit15generates, for each of varieties 1, 2, . . . and n, the prediction model expressed by Math. 2 shown below regarding an objective variable y1, and the prediction model expressed by Math. 3 shown below regarding an objective variable y2.

y1 = f11(x): Prediction Model regarding y1 for Variety 1
y1 = f12(x): Prediction Model regarding y1 for Variety 2
. . .
y1 = f1n(x): Prediction Model regarding y1 for Variety n     (Math. 2)

y2 = f21(x): Prediction Model regarding y2 for Variety 1
y2 = f22(x): Prediction Model regarding y2 for Variety 2
. . .
y2 = f2n(x): Prediction Model regarding y2 for Variety n     (Math. 3)

In this case, the prediction value calculation unit12inputs the content rate of sand, the pH, the nitrogen content, the average temperature, and so on of a specific cultivated land, collected by the information collection unit11, to each prediction model, as the explanatory variable vector x, and calculates prediction values y1and y2for each variety. If there are two indicators to which attention is paid, for example, the function expressed by Math. 4 or Math. 5 shown below is defined. Therefore, the crop selection unit13calculates the value of y using the function expressed by Math. 4 or Math. 5 shown below, and selects the variety with the highest value of y as a crop suitable for being cultivated. Note that whether to use Math. 4 or Math. 5 can be freely set by the system administrator or the user according to the objective variables. In the function expressed by Math. 4, weights w1and w2can be freely set by the system administrator or the user according to the importance levels of the objective variables.

y = g(y1, y2) = w1*y1 + w2*y2     (Math. 4)

y = g(y1, y2) = y1*y2     (Math. 5)

Here, specific examples of prediction models and selection of a crop using the prediction models will be described with reference toFIGS.3(a) and3(b).FIG.3(a)is a diagram showing examples of objective variables and explanatory variables used to create a prediction model in the first example embodiment of the invention, andFIG.3(b)is a diagram showing examples of prediction models created in the first example embodiment of the invention. As shown inFIG.3(a), in the first example embodiment, the yield and the sugar content in crops (e.g. an average value) in each cultivated land are used as objective variables. Further, the variety of the cultivated crops, the content rate of sand, the pH, the nitrogen content, the average temperature, and so on are used as explanatory variables for each cultivated land.
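A minimal sketch of this per-variety modelling follows, using Ridge regression (one of the linear regression techniques listed above) from scikit-learn and the product rule of Math. 5 to pick a variety for a target field. The library, feature values, and training data are assumptions introduced only for the example.

```python
# Minimal sketch: one model per variety for yield (y1) and sugar
# content (y2), combined as y = y1 * y2 (Math. 5) for a target field.
import numpy as np
from sklearn.linear_model import Ridge

# Explanatory variables per sample cultivated land:
# [sand content rate (%), pH, nitrogen content, average temperature (C)]
X = {
    "variety-1": np.array([[42, 6.1, 18, 21.5], [55, 6.4, 15, 23.0],
                           [38, 5.9, 20, 20.1]]),
    "variety-2": np.array([[40, 6.0, 17, 22.0], [58, 6.6, 14, 24.2],
                           [35, 5.8, 21, 19.8]]),
}
y1 = {"variety-1": np.array([102, 108, 99]),      # yield, t/ha
      "variety-2": np.array([98, 104, 95])}
y2 = {"variety-1": np.array([5.5, 5.8, 5.6]),     # sugar content, %
      "variety-2": np.array([6.1, 6.3, 6.0])}

target_field = np.array([[47, 6.2, 16, 22.4]])
scores = {}
for variety in X:
    m1 = Ridge().fit(X[variety], y1[variety])     # Math. 2 counterpart
    m2 = Ridge().fit(X[variety], y2[variety])     # Math. 3 counterpart
    p1 = m1.predict(target_field)[0]
    p2 = m2.predict(target_field)[0]
    scores[variety] = p1 * p2                     # Math. 5: y = y1 * y2
print(scores, "->", max(scores, key=scores.get))
```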
When the objective variables and the explanatory variables shown inFIG.3(a)are used, the prediction model creation unit15can create a prediction model for predicting the yield and a prediction model for predicting the sugar content for each of the variety 1 and the variety 2, as shown inFIG.3(b). In this case, the prediction value calculation unit12inputs the content rate of sand, the pH, the nitrogen content, the average temperature, and so on of a specific cultivated land, collected by the information collection unit11, to the prediction model14shown inFIG.3(b), and calculates prediction values of the yield and the sugar content for each of the varieties 1 and 2. Also, the crop selection unit13calculates the product of the yield and the sugar content for each variety, using Math. 5 shown above. The crop selection unit13selects the variety with the highest product thus calculated, as a crop suitable for being cultivated in the specific cultivated land. For example, it is assumed here that the yield y1=105 t/ha and the sugar content y2=5.7% have been obtained for the variety 1, and the yield y1=100 t/ha and the sugar content y2=6.2% have been obtained for the variety 2. In this case, the prediction value y of the variety 1 is 5.96 and the prediction value y of the variety 2 is 6.20, and therefore the crop selection unit13selects the variety 2 as a crop suitable for being cultivated. Apparatus Operations Next, operations of the cultivation-target crop selection assisting apparatus10according to the first example embodiment will be described with reference toFIGS.4and5. In the following description,FIGS.1to3are referenced when necessary. In the first example embodiment, a cultivation-target crop selection assisting method is performed by operating the cultivation-target crop selection assisting apparatus10. Therefore, a description of the cultivation-target crop selection assisting method according to the first example embodiment is substituted with the following description of the cultivation-target crop selection assisting apparatus10. First, prediction model creation processing that is performed by the cultivation-target crop selection assisting apparatus10according to the first example embodiment will be described with reference toFIG.4.FIG.4is a flowchart showing operations of the cultivation-target crop selection assisting apparatus according to the first example embodiment of the invention, performed when creating a prediction model. As shown inFIG.4, first, the information collection unit11collects information regarding the cultivated land, which serves as explanatory variables, and actual performance information regarding crops, which serves as objective variables (step A1). Specifically, the information collection unit11collects these kinds of information from an external database or a terminal apparatus of the user or the like, via a network such as the Internet. Also, the information collection unit11may collect information such as weather information directly from sensors installed in the cultivated land. The information (explanatory variables) regarding the cultivated land collected in step A1may be information that was collected when the processing shown inFIG.5described below was performed in the past.
In this case, actual performance information (objective variables) regarding crops produced in the cultivated land, collected in step A1, is preferably obtained from actual performance information of the same year as the year in which the above-described processing was performed. Next, the prediction model creation unit15creates a prediction model14by performing machine learning using the information acquired in step A1(step A2). Specifically, as shown inFIG.2, the prediction model creation unit15creates one or a plurality of prediction models14according to the indicators and the varieties to which the user pays attention. The prediction models14thus created are used in cultivation-target crop selection processing described below. Next, cultivation-target crop selection processing that is performed by the cultivation-target crop selection assisting apparatus10according to the first example embodiment will be described with reference toFIG.5.FIG.5is a flowchart showing operations of the cultivation-target crop selection assisting apparatus according to the first example embodiment of the invention, performed when selecting a cultivation-target crop. As shown inFIG.5, first, the information collection unit11collects information regarding a specific cultivated land that has been specified by the user (step B1). Specifically, upon the user inputting information that specifies a cultivated land, using their own terminal apparatus, the information collection unit11collects information related to the cultivated land from an external database, a terminal apparatus of the user or the like, or sensors or the like installed in the cultivated land, via a network such as the Internet. At this time, regarding information that has not been determined at the time of crop selection (e.g. weather information and farming information in the cultivation period), the information collection unit11collects information such as normal values or average values in the past, and uses the collected information as explanatory variables. Alternatively, if forecast values (weather forecast and the like) or planned values (a farming plan and the like) are available, the information collection unit11may use such values as explanatory variables. Next, the prediction value calculation unit12inputs the information collected in step B1to the prediction models14created in step A2shown inFIG.4, to calculate prediction values for the indicators and the varieties to which the user pays attention (step B2). Next, based on the prediction values calculated in step B2, the crop selection unit13selects a crop that is suitable for being cultivated in the specific cultivated land (step B3). Upon performing step B3, the cultivation-target crop selection assisting apparatus10transmits information that specifies the selected crop (hereinafter referred to as “cultivation-target crop information”) to the user's terminal apparatus. Thus, the user can be informed of the crop that is suitable for being cultivated in the specified cultivated land. Effects of First Example Embodiment As described above, according to the first example embodiment, the user can be informed of the crop that is suitable for being cultivated in a cultivated land by only preparing information regarding the cultivated land. That is to say, according to the first example embodiment, the user can select a crop that is suitable for being cultivated in the cultivated land without specialized knowledge. 
Modification Next, a modification of the present example embodiment will be described. In the present modification, “information regarding a cultivated land” collected by the information collection unit11and “information regarding cultivated lands” that serves as explanatory variables include soil optical reflectance spectrum data measured in a cultivated land. Specifically, soil optical reflectance spectrum data may be used as information regarding the properties of the soil in a cultivated land. Soil optical reflectance spectrum data is data obtained by measuring the reflectance of light of various wavelengths applied to the ground surface of a cultivated land. Examples of soil optical reflectance spectrum data include multispectral sensor data, multispectral camera data, hyperspectral sensor data, and hyperspectral camera data. Soil optical reflectance spectrum data is measured using a dedicated measuring device such as a multispectral sensor, a multispectral camera, a hyperspectral sensor, or a hyperspectral camera. Examples of measurement methods include a method in which an operator directly measures the ground surface (bare ground) using a measuring device, at a representative point in a measurement-target cultivated land, and a method in which a measuring device is mounted on a flying body such as a drone, a balloon, or an aircraft, or a satellite, and measurement is performed from above. Data obtained by the measuring device can only be obtained from the ground surface, and therefore it is preferable that the soil has been cultivated and agitated in advance using a cultivator or the like so as to be homogenized to some extent, from the surface to the cultivated soil layer. When a multispectral sensor or a hyperspectral sensor is used as a measuring device, the measurement range is narrow, and therefore it is preferable to set a plurality of representative points in the cultivated land and perform measurement at each point. On the other hand, when a multispectral camera or a hyperspectral camera is used as a measuring device, the measurement range is wide, and therefore measurement may be performed from high altitude so that the entire measurement-target cultivated land is within the angle of view, or performed by capturing an image of the entire cultivated land as a plurality of divided images, and combining these images like a mosaic. Soil optical reflectance spectrum data may be used as a substitute for the physical properties of the soil, the chemical properties of the soil, and the biological properties of the soil described above, or may be used in combination therewith. Soil optical reflectance spectrum data can be easily acquired compared to the physical properties of the soil, the chemical properties of the soil, and the biological properties of the soil. Therefore, in the former case, it is possible to reduce the operation costs of the cultivation-target crop selection assisting apparatus10. On the other hand, in the latter case, the selection accuracy can be improved. Program A program according to the present example embodiment need only be a program that causes a computer to carry out steps A1and A2shown inFIG.4and steps B1to B3shown inFIG.5. By installing such a program in a computer and executing the program, it is possible to realize the cultivation-target crop selection assisting apparatus10and the cultivation-target crop selection assisting method according to the first example embodiment.
In this case, a processor of the computer functions as the information collection unit11, the prediction value calculation unit12, the crop selection unit13, and the prediction model creation unit15, and performs processing. The program according to the first example embodiment may be executed by a computer system that is constituted by a plurality of computers. In such a case, the computers may respectively function as the information collection unit11, the prediction value calculation unit12, the crop selection unit13, and the prediction model creation unit15. Second Example Embodiment The following describes a cultivation-target crop selection assisting apparatus, a cultivation-target crop selection assisting method, and a program according to a second example embodiment of the invention with reference toFIGS.6and7. Apparatus Configuration First, a configuration of a cultivation-target crop selection assisting apparatus according to the second example embodiment will be described.FIG.6is a block diagram showing an overall configuration of a cultivation-target crop selection assisting apparatus according to the second example embodiment of the invention. As shown inFIG.6, a cultivation-target crop selection assisting apparatus20according to the second example embodiment also includes an advice creation unit16in addition to the components of the cultivation-target crop selection assisting apparatus according to the first example embodiment shown inFIGS.1and2. That is to say, the cultivation-target crop selection assisting apparatus20includes the information collection unit11, the prediction value calculation unit12, the crop selection unit13, the prediction model creation unit15, and the advice creation unit16. Among these components, the information collection unit11, the prediction value calculation unit12, the crop selection unit13, and the prediction model creation unit15in the second example embodiment function in the same manner as in the first example embodiment. In addition, “information regarding a cultivated land” collected by the information collection unit11and “information regarding cultivated lands” that serves as explanatory variables in the second example embodiment also include information regarding properties of the soil of the cultivated land. The advice creation unit16creates an advice on how to improve the soil of a specific cultivated land, based on the weights set to the explanatory variables in the prediction model14, or the amounts of changes in the objective variables when the explanatory variables are changed. Here, the following describes the functions of the advice creation unit16in further detail. First, a case in which the prediction model14is a linear model created based on linear regression will be described. For example, an objective variable y denotes an indicator to which attention is paid, x_soil denotes an explanatory variable related to the properties of the soil, and x_other denotes another kind of explanatory variable. In this case, prediction models expressed by Math. 6 shown below are created by the prediction model creation unit15. Note that “constant” is a term that collectively expresses constant terms.

y = f_1(x_soil, x_other) + constant: Prediction Model for Variety 1
y = f_2(x_soil, x_other) + constant: Prediction Model for Variety 2
. . .
y = f_n(x_soil, x_other) + constant: Prediction Model for Variety n   (Math. 6)

The following discusses the case of the variety 1 in Math. 6 shown above.
The function f_1 is a linear model, and therefore the prediction model for the variety 1 expressed by Math. 6 above can be expressed by Math. 7 shown below, using weight vectors w_soil and w_other obtained through training using learning data.

y = w_soil^T·x_soil + w_other^T·x_other + constant   (Math. 7)

Therefore, the advice creation unit16specifies an explanatory variable of which the weight is positive, and creates an advice regarding a soil improvement for increasing the specified explanatory variable. The advice creation unit16also transmits the created advice to the terminal apparatus of the user. For example, when the yield is the objective variable, if the weight on the nitrogen content in the soil, which is an explanatory variable, is a positive value, the yield can be increased by applying nitrogen fertilizer to increase the nitrogen content in the soil. Therefore, in such a case, the advice creation unit16creates an advice that instructs the user to apply nitrogen fertilizer. On the other hand, if the weight on the nitrogen content in the soil, which is an explanatory variable, is a negative value, the advice creation unit16creates an advice that instructs the user not to apply any more nitrogen fertilizer. Also, the advice creation unit16can calculate soil properties for maximizing the objective variable y all at once by setting a feasible range (the upper limit and the lower limit) for each soil property in x_soil and performing linear optimization. Furthermore, the advice creation unit16can create an advice as described above for each variety, and predict the potential maximum value of y resulting from soil improvement. In such a case, the most suitable variety under the given condition x may be different from the variety that will be the most suitable when the soil improvement is performed based on the advice. Therefore, the advice creation unit16may provide the user with two options, namely the variety that is suitable when the soil improvement is not performed, and the most suitable variety when the soil improvement is performed, together with the required soil property values. The user can thus recognize that there are two options, and can select one of them. In addition, in the second example embodiment, the prediction model creation unit15can collect, from the information collection unit11, a pre-farming soil analysis result x_before obtained before farming that changes the soil properties, such as fertilization, is performed, the content x_activity of the farming performed thereafter, and a post-farming soil analysis result x_soil obtained after the farming is performed. In such a case, the prediction model creation unit15can model the effect of the farming on the soil properties. When modeling is performed based on linear regression, the model that can be obtained is as shown in Math. 8 shown below.

x_soil = w_before^T·x_before + w_activity^T·x_activity + constant   (Math. 8)

In this case, the advice creation unit16can derive the farming necessary for realizing the required soil property value by using the model shown in Math. 8. The result of substituting Math. 8 into Math. 7 is as shown in Math. 9 shown below.

y = w_soil^T·x_soil + w_other^T·x_other + constant
  = w_soil^T·(w_before^T·x_before + w_activity^T·x_activity) + w_other^T·x_other + constant   (Math. 9)

When using the model shown in the above Math. 9, the advice creation unit16employs linear optimization to efficiently search for the farming content x_activity that maximizes y, and thus the advice creation unit16can provide the user with the obtained content of farming.
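As an illustration of the linear-optimization step described above, the sketch below uses scipy.optimize.linprog to search, within assumed feasible upper and lower limits, for the soil property vector x_soil that maximizes the linear prediction of Math. 7. The weights, bounds, and other values are assumptions, and the same pattern could be applied to the farming content x_activity in Math. 9.

```python
# Minimal sketch: find soil properties within feasible ranges that maximize a
# linear prediction model (Math. 7). Weights and bounds are assumed values.
import numpy as np
from scipy.optimize import linprog

w_soil = np.array([0.8, -0.3, 1.2])   # weights on soil properties (e.g., nitrogen, pH deviation, organic matter)
w_other = np.array([0.05])            # weights on the other explanatory variables (e.g., average temperature)
x_other = np.array([17.5])            # fixed, known values of the other variables
constant = 50.0

# Feasible range (lower limit, upper limit) for each soil property in x_soil.
bounds = [(0.10, 0.30), (5.5, 7.5), (1.0, 4.0)]

# linprog minimizes, so minimize -w_soil^T x_soil in order to maximize the prediction.
result = linprog(c=-w_soil, bounds=bounds, method="highs")

x_soil_opt = result.x
y_max = w_soil @ x_soil_opt + w_other @ x_other + constant
print("soil properties to aim for:", x_soil_opt)
print("predicted maximum of the objective variable:", y_max)
```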
Soil optical reflectance spectrum data may be used in the second example embodiment as well, as information regarding the properties of the soil in a cultivated land. Here, when x_spectrum denotes soil optical reflectance spectrum data, the prediction model modeled based on linear regression for the objective variable y, which is an indicator to which attention is paid, can be expressed by Math. 10 shown below.

y = w_spectrum^T·x_spectrum + w_other^T·x_other + constant   (Math. 10)

In this case, the prediction model creation unit15can create a model for predicting the soil optical reflectance spectrum data x_spectrum from the explanatory variable vector (soil property data) x_soil related to the soil properties. When modeling is performed based on linear regression, this model can be expressed by Math. 11 shown below.

x_spectrum = w_soil^T·x_soil + constant   (Math. 11)

Therefore, Math. 10 shown above can be expressed as Math. 12 shown below.

y = w_spectrum^T·(w_soil^T·x_soil) + w_other^T·x_other + constant
  = w_spectrum^T·(w_soil^T·(w_before^T·x_before + w_activity^T·x_activity)) + w_other^T·x_other + constant   (Math. 12)

Therefore, the advice creation unit16uses Math. 12 shown above and employs linear optimization to efficiently search for the farming content x_activity that maximizes y, and thus the advice creation unit16can provide the user with the obtained content of farming. Next, a case in which the prediction model14is created based on non-linear regression using a neural network or the like will be described. When non-linear regression is employed, it is generally difficult to analytically determine how explanatory variables affect objective variables. Therefore, the advice creation unit16repeatedly performs trials in which it sets the explanatory variable vector x_soil regarding the soil properties to various values, and records the outputs (the prediction values of the objective variables) for these inputs. The advice creation unit16can specify, from the record, the explanatory variable vector x_soil for which the output is the maximum among the trials that have been performed, and set the specified explanatory variable vector x_soil as the optimum value. In this regard, in order to search for and find the optimum value more efficiently, if it is possible to calculate the gradient of the prediction model, it is possible to employ a gradient method such as the steepest descent method or the Newton method. Even when the shape of the prediction model function is unknown (a black box), an optimization method such as Particle Swarm Optimization or Bayesian Optimization may be employed. Also, when the content of farming x_activity that maximizes the objective variable y is to be obtained, the advice creation unit16can obtain the optimum value of x_activity by modeling the relationship between x_activity and y as in the case of using the linear model, and repeating trials in the same manner as above. Apparatus Operations Next, operations of the cultivation-target crop selection assisting apparatus20according to the second example embodiment will be described with reference toFIG.7.FIG.7is a flowchart showing operations of the cultivation-target crop selection assisting apparatus according to the second example embodiment of the invention, performed when creating an advice. In the following description,FIG.6is referenced when necessary. In the second example embodiment, a cultivation-target crop selection assisting method is performed by operating the cultivation-target crop selection assisting apparatus20.
Therefore, a description of the cultivation-target crop selection assisting method according to the second example embodiment is substituted with the following description of the cultivation-target crop selection assisting apparatus20. Prediction models are generated by carrying out steps A1and A2shown inFIG.4, and a cultivation-target crop is selected by carrying out steps B1to B3shown inFIG.5, in the second example embodiment as well. As shown inFIG.7, first, the advice creation unit16accepts a specification of a variety from the user (step C1). Specifically, the user specifies the variety for which the user needs an advice, via the terminal apparatus. Next, the advice creation unit16acquires the prediction model14for the variety specified in step C1, in order to create an advice regarding soil improvements for the variety specified in step C1(step C2). Next, the advice creation unit16creates an advice, using the acquired prediction model14(step C3). Specifically, if the prediction model14is a linear model, the advice creation unit16determines whether the weight value is positive or negative, using Math. 7 shown above, and creates an advice based on the result of determination. Thereafter, the advice creation unit16transmits the created advice to the user's terminal apparatus, and provides the user with the advice via the terminal apparatus (step C4). Effects of Second Example Embodiment As described above, according to the second example embodiment, it is possible to provide the user with an advice regarding soil improvements for the cultivated land. Therefore, the user can also select a cultivation-target crop on the premise of soil improvements. In addition, the second example embodiment can achieve the same effect as the first example embodiment. A program according to the present example embodiment need only be a program that causes a computer to carry out steps A1and A2shown inFIG.4, steps B1to B3shown inFIG.5, and also steps C1to C4shown inFIG.7. By installing such a program in a computer and executing the program, it is possible to realize the cultivation-target crop selection assisting apparatus20and the cultivation-target crop selection assisting method according to the second example embodiment. In this case, a processor of the computer functions as the information collection unit11, the prediction value calculation unit12, the crop selection unit13, the prediction model creation unit15, and the advice creation unit16, and performs processing. The program according to the second example embodiment may be executed by a computer system that is constituted by a plurality of computers. In such a case, the computers may respectively function as the information collection unit11, the prediction value calculation unit12, the crop selection unit13, the prediction model creation unit15, and the advice creation unit16. Physical Configuration The following describes a computer that realizes a cultivation-target crop selection assisting apparatus by executing a program according to the first example embodiment or the second example embodiment, with reference toFIG.8.FIG.8is a block diagram showing an example of a computer that realizes the cultivation-target crop selection assisting apparatuses according to the first and second example embodiments of the invention. As shown inFIG.8, a computer110includes a CPU (Central Processing Unit)111, a main memory112, a storage device113, an input interface114, a display controller115, a data reader/writer116, and a communication interface117.
These units are connected via a bus121so as to be able to perform data communication with each other. Note that the computer110may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to the CPU111or instead of the CPU111. The CPU111loads the program (codes) according to the present example embodiments, stored in the storage device113, into the main memory112, and executes the codes in a predetermined order to perform various kinds of computations. The main memory112is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory). Also, the program according to the present example embodiments is provided in the state of being stored in a computer-readable recording medium120. Note that the program according to the present example embodiments may be distributed over the Internet, to which the computer is connected via the communication interface117. Specific examples of the storage device113include a hard disk drive, and a semiconductor storage device such as a flash memory. The input interface114mediates data transmission between the CPU111and an input device118such as a keyboard or a mouse. The display controller115is connected to a display apparatus119, and controls display on the display apparatus119. The data reader/writer116mediates data transmission between the CPU111and the recording medium120, and reads out the program from the recording medium120and writes the results of processing performed in the computer110to the recording medium120. The communication interface117mediates data transmission between the CPU111and another computer. Specific examples of the recording medium120include general-purpose semiconductor storage devices such as a CF (Compact Flash (registered trademark)) and an SD (Secure Digital), a magnetic recording medium such as a flexible disk, and an optical recording medium such as a CD-ROM (Compact Disk Read Only Memory). Note that the cultivation-target crop selection assisting apparatuses according to the first and second example embodiments may be realized using pieces of hardware corresponding to the units, instead of a computer in which a program is installed. Furthermore, part of the cultivation-target crop selection assisting apparatuses may be realized using a program, and the rest may be realized using hardware. Part or all of the above-described example embodiments can be expressed as, but are not limited to, Supplementary Notes 1 to 15 described below. Supplementary Note 1 A cultivation-target crop selection assisting apparatus including: an information collection unit configured to collect information regarding a specific cultivated land; a prediction value calculation unit configured to calculate a prediction value of actual performance of cultivation of one or a plurality of varieties of crops in the specific cultivated land by applying the information collected by the information collection unit to a prediction model created by performing machine learning on a relationship between information regarding a sample cultivated land and actual performance information regarding a crop produced in the sample cultivated land; and a crop selection unit configured to select a crop that is suitable for being cultivated in the specific cultivated land based on the prediction value calculated for the one or plurality of varieties of crops.
Supplementary Note 2 The cultivation-target crop selection assisting apparatus according to Supplementary Note 1, further including a prediction model creation unit configured to create the prediction model by performing machine learning using the information regarding the sample cultivated land as an explanatory variable and the actual performance information regarding the crop produced in the sample cultivated land as an objective variable. Supplementary Note 3 The cultivation-target crop selection assisting apparatus according to Supplementary Note 2, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include information regarding soil properties of the cultivated land, and the cultivation-target crop selection assisting apparatus further includes an advice creation unit configured to create an advice regarding soil improvements for the specific cultivated land, based on an amount of a change in the explanatory variable when the objective variable is increased in the prediction model, and a weight set to the explanatory variable in the prediction model. Supplementary Note 4 The cultivation-target crop selection assisting apparatus according to Supplementary Note 1 or 2, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include information regarding soil properties of the cultivated land and environmental information regarding the cultivated land. Supplementary Note 5 The cultivation-target crop selection assisting apparatus according to Supplementary Note 1 or 2, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include soil optical reflectance spectrum data measured in the cultivated land. Supplementary Note 6 A cultivation-target crop selection assisting method including: (a) a step of collecting information regarding a specific cultivated land; (b) a step of calculating a prediction value of actual performance of cultivation of one or a plurality of varieties of crops in the specific cultivated land by applying the information collected in the (a) step to a prediction model created by performing machine learning on a relationship between information regarding a sample cultivated land and actual performance information regarding a crop produced in the sample cultivated land; and (c) a step of selecting a crop that is suitable for being cultivated in the specific cultivated land based on the prediction value calculated for the one or plurality of varieties of crops. Supplementary Note 7 The cultivation-target crop selection assisting method according to Supplementary Note 6, further including (d) a step of creating the prediction model by performing machine learning using the information regarding the sample cultivated land as an explanatory variable and the actual performance information regarding the crop produced in the sample cultivated land as an objective variable. 
Supplementary Note 8 The cultivation-target crop selection assisting method according to Supplementary Note 7, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include information regarding soil properties of the cultivated land, and the cultivation-target crop selection assisting method further includes: (e) a step of creating an advice regarding soil improvements for the specific cultivated land, based on an amount of a change in the explanatory variable when the objective variable is increased in the prediction model, and a weight set to the explanatory variable in the prediction model. Supplementary Note 9 The cultivation-target crop selection assisting method according to Supplementary Note 6 or 7, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include information regarding soil properties of the cultivated land and environmental information regarding the cultivated land. Supplementary Note 10 The cultivation-target crop selection assisting method according to Supplementary Note 6 or 7, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include soil optical reflectance spectrum data measured in the cultivated land. Supplementary Note 11 A computer-readable recording medium that includes a program recorded thereon, the program including instructions that cause a computer to carry out: (a) a step of collecting information regarding a specific cultivated land; (b) a step of calculating a prediction value of actual performance of cultivation of one or a plurality of varieties of crops in the specific cultivated land by applying the information collected in the (a) step to a prediction model created by performing machine learning on a relationship between information regarding a sample cultivated land and actual performance information regarding a crop produced in the sample cultivated land; and (c) a step of selecting a crop that is suitable for being cultivated in the specific cultivated land based on the prediction value calculated for the one or plurality of varieties of crops. Supplementary Note 12 The computer-readable recording medium according to Supplementary Note 11, wherein the program further includes an instruction that causes the computer to carry out (d) a step of creating the prediction model by performing machine learning using the information regarding the sample cultivated land as an explanatory variable and the actual performance information regarding the crop produced in the sample cultivated land as an objective variable. Supplementary Note 13 The computer-readable recording medium according to Supplementary Note 12, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include information regarding soil properties of the cultivated land, and the program further includes an instruction that causes the computer to carry out (e) a step of creating an advice regarding soil improvements for the specific cultivated land, based on an amount of a change in the explanatory variable when the objective variable is increased in the prediction model, and a weight set to the explanatory variable in the prediction model. 
Supplementary Note 14 The computer-readable recording medium according to Supplementary Note 11 or 12, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include information regarding soil properties of the cultivated land and environmental information regarding the cultivated land. Supplementary Note 15 The computer-readable recording medium according to Supplementary Note 11 or 12, wherein the information regarding the specific cultivated land and the information regarding the sample cultivated land include soil optical reflectance spectrum data measured in the cultivated land. While the present invention has been described above with reference to the example embodiments, the invention is not limited to the example embodiments described above. Various modifications that can be understood by a person skilled in the art may be applied to the configuration and the details of the present invention within the scope of the present invention. This application claims priority to Japanese Patent Application No. 2018-50198 filed Mar. 16, 2018, the disclosure of which is incorporated herein in its entirety. INDUSTRIAL APPLICABILITY As described above, with the invention, it is possible to select a crop that is suitable for being cultivated in a target cultivated land, without specialized knowledge. The invention is useful in the field of agriculture. LIST OF REFERENCE SIGNS
10: Cultivation-target Crop Selection Assisting Apparatus (First Example Embodiment)
11: Information Collection Unit
12: Prediction Value Calculation Unit
13: Crop Selection Unit
14: Prediction Model
15: Prediction Model Creation Unit
16: Advice Creation Unit
20: Cultivation-target Crop Selection Assisting Apparatus (Second Example Embodiment)
110: Computer
111: CPU
112: Main Memory
113: Storage Device
114: Input Interface
115: Display Controller
116: Data Reader/Writer
117: Communication Interface
118: Input Device
119: Display Apparatus
120: Recording Medium
121: Bus
Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings. DETAILED DESCRIPTION Example embodiments will now be described more fully with reference to the accompanying drawings. An operations manager, who manages a production system, may want to improve the performance of a manufacturing process or production system. The production system performance can be measured using one or more performance metrics (e.g., throughput, production lead time, product quality, customer demand satisfaction, etc.). The operations manager may want to improve one or more of such performance metrics. Specific improvement actions can be difficult to determine because of the complexity of production systems and the time required to determine which action, of multiple possible actions, should be taken by an operations manager to gain the desired performance improvements. The programmable manufacturing advisor (PMA) of the present disclosure can automatically and continuously determine a recommended improvement action that can be implemented by the operations manager. The PMA may use automated real-time measurements coupled with rigorous analytics applied to a mathematical model in order to calculate optimal steps for achieving the desired productivity improvement and offer these steps as advice to the operations manager without requiring that the operations manager be knowledgeable in these analytics. Thus, the PMA may be a knowledge automation device or tool, or an automated decision advisor device, for use in the manufacturing space. Examples regarding the analytics that may be used by the PMA to calculate the optimal steps may be found in the following book, which is incorporated by reference in its entirety: Li, Jingshan and Semyon M. Meerkov. “Production Systems Engineering.” (2008), hereafter referred to as “Production Systems Engineering”. For example, one way to improve a production system may be to increase throughput. The operations manager may ask questions such as—what are the major causes of production losses or how can a desired throughput be achieved in an optimal manner (e.g., making minimal changes to the production system's equipment and/or workforce). Such questions are common in manufacturing practice and are answered by the operations manager before any changes can be made to the production system. The programmable manufacturing advisor (PMA) may automatically monitor the production system and provide optimal continuous improvement recommendations to the operations manager, leading to a desired productivity improvement. The PMA may be configured for different production systems (e.g., serial lines, assembly systems, multi-job manufacturing, etc.) and for improvement of various performance metrics (e.g., throughput, production lead time, product quality, customer demand satisfaction, etc.). Referring now toFIG.1, a block diagram illustrating an example programmable manufacturing advisor (PMA)100is presented. The PMA100may monitor the production system200and automatically provide optimal continuous improvement recommendations to an operations manager114for achieving productivity improvements of the production system200. The PMA100, in the example shown, includes an information unit106, an analytics unit108, and an optimization unit110. The PMA100is coupled to the production system200and electronically receives information and/or measurements regarding the operation of the production system200. 
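As one way to picture the three-unit architecture just described, the following is a hypothetical Python skeleton; the class and method names are illustrative only and show how measurements might flow from the information unit through the analytics unit to the optimization unit, not how the PMA100is actually implemented.

```python
# Hypothetical skeleton of the PMA's three-unit data flow; names are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class InformationUnit:
    """Receives and filters raw measurements (cycle time, MTBF, MTTR, ...)."""
    measurements: List[Dict[str, float]] = field(default_factory=list)

    def ingest(self, record: Dict[str, float]) -> None:
        # In practice, outliers and catastrophic events would be filtered here.
        self.measurements.append(record)


@dataclass
class AnalyticsUnit:
    """Computes performance metrics from the mathematical model of the system."""
    def throughput(self, measurements: List[Dict[str, float]]) -> float:
        # Placeholder metric: average of a reported per-record throughput value.
        values = [m["throughput"] for m in measurements if "throughput" in m]
        return sum(values) / len(values) if values else 0.0


@dataclass
class OptimizationUnit:
    """Turns metrics into a recommended improvement action for the operations manager."""
    def recommend(self, baseline: float, target: float) -> str:
        if baseline >= target:
            return "No action needed: baseline meets the target."
        return f"Improvement needed: raise throughput from {baseline:.1f} to {target:.1f}."


info, analytics, optimizer = InformationUnit(), AnalyticsUnit(), OptimizationUnit()
info.ingest({"cycle_time": 59.0, "throughput": 34.6})
print(optimizer.recommend(analytics.throughput(info.measurements), target=36.0))
```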
A measurement database102can be used to collect and store the information and/or measurements regarding the production system200. The PMA100can electronically receive the information and/or measurements from the production system200via the measurement database102. As further described herein, the PMA100can transform, filter, analyze and/or otherwise process the information and/or measurements from the production system200to continuously and automatically deliver recommended improvement actions to the operations manager114. While the present disclosure describes the interactions of the operations manager114with the PMA100, any user can interact with the PMA100. It should be appreciated that the term “operations manager” can include other users of the PMA100such as engineers, factory superintendents, plant managers, production supervisors, consultants, and the like. As shown, the PMA100includes the information unit106, the analytics unit108and the optimization unit110. The information unit106may perform the functions of receiving measurements or other information regarding the production system200. The information unit106may clean, transform, filter or otherwise manipulate the measurement data for further use by the PMA100. The information unit106may also include the functionality of validating a mathematical model of the production system200. The analytics unit108, as further described below, can include functionality that includes performance analysis, system diagnostics, what-if analysis, and system health. The optimization unit110can include functionality to optimize possible changes to the production system200and to determine and present optimized performance improvements for the production system200. The PMA100, including the information unit106, the analytics unit108and the optimization unit110, can operate to continuously and automatically deliver recommended performance improvements for the production system200to the operations manager114. Additional details describing the functionality and/or structure of the PMA100can be found in the following paper, the contents of which are hereby incorporated by reference in its entirety: Alavian, Pooya & Eun, Yongsoon & Meerkov, Semyon & Zhang, Liang. (2019). Smart production systems: automating decision-making in manufacturing environment. International Journal of Production Research. 1-18. 10.1080/00207543.2019.1600765. In one example, the PMA100is installed on the factory floor and is in the form of a server and interactive display. In other examples, the PMA100can have other forms and/or can be installed remotely from the factory floor, for example. In still other examples, one or more components of the PMA100can be installed on the factory floor and one or more components of the PMA100can be installed remotely from the factory floor. For example, the information unit106and the analytics unit108may be in the form of a cloud server installed remotely from the factory floor, and the optimization unit110may be in the form of a server and interactive display installed on the factory floor. The PMA100may be used in connection with a manufacturing process or production system200as shown inFIG.2A. The operations manager114may generate a structural model of the production system200using an interface of the PMA100such as the one shown inFIG.3A. For example, the operations manager114may generate the structural model using the information unit106. 
Each operation in the plurality of operations204may be modeled to a corresponding machine in the structural model, as described inFIG.2B. In the example ofFIG.2A, the production system200may be a streetlight production system. The production system200may include a plurality of operations204, such as an operation204-1of unloading light housings onto a conveyor, an operation204-2of adding covers to the light housings, an operation204-3of adding light drivers to the light housings, and an operation204-4of packaging. AlthoughFIG.2Aillustrates seven operations, the production system200may include more or fewer operations. Furthermore, although the production system200is shown and described as a streetlight production system, it should be appreciated that the production system200may represent other types of production systems, such as automotive underbody assembly production systems, hot-dip galvanization production systems, etc. Referring now toFIG.2B, a schematic illustrating a structural model208of the production system200is presented. The structural model208is a simplified model that maps physical components (e.g., plurality of operations204, stations, storage areas, conveyors, etc.) of the production system200to standard production system entities (e.g., machines and buffers). Each operation in the plurality of operations204may be modeled to a corresponding machine. For example, the operation204-1of unloading the light housings may correspond to machine one 212-1 in the structural model208. Certain operations may be merged into a single machine, such as when the operation receives raw materials from a storage area. For example, the operation204-2of adding covers to the light housings may be associated with operation three and operation eight. Since operation three receives covers from operation eight (e.g., storage area), operation three and operation eight may be merged into machine three212-2in the structural model208. Similarly, the operation204-3of adding light drivers to the light housings may be associated with operation four and operation nine, and may merge into machine four212-3in the structural model208. Moreover, the operation204-4of packaging may be associated with operation seven and operation ten, and may merge into machine seven212-4in the structural model208. A buffer216may be interposed between each of the plurality of operations204. The buffer216is a quantity of work-in-process that prevents a blockage and/or a starvation. For example, the buffer216may include boxes that store work-in-process, conveyors, silos, robotic storage devices, automated guided vehicles, etc. Circles inFIG.2Bare representative of machines and rectangles are representative of buffers. While not shown inFIG.2A, the physical (or actual) production system200may further include sensors that measure parameters of each operation in the plurality of operations204. The parameters may include (but are not limited to) cycle time, mean time before failures (MTBF), and mean time to repair (MTTR). The cycle time is an amount of time needed to process a part by a machine. The MTBF is an average up-time of a machine and the MTTR is an average down-time of a machine. A measurement database102can electronically receive and store real-time measurements of the parameters from the sensors. In various implementations, the measurement database102may electronically receive real-time measurements from programmable logic controllers (PLCs) located within a machine, for example.
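A structural model of this kind can be represented compactly in code. The sketch below is an illustrative representation, not the PMA's internal format, assuming that each machine carries the parameters named above (cycle time, MTBF, MTTR), that each buffer carries a capacity, and that the serial line stores the machines and the buffers between them.

```python
# Illustrative representation of a structural model: machines (circles) and buffers (rectangles).
from dataclasses import dataclass
from typing import List


@dataclass
class Machine:
    name: str
    cycle_time: float   # seconds per part
    mtbf: float         # average up-time, seconds
    mttr: float         # average down-time, seconds


@dataclass
class Buffer:
    name: str
    capacity: int       # maximum work-in-process the buffer can hold


@dataclass
class SerialLine:
    machines: List[Machine]
    buffers: List[Buffer]   # buffers[i] sits between machines[i] and machines[i + 1]


# Toy streetlight line: unloading, covers (merged ops), drivers (merged ops), packaging.
line = SerialLine(
    machines=[
        Machine("m1_unload", cycle_time=55.0, mtbf=3600.0, mttr=120.0),
        Machine("m3_covers", cycle_time=58.0, mtbf=2700.0, mttr=150.0),
        Machine("m4_drivers", cycle_time=60.0, mtbf=3000.0, mttr=130.0),
        Machine("m7_packaging", cycle_time=59.0, mtbf=3300.0, mttr=123.8),
    ],
    buffers=[Buffer(f"b{i}", capacity=10) for i in range(1, 4)],
)
```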
In an example embodiment, the information unit106utilizes automated manufacturing principles (e.g., Industry 4.0) to monitor the parameters of the production system200. For each operation in the plurality of operations204, the information unit106electronically receives real-time measurements of the parameters from the measurement database102. The information unit106may filter the received measurements by removing outliers and catastrophic events to provide more accurate measurement data. The information unit106may generate a mathematical model of the production system200that associates the structural model with various parameters, including the machine parameters (e.g., cycle time, MTBF, and MTTR), buffer parameters (e.g., buffer capacity), and other system parameters (e.g., product mix, number of carriers, etc.). In the example embodiment, each operation is mathematically modeled with cycle time, MTBF, and MTTR; and each buffer is mathematically modeled with buffer capacity. Each operation is mathematically modeled with the same parameters. The information unit106may be further configured to validate the mathematical model to verify that the mathematical model accurately represents the production system200, as described below inFIG.11. The analytics unit108computes performance metrics and diagnoses system health of the production system200. The analytics unit108includes performance analysis, system diagnostics, what-if analysis, and system health. In the example embodiment, the performance analysis computes the performance metrics of the production system200. Example performance metrics may include (but are not limited to) at least one of a throughput, a production rate, a work-in-process, a blockage probability, a starvation probability, and a lead-time. The performance metrics may be computed using the mathematical model received from the information unit106. Details regarding how to compute the performance metrics may be found in Production Systems Engineering. The throughput is an average number of parts produced by the last machine over a predetermined period of time during steady state operation of a production system. The work-in-process is an average number of parts contained in a buffer during steady state operation of a production system. The blockage probability is a probability that a given machine is up, a buffer of the given machine is full, and a subsequent machine does not take a part from the buffer of the given machine. The starvation probability is a probability that a given machine is up and a buffer preceding the given machine is empty. The system diagnostics computes digested system diagnostics using the performance metrics. The operations manager114may use the computed digested system diagnostics to identify causes of productivity losses in the production system200and identify potential actions for improvement. The digested system diagnostics include throughput loss, location of a bottleneck, and buffering potency. The throughput loss is an amount of loss in throughput due to various factors, such as machine failure, buffering deficiency, lack of carriers, quality, etc. The throughput loss may be computed by subtracting the actual throughput from a nominal throughput. The nominal throughput is the throughput when all resources are unconstrained (e.g., no machine failures, no quality issues, infinite buffers, no impedance due to carriers, etc.).
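To make the throughput-loss computation concrete, the sketch below uses the common estimates of machine efficiency, MTBF / (MTBF + MTTR), and stand-alone throughput, efficiency divided by cycle time, approximates the nominal throughput from the slowest cycle time alone, and reports the loss relative to a measured actual throughput. This is a simplified calculation in the spirit of Production Systems Engineering, with assumed machine data, not the analytics unit's exact algorithm.

```python
# Simplified performance metrics for a serial line; formulas are standard estimates,
# not the analytics unit's exact computation. Machine data are assumed values.
machines = {
    # name: (cycle_time [s], MTBF [s], MTTR [s])
    "m1": (55.0, 3600.0, 120.0),
    "m2": (58.0, 2700.0, 150.0),
    "m3": (60.0, 3000.0, 130.0),
    "m4": (59.0, 3300.0, 123.8),
}

def efficiency(mtbf: float, mttr: float) -> float:
    """Fraction of time the machine is up."""
    return mtbf / (mtbf + mttr)

def standalone_throughput_jph(cycle_time: float, mtbf: float, mttr: float) -> float:
    """Parts per hour the machine could produce in isolation."""
    return 3600.0 / cycle_time * efficiency(mtbf, mttr)

for name, (ct, mtbf, mttr) in machines.items():
    print(name, round(standalone_throughput_jph(ct, mtbf, mttr), 2), "JPH stand-alone")

# Nominal throughput: all resources unconstrained, so only the slowest cycle time matters.
nominal_jph = 3600.0 / max(ct for ct, _, _ in machines.values())

actual_jph = 34.6  # measured throughput of the last machine (example value from the text)
print("throughput loss:", round(nominal_jph - actual_jph, 2), "JPH")
```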
The bottleneck occurs at a machine that affects the throughput the most (not necessarily the machine with the lowest stand-alone throughput or the highest utilization). The buffering potency is a measure of a buffer's effectiveness. Before performing any potential actions for improvement of the production system200, it may be important to know what effect modifying the parameters may have on the performance metrics. What-if analysis allows the operations manager114to explore the effects of various potential actions for improvement. For example, the operations manager114may select any parameter and adjust the range of values of the selected parameter. The resulting performance metric may be presented to the operations manager114, for example, on an interactive graph. The system health generates and presents (e.g., electronically displays) a dashboard that provides an overview of the performance metrics with color-coded indicators (e.g., red=bad, yellow=caution-needed, green=good) showing the health status of the performance metrics. Using the system health, the operations manager114can quickly assess the health status of the production system200and identify potential actions for improvement. The optimization unit110allows the operations manager114to create one or more scenarios that improve the performance metric(s) and achieve a desired improvement. Each scenario includes a desired productivity improvement and an admissible action space. In the example embodiment, the operations manager114may enter a desired value (or range) of the performance metric as the desired productivity improvement, or request that the performance metric be minimized or maximized, and may place constraints on the parameters under the admissible action space. The optimization unit110optimizes multiple scenarios simultaneously, allowing the operations manager114to compare different improvements in parallel. For example, the optimization unit110may determine the feasibility of each scenario using Artificial Intelligence (AI) inspired optimization algorithms. In the example embodiment, the optimization unit110may present the feasibility of each scenario to the operations manager114. For example, a scenario that is able to successfully achieve the desired value is indicated by a green-colored report card, while a scenario that is unsuccessful in achieving the desired value is indicated by a red-colored report card. In the example embodiment, the optimization unit110may determine the optimal steps to achieve the desired value specified in each scenario, for example, using system performance evaluation algorithms, as described below inFIG.10. A list of detailed steps that describe how to modify the parameters to achieve the desired value of each selected scenario is provided for each successful solution. For each unsuccessful solution, a list of steps to achieve the best attainable performance metric is provided. The operations manager114may select and approve a desirable scenario. The desirable scenario is stored in an approved improvements database116. An implementation team118may implement the desirable scenario. The implementation team118may reconfigure a particular machine based on steps listed in the desirable scenario. For example, the implementation team118may reconfigure the particular machine by rebuilding the machine, adjusting cycle time of the machine, assigning skilled trade worker priority, modifying a buffer, modifying raw materials release policy, etc.
After the desirable scenario is implemented, the operations manager114may compare the predicted productivity improvement with the actual productivity improvement of the implemented actions. Referring now toFIG.9, a flowchart illustrating an exemplary method for recommending, by the programmable manufacturing advisor (PMA)100, a productivity improvement for a manufacturing process is presented. Control begins at904, where the PMA100electronically receives measurements for at least one parameter of each operation of the manufacturing process. As can be appreciated, the manufacturing process is partitioned into multiple operations and mathematically modeled by the PMA100prior to904. The mathematical model of the manufacturing process can also be validated using a method as further described below and shown inFIG.11. For example, the information unit106of the PMA100can electronically receive the measurements from the measurement database102and validate the mathematical model of the manufacturing process. Control continues to908, where the programmable manufacturing advisor (PMA)100determines a baseline performance metric for the manufacturing process using at least one parameter. In one example, the analytics unit108determines the baseline performance metric for the manufacturing process based on the measurements of the at least one parameter. Each operation in the plurality of operations is mathematically modeled with the same parameters, and the parameters, in one example, are selected from a group consisting of cycle time, mean time before failures (MTBF), and mean time to repair (MTTR). For example, each operation in the plurality of operations is modeled with at least one parameter in a mathematical model, as described below. The baseline performance metric that is determined at908provides a level of performance of the manufacturing process against which possible improvement actions can be judged in order to determine an improvement action that can deliver a desired performance improvement. The performance metric, for example, may include at least one of throughput, production rate, work-in-process, blockage probability, starvation probability, and lead time. In one example, the analytics unit108can determine the predicted performance metric for the manufacturing process. Control continues at912, where the PMA100determines a predicted performance metric for the manufacturing process. The predicted performance metric can be the same performance metric as the baseline performance metric. The predicted performance metric, for example, may include at least one of throughput, production rate, work-in-process, blockage probability, starvation probability, and lead-time. At912, the PMA100can undertake one or more sub-processes to determine the performance metric. For example, the optimization unit110can undertake the steps described inFIG.10to determine the predicted performance metric. In addition to the method described inFIG.10, the optimization unit110can electronically receive an indication of an admissible action space through a user input screen (FIG.5A, for example). The optimization unit110can limit the recommended improvement action determined by the PMA100to an action bounded by the admissible action space. The admissible action space can, for example, include a limitation of at least one of the parameters used to mathematically model the manufacturing process. 
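For illustration, an admissible action space can be encoded as per-parameter bounds, and a proposed parameter change can be clamped so that the recommended action stays inside the space. The structure and names in the sketch below are assumptions, not the PMA's actual data format.

```python
# Hypothetical encoding of an admissible action space as per-parameter bounds.
admissible_action_space = {
    # (machine, parameter): (minimum allowed value, maximum allowed value)
    ("machine_6", "cycle_time"): (54.0, 62.0),   # seconds
    ("machine_6", "mttr"): (100.0, 130.0),       # seconds
}

def clamp_action(machine: str, parameter: str, proposed_value: float) -> float:
    """Limit a proposed parameter change to the admissible action space, if bounded."""
    bounds = admissible_action_space.get((machine, parameter))
    if bounds is None:
        return proposed_value          # unconstrained parameter
    low, high = bounds
    return min(max(proposed_value, low), high)

# A 15% cut to MTTR (123.8 s -> 105.2 s) stays inside the assumed bounds.
print(clamp_action("machine_6", "mttr", proposed_value=105.2))
```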
At912(and as further described inFIG.10), the optimization unit110can vary one or more of the parameters used to mathematically model the manufacturing process and compute one or more performance metrics based on the varied parameters using the mathematical model. The optimization unit110can determine which parameter to vary using any suitable algorithm such as identifying a bottleneck as shown inFIG.10. The PMA100can also receive an indication of a predetermined performance metric threshold. Such a predetermined performance metric threshold can be electronically received through an input interface of the PMA100. The predetermined performance metric threshold can be a value or a range that describes a desired improvement of the manufacturing process. At916, the PMA100compares the predicted performance metric for the manufacturing process to the baseline performance metric. The optimization unit110can perform this comparison. As part of916, the optimization unit110can also compare the difference between the predicted performance metric and the baseline performance metric to the predetermined performance metric threshold. Alternatively, the optimization unit110can compare different alternative improvement actions to determine which improvement action results in the greatest performance improvement to the performance metric. The PMA100can then select a recommended improvement action based on the difference between the baseline performance metric and the predicted performance metric. Control continues to920, where the PMA100automatically presents the recommended improvement action to the operations manager114. For example, the PMA100may electronically display a list of detailed steps or actions to the operations manager114. The detailed steps or actions indicate a target value for a particular parameter of a given operation in the plurality of operations. For example only, a step may indicate reducing cycle time of machine six from 59 seconds to 54 seconds in order to increase throughput of the manufacturing process. The process as described above and shown inFIG.9can be an automatic and continuous process that can continuously provide recommended improvement actions and present such recommended improvement actions to the operations manager114. This automatic and continuous characteristic of the PMA100allows the operations manager114to receive information in real time as changes occur to the manufacturing process. Additionally, the operations manager114can focus efforts on the implementation of such recommended improvement actions and have confidence that such changes will cause the desired improvements. FIG.10is a flowchart illustrating an exemplary method for determining a recommended improvement action as described above. The optimization unit110, for example, can perform the method shown inFIG.10. Control begins at1004, where control identifies a bottleneck in the plurality of operations. To identify the bottleneck, control identifies the blockage probability and the starvation probability for each operation in the plurality of operations. The bottleneck occurs at a selected operation in the plurality of operations when (i) the blockage probability of a preceding operation is greater than the starvation probability of the selected operation and (ii) the blockage probability of the selected operation is less than the starvation probability of a subsequent operation.
In the example shown inFIG.4A, each of the plurality of operations is modeled to a corresponding machine, such as a wiring operation corresponding to machine five, a testing operation corresponding to machine six, and a packaging operation corresponding to machine seven. For each of the machines, control identifies the blockage probability and the starvation probability according to the principles found in “Production Systems Engineering”. Control compares the blockage probability of a given machine to the starvation probability of a subsequent machine. Machine five has a blockage probability of 0.14 and machine six has a starvation probability of 0.047. Since the blockage probability of machine five is greater than the starvation probability of machine six, an arrow is pointed rightwards (from machine five to six). Moreover, machine six has a blockage probability of 0.055 and machine seven has a starvation probability of 0.11. Since the blockage probability of machine six is less than the starvation probability of machine seven, an arrow is pointed leftwards (from machine seven to six). As such, control identifies that the bottleneck occurs at machine six when the arrow transitions from pointing rightward to pointing leftward (indicative of (i) the blockage probability of machine five being greater than the starvation probability of machine six and (ii) the blockage probability of machine six being less than the starvation probability of machine seven). Referring back toFIG.10, at1008, control identifies a given parameter from the parameters of the bottleneck operation (e.g., cycle time, MTBF, and MTTR) that improves a performance metric (e.g., the baseline performance metric determined at908). In order to identify a given parameter that improves the performance metric, a value of one of the parameters is varied slightly (e.g., by a small step size, such as 5%). The performance metric is recomputed using the varied value. The performance metric may be recomputed in the manner as described above by the analytics unit108. Step1008may be repeated iteratively for the remaining parameters. Control may determine which of the parameters is the most effective in changing the performance metric. In one example, when selecting a step size by which to vary a value of one of the parameters, control may limit the step size based on the admissible action space. For example, the admissible action space can include a maximum allowable range for one or more (e.g., all) of the parameters, and control may limit the step size for a parameter to maintain that parameter within its maximum allowable range (if applicable). Thus, if increasing or decreasing a parameter by 3% would respectively adjust the parameter to the upper end or lower end of its maximum allowable range, control may limit the step size for that parameter to 3%. In the example shown inFIG.4A, since the bottleneck occurs at machine six, cycle time of machine six may be decreased by three seconds (e.g., approximately 5%) from 59 seconds to 56 seconds. Next, the performance metric is recomputed using the cycle time of 56 seconds. The throughput may increase from 34.6 jobs per hour (JPH) to 34.7 JPH. Similarly, MTTR may be decreased slightly from 123.8 seconds to 117.6 and the throughput may increase from 34.6 JPH to 37 JPH. In this example, MTTR is the most effective parameter in changing the throughput. 
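Step1008, identifying which bottleneck parameter is most effective, can be sketched as a small search: perturb each parameter by a small step in each direction, clamp the trial value to the admissible action space, re-evaluate the performance metric, and keep the parameter with the largest gain. The evaluate function below is a toy throughput formula rather than the analytics unit's model, so its ranking will not necessarily match the MTTR result of the FIG.4A example; all names and ranges are assumptions for illustration.

    def most_effective_parameter(params, evaluate, admissible_range, step_fraction=0.05):
        """Vary each parameter of the bottleneck operation by a small step (1008),
        clamped to the admissible action space, and return the parameter whose
        variation improves the performance metric the most."""
        baseline = evaluate(params)
        best_name, best_gain, best_value = None, 0.0, None
        for name, value in params.items():
            lo, hi = admissible_range[name]
            for direction in (-1.0, +1.0):
                # Trial step in each direction, clamped to the admissible range.
                trial_value = max(lo, min(hi, value * (1.0 + direction * step_fraction)))
                trial = dict(params, **{name: trial_value})
                gain = evaluate(trial) - baseline
                if gain > best_gain:
                    best_name, best_gain, best_value = name, gain, trial_value
        return best_name, best_gain, best_value

    def toy_throughput(p):
        # Toy stand-in: throughput rises as cycle time and MTTR fall and as MTBF rises.
        efficiency = p["mtbf"] / (p["mtbf"] + p["mttr"])
        return 3600.0 * efficiency / p["cycle_time"]

    params = {"cycle_time": 59.0, "mtbf": 3600.0, "mttr": 123.8}
    ranges = {"cycle_time": (54.0, 65.0), "mtbf": (3000.0, 4000.0), "mttr": (100.0, 130.0)}
    print(most_effective_parameter(params, toy_throughput, ranges))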
Referring back toFIG.10, once the given parameter is identified, control adjusts the given parameter (e.g., most effective parameter) for the bottleneck operation further (e.g., by a larger step size, such as 10-15%) at1012. In the example above, control may adjust MTTR from 123.8 seconds to 105.2 seconds (e.g., approximately 15%). At1016, control predicts or computes the performance metric based on the adjusted given parameter using the mathematical model of the bottleneck operation. The performance metric may be computed in the manner described above by the analytics unit108. At1020, control compares the computed performance metric with a desired value or with a predetermined performance metric threshold for the performance metric. The desired value and/or the predetermined performance metric threshold for the performance metric can be received from the operations manager114. At1024, control determines whether the computed performance metric has reached the desired value or the predetermined performance metric threshold for the performance metric. If so, control presents (e.g., electronically displays) a list of steps (or actions) to the operations manager114and then ends; otherwise, control returns to1004. The list of steps (or actions) indicates a target value for a particular parameter of a given operation in the plurality of operations, as described inFIG.6. For example, the list of steps (or actions) may include adjusting the given parameter (e.g., most effective parameter) by a step size that achieves the desired value and/or the predetermined performance metric threshold for the performance metric. In addition, control may limit the recommended step size for the given parameter based on the admissible action space, which can include a limitation (e.g., a maximum value, a minimum value, a maximum allowable range) on the given parameter. In one example, when selecting a step size by which to vary a value of the given parameter at1012, control may limit the step size based on the admissible action space. For example, the admissible action space can include a maximum allowable range for one or more (e.g., all) of the parameters, and control may limit the step size for a parameter to maintain that parameter within its maximum allowable range (if applicable). Thus, if increasing or decreasing a parameter by 8% would respectively adjust the parameter to the upper end or lower end of its maximum allowable range, control may limit the step size for that parameter to 8%.

FIG.11is a flowchart illustrating an exemplary method for validating the mathematical model of the manufacturing process as performed by the information unit106. Control begins at1104, where control computes a modeled performance metric of the production system using a mathematical model of the manufacturing process. The mathematical model can be a model that is created as previously described. At1106, control determines an actual performance metric using received measurements. Such measurements can be electronically received, for example, by the information unit106from the measurement database102. At1108, control compares the modeled performance metric with the actual performance metric computed by an analytics unit, such as the analytics unit108. For example, control may compute a percent error using the modeled performance metric and the actual performance metric. At1112, control determines whether the comparison is within a predetermined target range. For example, the predetermined target range may be ±5%.
In various implementations, the predetermined target range may be ±10%. If the comparison is not within the predetermined target range, the mathematical model is adjusted until the desired accuracy is obtained at1116. For example, adjusting the mathematical model may include changing one or more parameters or adding or subtracting an operation in the mathematical model, merging operations, or splitting merged operations. If the comparison is within the predetermined target range, the mathematical model is validated and control ends. Referring back toFIG.3A, an example of creating a structural model using the information unit106in the PMA100is presented. A user of the PMA100, such as the operations manager114and/or a programmer, may create the structural model of a production system in the PMA100. The structural model may reduce the plurality of operations to one of the standard types of production systems. The user may select a layout type304of the production system, a reliability model308, a carrier return, number-of-jobs type, and quality features312. The layout type304may include a serial line, an assembly system, or a serial line with rework. The reliability model308may include a Bernoulli model, an exponential model, or a general model. The quality features312may include perfect quality, non-perfect quality, or quality-quantity coupling. Referring now toFIG.3B, an example of creating a mathematical model using the information unit106in the PMA100is presented. The user may create the initial mathematical model by selecting the number of job types316that may correspond to the number of job types that may be produced by the production system. For each of the machines, the user may enter a machine name320. Moreover, for each of the machines, the user may enter initial values for MTBF328, MTTR332, and buffer capacity336. The user may also enter cycle time324for job type316for each machine. Any of the tasks involved in creating a structural or mathematical model using the information unit106may be performed by a programmer, and the results of the task may be stored in the information unit106so that a user such as the operations manager114does not need to have knowledge of analytics used to perform the tasks. Referring now toFIG.3C, an example information unit output screen of an information unit106of the PMA100is presented. In this example, after the structural model and the mathematical model have been created, the information unit output screen presents (e.g., electronically displays) the initial mathematical model and the structural model. For each machine, the initial mathematical model may include the initial values for the parameters that were entered into the mathematical model, as shown inFIG.3B. The production system shown in this example is an asynchronous single-product serial line. The information unit may update the initial mathematical model at periodic intervals, such as once a day, once a week, once a month, etc., based on measurements received for each of the parameters. The information unit screen provides an updated mathematical model of the production system for the current day and four days in the past. The PMA100may automatically and continuously update and validate the mathematical model of the production system. Referring now toFIG.4A, an example performance analysis screen404of the analytics unit108of the PMA100is presented. 
The performance analysis screen404presents (e.g., electronically displays) the performance of the production system to the operations manager114. The performance analysis screen404includes a structural model408, a mathematical model412, and performance metrics416. The structural model408is shown for seven machines and each machine may be identified by a machine name. For each machine, the mathematical model412includes measurements for each of the parameters including the cycle time, MTBF, and MTTR. The mathematical model412may also include buffer parameters (e.g., buffer capacity) for each buffer. The performance metrics416may include a starvation and blockage probability for each machine, and a work-in-process for each buffer. The performance metrics416may also include an actual throughput, as well as a nominal throughput that corresponds to a throughput if all machines were reliable (e.g., without random breakdowns) and all buffers' capacities were infinite. As shown, the actual throughput is 34.6 JPH and the nominal throughput is 57.1 JPH, implying a production loss of approximately 39.43%.

Referring now toFIG.4B, an example diagnostics screen420of the analytics unit108of the PMA100is presented. The diagnostics screen420presents (e.g., electronically displays) the reasons for throughput loss, such as losses due to machine breakdown424and losses due to buffering428. The losses due to machine breakdown424are computed assuming that all machines are reliable. The losses due to buffering428are computed assuming that all buffers are infinite. The losses due to machine breakdown424are relatively large, while losses due to buffering428are small. The diagnostics screen420also presents (e.g., electronically displays) a location of a bottleneck. The production system has one bottleneck—located at machine six (e.g., testing operation). The diagnostics screen420also presents (e.g., electronically displays) a buffer potency. The buffering is weakly potent (WP) if the bottleneck machine is the worst machine in the production system (e.g., the machine with the smallest efficiency); potent (P) if the buffering is weakly potent and the production rate of the system is close to the stand-alone throughput of the bottleneck machine; strongly potent (SP) if the buffering is potent and the system has the smallest possible total buffer capacity necessary to ensure this throughput; otherwise, the buffering is not potent (NP).

Referring now toFIG.4C, an example what-if analysis screen432of the analytics unit108of the PMA100is presented. The what-if analysis screen432presents (e.g., electronically displays) the effects of changing the machine parameters and buffer parameters. The effects of decreasing the bottleneck machine downtime are illustrated. As illustrated, if MTTR of machine six (e.g., testing operation) is changed from 124 seconds to 30 seconds, the throughput increases almost linearly from 34.6 JPH to 40 JPH but remains constant after that. This is because the bottleneck switches to machine seven (e.g., packaging operation) and immediately after to machine two (e.g., label attach operation) when the MTTR of machine six is between approximately 75-85 seconds. As a result, further reduction of MTTR of machine six (e.g., testing operation) results in no throughput improvement. The effects of changing cycle time, MTBF, and buffer capacity may be explored similarly using the corresponding buttons.
The what-if analysis screen432is intended to assist the operations manager114in formulating various options for admissible action space, as described below inFIG.5A. Referring now toFIG.4D, an example system health screen436of the analytics unit108of the PMA100is presented. The system health screen436presents (e.g., electronically displays) a summary of the production system performance. The system health screen436may include an effectiveness of machines440and an effectiveness of buffers444. The effectiveness of machines440may be calculated by dividing the actual throughput by throughput of the production system without machine breakdowns. The effectiveness of buffers444may be calculated by dividing the actual throughput by throughput of the production system with infinite buffers. The system health screen436is intended to assist the operations manager114in formulating various options for desired productivity improvement, as described below. In this particular example, the most effective way to improve production system performance is to decrease machine downtimes, for example, by introducing priorities in skilled trade workers. Referring now toFIG.5A, an example managerial input screen of an optimization unit110of the PMA100is presented. The operations manager114may create one or more scenarios that correspond to an improvement of the performance metric by modifying the parameters. Each scenario includes a desired productivity improvement504and an admissible action space508. The operations manager114may enter a desired value of the performance metric in the desired productivity improvement504. For example, the operations manager114has entered a desired value of 37 JPH for throughput. Along with the performance metrics, the operations manager114may improve lead time, leanness of the production system, and/or product quality. Additionally or alternatively, the operations manager114may minimize/maximize the performance metric by placing constraints on the parameters in the admissible action space508. For example, the operations manager114may seek to increase throughput to 37 JPH by placing a constraint to reduce MTTR of at most one machine by 15%. The operations manager114may save each scenario. Referring now toFIG.5B, an example summary of scenarios screen of the optimization unit110of the PMA100is presented. The summary of scenarios screen provides a list of the saved scenarios. Scenario one corresponds to the scenario shown in the example ofFIG.5A. In scenario two, the goal is to increase throughput to 37 JPH under the constraints on reduction of MTTR and cycle time. The goal of scenario three and four is to maximize throughput. In scenario three, one of the seven machines could be rebuilt, eliminating the machine's breakdown. In scenario four, a skilled trade worker could be assigned to service the production system, in addition to the skilled trade worker already in place. Referring now toFIG.6, an example optimization unit output screen of an optimization unit110of the PMA100is presented. The optimization unit output screen presents (e.g., electronically displays) a summary of whether the goal of each scenario can be achieved. For each scenario, a list of steps (or actions) is presented to the operations manager114. The list of steps (or actions) indicate a target value for a particular parameter of a given operation in the plurality of operations, as described in scenario two below. 
In scenario one, the desired throughput of 37 JPH cannot be achieved with a 15% reduction of any machine MTTR. Scenario two provides that if, in addition to scenario one, the cycle time of a machine can be reduced by 5%, the desired throughput can be achieved. For example, reducing MTTR of machine six (e.g., testing operation) by 18 seconds and reducing the cycle time by 5 seconds results in a throughput of 37 JPH. In scenario three, throughput is maximized if machine six (e.g., testing operation) is rebuilt, resulting in a throughput of approximately 38.2 JPH. In scenario four, the optimal allocation to reduce total MTTR results in a throughput of approximately 45.27 JPH (e.g., approximately 30% throughput improvement).

Referring now toFIG.7, an example managerial approval screen of the optimization unit110of the PMA100is presented. The managerial approval screen allows the operations manager114to select a particular improvement scenario, evaluated by the optimization unit, to implement on the factory floor. For example, the operations manager114may decide to select the particular scenario based on engineering, financial, and/or business information available to the operations manager114. Once selected, an implementation team, such as the implementation team118, may execute the selected scenario. For example, the implementation team may reconfigure a particular machine by rebuilding the machine, adjusting machine cycle time, assigning skilled trade worker priority, modifying raw materials release policy, etc. Scenario four will be submitted for implementation, as shown by the highlight.

Referring now toFIG.8, an example measured productivity improvement screen of the PMA100is presented. The measured productivity improvement screen presents (e.g., electronically displays) the performance improvement obtained by a discrete-event simulation of the improved production system and compares it with that predicted by the PMA100. In terms of throughput improvement, the predicted and measured values are close to each other. Within each scenario, the predicted and measured throughputs vary by approximately 3-5%, which is within the predetermined target range, and the mathematical model is therefore validated.

Turning now toFIG.12, a method for determining a reliable mean time between failures is presented. The example method shown inFIG.12describes methodology for determining a reliable mean time between failures, but the principles described below and shown inFIG.12can be applied to reliably determine any suitable performance metric including throughput, production rate, work-in-process, blockage probability, starvation probability, lead time, and starvation rate. Further information regarding the method ofFIG.12is found in Alavian, Pooya, et al., "The (Alpha, Beta)-Precise Estimates of MTBF and MTTR: Definitions, Calculations, and Induced Effect on Machine Efficiency Evaluation," 9th International Conference MIM, 2019, pp. 1-6, the contents of which are incorporated herein by reference. As can be appreciated, the evaluation of the production system, as described above, can rely on the reliability of the performance metrics that are used by the PMA100. The use of reliable performance metrics can increase the likelihood that the recommended performance improvements will deliver actual performance improvements. It is desirable, therefore, to determine reliable performance metrics. The method begins at1204where control identifies a desired accuracy of the mean time between failures.
Such desired accuracy can be identified by the operations manager114, for example. The desired accuracy can be input into the PMA100, in one example, using a user input screen or other device. In one example, the desired accuracy is expressed as a percentage of the performance metric such as ±5%, ±10%, or the like. At1208, control identifies a desired probability of achieving the reliable mean time between failures at the desired accuracy. Such desired probability can be identified by the operations manager114, for example. The desired probability can be input into the PMA100, in one example, using a user input screen or other device. In one example, the desired probability is expressed as a number between zero and one, such as for example 0.8, 0.85, 0.9, or 0.95. At1212, control identifies a distribution curve that represents a relationship between the desired probability and a number of samples at the desired accuracy. Such distribution curves can be created or retrieved from a suitable database, for example. At1216, a minimum number of sample measurements is identified using the distribution curve identified at1212. The minimum number of sample measurements indicates a minimum number of actual measurements that should be used to reliably determine the mean time between failures (or other performance metric). At1220, the minimum number of sample measurements identified at1216is used to determine the reliable mean time between failures. In one example, the PMA100can use this method to determine one or more of the performance metrics. In this manner, the PMA100can determine reliable performance metrics based on actual measurements of the production system200when it creates and validates the mathematical model of the production system200. The PMA100can continuously and automatically update and re-validate the mathematical model of the production system200using reliable performance metrics to increase the likelihood that the recommended performance improvements will deliver the anticipated performance improvements. In this application, including the definitions below, the terms “unit,” “module” or the term “controller” may be replaced with the term “circuit.” The terms “module,” “unit,” and/or “PMA” may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor circuit (shared, dedicated, or group) that executes code; a memory circuit (shared, dedicated, or group) that stores code executed by the processor circuit; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The PMA and/or its components may include one or more interface circuits. In some examples, the interface circuits may include wired or wireless interfaces that are connected to a local area network (LAN), the Internet, a wide area network (WAN), or combinations thereof. The functionality of any given module of the present disclosure may be distributed among multiple modules that are connected via interface circuits. For example, multiple modules may allow load balancing. In a further example, a server (also known as remote, or cloud) module may accomplish some functionality on behalf of a client module. 
The term “code”, as used above, may include software, firmware, and/or microcode, and may refer to programs, routines, functions, classes, data structures, and/or objects. The term “shared processor circuit” encompasses a single processor circuit that executes some or all code from multiple modules. The term “group processor circuit” encompasses a processor circuit that, in combination with additional processor circuits, executes some or all code from one or more modules. References to multiple processor circuits encompass multiple processor circuits on discrete dies, multiple processor circuits on a single die, multiple cores of a single processor circuit, multiple threads of a single processor circuit, or a combination of the above. The term “shared memory circuit” encompasses a single memory circuit that stores some or all code from multiple modules. The term “group memory circuit” encompasses a memory circuit that, in combination with additional memories, stores some or all code from one or more modules. The term “memory circuit” is a subset of the term “computer-readable medium.” The term “computer-readable medium,” as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium may therefore be considered tangible and non-transitory. Non-limiting examples of a non-transitory, tangible computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only memory circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc™). The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks, flowchart components, and other elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer. The computer programs include processor-executable instructions that are stored on at least one non-transitory, tangible computer-readable medium. The computer programs may also include or rely on stored data. The computer programs may encompass a basic input/output system (BIOS) that interacts with hardware of the special purpose computer, device drivers that interact with particular devices of the special purpose computer, one or more operating systems, user applications, background services, background applications, etc. The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.
51,982
11861740
DETAILED DESCRIPTION The present invention extends to methods, systems, and computer program products for securely and efficiently targeting and monitoring utility resource usage and assigning conservation actions designed to reduce the targeted utility customer's consumption—thus saving them money too. Further, these conservation actions themselves may also be monitored for verifying the efficacy of an audit, conservation, or rebate program. Note that although the following described embodiments may refer to a particular type of utility company (e.g., water utility) as needing the unique, innovative utility resource conservation system described herein, one of skill in the art will recognize that exemplary embodiments herein can be utilized by any number of utility or resource providers where consumption may need monitoring and/or conserving—including: power supply companies; water utilities; internet providers; cellular phone service providers; and the like. Accordingly, the use of a specific resource (e.g., water) or a particular type of utility company (e.g., public water company) in any of the described exemplary embodiments herein is for illustrative purposes only and is not meant to limit or otherwise narrow the scope of the present invention unless otherwise explicitly claimed.

As previously mentioned, utilities may not need to communicate directly with all of their customers in achieving their conservation goals and mandates. Instead, they may wish to focus on one specific set of consumers (e.g., only the biggest offenders or highest resource consumption users). While a consistent message to conserve may be useful from time to time, canvassing average and low use utility customers with audit or conservation programs is generally not cost effective—nor an efficient and effective way to communicate with utility customers. Accordingly, exemplary embodiments employ a utility conservation system (including utility resource analytics, communications, monitoring and/or auditing tools) to help utilities or conservation managers target specific types of consumers (e.g., the highest utility consumers), as compared with customers across similar industry classes (e.g., the restaurant business), and/or within certain geographical locations (e.g., within a state, city, town, or other defined location).

Further, example embodiments can set conservation threshold values—for determining specific types of consumers—by taking into account a myriad of different data sets from various sources and reporting agencies (e.g., weather reporting centers, satellite imaging, customer input, admin preferences, audit results, etc.). Thus, exemplary embodiments allow for dynamic or adjustable conservation threshold values, which may vary based on conservation data received internally (e.g., via: administrator input, settings, and/or preferences; input of available audits, conservation, or rebate programs; customer classification codes or settings; analytics of historical data for a customer's pre- and post-conservation-action utility resource consumption; etc.), externally (e.g., via: data over the internet; user input from utility audit tools; customer settings or preferences; etc.), or both.

The utility conservation program described in other exemplary embodiments herein takes conservation actions (e.g., setting conservation program threshold values; assigning customers to an audit, conservation, or rebate program; monitoring and reporting of conservation program or action efficacy; etc.)
based on historical monitoring of customer program completion and records of consumption. In other words, example embodiments monitor the progress of a customer's completion of an audit, conservation, or rebate program, wherein the system may also look at further customer consumption after implementing a recommended resource conservation action item. Thus, the efficacy of any particular resource conservation action (e.g., assigning, reporting, or otherwise communicating to the customer their utility usage and available audits, conservation, or rebate programs, etc.) may be continually evaluated for purposes of adjusting, adding, or otherwise deleting conservation programs. Of course, as one skilled in the art would understand, other actions and uses of the conservation efficacy information are available and contemplated herein. For example, utility companies in different regions may share information for purposes of trying programs proven effective elsewhere. Accordingly, any specific use of the efficacy reporting or direct actions taken based on the monitoring of a customer's progress, completion, or implementation of an audit, conservation, or rebate program, as described herein, is used for illustrative purposes only and is not meant to limit or otherwise narrow the scope of the present invention unless otherwise explicitly claimed.

As previously mentioned, overloaded IT departments can have a hard time supplying conservation managers with the data or tools they need to fulfill their conservation mandates. Data collected from these systems allow for appropriate programs, staffing, and budgeting for specific conservation goals. Accordingly, example embodiments provide for a comprehensive method, system, and computer program product to help conservation managers or administrators more readily target, communicate, audit, or otherwise monitor utility resource consumption for customers of specific types (e.g., highest users). In addition, example embodiments consider and provide mechanisms for ensuring the security of the data stored on behalf of the utility—thus reducing the risk that users' personal data or personally identifiable information (PII) can be inappropriately accessed or misused.

Turning now to the various Figures, e.g., as shown inFIGS.1and2(a)-(e), example embodiments provide for efficiently and effectively targeting specific types of utility consumers (e.g., high utility users) based on existing public utility data that allows for filtering of customers based on an industry standard classification system, thus allowing resource managers to develop usage threshold values by industry class. In other words, example embodiments more accurately compare users across similar industry classes and thus more accurately determine an average utility usage by industry, thereby providing a better basis for assigning customers to various audit, conservation, and/or rebate programs.

More specifically, as shown inFIG.1, example embodiments provide for a utility resource conservation system100that includes a myriad of tools (e.g., utility use analytics tool110, client relationship management (CRM) tool115, and other utility audit tools120), which can be used across varying platforms and devices, and that can be utilized or built to work independently or collaboratively. Such embodiments may be thought of as a conservation menu that administrators or resource managers may use to effectively and efficiently monitor and communicate available audit, conservation, or rebate programs to utility customers.
As shown, and as described in greater detail below, these tools may include: (1) a utility use analytics tool110, which helps utility resource managers identify, for example, the highest utility resource users by segment or industry classification; (2) a client relationship management tool115(e.g., an enterprise communications portal such as Salesforce) to help resource managers, e.g., with automated communications regarding utility customers' consumption, and/or available audit, conservation, or rebate programs; and various utility audit tools120(e.g., a residential utility audit tool125; a commercial utility audit tool130; a governmental utility auditing tool133, or any other utility auditing tool135as described or otherwise contemplated herein), which helps administrators or resource managers evaluate the efficacy of employing the various audit, conservation, and rebate programs available. As previously mentioned, because example embodiments contemplate that the utility resource conservation system200will reside both internally (i.e., with the utility company's infrastructure) and externally (e.g., from the internet or based on user reports for progress or completion of an audit, conservation, or rebate program), other example embodiments allow for data portability and cross platform communications in each tool, while ensuring the security of personal identifiable information (PII). Note that while specific names may be associated with the above and other modules or tools as used in describing various exemplary embodiments, such naming conventions are for illustrative purposes only and that the varying embodiments can be employed with other similar tools and mechanisms. As such, any mention of a specific name, brand, or type of tool used in the utility conservation analytics, communication or auditing system described herein is for illustrative purposes only and is not meant to limit or otherwise narrow the scope of the present invention; unless, of course, otherwise explicitly claimed. As illustrated inFIGS.2(a)-(e); example embodiments provide for mechanisms whereby utility efficiency targets (i.e., the setting of one or more resource consumption threshold values—e.g., high, medium, or low resources user) may be compared to help find specific utility users (e.g., high users) by sector or other classification (e.g., hotels, with 30 or more rooms, swimming pool, etc.). In other words, example embodiments may use an industry standard classification system (e.g., the North American Industry Classification System (NAICS), which is commonly used in water utilities) for analyzing and comparing utility customers' consumption across similar industry classes (e.g., Restaurants), subclass (e.g., Fast Food, Restaurants), sub-subclass (e.g., Hamburger, Fast Food, Restaurants), and so on and so forth. Note that the granularity of class definition or sub-classification may vary based on any number of desired results in categorizing customers for normalization or other conservation purposes. In other words, a resource manager or utility administrator can set the class and subclass to capture utility customers of similar use standards. Further analysis of the resource usage across the defined classification then allows for the setting of industry standard threshold consumption values used in the determination of targeted conservation customers (e.g., utility consumers with utility usage of “only high”; “high and low”; “medium”, etc.). 
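One hypothetical way to derive the industry standard threshold values described above is sketched below: billing records are grouped by classification code, obviously anomalous readings are trimmed relative to the class median (as discussed later in connection withFIG.2(c)), and a "high use" threshold is taken from an upper quantile of the remaining distribution. The record layout, the NAICS-style codes, the quantile choice, and the trimming factor are assumptions for illustration only.

    from collections import defaultdict
    from statistics import mean, median

    def class_thresholds(records, high_quantile=0.75, outlier_factor=5.0):
        """Group usage by industry classification code and derive per-class statistics
        and a 'high use' threshold from the upper quantile of the class distribution.
        records: iterable of (classification_code, usage) pairs, e.g. NAICS code and CCF."""
        by_class = defaultdict(list)
        for code, usage in records:
            by_class[code].append(usage)

        thresholds = {}
        for code, usages in by_class.items():
            med = median(usages)
            # Trim readings far above the class median so an anomalous account does not
            # inflate the class threshold; fall back to the full list if all are trimmed.
            kept = sorted(u for u in usages if u <= outlier_factor * med) or sorted(usages)
            idx = min(len(kept) - 1, int(high_quantile * len(kept)))
            thresholds[code] = {"mean": round(mean(kept), 1),
                                "median": median(kept),
                                "high_threshold": kept[idx]}
        return thresholds

    records = [("722511", 310), ("722511", 290), ("722511", 335), ("722511", 3010),
               ("722410", 120), ("722410", 135), ("722410", 150)]
    print(class_thresholds(records))

A customer whose usage repeatedly exceeds its class high_threshold over a configurable number of billing periods would then be a candidate for the targeted conservation actions described below.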
Thus, by using industry classifications to first filter customer utility usage data, example embodiments allow for comparing resource consumption for customers of the same classes. This allows for an overall averaging or normalizing of consumption usage across the specified class; thus, more accurately and efficiently defining industry standard resource consumption thresholds. More specifically, example embodiments provide that once an industry class is specified and used to define industry standard consumption thresholds, specific types of utility users within the specified industry classification (e.g., high utility users—as defined by industry standard thresholds) may then be determined and targeted with one or more resource conservation actions (e.g., automatically assigning customers to an audit, conservation, or rebate program; sending a message to the resource manager, customer, or both, which reports on the targeted customer's utility consumption, available audit, conservation or rebate programs, or other related information). Note that although the above example embodiment used the data from the resulting industry standard classification tool or filter to set or define industry standard consumption thresholds for specific types of utility users (e.g., low, medium, or high utility resource consumers), these threshold values may also be set using other means for quantifying utility consumption for customers across similar industry standard classifications. For example, the values may be set based on historical data collected and stored by the utility conservation system described herein. Alternatively, the threshold values may be based on other data received from external sources (e.g., weather reports over the internet, user input from an audit tool, etc.) or based on collaborative feedback and accord from industry leaders. Accordingly, any specific means for determining or setting industry standard classification codes, or any specific ordering of when the industry standard classification is derived based on further actions from other conservation tools described herein, is used for illustrative purposes only and is not meant to limit or otherwise narrow the scope of the present invention; unless otherwise specifically claimed.

FIGS.2(a)-2(e)illustrate the use of various resource conservation tools, used in accordance with exemplary embodiments, whereby a resource manager or other administrator may search and filter resulting customer consumption data by multiple data points to identify, for example, the customers most in need of resource conservation. For example, as shown inFIG.2(a), a water utility manager may be presented with a utility conservation program interface200. In such an example, the administrator or resource manager may search and filter utility consumers based on choosing the various fields for the property classes204and setting the appropriate industry classification parameters206. Moreover, the administrator may then set the parameters208for any available conservation programs and then define the type of utility customer (e.g., "high" users) targeted using the utility consumption fields210. More specifically, as shown inFIG.2(a), a resource manager for a water utility company has chosen to search and analyze utility usage for a targeted customer by setting the classification code204to Food Services, with a subclass of drinking places.
Further, because the utility is a water company, the Primary NAICS code is set as the industry classification206and the available conservation programs are listed and defined by the fields208. Next, the utility consumption parameters210are set such that the utility use analytics tool will search for customers in the food service industry, with 3 or more bills; in tier four (defined here as the highest user level); and in the year 2019. Of course, other industry standard classification systems other than the NAICS may be used or deployed in accordance with example embodiments described herein. Similarly, other industry classification codes or fields, conservation programs and participation parameters, and/or utility consumption usage tiers or usage definitions may be used in searching and targeting specific utility use customers in accordance with example embodiments described herein. Thus, any specific field, code, parameter, or industry standard classification as used herein is for illustrative purposes only and is not meant to limit or otherwise narrow the scope of the present invention; unless otherwise specifically claimed. Likewise, any setting or defining of specific audit, conservation, or rebate programs, or the use of any particular parameters in setting utility consumption thresholds, is used herein for illustrative purposes only and is not meant to limit or otherwise narrow the scope of the present invention; unless otherwise specifically claimed. FIG.2(b)shows an example of a conservation action, which may result based on the query performed inFIG.2(a). In this example, the utility conservation system generated a report215, which lists those water utility users that met the criteria of: (1) food service industry classification; with three (3) or more bills; in tier four water usage (defined here as the highest water usage level); and in the year 2019. The report215may include any number of related customer elements220(such as address, customer, classification type, classification code, etc.), used to further assist in communicating the conservation results to the resource manager, the customer, or both. Further, the related customer elements220may be used to further analyze and categorize utility customer resource usage and/or adding those customers into available audit, conservation, or rebate programs. Likewise, the resulting conservation action may communicate the customer's utility usage along with the availability of an audit, conservation, or rebate program thru, e.g., a customer relationship management portal (e.g., an enterprise portal, such as, Salesforce). Thus, the customer may automatically receive information about the utility conservation actions, get enrolled in an available audit, conservation, or rebate program, and subsequently—as described in other exemplary embodiments below—use the auditing tools to report progress in completing the assigned audit, conservation, or rebate program, which can then be used to track the efficacy of any such conservation action. Of course, as previously mentioned, the conservation action performed may vary depending on the desired use of the resulting information. For example,FIG.2(c)shows the result of a similar query used inFIG.2(a); however, the query sorts customers based on highest water usage in the industry sector. 
More specifically, as shown in this example, the results225can be sorted by the usage category of highest centum cubic feet (CCF), while also showing such things as percent of average or median service usage in CCF230, or any other customer fields235to assist the resource manager in assigning or otherwise communicating available audit, conservation, or rebate programs to the utility customer. Furthermore, by allowing the resource manager to sort the resource consumption usage data based on a myriad of comparable customer usage elements, conservation administrators can more accurately define consumption threshold values for targeting specific types of utility users within industry standard classes. For example, as illustrated inFIG.2(c), the "Full-Service" Restaurant with a Service ID of 310894 has a CCF usage value of 3010, which is well outside the average and median utility consumption usage for this industry class. Thus, the resource manager or utility analytics tool may identify such customer usage as "abnormally" or "excessively" high—excluding it from the calculation of the industry standard resource consumption threshold value, which will more accurately reflect the actual average and median utility resource consumption within that sector. Of course, there are other benefits to example embodiments that enable targeted communications about auditing, conservation, or rebate program tools to specific types of utility consumers (e.g., highest users, in the last six months, etc.) within the same industry class (e.g., restaurants, with bars, etc.). For example, because customers receive target-specific communications from the utility—rather than continually receiving irrelevant or unnecessary communications about all conservation programs; or likewise, requiring the customer to filter thru a "laundry list" of all available programs across all industry sectors, hoping to find one that s/he qualifies for—they're more likely to pay attention to the conservation message and make informed decisions about their conservation habits. Further, because the evaluation of utility consumption is measured based on industry standard classifications, the customer will find the conservation or consumption analysis highly reliable, and thus be more compelled or motivated to use the available conservation programs—especially knowing their competitors (i.e., likely those using a similar or the same industry standard classification code) are using fewer utility resources and thus saving on their own bottom lines.

Of course, as mentioned before, the resulting conservation report may be further evaluated by narrowing the scope of the customer fields used in the conservation query. For example,FIG.2(d)illustrates the conservation query results ofFIG.2(a), with the added granularity of filtering down to the service ID level. The conservation action may then be a report showing utility usage data for a specific customer (i.e., the customer with the set service ID level—in this instance, Service ID 81081) over a specific period of time (as shown in this example, a utility use analytics tool or a resource manager can evaluate utility usage data for the customer with Service ID 81081 over a thirteen (13) month period) in order to identify outlying trends or other evidence that might explain the utility consumption irregularity.
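Referring back to the FIG.2(c) sorting and screening described above, the following sketch sorts one class's customers by usage, expresses each as a percent of the class median, and flags accounts far above the median so they can be excluded from the class threshold calculation. The field names, the example values, and the flagging multiple are assumptions for illustration only.

    from statistics import median

    def rank_class_usage(customers, exclude_above=5.0):
        """Sort customers of one industry class by usage (highest CCF first), report
        percent of the class median, and flag candidates for exclusion from the
        class threshold calculation. customers: list of (service_id, ccf) pairs."""
        med = median(ccf for _, ccf in customers)
        ranked = []
        for service_id, ccf in sorted(customers, key=lambda c: c[1], reverse=True):
            ranked.append({
                "service_id": service_id,
                "ccf": ccf,
                "pct_of_median": round(100.0 * ccf / med, 1),
                "exclude_from_threshold": ccf > exclude_above * med,
            })
        return ranked

    customers = [("310894", 3010), ("81081", 410), ("55012", 355), ("61233", 298)]
    for row in rank_class_usage(customers):
        print(row)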
For instance, as noted in the utility conservation result250ofFIG.2(d) discussed above, the bar graph245for the water consumption of the targeted water utility customer (further identified by the customer usage information240) shows a discrepancy in the targeted customer's consumption over the summer months of the year 2019 (i.e., June, July and August). As such, the analytics tool or resource manager may inquire further and learn that a water leak was found in these months, which was subsequently repaired. Otherwise, the utility conservation system or utility administrator may find that this customer is in an unusually warm climate compared to other customers of the similar class and thus uses more water over the summer months than its industry classmates. Of course, the utility conservation analytics tool or conservation managers can utilize and sort the customer usage data in a myriad of different forms or resulting formats (e.g., a report via an administrative tool or application program interface for the utility conservation system, or a message sent to the customer reporting their utility usage and/or assigning them to a conservation rebate program intended to help customers stay away from high water usage threshold values (in this instance, the program would assist in keeping them out of tier 4, wherein tier 4 is the maximum usage level set by this particular water utility)). Of course, other utilities may have varying levels of resource usage thresholds (set based on industry standard classifications for customers across similar industry classes), which may also result in any number of conservation actions. For example, a resource manager may want to reward utility users that improve their utility consumption over a period of time. Similarly, the analytics tool may be set for identifying "low" utility consumers and then automatically signing them up for a rebate program. Alternatively, or in conjunction, the result may be a report that shows trending utility consumption for a "low" utility usage customer and may produce a pie chart, bar graph, or other visual representation of the customer utility usage for explaining abnormal or irregular low utility usage. For example, the utility manager may find that the company went out of business; however, they failed to turn off the utility service, which explains their current trend of being a low utility resource user. Accordingly, any specific utility usage threshold value or consumption level as used herein for determining any specific type of conservation result is made by way of illustration only and is not meant to limit or otherwise narrow the scope of the present invention; unless otherwise specifically claimed.

Still other example embodiments allow for the use of one or more utility audit tools in providing added input related to the targeted customer's consumption of utility resources. For example, as illustrated inFIG.2(e), a utility customer can use the commercial utility audit tool260that provides the customer with a user interface261for entering additional information related to the customer's building and property263, which consumes the utility resources. More specifically, in this instance, the water utility customer can capture building and lot info263including, for example, building size, age of the building, date of last renovations, etc.
Similarly, the customer (or the utility usage analytics tool or the resource manager or administrator) may set fields for the industry classification of the business and other related business attributes265, such as number of employees, average customers per day, number of units/rooms, etc. Next, the customer, utility use analytics tool, and/or the resource manager may use the interface262for providing added information about the types of amenities that affect the utility usage for the targeted customer. For instance, the utility resource conservation system shown allows for input of water-cooling info264, swimming pool info266, commercial kitchen info268, car wash info269, laundry info270, or any other information about how the targeted customer consumes utility resources—in this case, water. As one skilled in the art will further recognize, exemplary embodiments may use these and other similar added metrics for assisting in or determining appropriate utility consumption levels relative to similar entities of a similar class. For example, the conservation tools may want to compare water usage for industry standard entities in a "hotel" class, which further include "swimming pool" information. Thus, this subclass or added information related to the consumption of the utility resource further defines the industry standard classification code used for setting utility consumption threshold values. Of course, other data input from external sources outside the utility company or customer may also be used in further defining a class for a targeted customer or for defining industry standard consumption thresholds. For example, the utility usage analytics tool described herein can use public information gleaned thru the internet for setting parameters or passing information related to the use or consumption of the utility resources. For instance, in the above example forFIG.2(e), information about the property amenities may be pulled and downloaded from a description of the hotel on the web. The known, related utility consumption data may then be automatically populated into the appropriate fields for use in setting industry standard classification thresholds based on the added utility consumption information and/or used in defining customers of a targeted industry standard class.

As shown inFIG.3, after the classifying and targeting of utility consumers based on defined industry standard classification code(s) and utility resource consumption threshold value(s) for the industry class—and/or the use of added information about the targeted customer's utility usage for further defining the industry classification code and/or threshold value(s)—other example embodiments allow for targeted customers to securely receive communication about their utility usage, as well as information about available audit, conservation, or rebate programs. For instance, inFIG.3, the resulting conservation action may allow targeted customers to receive message(s) about their utility consumption via a Customer Relationship Management (CRM)323tool or enterprise communications portal (e.g., Salesforce™). As also shown inFIG.3, various servers are utilized across varying platforms and infrastructures within the utility resource conservation system300, as described in various exemplary embodiments herein.
More specifically, first, the utility company309includes various data sources (e.g., information from accounting/billing, resource manager or admin input303, and other services tools307) used to gather information about the targeted customer330's consumption of the utility resources. Second, the utility usage processor321(i.e., the utility use analytics tool) filters or otherwise processes the customer usage information and aggregates311the data in accordance with exemplary embodiments described herein. Third, the CRM323server (e.g., the Salesforce server) then associates customer utility usage reporting data with the customer for communication and other purposes in accordance with example embodiments described in greater detail below. Although the utility usage processor321can ideally be deployed behind the utility company309's firewall314, such a deployment does not allow for the added use of consumption information external to the utility company309. In other words, because the utility analytics tool or usage processor321acts as a communication bridge between the utility company309, the web350, and the CRM323, the security of the utility company's data remains of high importance—especially in a world where users' personal data or personally identifiable information (PII) is at risk of being stolen and misused.

In accordance with an example embodiment described herein, added data security is achieved by deploying a docker container, which is simply an instantaneous copy of the existing public utility database as if behind the utility's firewall. This allows a one-way communication feed from a master database on the utility company's infrastructure309to a conservation database utilized by the utility usage processor321, which resides outside the firewall314of the utility company309. Thus, example embodiments do not allow data from external sources (i.e., sources outside the utility company309and customer330but that still relate to the customer's usage of the utility resource) to go back into the master database behind the utility company309's firewall314. In other words, example embodiments use a docker mirror copy of the utility company's database for ensuring that information from external sources (e.g., the utility use analytics tool or usage processor321, input from customers or users of one or more utility audit tools343, data pulled from the web342, or other data from other external sources) never gets stored on the master database behind the utility company309's firewall314.

In other words,FIG.3illustrates a utility resource conservation system300, which shows an aggregation of data311from, for example: (1) internal sources of the utility company309(e.g., accounting/billing305, admin input303, or other service information307); (2) additional data supplied by an admin/customer345(such as number of people in a household, number and tonnage of the cooling towers, swimming pool size, etc.); (3) publicly available data342, e.g., climate data from a National Oceanic and Atmospheric Administration (NOAA) API; (4) the utility usage analytics tool321; and (5) the utility auditing tools343and other data sources, as described herein and below. The utility resource usage data may then be processed in the utility usage server321and communicated to the customer either through the communications portal (e.g., CRM323), or directly via email or other message sent to the customer330.
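The one-way feed described above (a copy of the utility's master data made available to the usage processor outside the firewall, with nothing written back) can be approximated, for illustration only, with the sketch below. The patent describes a docker-container deployment; this sketch merely uses SQLite as a stand-in to show the direction of the copy, and the table names are hypothetical.

    import sqlite3

    MIRRORED_TABLES = ["service_usage", "billing_summary"]  # hypothetical table names

    def refresh_conservation_copy(master_path, mirror_path):
        """Pull selected tables from the utility's master database into the conservation
        database used by the usage processor. The master is opened read-only, so data
        gathered from external sources can never flow back behind the utility firewall."""
        master = sqlite3.connect(f"file:{master_path}?mode=ro", uri=True)
        mirror = sqlite3.connect(mirror_path)
        try:
            for table in MIRRORED_TABLES:
                cursor = master.execute(f"SELECT * FROM {table}")
                columns = [d[0] for d in cursor.description]
                rows = cursor.fetchall()
                mirror.execute(f"DROP TABLE IF EXISTS {table}")
                # Column types are omitted for brevity; SQLite tolerates untyped columns.
                mirror.execute(f"CREATE TABLE {table} ({', '.join(columns)})")
                placeholders = ", ".join("?" for _ in columns)
                mirror.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
            mirror.commit()
        finally:
            master.close()
            mirror.close()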
To maintain even tighter control over security concerns, other example embodiments tokenize utility consumption data transferred between servers to protect customer Personally Identifiable Information (PII)310and other data—thereby anonymizing any association between customer consumption and PII info310(e.g., service id, address, etc.). The utility consumption data and communications may come from the utility's alias; however, the messages and information live on, and are served from, the CRM323—not on the utility usage processor321or utility conservation system300. More specifically, example embodiments provide that whenever PII data310is used (e.g., name, account number, etc.), a new, partially random ID for the corresponding object (e.g., customer, account, etc.) is created (i.e., tokenized customer PII312). This ID may be a twelve (12) byte value stored as a hexadecimal string, but of course any other format or string type may be used. According to one example embodiment, part of the value may be based on the current date/time, while the rest of the bytes are randomized. In accordance with other embodiments, the PII data310may then be transferred and stored in the CRM323(e.g., SalesForce) alongside this new token or ID312, whereas only the new ID312and any non-PII data316are stored in the processor database321(e.g., a cloud or other server). The data also remains on the utility company's system (as it would if the utility were not using the present innovative utility conservation system and tools)—since, as described above, example embodiments only pull a copy of the data set. In other words, when PII data310is needed (e.g., when sending a message to the customer), the processor uses the new ID312(token) to pull out the data from the CRM323and unite it with any non-PII data316, which then gets sent to the customer330for reporting and other conservation uses. Note that the "re-constructed" data318and316is not stored (albeit, short-term caching may be implemented) on the processor321. Instead, it is only used as the "go-between." In one embodiment, different tokens may be used for different use cases to prevent discovery of PII through cross-reference or cross-correlation analysis. Under this approach, a customer may have multiple tokens in use simultaneously. Although each token may resolve to similar or identical PII, the use of different tokens for different use cases avoids the possibility of compromising PII by unnecessarily associating records—which can compromise the security and confidentiality of the PII under a token system. As previously mentioned, and as shown inFIG.5(a), other example embodiments provide a conservation manager with the ability to add participants to email lists, or to an audit, conservation, or rebate program, based on the industry standard classification code set and one or more consumption threshold values. For instance, as shown, an admin may utilize the public utilities portal500, which provides an interface with varying tools505and other admin input (e.g., available audit, conservation, or rebate programs510). In this particular case, through the administrative portal500, the conservation manager is able to target and email customers about their utility use: in this example, Full Service Restaurants with three or more billing periods in tier 4 water usage in the year 2019.
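The token format described above, a partially random twelve-byte value with part of the value derived from the current date/time, can be sketched as follows. The helper functions and field names are illustrative assumptions; the split of data between the CRM store and the processor store follows the description above.

import os
import time

def new_token() -> str:
    """Create a 12-byte ID: 4 bytes from the current time, 8 random bytes, hex-encoded."""
    timestamp = int(time.time()).to_bytes(4, "big")
    return (timestamp + os.urandom(8)).hex()  # 24-character hexadecimal string

def split_record(record: dict, pii_fields=("name", "account_number", "service_address")):
    """Separate a raw utility record into a PII document (for the CRM) and a
    non-PII document (for the processor database), joined only by the token."""
    token = new_token()
    pii_doc = {"token": token, **{k: record[k] for k in pii_fields if k in record}}
    non_pii_doc = {"token": token,
                   **{k: v for k, v in record.items() if k not in pii_fields}}
    return pii_doc, non_pii_doc

# Example: the PII document would be stored in the CRM; only the non-PII document
# (usage values keyed by the token) would live in the processor database.
pii, usage = split_record({"name": "Example Hotel", "account_number": "12345",
                           "service_address": "1 Main St", "usage_kgal": 310.5})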
Still other example embodiments, as shown inFIG.5(b), allow conservation managers or other administrators to create custom and/or automated messages when reporting the customer's utility usage. For example, as shown inFIG.5(b), any time a restaurant hits its third (or other predefined) billing cycle in tier 4 (also definable), it may get an automated message about water usage and/or be added to an audit, conservation, or rebate program. Of course, any number of other predefined parameters and resulting actions may be used to define usage alerts and conservation program communications. In other embodiments, utility audit participants may get subsequent emails at predefined periods of time (e.g., 1-3 weeks) after receiving and/or requesting information about the auditing results. Some embodiments also contemplate that if users make recommended changes to one part of their utility usage (e.g., changes to their irrigation system), they may get additional rebates. For example, as described and shown in greater detail below, as users make use of the monitoring and audit tools provided herein, example embodiments can automatically determine the changes made and provide additional rebates or incentives as the suggested repairs or changes are made. Still other example embodiments contemplate the use of historical data that monitors a utility customer's progress for completing an assigned audit, conservation, or rebate program. Such monitoring may allow for verification of the efficacy of similar programs across customers in similar industry standard classifications. In other words, the conservation system may consider monitored audit, conservation, and rebate programs a success as more and more customers complete assigned tasks within the audit, conservation, or rebate program. In fact, even as individual targeted customers gradually complete steps for the assigned programs, the conservation system may use such information in determining the efficacy thereof. Such information may also be shared with other utility companies in other geographical locations, such that they can try similar audit, conservation, or rebate programs proven successful in accordance with exemplary embodiments described herein. As previously noted inFIG.1, exemplary embodiments utilize various auditing tools for providing added information for industry standard classification or for setting consumption threshold values. Such auditing tools may be residential320, commercial325, governmental333, or other utility auditing tools335. Note that the input data for the auditing tools may vary based on the type of auditing tool used (e.g., residential v. commercial, etc.). Further, even within a similar auditing tool (e.g., the commercial utility auditing tool325), other parameters or fields may be used for further defining industry standard classifications or setting threshold values within each industry sector. For instance, as shown inFIG.6, example embodiments provide for at least two irrigation audit tools601, including an outdoor utility use audit tool603plus a Geographical Information Systems (GIS) Landscape™ tool605for larger commercial applications like schools, parks, golf courses, and other large commercial properties. As previously referenced, these tools allow for in-field evaluations with customized irrigation system reports to increase irrigation system efficiency.
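As a concrete illustration of the automated usage alerts described above with respect toFIG.5(b), the sketch below flags a customer once a predefined number of billing cycles fall into a predefined usage tier and queues a message and program enrollment. The tier boundaries, thresholds, and program name are assumed values, not values from the specification.

from typing import List

# Hypothetical tier boundaries in kgal per billing period (tier 4 is the highest).
TIER_BOUNDS = [0, 50, 120, 250]  # usage above 250 kgal falls in tier 4

def tier(usage_kgal: float) -> int:
    """Return the 1-based usage tier for a billing period."""
    t = 1
    for bound in TIER_BOUNDS[1:]:
        if usage_kgal > bound:
            t += 1
    return t

def check_alert(period_usage: List[float], alert_tier: int = 4, cycles_required: int = 3):
    """Trigger an automated message once cycles_required periods reach alert_tier."""
    hits = sum(1 for u in period_usage if tier(u) >= alert_tier)
    if hits >= cycles_required:
        return {"send_message": True,
                "enroll_in": "water conservation audit program",  # assumed program name
                "reason": f"{hits} billing cycles in tier {alert_tier} or above"}
    return {"send_message": False}

print(check_alert([310.0, 275.5, 40.0, 260.1]))  # three tier-4 periods trigger an alert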
Moreover, for data security, example embodiments contemplate that when a customer authorizes an audit, they may indemnify both the utility and third parties from liability. Nevertheless, regulations on data collection and storage vary from state to state and need consideration on an individual basis. The outdoor utility use audit tools603,605(and even the residential utility audit tool602and commercial utility audit tools) may be tablet-based applications that capture a myriad of landscape and irrigation system information including, but not limited to: plant type, landscape slope, sun exposure, etc. As shown, additional information may also be useful in making an overall assessment and analysis of such outdoor irrigation systems, e.g., controller make and model, irrigation days per week, zone run times, etc., for identifying landscape or irrigation parameters that influence industry classification and consumption threshold settings in accordance with exemplary embodiments of the present invention. Other example embodiments may also collect outdoor irrigation information for use in classification and normalization for threshold settings. For example, as shown inFIG.7(a), example embodiments provide for an outdoor water auditing tool701that allows a customer, groundskeeper, resource manager, or other user to use the auditing features703for inputting added information about the irrigation system. For instance, as shown inFIGS.7(a) and7(b), a water utility customer may use the interfaces704,710,720for inputting information through the irrigation system fields706,712,722, which provides added information about the outdoor irrigation system's water usage including, for example: irrigation system components (e.g., types of heads and volume output); distribution uniformity information by zone (e.g., zone run times, sprinkler head volume output, etc.); parcel, landscape and zone measurements; and other outdoor irrigation information used in accordance with other example embodiments. As shown inFIG.7(c), this data may then be associated with local evapotranspiration (ET) to develop optimized run times and suggested landscape or irrigation action items725, which can then be relayed to the homeowner or property manager via an output report750,755with optimized zone run times as described in other example embodiments herein. In addition, the output report750,755may include irrigation days per week, water conservation action items, zone test results, etc. As shown inFIGS.8(a) and8(b), other auditing tools may include the GIS Landscape tool, which helps inventory and analyze irrigation system data from large commercial applications like parks and schools—systems typically known to be older and less efficient. The GIS Landscape audit tool allows multiple users, individually, simultaneously, and/or even remotely, to collect, record, update, modify, delete, or otherwise create data that defines various landscapes, zone areas, various plant types, soil conditions, turn on/off locations, and many other zone attributes and properties. In accordance with exemplary embodiments described herein, as heads and pipes get updated or fixed, or controllers changed, the customer, facilities manager, groundskeeper, utility administrator, or other assigned user may have the ability to update the irrigation system information and report to, e.g., succeeding managers and administrators in accordance with example embodiments described herein.
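One common way to turn audit data of the kind described above into optimized zone run times is to combine local evapotranspiration with each zone's measured precipitation rate and distribution uniformity. The formula and example values below are a hedged sketch of that general approach and are not necessarily the exact calculation performed by the audit tools.

def weekly_zone_runtime_minutes(eto_in_per_week: float,
                                plant_factor: float,
                                precip_rate_in_per_hr: float,
                                distribution_uniformity: float) -> float:
    """Estimate weekly irrigation run time for one zone.

    eto_in_per_week: reference evapotranspiration (inches/week) from local ET data
    plant_factor: landscape coefficient (e.g., 0.6 for cool-season turf; assumed)
    precip_rate_in_per_hr: measured zone precipitation rate (inches/hour)
    distribution_uniformity: 0-1 value from catch-cup or audit measurements
    """
    demand = eto_in_per_week * plant_factor          # inches of water the landscape needs
    effective_rate = precip_rate_in_per_hr * distribution_uniformity
    return 60.0 * demand / effective_rate            # minutes per week

# Example: 1.4 in/week ET, turf factor 0.6, 1.5 in/hr spray heads, 70% uniformity.
weekly = weekly_zone_runtime_minutes(1.4, 0.6, 1.5, 0.7)
per_day = weekly / 3                                  # if the schedule allows 3 watering days
print(round(weekly, 1), round(per_day, 1))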
As illustrated inFIG.9, example embodiments also contemplate use of two indoor water auditing tools900, i.e., a residential water utility auditing tool910and a commercial one915—either or both of which may be tablet-based tools to assist in calculating and reporting on such things as a water balance1100report, which may include a pie chart1110and comparative water consumption for the property amenities1105, and suggesting repairs and water conservation return on investment (ROI). For example, as shown inFIGS.10(a)-10(b), the residential water audit tool1050can collect property information1060and water fixtures1065for determining such things as water fixture flow rates; calculating repair or replacement costs; calculating savings and payback based on water and sewer rates; and sending audit reports with a water balance to the customer in accordance with exemplary embodiments. Similarly, as shown inFIGS.11(a)-11(b), commercial audits may include similar information as noted in the residential water audit plus such things as: cooling towers, kitchens, pools, ice machines, laundry facilities, etc.—with a similar goal of providing a detailed water balance report in accordance with example embodiments. If utilities have demand management problems, they are typically caused by resource depletion, increasing demand, or legislative mandate. In any case, reducing demand through conservation is far more cost effective than building infrastructure. In accordance with exemplary embodiments described herein, targeting and communications tools provide for rapid, cost-effective deployment. In other words, targeting and communicating with only those customers of a set type of utility consumer (e.g., the highest water users), as determined based on the set industry standard classification code, saves the utility time and money. In summary, example embodiments described herein (including the utility conservation system) allow conservation managers to target and communicate with specified utility consumer types (e.g., the highest water users), further increasing their conservation program ROI. Using the example embodiments described herein further empowers conservation managers to quickly and efficiently identify the highest offenders and assign them to potential audit, conservation, or rebate programs that may show rapid results and program benefits. FIG.14is a flowchart1400for an exemplary method for a utility management and communication system identifying specific types of customers based on the need to communicate the availability of one or more audit, conservation, or rebate programs. At step1410the system may receive, from a utility company, utility usage data for a plurality of the utility company's customers, wherein the plurality of utility customers were chosen based on an industry standard classification associated therewith. The standard classification system may be the NAICS. In one embodiment the utility company may be a water utility company. In one embodiment, the data received from the utility company may include personally identifiable information ("PII"), and the system may replace the PII with tokenized data and store the PII and tokenized data at a customer relations management system for subsequently targeting the plurality of customers in a secure manner.
At step1420the system may compare the utility usage data for each of the plurality of customers to a resource usage threshold value, which is set based on the industry standard classification for each of the plurality of utility users and is further set based on a desired audit, conservation, or rebate program available to one or more of the plurality of customers. At step1430the system may, based on the comparison, identify one or more of the plurality of customers that meet the utility usage threshold value. At step1440the system may automatically send a message regarding the identified one or more of the plurality of customers' utility usage and the desired audit, conservation, or rebate program available, wherein the message is sent to a utility administrator, the identified one or more of the plurality of customers, or both. In one embodiment the message may be sent via a customer relations management system, which may be Salesforce. FIG.15is a flowchart for an exemplary method for a utility management and communication system to communicate the efficacy of one or more audits, conservation programs, or rebate offers of interest. At step1510the system may receive, from a utility company, utility usage data for a plurality of the utility company's customers. In one embodiment the utility company may be a public utility company. In one embodiment the data received from the utility company may include personally identifiable information ("PII"), and the system may replace the PII with tokenized data and store the PII and tokenized data at a customer relations management system for subsequently targeting the plurality of customers in a secure manner. At step1520the system may identify one or more of the plurality of customers as enrolled in a specific audit, conservation program, or rebate offer. At step1530the system may, over a period of time after said enrollment, monitor the utility consumption usage of the one or more of the plurality of customers. At step1540the system may compare utility consumption usage prior to said enrollment with the monitored utility consumption usage post said enrollment. At step1550the system may, based on the comparison, report on current results of enrollment in the specific audit, conservation program, or rebate offer for the one or more of the plurality of customers. In one embodiment, a report may be sent via a customer relations management portal. FIG.16is a flowchart for an exemplary method for a utility management system to target specific types of users within a specific industry standard class, which is used to set utility usage thresholds for that industry class. At step1610the system may access utility customer usage data for a public utility, wherein the utility customer usage data includes data from a multitude of utility customers across a plurality of industry classes. In one embodiment, the utility may be a water utility and the standard classification system may be the NAICS. In one embodiment the accessed customer usage data may include personally identifiable information ("PII"), and the system may replace the PII with tokenized data and store the PII and tokenized data at a customer relations management system for subsequently targeting the plurality of customers in a secure manner. At step1620the system may choose one or more industry standard classification codes to compare utility customers' usage to those in similar class structures.
At step1630the system may, based on the chosen one or more industry standard classification codes, compare utility usage data from the multitude of utility customers within the chosen one or more industry standard classification codes. At step1640the system may, based on the comparison, define one or more utility usage thresholds for the chosen one or more industry standard classification codes, which threshold value can be used for targeting specific user types for an audit, conservation, or rebate program. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
DETAILED DESCRIPTION Overview Referring generally to the FIGURES, a building energy cost optimization system with maintenance contract incorporation and components thereof are shown according to various exemplary embodiments. The systems and methods described herein can be configured to incorporate maintenance contracts into an economic cost function J(x) used to optimize the utilization of various assets in a building, group of buildings, or a central plant. Assets can include individual pieces of equipment (e.g., boilers, chillers, heat recovery chillers, steam generators, electrical generators, thermal energy storage tanks, batteries, etc.), groups of equipment, or entire subplants of a central plant. A maintenance contracts module can be configured to modify the cost function J(x) to include a maintenance cost term. The maintenance cost term may account for an economic or monetary cost (e.g., dollars) of performing maintenance on the assets. An example of a modified cost function Ja(x) which can be generated by the maintenance contracts module is shown in the following equation:

J_a(x) = J(x) + \sum_{k \in \text{horizon}} c_{\text{hourly}} \, b_k

where J(x) is the original cost function, chourly is the hourly cost of maintenance, and bk is a binary variable representing the on/off state of the asset at hour k of the optimization period. For example, bk may have a value of bk=1 if the asset is on during hour k or a value of bk=0 if the asset is off during hour k. The value of chourly may be determined by the maintenance contracts module, as described in detail below. Many assets have a fixed maintenance schedule that is dependent on the number of run hours since the last time the maintenance was performed. For example, the maintenance schedule for a chiller may involve cleaning the chiller tubes every X run hours (e.g., every 5000 run hours). A fixed cost $C may be incurred each time the maintenance is performed. For fixed maintenance schedules, the maintenance contracts module can determine the hourly cost chourly by taking the maintenance cost $C and dividing by the number of run hours X between performances of the maintenance (e.g., chourly=$C/X). The maintenance contracts module can then incorporate the hourly cost of maintenance chourly into the cost function Ja(x) as a fixed cost per run hour of the asset. In some scenarios, an owner of an asset might contract with a maintenance provider to perform maintenance under a fixed contract. In this case, the contract terms may specify a base cost cbase which covers a base number of run hours tbase and a marginal cost cm for each hour that the asset is operated exceeding the base number of run hours tbase. For example, the contract might stipulate $500,000 for the first 4,000 run hours and an additional $42 per hour exceeding 4,000 hours. The maintenance cost can be expressed as a piecewise-defined function, as shown in the following equation:

\text{AnnualCost} = \begin{cases} c_{\text{base}} & t < t_{\text{base}} \\ c_{\text{base}} + c_m (t - t_{\text{base}}) & t \geq t_{\text{base}} \end{cases}

where AnnualCost is the total maintenance cost per year, cbase is the base cost, tbase is the base number of hours covered by the base cost cbase, cm is the marginal cost for each run hour exceeding the base number of run hours tbase, and t is the number of run hours of the asset. Incorporating this type of maintenance contract into the optimization algorithm can be significantly more complicated than incorporating a fixed cost per run hour.
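A minimal sketch of the two maintenance-cost cases above follows: a fixed schedule converted to a constant cost per run hour, and a piecewise contract evaluated from total run hours. The numeric example values are taken from the text; the function names are illustrative.

def hourly_cost_fixed_schedule(maintenance_cost: float, run_hours_between: float) -> float:
    """Fixed schedule: chourly = $C / X, a constant cost per run hour."""
    return maintenance_cost / run_hours_between

def annual_contract_cost(run_hours: float, c_base: float, t_base: float, c_marginal: float) -> float:
    """Piecewise contract: the base cost covers t_base run hours, with a marginal cost beyond that."""
    if run_hours < t_base:
        return c_base
    return c_base + c_marginal * (run_hours - t_base)

def maintenance_term(c_hourly: float, on_off_schedule: list) -> float:
    """Maintenance term added to the cost function: sum over the horizon of chourly * bk."""
    return sum(c_hourly * b_k for b_k in on_off_schedule)  # b_k is 1 if the asset is on at hour k

# Example values from the text: $500,000 for the first 4,000 run hours, $42/hour after that.
print(hourly_cost_fixed_schedule(500_000, 4_000))       # 125.0 dollars per run hour (fixed schedule)
print(annual_contract_cost(4_500, 500_000, 4_000, 42))  # 521,000 dollars under the piecewise contract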
The maintenance contracts module can be configured to determine the value of chourlythat will yield the optimal solution to the high-level optimization problem. The value of chourlymay reflect the true hourly cost of maintenance when the cost changes from an already sunk cost with no marginal cost to a marginal cost of cmafter the equipment has been used for a stipulated number of run hours. The maintenance contracts module can use the value of chourlyto define the maintenance cost term in the modified cost function Ja(x). In some embodiments, the maintenance contracts module is configured to run in an offline mode (e.g., a planning mode) and an online mode (e.g., an operational mode). In the offline mode, the maintenance contracts module can perform several simulations of the year with assumed loads and utility costs (i.e., a planning run) with different values of chourlyin the cost function Ja(x) to determine the value of chourlythat yields the optimal results under the terms of the maintenance contract. In the online mode, the maintenance contracts module can run the plan at different hourly costs chourlyto determine how to adjust the hourly cost chourlyat periodic times during the course of the year. This allows the maintenance contracts module to incorporate feedback as to how many of the base hours tbasehave actually been used as opposed to how many were expected to be used. The maintenance contracts module can update the hourly cost chourlythroughout the year based on the actual number of run hours of the assets covered by the maintenance contracts. These and other features of the maintenance contracts module are described in greater detail below. Frequency Response Optimization Referring now toFIG.1, a frequency response optimization system100is shown, according to an exemplary embodiment. System100is shown to include a campus102and an energy grid104. Campus102may include one or more buildings116that receive power from energy grid104. Buildings116may include equipment or devices that consume electricity during operation. For example, buildings116may include HVAC equipment, lighting equipment, security equipment, communications equipment, vending machines, computers, electronics, elevators, or other types of building equipment. In some embodiments, buildings116are served by a building management system (BMS). A BMS is, in general, a system of devices configured to control, monitor, and manage equipment in or around a building or building area. A BMS can include, for example, a HVAC system, a security system, a lighting system, a fire alerting system, and/or any other system that is capable of managing building functions or devices. An exemplary building management system which may be used to monitor and control buildings116is described in U.S. patent application Ser. No. 14/717,593 filed May 20, 2015, the entire disclosure of which is incorporated by reference herein. In some embodiments, campus102includes a central plant118. Central plant118may include one or more subplants that consume resources from utilities (e.g., water, natural gas, electricity, etc.) to satisfy the loads of buildings116. For example, central plant118may include a heater subplant, a heat recovery chiller subplant, a chiller subplant, a cooling tower subplant, a hot thermal energy storage (TES) subplant, a cold thermal energy storage (TES) subplant, a steam subplant, and/or any other type of subplant configured to serve buildings116.
The subplants may be configured to convert input resources (e.g., electricity, water, natural gas, etc.) into output resources (e.g., cold water, hot water, chilled air, heated air, etc.) that are provided to buildings116. An exemplary central plant which may be used to satisfy the loads of buildings116is described in U.S. patent application Ser. No. 14/634,609 filed Feb. 27, 2015, the entire disclosure of which is incorporated by reference herein. In some embodiments, campus102includes energy generation120. Energy generation120may be configured to generate energy that can be used by buildings116, used by central plant118, and/or provided to energy grid104. In some embodiments, energy generation120generates electricity. For example, energy generation120may include an electric power plant, a photovoltaic energy field, or other types of systems or devices that generate electricity. The electricity generated by energy generation120can be used internally by campus102(e.g., by buildings116and/or central plant118) to decrease the amount of electric power that campus102receives from outside sources such as energy grid104or battery108. If the amount of electricity generated by energy generation120exceeds the electric power demand of campus102, the excess electric power can be provided to energy grid104or stored in battery108. The power output of campus102is shown inFIG.1as Pcampus. Pcampusmay be positive if campus102is outputting electric power or negative if campus102is receiving electric power. Still referring toFIG.1, system100is shown to include a power inverter106and a battery108. Power inverter106may be configured to convert electric power between direct current (DC) and alternating current (AC). For example, battery108may be configured to store and output DC power, whereas energy grid104and campus102may be configured to consume and generate AC power. Power inverter106may be used to convert DC power from battery108into a sinusoidal AC output synchronized to the grid frequency of energy grid104. Power inverter106may also be used to convert AC power from campus102or energy grid104into DC power that can be stored in battery108. The power output of battery108is shown as Pbat. Pbatmay be positive if battery108is providing power to power inverter106or negative if battery108is receiving power from power inverter106. In some embodiments, power inverter106receives a DC power output from battery108and converts the DC power output to an AC power output. The AC power output can be used to satisfy the energy load of campus102and/or can be provided to energy grid104. Power inverter106may synchronize the frequency of the AC power output with that of energy grid104(e.g., 50 Hz or 60 Hz) using a local oscillator and may limit the voltage of the AC power output to no higher than the grid voltage. In some embodiments, power inverter106is a resonant inverter that includes or uses LC circuits to remove the harmonics from a simple square wave in order to achieve a sine wave matching the frequency of energy grid104. In various embodiments, power inverter106may operate using high-frequency transformers, low-frequency transformers, or without transformers. Low-frequency transformers may convert the DC output from battery108directly to the AC output provided to energy grid104. High-frequency transformers may employ a multi-step process that involves converting the DC output to high-frequency AC, then back to DC, and then finally to the AC output provided to energy grid104.
System100is shown to include a point of interconnection (POI)110. POI110is the point at which campus102, energy grid104, and power inverter106are electrically connected. The power supplied to POI110from power inverter106is shown as Psup. Psupmay be defined as Pbat+Ploss, where Pbatis the battery power and Plossis the power loss in the battery system (e.g., losses in power inverter106and/or battery108). Pbatand Psupmay be positive if power inverter106is providing power to POI110or negative if power inverter106is receiving power from POI110. Pcampusand Psupcombine at POI110to form PPOI. PPOImay be defined as the power provided to energy grid104from POI110. PPOImay be positive if POI110is providing power to energy grid104or negative if POI110is receiving power from energy grid104. Still referring toFIG.1, system100is shown to include a frequency response controller112. Controller112may be configured to generate and provide power setpoints to power inverter106. Power inverter106may use the power setpoints to control the amount of power Psupprovided to POI110or drawn from POI110. For example, power inverter106may be configured to draw power from POI110and store the power in battery108in response to receiving a negative power setpoint from controller112. Conversely, power inverter106may be configured to draw power from battery108and provide the power to POI110in response to receiving a positive power setpoint from controller112. The magnitude of the power setpoint may define the amount of power Psupprovided to or from power inverter106. Controller112may be configured to generate and provide power setpoints that optimize the value of operating system100over a time horizon. In some embodiments, frequency response controller112uses power inverter106and battery108to perform frequency regulation for energy grid104. Frequency regulation is the process of maintaining the stability of the grid frequency (e.g., 60 Hz in the United States). The grid frequency may remain stable and balanced as long as the total electric supply and demand of energy grid104are balanced. Any deviation from that balance may result in a deviation of the grid frequency from its desirable value. For example, an increase in demand may cause the grid frequency to decrease, whereas an increase in supply may cause the grid frequency to increase. Frequency response controller112may be configured to offset a fluctuation in the grid frequency by causing power inverter106to supply energy from battery108to energy grid104(e.g., to offset a decrease in grid frequency) or store energy from energy grid104in battery108(e.g., to offset an increase in grid frequency). In some embodiments, frequency response controller112uses power inverter106and battery108to perform load shifting for campus102. For example, controller112may cause power inverter106to store energy in battery108when energy prices are low and retrieve energy from battery108when energy prices are high in order to reduce the cost of electricity required to power campus102. Load shifting may also allow system100to reduce the demand charge incurred. Demand charge is an additional charge imposed by some utility providers based on the maximum power consumption during an applicable demand charge period. For example, a demand charge rate may be specified in terms of dollars per unit of power (e.g., $/kW) and may be multiplied by the peak power usage (e.g., kW) during a demand charge period to calculate the demand charge.
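A short numeric sketch of the demand charge calculation described above: the charge is the demand rate multiplied by the peak power drawn during the demand charge period. The rate and interval readings below are made-up example values.

def demand_charge(power_draw_kw: list, rate_dollars_per_kw: float) -> float:
    """Demand charge = rate ($/kW) multiplied by peak power (kW) during the demand charge period."""
    return rate_dollars_per_kw * max(power_draw_kw)

# Fifteen-minute interval readings (kW) over part of a billing period, with an assumed $18/kW rate.
readings = [420.0, 515.5, 610.2, 480.0, 350.7]
print(demand_charge(readings, 18.0))  # 18 * 610.2 = 10983.60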
Load shifting may allow system100to smooth momentary spikes in the electric demand of campus102by drawing energy from battery108in order to reduce peak power draw from energy grid104, thereby decreasing the demand charge incurred. Still referring toFIG.1, system100is shown to include an incentive provider114. Incentive provider114may be a utility (e.g., an electric utility), a regional transmission organization (RTO), an independent system operator (ISO), or any other entity that provides incentives for performing frequency regulation. For example, incentive provider114may provide system100with monetary incentives for participating in a frequency response program. In order to participate in the frequency response program, system100may maintain a reserve capacity of stored energy (e.g., in battery108) that can be provided to energy grid104. System100may also maintain the capacity to draw energy from energy grid104and store the energy in battery108. Reserving both of these capacities may be accomplished by managing the state-of-charge of battery108. Frequency response controller112may provide incentive provider114with a price bid and a capability bid. The price bid may include a price per unit power (e.g., $/MW) for reserving or storing power that allows system100to participate in a frequency response program offered by incentive provider114. The price per unit power bid by frequency response controller112is referred to herein as the “capability price.” The price bid may also include a price for actual performance, referred to herein as the “performance price.” The capability bid may define an amount of power (e.g., MW) that system100will reserve or store in battery108to perform frequency response, referred to herein as the “capability bid.” Incentive provider114may provide frequency response controller112with a capability clearing price CPcap, a performance clearing price CPperf, and a regulation award Regaward, which correspond to the capability price, the performance price, and the capability bid, respectively. In some embodiments, CPcap, CPperf, and Regawardare the same as the corresponding bids placed by controller112. In other embodiments, CPcap, CPperf, and Regawardmay not be the same as the bids placed by controller112. For example, CPcap, CPperf, and Regawardmay be generated by incentive provider114based on bids received from multiple participants in the frequency response program. Controller112may use CPcap, CPperf, and Regawardto perform frequency regulation. Frequency response controller112is shown receiving a regulation signal from incentive provider114. The regulation signal may specify a portion of the regulation award Regawardthat frequency response controller112is to add or remove from energy grid104. In some embodiments, the regulation signal is a normalized signal (e.g., between −1 and 1) specifying a proportion of Regaward. Positive values of the regulation signal may indicate an amount of power to add to energy grid104, whereas negative values of the regulation signal may indicate an amount of power to remove from energy grid104. Frequency response controller112may respond to the regulation signal by generating an optimal power setpoint for power inverter106. The optimal power setpoint may take into account both the potential revenue from participating in the frequency response program and the costs of participation. Costs of participation may include, for example, a monetized cost of battery degradation as well as the energy and demand charges that will be incurred. 
The optimization may be performed using sequential quadratic programming, dynamic programming, or any other optimization technique. In some embodiments, controller112uses a battery life model to quantify and monetize battery degradation as a function of the power setpoints provided to power inverter106. Advantageously, the battery life model allows controller112to perform an optimization that weighs the revenue generation potential of participating in the frequency response program against the cost of battery degradation and other costs of participation (e.g., less battery power available for campus102, increased electricity costs, etc.). An exemplary regulation signal and power response are described in greater detail with reference toFIG.2. Referring now toFIG.2, a pair of frequency response graphs200and250are shown, according to an exemplary embodiment. Graph200illustrates a regulation signal Regsignal202as a function of time. Regsignal202 is shown as a normalized signal ranging from −1 to 1 (i.e., −1≤Regsignal≤1). Regsignal202may be generated by incentive provider114and provided to frequency response controller112. Regsignal202may define a proportion of the regulation award Regaward254that controller112is to add or remove from energy grid104, relative to a baseline value referred to as the midpoint b256. For example, if the value of Regaward254is 10 MW, a regulation signal value of 0.5 (i.e., Regsignal=0.5) may indicate that system100is requested to add 5 MW of power at POI110relative to midpoint b (e.g., P*POI=10 MW×0.5+b), whereas a regulation signal value of −0.3 may indicate that system100is requested to remove 3 MW of power from POI110relative to midpoint b (e.g., P*POI=10 MW×−0.3+b). Graph250illustrates the desired interconnection power P*POI252as a function of time. P*POI252may be calculated by frequency response controller112based on Regsignal202, Regaward254, and a midpoint b256. For example, controller112may calculate P*POI252using the following equation: P*POI=Regaward×Regsignal+b where P*POIrepresents the desired power at POI110(e.g., P*POI=Psup+Pcampus) and b is the midpoint. Midpoint b may be defined (e.g., set or optimized) by controller112and may represent the midpoint of regulation around which the load is modified in response to Regsignal202. Optimal adjustment of midpoint b may allow controller112to actively participate in the frequency response market while also taking into account the energy and demand charge that will be incurred. In order to participate in the frequency response market, controller112may perform several tasks. Controller112may generate a price bid (e.g., $/MW) that includes the capability price and the performance price. In some embodiments, controller112sends the price bid to incentive provider114at approximately 15:30 each day and the price bid remains in effect for the entirety of the next day. Prior to beginning a frequency response period, controller112may generate the capability bid (e.g., MW) and send the capability bid to incentive provider114. In some embodiments, controller112generates and sends the capability bid to incentive provider114approximately 1.5 hours before a frequency response period begins. In an exemplary embodiment, each frequency response period has a duration of one hour; however, it is contemplated that frequency response periods may have any duration. At the start of each frequency response period, controller112may generate the midpoint b around which controller112plans to perform frequency regulation. 
In some embodiments, controller112generates a midpoint b that will maintain battery108at a constant state-of-charge (SOC) (i.e., a midpoint that will result in battery108having the same SOC at the beginning and end of the frequency response period). In other embodiments, controller112generates midpoint b using an optimization procedure that allows the SOC of battery108to have different values at the beginning and end of the frequency response period. For example, controller112may use the SOC of battery108as a constrained variable that depends on midpoint b in order to optimize a value function that takes into account frequency response revenue, energy costs, and the cost of battery degradation. Exemplary techniques for calculating and/or optimizing midpoint b under both the constant SOC scenario and the variable SOC scenario are described in detail in U.S. patent application Ser. No. 15/247,883 filed Aug. 25, 2016, U.S. patent application Ser. No. 15/247,885 filed Aug. 25, 2016, and U.S. patent application Ser. No. 15/247,886 filed Aug. 25, 2016. The entire disclosure of each of these patent applications is incorporated by reference herein. During each frequency response period, controller112may periodically generate a power setpoint for power inverter106. For example, controller112may generate a power setpoint for each time step in the frequency response period. In some embodiments, controller112generates the power setpoints using the equation: P*POI=Regaward×Regsignal+b, where P*POI=Psup+Pcampus. Positive values of P*POIindicate energy flow from POI110to energy grid104. Positive values of Psupand Pcampusindicate energy flow to POI110from power inverter106and campus102, respectively. In other embodiments, controller112generates the power setpoints using the equation: P*POI=Regaward×ResFR+b, where ResFRis an optimal frequency response generated by optimizing a value function. Controller112may subtract Pcampusfrom P*POIto generate the power setpoint for power inverter106(i.e., Psup=P*POI−Pcampus). The power setpoint for power inverter106indicates the amount of power that power inverter106is to add to POI110(if the power setpoint is positive) or remove from POI110(if the power setpoint is negative). Exemplary techniques which can be used by controller112to calculate power inverter setpoints are described in detail in U.S. patent application Ser. No. 15/247,793 filed Aug. 25, 2016, U.S. patent application Ser. No. 15/247,784 filed Aug. 25, 2016, and U.S. patent application Ser. No. 15/247,777 filed Aug. 25, 2016. The entire disclosure of each of these patent applications is incorporated by reference herein. Photovoltaic Energy System with Frequency Regulation and Ramp Rate Control Referring now toFIGS.3-4, a photovoltaic energy system300that uses battery storage to simultaneously perform both ramp rate control and frequency regulation is shown, according to an exemplary embodiment. Ramp rate control is the process of offsetting ramp rates (i.e., increases or decreases in the power output of an energy system such as a photovoltaic energy system) that fall outside of compliance limits determined by the electric power authority overseeing the energy grid. Ramp rate control typically requires the use of an energy source that allows for offsetting ramp rates by either supplying additional power to the grid or consuming more power from the grid. In some instances, a facility is penalized for failing to comply with ramp rate requirements.
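The setpoint arithmetic above can be written out directly. The clamp of the regulation signal to its normalized range and the variable names are illustrative assumptions; the two formulas themselves follow the equations in the text.

def poi_setpoint(reg_award_mw: float, reg_signal: float, midpoint_mw: float) -> float:
    """Desired interconnection power: P*_POI = Regaward x Regsignal + b."""
    reg_signal = max(-1.0, min(1.0, reg_signal))  # regulation signal is normalized to [-1, 1]
    return reg_award_mw * reg_signal + midpoint_mw

def inverter_setpoint(p_poi_star_mw: float, p_campus_mw: float) -> float:
    """Power inverter setpoint: Psup = P*_POI - Pcampus."""
    return p_poi_star_mw - p_campus_mw

# Example from the text: a 10 MW award and a regulation signal of 0.5 request 5 MW above the midpoint.
p_star = poi_setpoint(10.0, 0.5, midpoint_mw=2.0)     # midpoint value assumed for illustration
print(p_star, inverter_setpoint(p_star, p_campus_mw=-1.5))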
Frequency regulation is the process of maintaining the stability of the grid frequency (e.g., 60 Hz in the United States). As shown inFIG.4, the grid frequency may remain balanced at 60 Hz as long as there is a balance between the demand from the energy grid and the supply to the energy grid. An increase in demand yields a decrease in grid frequency, whereas an increase in supply yields an increase in grid frequency. During a fluctuation of the grid frequency, system300may offset the fluctuation by either drawing more energy from the energy grid (e.g., if the grid frequency is too high) or by providing energy to the energy grid (e.g., if the grid frequency is too low). Advantageously, system300may use battery storage in combination with photovoltaic power to perform frequency regulation while simultaneously complying with ramp rate requirements and maintaining the state-of-charge of the battery storage within a predetermined desirable range. Referring particularly toFIG.3, system300is shown to include a photovoltaic (PV) field302, a PV field power inverter304, a battery306, a battery power inverter308, a point of interconnection (POI)310, and an energy grid312. PV field302may include a collection of photovoltaic cells. The photovoltaic cells are configured to convert solar energy (i.e., sunlight) into electricity using a photovoltaic material such as monocrystalline silicon, polycrystalline silicon, amorphous silicon, cadmium telluride, copper indium gallium selenide/sulfide, or other materials that exhibit the photovoltaic effect. In some embodiments, the photovoltaic cells are contained within packaged assemblies that form solar panels. Each solar panel may include a plurality of linked photovoltaic cells. The solar panels may combine to form a photovoltaic array. PV field302may have any of a variety of sizes and/or locations. In some embodiments, PV field302is part of a large-scale photovoltaic power station (e.g., a solar park or farm) capable of providing an energy supply to a large number of consumers. When implemented as part of a large-scale system, PV field302may cover multiple hectares and may have power outputs of tens or hundreds of megawatts. In other embodiments, PV field302may cover a smaller area and may have a relatively lesser power output (e.g., between one and ten megawatts, less than one megawatt, etc.). For example, PV field302may be part of a rooftop-mounted system capable of providing enough electricity to power a single home or building. It is contemplated that PV field302may have any size, scale, and/or power output, as may be desirable in different implementations. PV field302may generate a direct current (DC) output that depends on the intensity and/or directness of the sunlight to which the solar panels are exposed. The directness of the sunlight may depend on the angle of incidence of the sunlight relative to the surfaces of the solar panels. The intensity of the sunlight may be affected by a variety of environmental factors such as the time of day (e.g., sunrises and sunsets) and weather variables such as clouds that cast shadows upon PV field302. When PV field302is partially or completely covered by shadow, the power output of PV field302(i.e., PV field power PPV) may drop as a result of the decrease in solar intensity. In some embodiments, PV field302is configured to maximize solar energy collection. For example, PV field302may include a solar tracker (e.g., a GPS tracker, a sunlight sensor, etc.) 
that adjusts the angle of the solar panels so that the solar panels are aimed directly at the sun throughout the day. The solar tracker may allow the solar panels to receive direct sunlight for a greater portion of the day and may increase the total amount of power produced by PV field302. In some embodiments, PV field302includes a collection of mirrors, lenses, or solar concentrators configured to direct and/or concentrate sunlight on the solar panels. The energy generated by PV field302may be stored in battery306or provided to energy grid312. Still referring toFIG.3, system300is shown to include a PV field power inverter304. Power inverter304may be configured to convert the DC output of PV field302PPVinto an alternating current (AC) output that can be fed into energy grid312or used by a local (e.g., off-grid) electrical network. For example, power inverter304may be a solar inverter or grid-tie inverter configured to convert the DC output from PV field302into a sinusoidal AC output synchronized to the grid frequency of energy grid312. In some embodiments, power inverter304receives a cumulative DC output from PV field302. For example, power inverter304may be a string inverter or a central inverter. In other embodiments, power inverter304may include a collection of micro-inverters connected to each solar panel or solar cell. PV field power inverter304may convert the DC power output PPVinto an AC power output uPVand provide the AC power output uPVto POI310. Power inverter304may receive the DC power output PPVfrom PV field302and convert the DC power output to an AC power output that can be fed into energy grid312. Power inverter304may synchronize the frequency of the AC power output with that of energy grid312(e.g., 50 Hz or 60 Hz) using a local oscillator and may limit the voltage of the AC power output to no higher than the grid voltage. In some embodiments, power inverter304is a resonant inverter that includes or uses LC circuits to remove the harmonics from a simple square wave in order to achieve a sine wave matching the frequency of energy grid312. In various embodiments, power inverter304may operate using high-frequency transformers, low-frequency transformers, or without transformers. Low-frequency transformers may convert the DC output from PV field302directly to the AC output provided to energy grid312. High-frequency transformers may employ a multi-step process that involves converting the DC output to high-frequency AC, then back to DC, and then finally to the AC output provided to energy grid312. Power inverter304may be configured to perform maximum power point tracking and/or anti-islanding. Maximum power point tracking may allow power inverter304to produce the maximum possible AC power from PV field302. For example, power inverter304may sample the DC power output from PV field302and apply a variable resistance to find the optimum maximum power point. Anti-islanding is a protection mechanism that immediately shuts down power inverter304(i.e., preventing power inverter304from generating AC power) when the connection to an electricity-consuming load no longer exists. In some embodiments, PV field power inverter304performs ramp rate control by limiting the power generated by PV field302. Still referring toFIG.3, system300is shown to include a battery power inverter308. Battery power inverter308may be configured to draw a DC power Pbatfrom battery306, convert the DC power Pbatinto an AC power ubat, and provide the AC power ubatto POI310. 
Battery power inverter308may also be configured to draw the AC power ubatfrom POI310, convert the AC power ubatinto a DC battery power Pbat, and store the DC battery power Pbatin battery306. The DC battery power Pbatmay be positive if battery306is providing power to battery power inverter308(i.e., if battery306is discharging) or negative if battery306is receiving power from battery power inverter308(i.e., if battery306is charging). Similarly, the AC battery power ubatmay be positive if battery power inverter308is providing power to POI310or negative if battery power inverter308is receiving power from POI310. The AC battery power ubatis shown to include an amount of power used for frequency regulation (i.e., uFR) and an amount of power used for ramp rate control (i.e., uRR), which together form the AC battery power (i.e., ubat=uFR+uRR). The DC battery power Pbatis shown to include both uFRand uRRas well as an additional term Plossrepresenting power losses in battery306and/or battery power inverter308(i.e., Pbat=uFR+uRR+Ploss). The PV field power uPVand the battery power ubatcombine at POI310to form PPOI(i.e., PPOI=uPV+ubat), which represents the amount of power provided to energy grid312. PPOImay be positive if POI310is providing power to energy grid312or negative if POI310is receiving power from energy grid312. Still referring toFIG.3, system300is shown to include a controller314. Controller314may be configured to generate a PV power setpoint uPVfor PV field power inverter304and a battery power setpoint ubatfor battery power inverter308. Throughout this disclosure, the variable uPVis used to refer to both the PV power setpoint generated by controller314and the AC power output of PV field power inverter304since both quantities have the same value. Similarly, the variable ubatis used to refer to both the battery power setpoint generated by controller314and the AC power output/input of battery power inverter308since both quantities have the same value. PV field power inverter304uses the PV power setpoint uPVto control an amount of the PV field power PPVto provide to POI310. The magnitude of uPVmay be the same as the magnitude of PPVor less than the magnitude of PPV. For example, uPVmay be the same as PPVif controller314determines that PV field power inverter304is to provide all of the photovoltaic power PPVto POI310. However, uPVmay be less than PPVif controller314determines that PV field power inverter304is to provide less than all of the photovoltaic power PPVto POI310. For example, controller314may determine that it is desirable for PV field power inverter304to provide less than all of the photovoltaic power PPVto POI310to prevent the ramp rate from being exceeded and/or to prevent the power at POI310from exceeding a power limit. Battery power inverter308uses the battery power setpoint ubatto control an amount of power charged or discharged by battery306. The battery power setpoint ubatmay be positive if controller314determines that battery power inverter308is to draw power from battery306or negative if controller314determines that battery power inverter308is to store power in battery306. The magnitude of ubatcontrols the rate at which energy is charged or discharged by battery306.
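The sign conventions and power balances described above can be summarized in a small sketch. The numeric values are assumed for illustration only.

def battery_dc_power(u_fr_kw: float, u_rr_kw: float, p_loss_kw: float) -> float:
    """DC battery power: Pbat = uFR + uRR + Ploss (positive when the battery is discharging)."""
    return u_fr_kw + u_rr_kw + p_loss_kw

def poi_power(u_pv_kw: float, u_bat_kw: float) -> float:
    """Power at the point of interconnection: PPOI = uPV + ubat.
    Positive values mean power flows to the energy grid; negative values mean power is drawn."""
    return u_pv_kw + u_bat_kw

# Example: 300 kW of frequency-regulation discharge, 100 kW for ramp rate control,
# 15 kW of losses, and 1,200 kW of PV output (all values assumed).
p_bat = battery_dc_power(300.0, 100.0, 15.0)
u_bat = 300.0 + 100.0                      # AC battery power ubat = uFR + uRR
print(p_bat, poi_power(1200.0, u_bat))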
Controller314may generate uPVand ubatbased on a variety of different variables including, for example, a power signal from PV field302(e.g., current and previous values for PPV), the current state-of-charge (SOC) of battery306, a maximum battery power limit, a maximum power limit at POI310, the ramp rate limit, the grid frequency of energy grid312, and/or other variables that can be used by controller314to perform ramp rate control and/or frequency regulation. Advantageously, controller314generates values for uPVand ubatthat maintain the ramp rate of the PV power within the ramp rate compliance limit while participating in the regulation of grid frequency and maintaining the SOC of battery306within a predetermined desirable range. An exemplary controller which can be used as controller314and exemplary processes which may be performed by controller314to generate the PV power setpoint uPVand the battery power setpoint ubatare described in detail in U.S. patent application Ser. No. 15/247,869 filed Aug. 25, 2016, U.S. patent application Ser. No. 15/247,844 filed Aug. 25, 2016, U.S. patent application Ser. No. 15/247,788 filed Aug. 25, 2016, U.S. patent application Ser. No. 15/247,872 filed Aug. 25, 2016, U.S. patent application Ser. No. 15/247,880 filed Aug. 25, 2016, and U.S. patent application Ser. No. 15/247,873 filed Aug. 25, 2016. The entire disclosure of each of these patent applications is incorporated by reference herein. Energy Storage System with Thermal and Electrical Energy Storage Referring now toFIG.5A, a block diagram of an energy storage system500is shown, according to an exemplary embodiment. Energy storage system500is shown to include a building502. Building502may be the same or similar to buildings116, as described with reference toFIG.1. For example, building502may be equipped with a HVAC system and/or a building management system that operates to control conditions within building502. In some embodiments, building502includes multiple buildings (i.e., a campus) served by energy storage system500. Building502may demand various resources including, for example, hot thermal energy (e.g., hot water), cold thermal energy (e.g., cold water), and/or electrical energy. The resources may be demanded by equipment or subsystems within building502or by external systems that provide services for building502(e.g., heating, cooling, air circulation, lighting, electricity, etc.). Energy storage system500operates to satisfy the resource demand associated with building502. Energy storage system500is shown to include a plurality of utilities510. Utilities510may provide energy storage system500with resources such as electricity, water, natural gas, or any other resource that can be used by energy storage system500to satisfy the demand of building502. For example, utilities510are shown to include an electric utility511, a water utility512, a natural gas utility513, and utility M514, where M is the total number of utilities510. In some embodiments, utilities510are commodity suppliers from which resources and other types of commodities can be purchased. Resources purchased from utilities510can be used by generator subplants520to produce generated resources (e.g., hot water, cold water, electricity, steam, etc.), stored in storage subplants530for later use, or provided directly to building502. For example, utilities510are shown providing electricity directly to building502and storage subplants530. Energy storage system500is shown to include a plurality of generator subplants520. 
In some embodiments, generator subplants520are components of a central plant (e.g., central plant118). Generator subplants520are shown to include a heater subplant521, a chiller subplant522, a heat recovery chiller subplant523, a steam subplant524, an electricity subplant525, and subplant N, where N is the total number of generator subplants520. Generator subplants520may be configured to convert one or more input resources into one or more output resources by operation of the equipment within generator subplants520. For example, heater subplant521may be configured to generate hot thermal energy (e.g., hot water) by heating water using electricity or natural gas. Chiller subplant522may be configured to generate cold thermal energy (e.g., cold water) by chilling water using electricity. Heat recovery chiller subplant523may be configured to generate hot thermal energy and cold thermal energy by removing heat from one water supply and adding the heat to another water supply. Steam subplant524may be configured to generate steam by boiling water using electricity or natural gas. Electricity subplant525may be configured to generate electricity using mechanical generators (e.g., a steam turbine, a gas-powered generator, etc.) or other types of electricity-generating equipment (e.g., photovoltaic equipment, hydroelectric equipment, etc.). The input resources used by generator subplants520may be provided by utilities510, retrieved from storage subplants530, and/or generated by other generator subplants520. For example, steam subplant524may produce steam as an output resource. Electricity subplant525may include a steam turbine that uses the steam generated by steam subplant524as an input resource to generate electricity. The output resources produced by generator subplants520may be stored in storage subplants530, provided to building502, sold to energy purchasers504, and/or used by other generator subplants520. For example, the electricity generated by electricity subplant525may be stored in electrical energy storage533, used by chiller subplant522to generate cold thermal energy, provided to building502, and/or sold to energy purchasers504. Energy storage system500is shown to include storage subplants530. In some embodiments, storage subplants530are components of a central plant (e.g., central plant118). Storage subplants530may be configured to store energy and other types of resources for later use. Each of storage subplants530may be configured to store a different type of resource. For example, storage subplants530are shown to include hot thermal energy storage531(e.g., one or more hot water storage tanks), cold thermal energy storage532(e.g., one or more cold thermal energy storage tanks), electrical energy storage533(e.g., one or more batteries), and resource type P storage534, where P is the total number of storage subplants530. The resources stored in subplants530may be purchased directly from utilities510or generated by generator subplants520. In some embodiments, storage subplants530are used by energy storage system500to take advantage of price-based demand response (PBDR) programs. PBDR programs encourage consumers to reduce consumption when generation, transmission, and distribution costs are high. PBDR programs are typically implemented (e.g., by utilities510) in the form of energy prices that vary as a function of time. For example, utilities510may increase the price per unit of electricity during peak usage hours to encourage customers to reduce electricity consumption during peak times. 
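To illustrate the price-based incentive just described, the following sketch estimates the savings from buying energy at an off-peak price, storing it, and using it during peak hours instead of buying at the peak price. The prices, energy amount, and round-trip efficiency are assumed values.

def storage_arbitrage_savings(offpeak_price: float, peak_price: float,
                              energy_kwh: float, round_trip_efficiency: float = 0.9) -> float:
    """Savings from buying energy off-peak, storing it, and using it during peak hours
    instead of buying at the peak price (accounting for storage losses)."""
    cost_to_store = energy_kwh / round_trip_efficiency * offpeak_price
    avoided_peak_purchase = energy_kwh * peak_price
    return avoided_peak_purchase - cost_to_store

# Assumed prices: $0.08/kWh off-peak vs. $0.22/kWh on-peak, shifting 500 kWh.
print(round(storage_arbitrage_savings(0.08, 0.22, 500.0), 2))  # a positive value means shifting pays off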
Some utilities also charge consumers a separate demand charge based on the maximum rate of electricity consumption at any time during a predetermined demand charge period. Advantageously, storing energy and other types of resources in subplants530allows for the resources to be purchased at times when the resources are relatively less expensive (e.g., during non-peak electricity hours) and stored for use at times when the resources are relatively more expensive (e.g., during peak electricity hours). Storing resources in subplants530also allows the resource demand of building502to be shifted in time. For example, resources can be purchased from utilities510at times when the demand for heating or cooling is low and immediately converted into hot or cold thermal energy by generator subplants520. The thermal energy can be stored in storage subplants530and retrieved at times when the demand for heating or cooling is high. This allows energy storage system500to smooth the resource demand of building502and reduce the maximum required capacity of generator subplants520. Smoothing the demand also allows energy storage system500to reduce the peak electricity consumption, which results in a lower demand charge. In some embodiments, storage subplants530are used by energy storage system500to take advantage of incentive-based demand response (IBDR) programs. IBDR programs provide incentives to customers who have the capability to store energy, generate energy, or curtail energy usage upon request. Incentives are typically provided in the form of monetary revenue paid by utilities510or by an independent service operator (ISO). IBDR programs supplement traditional utility-owned generation, transmission, and distribution assets with additional options for modifying demand load curves. For example, stored energy can be sold to energy purchasers504(e.g., an energy grid) to supplement the energy generated by utilities510. In some instances, incentives for participating in an IBDR program vary based on how quickly a system can respond to a request to change power output/consumption. Faster responses may be compensated at a higher level. Advantageously, electrical energy storage533allows system500to quickly respond to a request for electric power by rapidly discharging stored electrical energy to energy purchasers504. Still referring toFIG.5A, energy storage system500is shown to include an energy storage controller506. Energy storage controller506may be configured to control the distribution, production, storage, and usage of resources in energy storage system500. In some embodiments, energy storage controller506performs an optimization process to determine an optimal set of control decisions for each time step within an optimization period. The control decisions may include, for example, an optimal amount of each resource to purchase from utilities510, an optimal amount of each resource to produce or convert using generator subplants520, an optimal amount of each resource to store or remove from storage subplants530, an optimal amount of each resource to sell to energy purchasers504, and/or an optimal amount of each resource to provide to building502. In some embodiments, the control decisions include an optimal amount of each input resource and output resource for each of generator subplants520. Controller506may be configured to maximize the economic value of operating energy storage system500over the duration of the optimization period.
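The effect of shifting purchases into storage, as described above, can be illustrated with a small numerical sketch. The prices, loads, demand charge, and lossless storage used here are simplifying assumptions for the example, not values from the disclosure.

```python
# Illustrative sketch (assumed prices and loads) of how shifting purchases into
# storage during off-peak hours can reduce both the energy cost and the demand charge.
peak_rate, offpeak_rate = 0.20, 0.08        # $/kWh (assumed)
demand_charge = 15.0                         # $/kW applied to the peak purchase (assumed)
load = [100, 100, 300, 300]                  # kW demanded by the building in four hours (assumed)
prices = [offpeak_rate, offpeak_rate, peak_rate, peak_rate]

def total_cost(purchases):
    energy = sum(p * r for p, r in zip(purchases, prices))
    return energy + demand_charge * max(purchases)

# No storage: purchase exactly what the building demands each hour.
no_storage = load

# With storage: buy extra off-peak, charge storage, discharge on-peak,
# so purchases are perfectly flat (lossless storage assumed).
flat = [sum(load) / len(load)] * len(load)

print("cost without storage:", total_cost(no_storage))
print("cost with storage:   ", total_cost(flat))
```

Running the sketch shows both the energy term and the demand charge term shrinking when the purchase profile is flattened, which is the effect described for storage subplants530.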
The economic value may be defined by a value function that expresses economic value as a function of the control decisions made by controller506. The value function may account for the cost of resources purchased from utilities510, revenue generated by selling resources to energy purchasers504, and the cost of operating energy storage system500. In some embodiments, the cost of operating energy storage system500includes a cost for losses in battery capacity as a result of the charging and discharging of electrical energy storage533. The cost of operating energy storage system500may also include a cost of excessive equipment start/stops during the optimization period. Each of subplants520-530may include equipment that can be controlled by energy storage controller506to optimize the performance of energy storage system500. Subplant equipment may include, for example, heating devices, chillers, heat recovery heat exchangers, cooling towers, energy storage devices, pumps, valves, and/or other devices of subplants520-530. Individual devices of generator subplants520can be turned on or off to adjust the resource production of each generator subplant. In some embodiments, individual devices of generator subplants520can be operated at variable capacities (e.g., operating a chiller at 10% capacity or 60% capacity) according to an operating setpoint received from energy storage controller506. In some embodiments, one or more of subplants520-530includes a subplant level controller configured to control the equipment of the corresponding subplant. For example, energy storage controller506may determine an on/off configuration and global operating setpoints for the subplant equipment. In response to the on/off configuration and received global operating setpoints, the subplant controllers may turn individual devices of their respective equipment on or off, and implement specific operating setpoints (e.g., damper position, vane position, fan speed, pump speed, etc.) to reach or maintain the global operating setpoints. In some embodiments, controller506maximizes the life cycle economic value of energy storage system500while participating in PBDR programs, IBDR programs, or simultaneously in both PBDR and IBDR programs. For the IBDR programs, controller506may use statistical estimates of past clearing prices, mileage ratios, and event probabilities to determine the revenue generation potential of selling stored energy to energy purchasers504. For the PBDR programs, controller506may use predictions of ambient conditions, facility thermal loads, and thermodynamic models of installed equipment to estimate the resource consumption of subplants520. Controller506may use predictions of the resource consumption to monetize the costs of running the equipment. Controller506may automatically determine (e.g., without human intervention) a combination of PBDR and/or IBDR programs in which to participate over the optimization period in order to maximize economic value. For example, controller506may consider the revenue generation potential of IBDR programs, the cost reduction potential of PBDR programs, and the equipment maintenance/replacement costs that would result from participating in various combinations of the IBDR programs and PBDR programs. Controller506may weigh the benefits of participation against the costs of participation to determine an optimal combination of programs in which to participate.
Advantageously, this allows controller506to determine an optimal set of control decisions that maximize the overall value of operating energy storage system500. In some instances, controller506may determine that it would be beneficial to participate in an IBDR program when the revenue generation potential is high and/or the costs of participating are low. For example, controller506may receive notice of a synchronous reserve event from an IBDR program which requires energy storage system500to shed a predetermined amount of power. Controller506may determine that it is optimal to participate in the IBDR program if cold thermal energy storage532has enough capacity to provide cooling for building502while the load on chiller subplant522is reduced in order to shed the predetermined amount of power. In other instances, controller506may determine that it would not be beneficial to participate in an IBDR program when the resources required to participate are better allocated elsewhere. For example, if building502is close to setting a new peak demand that would greatly increase the PBDR costs, controller506may determine that only a small portion of the electrical energy stored in electrical energy storage533will be sold to energy purchasers504in order to participate in a frequency response market. Controller506may determine that the remainder of the electrical energy will be used to power chiller subplant522to prevent a new peak demand from being set. In some embodiments, energy storage system500and controller506include some or all of the components and/or features described in U.S. patent application Ser. No. 15/247,875 filed Aug. 25, 2016, U.S. patent application Ser. No. 15/247,879 filed Aug. 25, 2016, and U.S. patent application Ser. No. 15/247,881 filed Aug. 25, 2016. The entire disclosure of each of these patent applications is incorporated by reference herein. Energy Cost Optimization System Referring now toFIG.5B, a block diagram of an energy cost optimization system550is shown, according to an exemplary embodiment. Energy cost optimization system550is shown to include many of the same components as energy storage system500(described with reference toFIG.5A) with the exception of storage subplants530. System550is an example of a system without thermal or electrical energy storage in which the peak load contribution cost optimization techniques can be implemented. Energy cost optimization system550is shown to include a building502. Building502may be the same or similar to buildings116, as described with reference toFIG.1. For example, building502may be equipped with a HVAC system and/or a building management system that operates to control conditions within building502. In some embodiments, building502includes multiple buildings (i.e., a campus) served by energy cost optimization system550. Building502may demand various resources including, for example, hot thermal energy (e.g., hot water), cold thermal energy (e.g., cold water), and/or electrical energy. The resources may be demanded by equipment or subsystems within building502or by external systems that provide services for building502(e.g., heating, cooling, air circulation, lighting, electricity, etc.). Energy cost optimization system550operates to satisfy the resource demand associated with building502. Energy cost optimization system550is shown to include a plurality of utilities510.
Utilities510may provide system550with resources such as electricity, water, natural gas, or any other resource that can be used by system550to satisfy the demand of building502. For example, utilities510are shown to include an electric utility511, a water utility512, a natural gas utility513, and utility M514, where M is the total number of utilities510. In some embodiments, utilities510are commodity suppliers from which resources and other types of commodities can be purchased. Resources purchased from utilities510can be used by generator subplants520to produce generated resources (e.g., hot water, cold water, electricity, steam, etc.) or provided directly to building502. For example, utilities510are shown providing electricity directly to building502. Energy cost optimization system550is shown to include a plurality of generator subplants520. Generator subplants520are shown to include a heater subplant521, a chiller subplant522, a heat recovery chiller subplant523, a steam subplant524, an electricity subplant525, and subplant N, where N is the total number of generator subplants520. Generator subplants520may be configured to convert one or more input resources into one or more output resources by operation of the equipment within generator subplants520. For example, heater subplant521may be configured to generate hot thermal energy (e.g., hot water) by heating water using electricity or natural gas. Chiller subplant522may be configured to generate cold thermal energy (e.g., cold water) by chilling water using electricity. Heat recovery chiller subplant523may be configured to generate hot thermal energy and cold thermal energy by removing heat from one water supply and adding the heat to another water supply. Steam subplant524may be configured to generate steam by boiling water using electricity or natural gas. Electricity subplant525may be configured to generate electricity using mechanical generators (e.g., a steam turbine, a gas-powered generator, etc.) or other types of electricity-generating equipment (e.g., photovoltaic equipment, hydroelectric equipment, etc.). The input resources used by generator subplants520may be provided by utilities510and/or generated by other generator subplants520. For example, steam subplant524may produce steam as an output resource. Electricity subplant525may include a steam turbine that uses the steam generated by steam subplant524as an input resource to generate electricity. The output resources produced by generator subplants520may be provided to building502, sold to energy purchasers504, and/or used by other generator subplants520. For example, the electricity generated by electricity subplant525may be used by chiller subplant522to generate cold thermal energy, provided to building502, and/or sold to energy purchasers504. Still referring toFIG.5B, energy cost optimization system550is shown to include a controller552. Controller552may be configured to control the distribution, production, and usage of resources in system550. In some embodiments, controller552performs an optimization process to determine an optimal set of control decisions for each time step within an optimization period. The control decisions may include, for example, an optimal amount of each resource to purchase from utilities510, an optimal amount of each resource to produce or convert using generator subplants520, an optimal amount of each resource to sell to energy purchasers504, and/or an optimal amount of each resource to provide to building502.
In some embodiments, the control decisions include an optimal amount of each input resource and output resource for each of generator subplants520. Controller552may be configured to maximize the economic value of operating energy cost optimization system550over the duration of the optimization period. The economic value may be defined by a value function that expresses economic value as a function of the control decisions made by controller552. The value function may account for the cost of resources purchased from utilities510, revenue generated by selling resources to energy purchasers504, and the cost of operating system550. In some embodiments, the cost of operating system550includes a cost of excessive equipment start/stops during the optimization period. Each of subplants520may include equipment that can be controlled by controller552to optimize the performance of system550. Subplant equipment may include, for example, heating devices, chillers, heat recovery heat exchangers, cooling towers, pumps, valves, and/or other devices of subplants520. Individual devices of generator subplants520can be turned on or off to adjust the resource production of each generator subplant. In some embodiments, individual devices of generator subplants520can be operated at variable capacities (e.g., operating a chiller at 10% capacity or 60% capacity) according to an operating setpoint received from controller552. In some embodiments, one or more of subplants520includes a subplant level controller configured to control the equipment of the corresponding subplant. For example, controller552may determine an on/off configuration and global operating setpoints for the subplant equipment. In response to the on/off configuration and received global operating setpoints, the subplant controllers may turn individual devices of their respective equipment on or off, and implement specific operating setpoints (e.g., damper position, vane position, fan speed, pump speed, etc.) to reach or maintain the global operating setpoints. In some embodiments, energy cost optimization system550and controller552include some or all of the components and/or features described in U.S. patent application Ser. No. 15/247,875 filed Aug. 25, 2016, U.S. patent application Ser. No. 15/247,879 filed Aug. 25, 2016, and U.S. patent application Ser. No. 15/247,881 filed Aug. 25, 2016. The entire disclosure of each of these patent applications is incorporated by reference herein. Energy Storage Controller Referring now toFIG.6A, a block diagram illustrating energy storage controller506in greater detail is shown, according to an exemplary embodiment. Energy storage controller506is shown providing control decisions to a building management system (BMS)606. In some embodiments, BMS606is the same as or similar to the BMS described with reference toFIG.1. The control decisions provided to BMS606may include resource purchase amounts for utilities510, setpoints for generator subplants520, and/or charge/discharge rates for storage subplants530. BMS606may be configured to monitor conditions within a controlled building or building zone. For example, BMS606may receive input from various sensors (e.g., temperature sensors, humidity sensors, airflow sensors, voltage sensors, etc.) distributed throughout the building and may report building conditions to energy storage controller506.
Building conditions may include, for example, a temperature of the building or a zone of the building, a power consumption (e.g., electric load) of the building, a state of one or more actuators configured to affect a controlled state within the building, or other types of information relating to the controlled building. BMS606may operate subplants520-530to affect the monitored conditions within the building and to serve the thermal energy loads of the building. BMS606may receive control signals from energy storage controller506specifying on/off states, charge/discharge rates, and/or setpoints for the subplant equipment. BMS606may control the equipment (e.g., via actuators, power relays, etc.) in accordance with the control signals provided by energy storage controller506. For example, BMS606may operate the equipment using closed loop control to achieve the setpoints specified by energy storage controller506. In various embodiments, BMS606may be combined with energy storage controller506or may be part of a separate building management system. According to an exemplary embodiment, BMS606is a METASYS® brand building management system, as sold by Johnson Controls, Inc. Energy storage controller506may monitor the status of the controlled building using information received from BMS606. Energy storage controller506may be configured to predict the thermal energy loads (e.g., heating loads, cooling loads, etc.) of the building for a plurality of time steps in an optimization period (e.g., using weather forecasts from a weather service604). Energy storage controller506may also predict the revenue generation potential of IBDR programs using an incentive event history (e.g., past clearing prices, mileage ratios, event probabilities, etc.) from incentive programs602. Energy storage controller506may generate control decisions that optimize the economic value of operating energy storage system500over the duration of the optimization period subject to constraints on the optimization process (e.g., energy balance constraints, load satisfaction constraints, etc.). The optimization process performed by energy storage controller506is described in greater detail below. According to an exemplary embodiment, energy storage controller506is integrated within a single computer (e.g., one server, one housing, etc.). In various other exemplary embodiments, energy storage controller506can be distributed across multiple servers or computers (e.g., that can exist in distributed locations). In another exemplary embodiment, energy storage controller506may be integrated with a smart building manager that manages multiple building systems and/or combined with BMS606. Energy storage controller506is shown to include a communications interface636and a processing circuit607. Communications interface636may include wired or wireless interfaces (e.g., jacks, antennas, transmitters, receivers, transceivers, wire terminals, etc.) for conducting data communications with various systems, devices, or networks. For example, communications interface636may include an Ethernet card and port for sending and receiving data via an Ethernet-based communications network and/or a WiFi transceiver for communicating via a wireless communications network. Communications interface636may be configured to communicate via local area networks or wide area networks (e.g., the Internet, a building WAN, etc.) and may use a variety of communications protocols (e.g., BACnet, IP, LON, etc.).
Communications interface636may be a network interface configured to facilitate electronic data communications between energy storage controller506and various external systems or devices (e.g., BMS606, subplants520-530, utilities510, etc.). For example, energy storage controller506may receive information from BMS606indicating one or more measured states of the controlled building (e.g., temperature, humidity, electric loads, etc.) and one or more states of subplants520-530(e.g., equipment status, power consumption, equipment availability, etc.). Communications interface636may receive inputs from BMS606and/or subplants520-530and may provide operating parameters (e.g., on/off decisions, setpoints, etc.) to subplants520-530via BMS606. The operating parameters may cause subplants520-530to activate, deactivate, or adjust a setpoint for various devices thereof. Still referring toFIG.6A, processing circuit607is shown to include a processor608and memory610. Processor608may be a general purpose or specific purpose processor, an application specific integrated circuit (ASIC), one or more field programmable gate arrays (FPGAs), a group of processing components, or other suitable processing components. Processor608may be configured to execute computer code or instructions stored in memory610or received from other computer readable media (e.g., CDROM, network storage, a remote server, etc.). Memory610may include one or more devices (e.g., memory units, memory devices, storage devices, etc.) for storing data and/or computer code for completing and/or facilitating the various processes described in the present disclosure. Memory610may include random access memory (RAM), read-only memory (ROM), hard drive storage, temporary storage, non-volatile memory, flash memory, optical memory, or any other suitable memory for storing software objects and/or computer instructions. Memory610may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. Memory610may be communicably connected to processor608via processing circuit607and may include computer code for executing (e.g., by processor608) one or more processes described herein. Memory610is shown to include a building status monitor624. Energy storage controller506may receive data regarding the overall building or building space to be heated or cooled by system500via building status monitor624. In an exemplary embodiment, building status monitor624may include a graphical user interface component configured to provide graphical user interfaces to a user for selecting building requirements (e.g., overall temperature parameters, selecting schedules for the building, selecting different temperature levels for different building zones, etc.). Energy storage controller506may determine on/off configurations and operating setpoints to satisfy the building requirements received from building status monitor624. In some embodiments, building status monitor624receives, collects, stores, and/or transmits cooling load requirements, building temperature setpoints, occupancy data, weather data, energy data, schedule data, and other building parameters. In some embodiments, building status monitor624stores data regarding energy costs, such as pricing information available from utilities510(energy charge, demand charge, etc.). Still referring toFIG.6A, memory610is shown to include a load/rate predictor622. 
Load/rate predictor622may be configured to predict the thermal energy loads ($\hat{\ell}_k$) of the building or campus for each time step k (e.g., k=1 . . . n) of an optimization period. Load/rate predictor622is shown receiving weather forecasts from a weather service604. In some embodiments, load/rate predictor622predicts the thermal energy loads $\hat{\ell}_k$ as a function of the weather forecasts. In some embodiments, load/rate predictor622uses feedback from BMS606to predict loads $\hat{\ell}_k$. Feedback from BMS606may include various types of sensory inputs (e.g., temperature, flow, humidity, enthalpy, etc.) or other data relating to the controlled building (e.g., inputs from a HVAC system, a lighting control system, a security system, a water system, etc.). In some embodiments, load/rate predictor622receives a measured electric load and/or previous measured load data from BMS606(e.g., via building status monitor624). Load/rate predictor622may predict loads $\hat{\ell}_k$ as a function of a given weather forecast ($\hat{\phi}_w$), a day type (day), the time of day (t), and previous measured load data ($Y_{k-1}$). Such a relationship is expressed in the following equation:

$\hat{\ell}_k = f(\hat{\phi}_w, \text{day}, t \mid Y_{k-1})$

In some embodiments, load/rate predictor622uses a deterministic plus stochastic model trained from historical load data to predict loads $\hat{\ell}_k$. Load/rate predictor622may use any of a variety of prediction methods to predict loads $\hat{\ell}_k$ (e.g., linear regression for the deterministic portion and an AR model for the stochastic portion). Load/rate predictor622may predict one or more different types of loads for the building or campus. For example, load/rate predictor622may predict a hot water load $\hat{\ell}_{Hot,k}$ and a cold water load $\hat{\ell}_{Cold,k}$ for each time step k within the prediction window. In some embodiments, load/rate predictor622makes load/rate predictions using the techniques described in U.S. patent application Ser. No. 14/717,593. Load/rate predictor622is shown receiving utility rates from utilities510. Utility rates may indicate a cost or price per unit of a resource (e.g., electricity, natural gas, water, etc.) provided by utilities510at each time step k in the prediction window. In some embodiments, the utility rates are time-variable rates. For example, the price of electricity may be higher at certain times of day or days of the week (e.g., during high demand periods) and lower at other times of day or days of the week (e.g., during low demand periods). The utility rates may define various time periods and a cost per unit of a resource during each time period. Utility rates may be actual rates received from utilities510or predicted utility rates estimated by load/rate predictor622. In some embodiments, the utility rates include demand charges for one or more resources provided by utilities510. A demand charge may define a separate cost imposed by utilities510based on the maximum usage of a particular resource (e.g., maximum energy consumption) during a demand charge period. The utility rates may define various demand charge periods and one or more demand charges associated with each demand charge period. In some instances, demand charge periods may overlap partially or completely with each other and/or with the prediction window. Advantageously, demand response optimizer630may be configured to account for demand charges in the high level optimization process performed by high level optimizer632.
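A minimal sketch of the deterministic-plus-stochastic load model described above is given below: a least-squares regression on weather and time-of-day features supplies the deterministic portion and an AR(1) model of the residuals supplies the stochastic portion. The synthetic training data, feature choices, and AR order are assumptions for illustration rather than the prediction method actually used by load/rate predictor622.

```python
# Sketch of a deterministic-plus-stochastic load predictor: linear regression on
# assumed weather/time features plus an AR(1) model fitted to the residuals.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic historical data (assumed): temperature, hour index, and measured load.
hours = np.arange(24 * 30)
temp = 20 + 10 * np.sin(2 * np.pi * hours / 24)
load = 50 + 2.5 * temp + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Deterministic part: least-squares fit of load ~ [1, temp, sin(hour), cos(hour)].
X = np.column_stack([np.ones_like(temp), temp,
                     np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24)])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
residual = load - X @ beta

# Stochastic part: AR(1) coefficient fitted to the regression residuals.
phi = np.dot(residual[1:], residual[:-1]) / np.dot(residual[:-1], residual[:-1])

def predict_load(temp_forecast, hour, last_residual):
    """Predicted load as a function of the weather forecast and time features,
    conditioned on the most recent residual (previous measured data)."""
    x = np.array([1.0, temp_forecast,
                  np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24)])
    return float(x @ beta + phi * last_residual)

print(predict_load(temp_forecast=25.0, hour=14, last_residual=residual[-1]))
```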
Utilities510may be defined by time-variable (e.g., hourly) prices, a maximum service level (e.g., a maximum rate of consumption allowed by the physical infrastructure or by contract) and, in the case of electricity, a demand charge or a charge for the peak rate of consumption within a certain period. Load/rate predictor622may store the predicted loads $\hat{\ell}_k$ and the utility rates in memory610and/or provide the predicted loads $\hat{\ell}_k$ and the utility rates to demand response optimizer630. Still referring toFIG.6A, memory610is shown to include an incentive estimator620. Incentive estimator620may be configured to estimate the revenue generation potential of participating in various incentive-based demand response (IBDR) programs. In some embodiments, incentive estimator620receives an incentive event history from incentive programs602. The incentive event history may include a history of past IBDR events from incentive programs602. An IBDR event may include an invitation from incentive programs602to participate in an IBDR program in exchange for a monetary incentive. The incentive event history may indicate the times at which the past IBDR events occurred and attributes describing the IBDR events (e.g., clearing prices, mileage ratios, participation requirements, etc.). Incentive estimator620may use the incentive event history to estimate IBDR event probabilities during the optimization period. Incentive estimator620is shown providing incentive predictions to demand response optimizer630. The incentive predictions may include the estimated IBDR probabilities, estimated participation requirements, an estimated amount of revenue from participating in the estimated IBDR events, and/or any other attributes of the predicted IBDR events. Demand response optimizer630may use the incentive predictions along with the predicted loads $\hat{\ell}_k$ and utility rates from load/rate predictor622to determine an optimal set of control decisions for each time step within the optimization period. Still referring toFIG.6A, memory610is shown to include a demand response optimizer630. Demand response optimizer630may perform a cascaded optimization process to optimize the performance of energy storage system500. For example, demand response optimizer630is shown to include a high level optimizer632and a low level optimizer634. High level optimizer632may control an outer (e.g., subplant level) loop of the cascaded optimization. High level optimizer632may determine an optimal set of control decisions for each time step in the prediction window in order to optimize (e.g., maximize) the value of operating energy storage system500. Control decisions made by high level optimizer632may include, for example, load setpoints for each of generator subplants520, charge/discharge rates for each of storage subplants530, resource purchase amounts for each type of resource purchased from utilities510, and/or an amount of each resource sold to energy purchasers504. In other words, the control decisions may define resource allocation at each time step. The control decisions made by high level optimizer632are based on the statistical estimates of incentive event probabilities and revenue generation potential for various IBDR events as well as the load and rate predictions. Low level optimizer634may control an inner (e.g., equipment level) loop of the cascaded optimization. Low level optimizer634may determine how to best run each subplant at the load setpoint determined by high level optimizer632.
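One simplified way to picture the equipment-level decision made by low level optimizer634is sketched below: given the subplant load setpoint from high level optimizer632, enumerate which devices to run and load them equally so that electric consumption is minimized. The chiller capacities and kW/ton figures are assumptions, and the linear part-load model ignores the efficiency curves a real equipment-level optimization would use.

```python
# Sketch of an equipment-level dispatch: pick which chillers to run, and at what
# fraction of capacity, so the subplant meets the setpoint with the least electricity.
from itertools import product

# (capacity in tons, kW of electricity per ton) for each chiller in the subplant (assumed).
chillers = [(400, 0.60), (400, 0.58), (600, 0.62)]

def best_dispatch(setpoint_tons: float):
    """Enumerate on/off combinations; running chillers share the load equally."""
    best = None
    for on_flags in product([0, 1], repeat=len(chillers)):
        cap = sum(c * f for (c, _), f in zip(chillers, on_flags))
        if cap < setpoint_tons or cap == 0:
            continue  # this on/off combination cannot meet the setpoint
        frac = setpoint_tons / cap
        kw = sum(f * c * frac * kw_per_ton
                 for (c, kw_per_ton), f in zip(chillers, on_flags))
        if best is None or kw < best[0]:
            best = (kw, on_flags)
    return best  # (total electric input in kW, on/off states)

print(best_dispatch(setpoint_tons=700.0))
```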
For example, low level optimizer634may determine on/off states and/or operating setpoints for various devices of the subplant equipment in order to optimize (e.g., minimize) the energy consumption of each subplant while meeting the resource allocation setpoint for the subplant. In some embodiments, low level optimizer634receives actual incentive events from incentive programs602. Low level optimizer634may determine whether to participate in the incentive events based on the resource allocation set by high level optimizer632. For example, if insufficient resources have been allocated to a particular IBDR program by high level optimizer632or if the allocated resources have already been used, low level optimizer634may determine that energy storage system500will not participate in the IBDR program and may ignore the IBDR event. However, if the required resources have been allocated to the IBDR program and are available in storage subplants530, low level optimizer634may determine that system500will participate in the IBDR program in response to the IBDR event. The cascaded optimization process performed by demand response optimizer630is described in greater detail in U.S. patent application Ser. No. 15/247,885. Still referring toFIG.6A, memory610is shown to include a subplant control module628. Subplant control module628may store historical data regarding past operating statuses, past operating setpoints, and instructions for calculating and/or implementing control parameters for subplants520-530. Subplant control module628may also receive, store, and/or transmit data regarding the conditions of individual devices of the subplant equipment, such as operating efficiency, equipment degradation, a date since last service, a lifespan parameter, a condition grade, or other device-specific data. Subplant control module628may receive data from subplants520-530and/or BMS606via communications interface636. Subplant control module628may also receive and store on/off statuses and operating setpoints from low level optimizer634. Data and processing results from demand response optimizer630, subplant control module628, or other modules of energy storage controller506may be accessed by (or pushed to) monitoring and reporting applications626. Monitoring and reporting applications626may be configured to generate real time “system health” dashboards that can be viewed and navigated by a user (e.g., a system engineer). For example, monitoring and reporting applications626may include a web-based monitoring application with several graphical user interface (GUI) elements (e.g., widgets, dashboard controls, windows, etc.) for displaying key performance indicators (KPI) or other information to users of a GUI. In addition, the GUI elements may summarize relative energy use and intensity across energy storage systems in different buildings (real or modeled), different campuses, or the like. Other GUI elements or reports may be generated and shown based on available data that allow users to assess performance across one or more energy storage systems from one screen. The user interface or report (or underlying data engine) may be configured to aggregate and categorize operating conditions by building, building type, equipment type, and the like. The GUI elements may include charts or histograms that allow the user to visually analyze the operating parameters and power consumption for the devices of the energy storage system. 
Still referring toFIG.6A, energy storage controller506may include one or more GUI servers, web services612, or GUI engines614to support monitoring and reporting applications626. In various embodiments, applications626, web services612, and GUI engine614may be provided as separate components outside of energy storage controller506(e.g., as part of a smart building manager). Energy storage controller506may be configured to maintain detailed historical databases (e.g., relational databases, XML databases, etc.) of relevant data and includes computer code modules that continuously, frequently, or infrequently query, aggregate, transform, search, or otherwise process the data maintained in the detailed databases. Energy storage controller506may be configured to provide the results of any such processing to other databases, tables, XML files, or other data structures for further querying, calculation, or access by, for example, external monitoring and reporting applications. Energy storage controller506is shown to include configuration tools616. Configuration tools616can allow a user to define (e.g., via graphical user interfaces, via prompt-driven “wizards,” etc.) how energy storage controller506should react to changing conditions in the energy storage subsystems. In an exemplary embodiment, configuration tools616allow a user to build and store condition-response scenarios that can cross multiple energy storage system devices, multiple building systems, and multiple enterprise control applications (e.g., work order management system applications, entity resource planning applications, etc.). For example, configuration tools616can provide the user with the ability to combine data (e.g., from subsystems, from event histories) using a variety of conditional logic. In varying exemplary embodiments, the conditional logic can range from simple logical operators between conditions (e.g., AND, OR, XOR, etc.) to pseudo-code constructs or complex programming language functions (allowing for more complex interactions, conditional statements, loops, etc.). Configuration tools616can present user interfaces for building such conditional logic. The user interfaces may allow users to define policies and responses graphically. In some embodiments, the user interfaces may allow a user to select a pre-stored or pre-constructed policy and adapt it or enable it for use with their system. Energy Cost Optimization Controller Referring now toFIG.6B, a block diagram illustrating controller552in greater detail is shown, according to an exemplary embodiment. Controller552is shown providing control decisions to a building management system (BMS)606. In some embodiments, BMS606is the same as or similar to the BMS described with reference toFIG.1. The control decisions provided to BMS606may include resource purchase amounts for utilities510and/or setpoints for generator subplants520. BMS606may be configured to monitor conditions within a controlled building or building zone. For example, BMS606may receive input from various sensors (e.g., temperature sensors, humidity sensors, airflow sensors, voltage sensors, etc.) distributed throughout the building and may report building conditions to controller552. Building conditions may include, for example, a temperature of the building or a zone of the building, a power consumption (e.g., electric load) of the building, a state of one or more actuators configured to affect a controlled state within the building, or other types of information relating to the controlled building.
BMS606may operate subplants520to affect the monitored conditions within the building and to serve the thermal energy loads of the building. BMS606may receive control signals from controller552specifying on/off states and/or setpoints for the subplant equipment. BMS606may control the equipment (e.g., via actuators, power relays, etc.) in accordance with the control signals provided by controller552. For example, BMS606may operate the equipment using closed loop control to achieve the setpoints specified by controller552. In various embodiments, BMS606may be combined with controller552or may be part of a separate building management system. According to an exemplary embodiment, BMS606is a METASYS® brand building management system, as sold by Johnson Controls, Inc. Controller552may monitor the status of the controlled building using information received from BMS606. Controller552may be configured to predict the thermal energy loads (e.g., heating loads, cooling loads, etc.) of the building for a plurality of time steps in an optimization period (e.g., using weather forecasts from a weather service604). Controller552may generate control decisions that optimize the economic value of operating system550over the duration of the optimization period subject to constraints on the optimization process (e.g., energy balance constraints, load satisfaction constraints, etc.). The optimization process performed by controller552is described in greater detail below. Controller552is shown to include a communications interface636and a processing circuit607having a processor608and memory610. These components may be the same as described with reference toFIG.6A. For example, controller552is shown to include demand response optimizer630. Demand response optimizer630may perform a cascaded optimization process to optimize the performance of system550. For example, demand response optimizer630is shown to include a high level optimizer632and a low level optimizer634. High level optimizer632may control an outer (e.g., subplant level) loop of the cascaded optimization. High level optimizer632may determine an optimal set of control decisions for each time step in the prediction window in order to optimize (e.g., maximize) the value of operating system550. Control decisions made by high level optimizer632may include, for example, load setpoints for each of generator subplants520, resource purchase amounts for each type of resource purchased from utilities510, and/or an amount of each resource sold to energy purchasers504. In other words, the control decisions may define resource allocation at each time step. Low level optimizer634may control an inner (e.g., equipment level) loop of the cascaded optimization. Low level optimizer634may determine how to best run each subplant at the load setpoint determined by high level optimizer632. For example, low level optimizer634may determine on/off states and/or operating setpoints for various devices of the subplant equipment in order to optimize (e.g., minimize) the energy consumption of each subplant while meeting the resource allocation setpoint for the subplant. The cascaded optimization process performed by demand response optimizer630is described in greater detail in U.S. patent application Ser. No. 15/247,885. These and other components of controller552may be the same as previously described with reference toFIG.6A. Planning Tool Referring now toFIG.7, a block diagram of a planning system700is shown, according to an exemplary embodiment.
Planning system700may be configured to use demand response optimizer630as part of a planning tool702to simulate the operation of a central plant over a predetermined time period (e.g., a day, a month, a week, a year, etc.) for planning, budgeting, and/or design considerations. When implemented in planning tool702, demand response optimizer630may operate in a similar manner as described with reference toFIGS.6A-6B. For example, demand response optimizer630may use building loads and utility rates to determine an optimal resource allocation to minimize cost over a simulation period. However, planning tool702may not be responsible for real-time control of a building management system or central plant. Planning tool702can be configured to determine the benefits of investing in a battery asset and the financial metrics associated with the investment. Such financial metrics can include, for example, the internal rate of return (IRR), net present value (NPV), and/or simple payback period (SPP). Planning tool702can also assist a user in determining the size of the battery which yields optimal financial metrics such as maximum NPV or a minimum SPP. In some embodiments, planning tool702allows a user to specify a battery size and automatically determines the benefits of the battery asset from participating in selected IBDR programs while performing PBDR, as described with reference toFIG.5A. In some embodiments, planning tool702is configured to determine the battery size that minimizes SPP given the IBDR programs selected and the requirement of performing PBDR. In some embodiments, planning tool702is configured to determine the battery size that maximizes NPV given the IBDR programs selected and the requirement of performing PBDR. In planning tool702, high level optimizer632may receive planned loads and utility rates for the entire simulation period. The planned loads and utility rates may be defined by input received from a user via a client device722(e.g., user-defined, user selected, etc.) and/or retrieved from a plan information database726. High level optimizer632uses the planned loads and utility rates in conjunction with subplant curves from low level optimizer634to determine an optimal resource allocation (i.e., an optimal dispatch schedule) for a portion of the simulation period. The portion of the simulation period over which high level optimizer632optimizes the resource allocation may be defined by a prediction window ending at a time horizon. With each iteration of the optimization, the prediction window is shifted forward and the portion of the dispatch schedule no longer in the prediction window is accepted (e.g., stored or output as results of the simulation). Load and rate predictions may be predefined for the entire simulation and may not be subject to adjustments in each iteration. However, shifting the prediction window forward in time may introduce additional plan information (e.g., planned loads and/or utility rates) for the newly-added time slice at the end of the prediction window. The new plan information may not have a significant effect on the optimal dispatch schedule since only a small portion of the prediction window changes with each iteration. In some embodiments, high level optimizer632requests all of the subplant curves used in the simulation from low level optimizer634at the beginning of the simulation. 
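A subplant curve of the kind requested from low level optimizer634can be pictured as a piecewise-linear map from subplant production to resource consumption. The sketch below uses assumed operating points; it is only meant to show the shape of the data the high level optimization consumes, not the curves generated by the actual system.

```python
# Sketch of a subplant curve as a piecewise-linear map from production to
# resource consumption, built from a few assumed operating points.
import numpy as np

# Chiller subplant operating points (assumed): cold water production in tons
# versus electricity consumed in kW.
production_pts = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])
electricity_pts = np.array([0.0, 160.0, 300.0, 470.0, 680.0])

def subplant_curve(production_tons: float) -> float:
    """Electric input required for a given cold water production (linear interpolation)."""
    return float(np.interp(production_tons, production_pts, electricity_pts))

print(subplant_curve(600.0))  # kW needed to produce 600 tons of cooling
```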
Since the planned loads and environmental conditions are known for the entire simulation period, high level optimizer632may retrieve all of the relevant subplant curves at the beginning of the simulation. In some embodiments, low level optimizer634generates functions that map subplant production to equipment level production and resource use when the subplant curves are provided to high level optimizer632. These subplant to equipment functions may be used to calculate the individual equipment production and resource use (e.g., in a post-processing module) based on the results of the simulation. Still referring toFIG.7, planning tool702is shown to include a communications interface704and a processing circuit706. Communications interface704may include wired or wireless interfaces (e.g., jacks, antennas, transmitters, receivers, transceivers, wire terminals, etc.) for conducting data communications with various systems, devices, or networks. For example, communications interface704may include an Ethernet card and port for sending and receiving data via an Ethernet-based communications network and/or a WiFi transceiver for communicating via a wireless communications network. Communications interface704may be configured to communicate via local area networks or wide area networks (e.g., the Internet, a building WAN, etc.) and may use a variety of communications protocols (e.g., BACnet, IP, LON, etc.). Communications interface704may be a network interface configured to facilitate electronic data communications between planning tool702and various external systems or devices (e.g., client device722, results database728, plan information database726, etc.). For example, planning tool702may receive planned loads and utility rates from client device722and/or plan information database726via communications interface704. Planning tool702may use communications interface704to output results of the simulation to client device722and/or to store the results in results database728. Still referring toFIG.7, processing circuit706is shown to include a processor710and memory712. Processor710may be a general purpose or specific purpose processor, an application specific integrated circuit (ASIC), one or more field programmable gate arrays (FPGAs), a group of processing components, or other suitable processing components. Processor710may be configured to execute computer code or instructions stored in memory712or received from other computer readable media (e.g., CDROM, network storage, a remote server, etc.). Memory712may include one or more devices (e.g., memory units, memory devices, storage devices, etc.) for storing data and/or computer code for completing and/or facilitating the various processes described in the present disclosure. Memory712may include random access memory (RAM), read-only memory (ROM), hard drive storage, temporary storage, non-volatile memory, flash memory, optical memory, or any other suitable memory for storing software objects and/or computer instructions. Memory712may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. Memory712may be communicably connected to processor710via processing circuit706and may include computer code for executing (e.g., by processor710) one or more processes described herein. Still referring toFIG.7, memory712is shown to include a GUI engine716, web services714, and configuration tools718. 
In an exemplary embodiment, GUI engine716includes a graphical user interface component configured to provide graphical user interfaces to a user for selecting or defining plan information for the simulation (e.g., planned loads, utility rates, environmental conditions, etc.). Web services714may allow a user to interact with planning tool702via a web portal and/or from a remote system or device (e.g., an enterprise control application). Configuration tools718can allow a user to define (e.g., via graphical user interfaces, via prompt-driven “wizards,” etc.) various parameters of the simulation such as the number and type of subplants, the devices within each subplant, the subplant curves, device-specific efficiency curves, the duration of the simulation, the duration of the prediction window, the duration of each time step, and/or various other types of plan information related to the simulation. Configuration tools718can present user interfaces for building the simulation. The user interfaces may allow users to define simulation parameters graphically. In some embodiments, the user interfaces allow a user to select a pre-stored or pre-constructed simulated plant and/or plan information (e.g., from plan information database726) and adapt it or enable it for use in the simulation. Still referring toFIG.7, memory712is shown to include demand response optimizer630. Demand response optimizer630may use the planned loads and utility rates to determine an optimal resource allocation over a prediction window. The operation of demand response optimizer630may be the same or similar as previously described with reference toFIGS.6-8. With each iteration of the optimization process, demand response optimizer630may shift the prediction window forward and apply the optimal resource allocation for the portion of the simulation period no longer in the prediction window. Demand response optimizer630may use the new plan information at the end of the prediction window to perform the next iteration of the optimization process. Demand response optimizer630may output the applied resource allocation to reporting applications730for presentation to a client device722(e.g., via user interface724) or storage in results database728. Still referring toFIG.7, memory712is shown to include reporting applications730. Reporting applications730may receive the optimized resource allocations from demand response optimizer630and, in some embodiments, costs associated with the optimized resource allocations. Reporting applications730may include a web-based reporting application with several graphical user interface (GUI) elements (e.g., widgets, dashboard controls, windows, etc.) for displaying key performance indicators (KPI) or other information to users of a GUI. In addition, the GUI elements may summarize relative energy use and intensity across various plants, subplants, or the like. Other GUI elements or reports may be generated and shown based on available data that allow users to assess the results of the simulation. The user interface or report (or underlying data engine) may be configured to aggregate and categorize resource allocation and the costs associated therewith and provide the results to a user via a GUI. The GUI elements may include charts or histograms that allow the user to visually analyze the results of the simulation. An exemplary output that may be generated by reporting applications730is shown inFIG.8. 
Referring now toFIG.8, several graphs800illustrating the operation of planning tool702are shown, according to an exemplary embodiment. With each iteration of the optimization process, planning tool702selects an optimization period (i.e., a portion of the simulation period) over which the optimization is performed. For example, planning tool702may select optimization period802for use in the first iteration. Once the optimal resource allocation810has been determined, planning tool702may select a portion818of resource allocation810to send to plant dispatch830. Portion818may be the first b time steps of resource allocation810. Planning tool702may shift the optimization period802forward in time, resulting in optimization period804. The amount by which the prediction window is shifted may correspond to the duration of time steps b. Planning tool702may repeat the optimization process for optimization period804to determine the optimal resource allocation812. Planning tool702may select a portion820of resource allocation812to send to plant dispatch830. Portion820may be the first b time steps of resource allocation812. Planning tool702may then shift the prediction window forward in time, resulting in optimization period806. This process may be repeated for each subsequent optimization period (e.g., optimization periods806,808, etc.) to generate updated resource allocations (e.g., resource allocations814,816, etc.) and to select portions of each resource allocation (e.g., portions822,824) to send to plant dispatch830. Plant dispatch830includes the first b time steps818-824from each of optimization periods802-808. Once the optimal resource allocation is compiled for the entire simulation period, the results may be sent to reporting applications730, results database728, and/or client device722, as described with reference toFIG.7. Resource Allocation Optimization Referring now toFIG.9, a block diagram illustrating high level optimizer632in greater detail is shown, according to an exemplary embodiment. In some embodiments, high level optimizer632may be implemented as a component of energy storage controller506, as described with reference toFIGS.5A and6A. In other embodiments, high level optimizer632may be implemented as a component of controller552, as described with reference toFIGS.5B and6B. In other embodiments, high level optimizer632may be implemented as a component of planning tool702, as described with reference toFIGS.7-8. High level optimizer632may receive load and rate predictions from load/rate predictor622, incentive predictions from incentive estimator620, and subplant curves from low level optimizer634. High level optimizer632may determine an optimal resource allocation across energy storage system500as a function of the load and rate predictions, the incentive predictions, and the subplant curves. The optimal resource allocation may include an amount of each resource purchased from utilities510, an amount of each input and output resource of generator subplants520, an amount of each resource stored or withdrawn from storage subplants530, and/or an amount of each resource sold to energy purchasers504. In some embodiments, the optimal resource allocation maximizes the economic value of operating energy storage system500while satisfying the predicted loads for the building or campus. High level optimizer632can be configured to optimize the utilization of a battery asset, such as battery108, battery306, and/or electrical energy storage subplant533. 
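The moving-horizon procedure pictured inFIG.8can be summarized in a short sketch: optimize over a prediction window, accept only the first b time steps of the result, then shift the window forward and repeat. The placeholder "optimizer" below simply flattens the load within each window; it stands in for the high level optimization and is not the actual algorithm.

```python
# Schematic sketch of the moving-horizon planning loop: keep the first b steps
# of each window's result, then shift the window forward by b steps.
import numpy as np

def optimize_window(loads_in_window):
    """Placeholder for the high level optimization over one prediction window."""
    return np.full(len(loads_in_window), loads_in_window.mean())

def plan(loads, window, b):
    dispatch = []
    for start in range(0, len(loads) - window + 1, b):
        allocation = optimize_window(loads[start:start + window])
        dispatch.extend(allocation[:b])   # accept only the first b time steps
    return np.array(dispatch)             # compiled "plant dispatch" for the accepted steps

simulated_loads = np.array([100, 120, 300, 320, 280, 140, 110, 90, 250, 310, 300, 130.0])
print(plan(simulated_loads, window=6, b=2))
```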
A battery asset can be used to participate in IBDR programs which yield revenue and to reduce the cost of energy and the cost incurred from peak load contribution charges. High level optimizer632can use an optimization algorithm to optimally allocate a battery asset (e.g., by optimally charging and discharging the battery) to maximize its total value. In a planning tool framework, high level optimizer632can perform the optimization iteratively to determine optimal battery asset allocation for an entire simulation period (e.g., an entire year), as described with reference toFIG.8. The optimization process can be expanded to include economic load demand response (ELDR) and can account for peak load contribution charges. High level optimizer632can allocate the battery asset at each time step (e.g., each hour) over a given horizon such that energy and demand costs are minimized and frequency regulation (FR) revenue maximized. These and other features of high level optimizer632are described in detail below. Cost Function Still referring toFIG.9, high level optimizer632is shown to include a cost function module902. Cost function module902can generate a cost function or objective function which represents the total operating cost of a system over a time horizon (e.g., one month, one year, one day, etc.). The system can include any of the systems previously described (e.g., frequency response optimization system100, photovoltaic energy system300, energy storage system500, planning system700, etc.) or any other system in which high level optimizer632is implemented. In some embodiments, the cost function can be expressed generically using the following equation:

$\underset{x}{\arg\min}\; J(x)$

where $J(x)$ is defined as follows:

$J(x)=\sum_{sources}\sum_{horizon}\mathrm{cost}(\mathrm{purchase}_{resource,time}, time)-\sum_{incentives}\sum_{horizon}\mathrm{revenue}(\mathrm{ReservationAmount})$

The first term in the previous equation represents the total cost of all resources purchased over the optimization horizon. Resources can include, for example, water, electricity, natural gas, or other types of resources purchased from a utility or other outside entity. The second term in the equation represents the total revenue generated by participating in incentive programs (e.g., IBDR programs) over the optimization horizon. The revenue may be based on the amount of power reserved for participating in the incentive programs. Accordingly, the total cost function represents the total cost of resources purchased minus any revenue generated from participating in incentive programs. High level optimizer632can optimize the cost function $J(x)$ subject to the following constraint, which guarantees the balance between resources purchased, produced, discharged, consumed, and requested over the optimization horizon:

$\sum_{sources}\mathrm{purchase}_{resource,time}+\sum_{subplants}\mathrm{produces}(x_{internal,time}, x_{external,time}, v_{uncontrolled,time})-\sum_{subplants}\mathrm{consumes}(x_{internal,time}, x_{external,time}, v_{uncontrolled,time})+\sum_{storages}\mathrm{discharges}_{resource}(x_{internal,time}, x_{external,time})-\sum_{sinks}\mathrm{requests}_{resource}=0 \quad \forall\, resources,\ \forall\, time\in horizon$

where $x_{internal,time}$ and $x_{external,time}$ are internal and external decision variables and $v_{uncontrolled,time}$ includes uncontrolled variables. The first term in the previous equation represents the total amount of each resource (e.g., electricity, water, natural gas, etc.) purchased from each source (e.g., utilities510) over the optimization horizon.
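To make the balance constraint above concrete, the following toy linear program allocates purchased electricity and chiller output over a few hours so that a cooling load and a building electric load are both balanced in every time step. The prices, loads, and chiller coefficient of performance are assumptions, and the model omits storage, incentives, and demand charges; the remaining terms of the constraint are described in the text that follows.

```python
# Toy resource balance: purchased electricity must cover the chiller's electric
# use plus the building electric load, while chiller output covers the cooling load.
import numpy as np
from scipy.optimize import linprog

T = 4
price = np.array([0.08, 0.08, 0.20, 0.20])            # $/kWh electricity (assumed)
cool_load = np.array([200.0, 200.0, 400.0, 400.0])    # kWh cooling requested (assumed)
elec_load = np.array([100.0] * T)                     # building electric load (assumed)
cop = 5.0                                              # chiller coefficient of performance (assumed)

# Decision variables per hour: x = [purchase_t, chiller_output_t] for t = 0..T-1.
n = 2 * T
c = np.zeros(n)
c[0::2] = price                                        # cost of purchased electricity only

A_eq, b_eq = [], []
for t in range(T):
    # Cold water balance: chiller output equals the cooling requested.
    row = np.zeros(n); row[2 * t + 1] = 1.0
    A_eq.append(row); b_eq.append(cool_load[t])
    # Electricity balance: purchase - chiller consumption - building load = 0.
    row = np.zeros(n); row[2 * t] = 1.0; row[2 * t + 1] = -1.0 / cop
    A_eq.append(row); b_eq.append(elec_load[t])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * n, method="highs")
print("total purchase cost: $%.2f" % res.fun)
```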
The second term represents the total consumption of each resource within the system (e.g., by generator subplants520) over the optimization horizon. The third term represents the total amount of each resource discharged from storage (e.g., storage subplants530) over the optimization horizon. Positive values indicate that the resource is discharged from storage, whereas negative values indicate that the resource is charged or stored. The fourth term represents the total amount of each resource requested by various resource sinks (e.g., building502, energy purchasers504, or other resource consumers) over the optimization horizon. Accordingly, this constraint ensures that the total amount of each resource purchased, produced, or discharged from storage is equal to the amount of each resource consumed, stored, or provided to the resource sinks.

In some embodiments, cost function module902separates the purchase cost of one or more resources into multiple terms. For example, cost function module902can separate the purchase cost of a resource into a first term corresponding to the cost per unit of the resource purchased (e.g., $/kWh of electricity, $/liter of water, etc.) and a second term corresponding to one or more demand charges. A demand charge is a separate charge on the consumption of a resource which depends on the maximum or peak resource consumption over a given period (i.e., a demand charge period). Cost function module902can express the cost function using the following equation:

J(x) = \sum_{s \in sources} \left[ \sum_{q \in demands_s} w_{demand,s,q} \, r_{demand,s,q} \max_{i \in demand_{s,q}} (purchase_{s,i}) + \sum_{i \in horizon} r_{s,i} \, purchase_{s,i} \right] - \sum_{incentives} \sum_{horizon} \text{revenue}(ReservationAmount)

where r_{demand,s,q} is the qth demand charge associated with the peak demand of the resource provided by source s over the demand charge period, w_{demand,s,q} is the weight adjustment of the qth demand charge associated with source s, and the max( ) term indicates the maximum amount of the resource purchased from source s at any time step i during the demand charge period. The variable r_{s,i} indicates the cost per unit of the resource purchased from source s and the variable purchase_{s,i} indicates the amount of the resource purchased from source s during the ith time step of the optimization period.

In some embodiments, the energy system in which high level optimizer632is implemented includes a battery asset (e.g., one or more batteries) configured to store and discharge electricity.
If the battery asset is the only type of energy storage, cost function module902can simplify the cost function J(x) to the following equation:

J(x) = -\sum_{i=k}^{k+h-1} r_{e,i} P_{bat,i} - \sum_{i=k}^{k+h-1} r_{FR,i} P_{FR,i} + \sum_{i=k}^{k+h-1} r_{s,i} \left| P_{bat,i} - P_{bat,i-1} \right| + w_d r_d \max_i \left( -P_{bat,i} + eLoad_i \right)

where h is the duration of the optimization horizon, P_{bat,i} is the amount of power (e.g., kW) discharged from the battery asset during the ith time step of the optimization horizon for use in reducing the amount of power purchased from an electric utility, r_{e,i} is the price of electricity (e.g., $/kWh) at time step i, P_{FR,i} is the battery power (e.g., kW) committed to frequency regulation participation during time step i, r_{FR,i} is the incentive rate (e.g., $/kWh) for participating in frequency regulation during time step i, r_d is the applicable demand charge (e.g., $/kW) associated with the maximum electricity consumption during the corresponding demand charge period, w_d is a weight adjustment of the demand charge over the horizon, and the max( ) term selects the maximum amount of electricity purchased from the electric utility (e.g., kW) during any time step i of the applicable demand charge period.

In the previous expression of the cost function J(x), the first term represents the cost savings resulting from the use of battery power to satisfy the electric demand of the facility relative to the cost which would have been incurred if the electricity were purchased from the electric utility. The second term represents the amount of revenue derived from participating in the frequency regulation program. The third term represents a switching penalty imposed for switching the battery power P_bat between consecutive time steps. The fourth term represents the demand charge associated with the maximum amount of electricity purchased from the electric utility. The amount of electricity purchased may be equal to the difference between the electric load of the facility eLoad_i (i.e., the total amount of electricity required) at time step i and the amount of power discharged from the battery asset P_{bat,i} at time step i. In a planning tool framework, historical data of the electric load eLoad over the horizon can be provided as a known input. In an operational mode, the electric load eLoad can be predicted for each time step of the optimization period.

Optimization Constraints

Still referring toFIG.9, high level optimizer632is shown to include a power constraints module904. Power constraints module904may be configured to impose one or more power constraints on the objective function J(x). In some embodiments, power constraints module904generates and imposes the following constraints:

P_{bat,i} + P_{FR,i} \le P_{eff}

-P_{bat,i} + P_{FR,i} \le P_{eff}

P_{bat,i} + P_{FR,i} \le eLoad_i

where P_{bat,i} is the amount of power discharged from the battery at time step i for use in satisfying electric demand and reducing the demand charge, P_{FR,i} is the amount of battery power committed to frequency regulation at time step i, P_{eff} is the effective power available (e.g., the maximum rate at which the battery can be charged or discharged), and eLoad_i is the total electric demand at time step i.

The first two power constraints ensure that the battery is not charged or discharged at a rate that exceeds the maximum battery charge/discharge rate P_{eff}. If the system includes photovoltaic (PV) power generation, the effective power available P_{eff} can be calculated as follows:

P_{eff} = P_{rated} - P_{PV\,FirmingReserve}

where P_{rated} is the rated capacity of the battery and P_{PV\,FirmingReserve} is the PV firming reserve power.
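For illustration, the simplified battery-only cost function introduced above can be evaluated term by term for a candidate allocation. The sketch below is not the optimizer itself; the load and rate arrays, and the single demand charge period, are made-up inputs.

```python
import numpy as np

def battery_cost(P_bat, P_FR, eLoad, r_e, r_FR, r_s, r_d, w_d, P_bat_prev=0.0):
    """Evaluate J(x) = -sum(r_e*P_bat) - sum(r_FR*P_FR)
                      + sum(r_s*|P_bat[i]-P_bat[i-1]|) + w_d*r_d*max(-P_bat+eLoad)."""
    energy_savings = np.sum(r_e * P_bat)                   # cost avoided by discharging the battery
    fr_revenue = np.sum(r_FR * P_FR)                       # frequency regulation revenue
    switching = np.sum(r_s * np.abs(np.diff(np.concatenate(([P_bat_prev], P_bat)))))
    demand_charge = w_d * r_d * np.max(-P_bat + eLoad)     # peak purchase over the horizon
    return -energy_savings - fr_revenue + switching + demand_charge

# Hypothetical 24-hour example
h = 24
rng = np.random.default_rng(0)
eLoad = 500 + 100 * rng.random(h)                          # kW
P_bat = 50 * np.sin(np.linspace(0, np.pi, h))
P_FR = np.full(h, 20.0)
J = battery_cost(P_bat, P_FR, eLoad, r_e=np.full(h, 0.10),
                 r_FR=np.full(h, 0.05), r_s=np.full(h, 0.01), r_d=15.0, w_d=1.0)
print(round(J, 2))
```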
The third power constraint ensures that energy stored in the battery is not sold or exported to the energy grid. In some embodiments, power constraints module904can remove the third power constraint if selling energy back to the energy grid is a desired feature or behavior of the system.

Still referring toFIG.9, high level optimizer632is shown to include a capacity constraints module906. Capacity constraints module906may be configured to impose one or more capacity constraints on the objective function J(x). The capacity constraints may be used to relate the battery power P_bat charged or discharged during each time step to the capacity and state-of-charge (SOC) of the battery. The capacity constraints may ensure that the SOC of the battery is maintained within acceptable lower and upper bounds and that sufficient battery capacity is available for frequency regulation. In some embodiments, the lower and upper bounds are based on the battery capacity needed to reserve the amount of power committed to frequency regulation P_{FR,i} during each time step i.

In some embodiments, capacity constraints module906generates two sets of capacity constraints. One set of capacity constraints may apply to the boundary condition at the end of each time step i, whereas the other set of capacity constraints may apply to the boundary condition at the beginning of the next time step i+1. For example, if a first amount of battery capacity is reserved for frequency regulation during time step i and a second amount of battery capacity is reserved for frequency regulation during time step i+1, the boundary point between time step i and i+1 may be required to satisfy the capacity constraints for both time step i and time step i+1. This ensures that the decisions made for the power committed to frequency regulation during the current time step i and the next time step i+1 represent a continuous change in the SOC of the battery. In some embodiments, capacity constraints module906generates the following capacity constraints:

\begin{cases} C_a - \sum_{n=k}^{i} P_{bat,n} \le C_{eff} - C_{FR} P_{FR,i} \\ C_a - \sum_{n=k}^{i} P_{bat,n} \ge C_{FR} P_{FR,i} \end{cases} \quad \forall i = k \ldots k+h-1

\begin{cases} C_a - \sum_{n=k}^{i} P_{bat,n} \le C_{eff} - C_{FR} P_{FR,i+1} \\ C_a - \sum_{n=k}^{i} P_{bat,n} \ge C_{FR} P_{FR,i+1} \end{cases} \quad \forall i = k \ldots k+h-2

where C_a is the available battery capacity (e.g., kWh), C_{FR} is the frequency regulation reserve capacity (e.g., kWh/kW) which translates the amount of battery power committed to frequency regulation P_FR into an amount of energy needed to be reserved, and C_{eff} is the effective capacity of the battery.

The first set of constraints ensures that the battery capacity at the end of each time step i (i.e., available capacity C_a minus the battery power discharged through time step i) is maintained between the lower capacity bound C_{FR} P_{FR,i} and the upper capacity bound C_{eff} - C_{FR} P_{FR,i} for time step i. The lower capacity bound C_{FR} P_{FR,i} represents the minimum capacity required to reserve P_{FR,i} for frequency regulation during time step i, whereas the upper capacity bound C_{eff} - C_{FR} P_{FR,i} represents the maximum capacity required to reserve P_{FR,i} for frequency regulation during time step i. Similarly, the second set of constraints ensures that the battery capacity at the end of each time step i (i.e., available capacity C_a minus the battery power discharged through time step i) is maintained between the lower capacity bound C_{FR} P_{FR,i+1} and the upper capacity bound C_{eff} - C_{FR} P_{FR,i+1} for time step i+1.
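The capacity constraints above can be assembled mechanically as rows of a linear program. The sketch below builds the first set (the boundary at the end of each time step) for a stacked decision vector [P_bat; P_FR]; the function name, indexing convention, and parameter values are illustrative assumptions, not the capacity constraints module's actual implementation.

```python
import numpy as np

def soc_capacity_constraints(h, C_a, C_eff, C_FR):
    """Build A_ub, b_ub rows for:
       C_a - sum_{n<=i} P_bat_n <= C_eff - C_FR*P_FR_i   and
       C_a - sum_{n<=i} P_bat_n >= C_FR*P_FR_i
       with decision vector x = [P_bat_0..P_bat_{h-1}, P_FR_0..P_FR_{h-1}]."""
    A, b = [], []
    for i in range(h):
        cum = np.zeros(2 * h)
        cum[:i + 1] = -1.0                      # -sum_{n=0..i} P_bat_n
        # Upper bound rewritten as: -sum P_bat + C_FR*P_FR_i <= C_eff - C_a
        row_u = cum.copy(); row_u[h + i] = C_FR
        A.append(row_u); b.append(C_eff - C_a)
        # Lower bound rewritten as: sum P_bat + C_FR*P_FR_i <= C_a
        row_l = -cum.copy(); row_l[h + i] = C_FR
        A.append(row_l); b.append(C_a)
    return np.array(A), np.array(b)

A_ub, b_ub = soc_capacity_constraints(h=4, C_a=200.0, C_eff=500.0, C_FR=1.0)
print(A_ub.shape, b_ub.shape)   # (8, 8) (8,)
```

The second set of constraints (the boundary at the beginning of time step i+1) could be generated the same way by indexing P_FR at i+1 for i = k . . . k+h-2.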
The lower capacity bound C_{FR} P_{FR,i+1} represents the minimum capacity required to reserve P_{FR,i+1} for frequency regulation during time step i+1, whereas the upper capacity bound C_{eff} - C_{FR} P_{FR,i+1} represents the maximum capacity required to reserve P_{FR,i+1} for frequency regulation during time step i+1.

In some embodiments, capacity constraints module906calculates the effective capacity of the battery C_{eff} as a percentage of the rated capacity of the battery. For example, if frequency regulation and photovoltaic power generation are both enabled and the SOC control margin is non-zero, capacity constraints module906can calculate the effective capacity of the battery C_{eff} using the following equation:

C_{eff} = (1 - C_{FR} - 2C_{socCM}) C_{rated} - C_{PV\,FirmingReserve}

where C_{socCM} is the control margin and C_{PV\,FirmingReserve} is the capacity reserved for photovoltaic firming.

Still referring toFIG.9, high level optimizer632is shown to include a switching constraints module908. Switching constraints module908may be configured to impose one or more switching constraints on the cost function J(x). As previously described, the cost function J(x) may include the following switching term:

\sum_{i=k}^{k+h-1} r_{s,i} \left| P_{bat,i} - P_{bat,i-1} \right|

which functions as a penalty for switching the battery power P_bat between consecutive time steps i and i-1. Notably, the switching term is nonlinear as a result of the absolute value function.

Switching constraints module908can impose constraints which represent the nonlinear switching term in a linear format. For example, switching constraints module908can introduce an auxiliary switching variable s_i and constrain the auxiliary switching variable to be greater than the difference between the battery power P_{bat,i} at time step i and the battery power P_{bat,i-1} at time step i-1, as shown in the following equations:

s_i > P_{bat,i} - P_{bat,i-1}
s_i > P_{bat,i-1} - P_{bat,i}
\forall i = k \ldots k+h-1

Switching constraints module908can replace the nonlinear switching term in the cost function J(x) with the following linearized term:

\sum_{i=k}^{k+h-1} r_{s,i} s_i

which can be optimized using any of a variety of linear optimization techniques (e.g., linear programming) subject to the constraints on the auxiliary switching variable s_i.

Demand Charge Incorporation

Still referring toFIG.9, high level optimizer632is shown to include a demand charge module910. Demand charge module910can be configured to modify the cost function J(x) and the optimization constraints to account for one or more demand charges. As previously described, demand charges are costs imposed by utilities510based on the peak consumption of a resource from utilities510during various demand charge periods (i.e., the peak amount of the resource purchased from the utility during any time step of the applicable demand charge period). For example, an electric utility may define one or more demand charge periods and may impose a separate demand charge based on the peak electric consumption during each demand charge period.

Electric energy storage can help reduce peak consumption by storing electricity in a battery when energy consumption is low and discharging the stored electricity from the battery when energy consumption is high, thereby reducing peak electricity purchased from the utility during any time step of the demand charge period. In some instances, one or more of the resources purchased from utilities510are subject to a demand charge or multiple demand charges. There are many types of potential demand charges, as there are different types of energy rate structures.
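Returning to the switching-penalty linearization described above, the technique can be exercised end to end with a small linear program. The sketch below uses scipy's linprog with invented prices and bounds; it simply maximizes energy savings minus the switching penalty by introducing the auxiliary variables s_i as extra columns, and is not the high level optimizer's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

h = 6
r_e = np.array([0.08, 0.09, 0.20, 0.22, 0.10, 0.07])   # $/kWh, made up
r_s = np.full(h, 0.02)                                  # switching penalty rates, made up
P_eff, P_bat_prev = 100.0, 0.0

# Decision vector x = [P_bat_0..P_bat_{h-1}, s_0..s_{h-1}]
c = np.concatenate((-r_e, r_s))          # minimize -sum(r_e*P_bat) + sum(r_s*s)

A_ub, b_ub = [], []
for i in range(h):
    up = np.zeros(2 * h); lo = np.zeros(2 * h)
    up[i] = 1.0; lo[i] = -1.0            # P_bat_i - P_bat_{i-1} <= s_i  and its mirror image
    if i > 0:
        up[i - 1] = -1.0; lo[i - 1] = 1.0
        rhs_u, rhs_l = 0.0, 0.0
    else:
        rhs_u, rhs_l = P_bat_prev, -P_bat_prev
    up[h + i] = -1.0; lo[h + i] = -1.0
    A_ub += [up, lo]; b_ub += [rhs_u, rhs_l]

bounds = [(-P_eff, P_eff)] * h + [(0, None)] * h
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
print(res.status, np.round(res.x[:h], 1))
```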
The most common energy rate structures are constant pricing, time of use (TOU), and real time pricing (RTP). Each demand charge may be associated with a demand charge period during which the demand charge is active. Demand charge periods can overlap partially or completely with each other and/or with the optimization period. Demand charge periods can include relatively long periods (e.g., monthly, seasonal, annual, etc.) or relatively short periods (e.g., days, hours, etc.). Each of these periods can be divided into several sub-periods including off-peak, partial-peak, and/or on-peak. Some demand charge periods are continuous (e.g., beginning Jan. 1, 2017 and ending Jan. 31, 2017), whereas other demand charge periods are non-continuous (e.g., from 11:00 AM-1:00 PM each day of the month).

Over a given optimization period, some demand charges may be active during some time steps that occur within the optimization period and inactive during other time steps that occur during the optimization period. Some demand charges may be active over all the time steps that occur within the optimization period. Some demand charges may apply to some time steps that occur during the optimization period and other time steps that occur outside the optimization period (e.g., before or after the optimization period). In some embodiments, the durations of the demand charge periods are significantly different from the duration of the optimization period.

Advantageously, demand charge module910may be configured to account for demand charges in the high level optimization process performed by high level optimizer632. In some embodiments, demand charge module910incorporates demand charges into the optimization problem and the cost function J(x) using demand charge masks and demand charge rate weighting factors. Each demand charge mask may correspond to a particular demand charge and may indicate the time steps during which the corresponding demand charge is active and/or the time steps during which the demand charge is inactive. Each rate weighting factor may also correspond to a particular demand charge and may scale the corresponding demand charge rate to the time scale of the optimization period.

As described above, the demand charge term of the cost function J(x) can be expressed as:

J(x) = \ldots \sum_{s \in sources} \sum_{q \in demands_s} w_{demand,s,q} \, r_{demand,s,q} \max_{i \in demand_{s,q}} (purchase_{s,i}) \ldots

where the max( ) function selects the maximum amount of the resource purchased from source s during any time step i that occurs during the optimization period. However, the demand charge period associated with demand charge q may not cover all of the time steps that occur during the optimization period. In order to apply the demand charge q to only the time steps during which the demand charge q is active, demand charge module910can add a demand charge mask to the demand charge term as shown in the following equation:

J(x) = \ldots \sum_{s \in sources} \sum_{q \in demands_s} w_{demand,s,q} \, r_{demand,s,q} \max_{i \in demand_{s,q}} (g_{s,q,i} \, purchase_{s,i}) \ldots

where g_{s,q,i} is an element of the demand charge mask. The demand charge mask may be a logical vector including an element g_{s,q,i} for each time step i that occurs during the optimization period. Each element g_{s,q,i} of the demand charge mask may include a binary value (e.g., a one or zero) that indicates whether the demand charge q for source s is active during the corresponding time step i of the optimization period.
For example, the element g_{s,q,i} may have a value of one (i.e., g_{s,q,i} = 1) if demand charge q is active during time step i and a value of zero (i.e., g_{s,q,i} = 0) if demand charge q is inactive during time step i. An example of a demand charge mask is shown in the following equation:

g_{s,q} = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1]^T

where g_{s,q,1}, g_{s,q,2}, g_{s,q,3}, g_{s,q,8}, g_{s,q,9}, and g_{s,q,10} have values of zero, whereas g_{s,q,4}, g_{s,q,5}, g_{s,q,6}, g_{s,q,7}, g_{s,q,11}, and g_{s,q,12} have values of one. This indicates that the demand charge q is inactive during time steps i = 1, 2, 3, 8, 9, 10 (i.e., g_{s,q,i} = 0 ∀i = 1, 2, 3, 8, 9, 10) and active during time steps i = 4, 5, 6, 7, 11, 12 (i.e., g_{s,q,i} = 1 ∀i = 4, 5, 6, 7, 11, 12). Accordingly, the term g_{s,q,i} purchase_{s,i} within the max( ) function may have a value of zero for all time steps during which the demand charge q is inactive. This causes the max( ) function to select the maximum purchase from source s that occurs during only the time steps for which the demand charge q is active.

In some embodiments, demand charge module910calculates the weighting factor w_{demand,s,q} for each demand charge q in the cost function J(x). The weighting factor w_{demand,s,q} may be a ratio of the number of time steps the corresponding demand charge q is active during the optimization period to the number of time steps the corresponding demand charge q is active in the remaining demand charge period (if any) after the end of the optimization period. For example, demand charge module910can calculate the weighting factor w_{demand,s,q} using the following equation:

w_{demand,s,q} = \frac{\sum_{i=k}^{k+h-1} g_{s,q,i}}{\sum_{i=k+h}^{period\_end} g_{s,q,i}}

where the numerator is the summation of the number of time steps the demand charge q is active in the optimization period (i.e., from time step k to time step k+h-1) and the denominator is the number of time steps the demand charge q is active in the portion of the demand charge period that occurs after the optimization period (i.e., from time step k+h to the end of the demand charge period).

The following example illustrates how demand charge module910can incorporate multiple demand charges into the cost function J(x). In this example, a single source of electricity (e.g., an electric grid) is considered with multiple demand charges applicable to the electricity source (i.e., q = 1 . . . N, where N is the total number of demand charges). The system includes a battery asset which can be allocated over the optimization period by charging or discharging the battery during various time steps. Charging the battery increases the amount of electricity purchased from the electric grid, whereas discharging the battery decreases the amount of electricity purchased from the electric grid. Demand charge module910can modify the cost function J(x) to account for the N demand charges as shown in the following equation:

J(x) = \ldots + w_{d_1} r_{d_1} \max_i \big( g_{1,i} (-P_{bat,i} + eLoad_i) \big) + \ldots + w_{d_q} r_{d_q} \max_i \big( g_{q,i} (-P_{bat,i} + eLoad_i) \big) + \ldots + w_{d_N} r_{d_N} \max_i \big( g_{N,i} (-P_{bat,i} + eLoad_i) \big)

where the term -P_{bat,i} + eLoad_i represents the total amount of electricity purchased from the electric grid during time step i (i.e., the total electric load eLoad_i minus the power discharged from the battery P_{bat,i}). Each demand charge q = 1 . . . N can be accounted for separately in the cost function J(x) by including a separate max( ) function for each of the N demand charges. The parameter r_{d_q} indicates the demand charge rate associated with the qth demand charge (e.g., $/kW) and the weighting factor w_{d_q} indicates the weight applied to the qth demand charge.
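The mask and weighting-factor arithmetic described above is straightforward to reproduce. The short sketch below uses the twelve-element example mask, made-up purchase data, and an assumed count of remaining active steps after the optimization period.

```python
import numpy as np

def demand_charge_weight(mask_horizon, mask_remaining):
    """w = (active steps in the optimization period) / (active steps remaining
    in the demand charge period after the optimization period)."""
    denom = np.sum(mask_remaining)
    return np.sum(mask_horizon) / denom if denom else 1.0

# 12-step example mask from the text: active at i = 4..7 and 11..12 (1-indexed)
g = np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1])
purchases = np.array([40, 42, 45, 60, 80, 75, 70, 50, 44, 41, 65, 90])   # kW, made up
masked_peak = np.max(g * purchases)                        # peak over active steps only
w = demand_charge_weight(g, mask_remaining=np.ones(24))    # 24 remaining active steps, assumed
print(masked_peak, round(w, 3))                            # 90 0.25
```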
Demand charge module910can augment each max( ) function with an element g_{q,i} of the demand charge mask for the corresponding demand charge. Each demand charge mask may be a logical vector of binary values which indicates whether the corresponding demand charge is active or inactive at each time step i of the optimization period. Accordingly, each max( ) function may select the maximum electricity purchase during only the time steps the corresponding demand charge is active. Each max( ) function can be multiplied by the corresponding demand charge rate r_{d_q} and the corresponding demand charge weighting factor w_{d_q} to determine the total demand charge resulting from the battery allocation P_bat over the duration of the optimization period.

In some embodiments, demand charge module910linearizes the demand charge terms of the cost function J(x) by introducing an auxiliary variable d_q for each demand charge q. In the case of the previous example, this will result in N auxiliary variables d_1 . . . d_N being introduced as decision variables in the cost function J(x). Demand charge module910can modify the cost function J(x) to include the linearized demand charge terms as shown in the following equation:

J(x) = \ldots + w_{d_1} r_{d_1} d_1 + \ldots + w_{d_q} r_{d_q} d_q + \ldots + w_{d_N} r_{d_N} d_N

Demand charge module910can impose the following constraints on the auxiliary demand charge variables d_1 . . . d_N to ensure that each auxiliary demand charge variable represents the maximum amount of electricity purchased from the electric utility during the applicable demand charge period:

d_1 \ge g_{1,i} (-P_{bat,i} + eLoad_i) \quad \forall i = k \ldots k+h-1, \; g_{1,i} \ne 0
d_1 \ge 0
\vdots
d_q \ge g_{q,i} (-P_{bat,i} + eLoad_i) \quad \forall i = k \ldots k+h-1, \; g_{q,i} \ne 0
d_q \ge 0
\vdots
d_N \ge g_{N,i} (-P_{bat,i} + eLoad_i) \quad \forall i = k \ldots k+h-1, \; g_{N,i} \ne 0
d_N \ge 0

In some embodiments, the number of constraints corresponding to each demand charge q is dependent on how many time steps the demand charge q is active during the optimization period. For example, the number of constraints for the demand charge q may be equal to the number of non-zero elements of the demand charge mask g_q. Furthermore, the value of the auxiliary demand charge variable d_q at each iteration of the optimization may act as the lower bound of the value of the auxiliary demand charge variable d_q at the following iteration.

Consider the following example of a multiple demand charge structure. In this example, an electric utility imposes three monthly demand charges. The first demand charge is an all-time monthly demand charge of 15.86 $/kW which applies to all hours within the entire month. The second demand charge is an on-peak monthly demand charge of 1.56 $/kW which applies each day from 12:00-18:00. The third demand charge is a partial-peak monthly demand charge of 0.53 $/kW which applies each day from 9:00-12:00 and from 18:00-22:00.

For an optimization period of one day and a time step of one hour (i.e., i = 1 . . . 24), demand charge module910may introduce three auxiliary demand charge variables. The first auxiliary demand charge variable d_1 corresponds to the all-time monthly demand charge; the second auxiliary demand charge variable d_2 corresponds to the on-peak monthly demand charge; and the third auxiliary demand charge variable d_3 corresponds to the partial-peak monthly demand charge. Demand charge module910can constrain each auxiliary demand charge variable to be greater than or equal to the maximum electricity purchase during the hours the corresponding demand charge is active, using the inequality constraints described above.
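The auxiliary-variable constraints can be generated mechanically from the masks. The sketch below builds the corresponding A_ub rows for the three-demand-charge example, with a hypothetical flat electric load; variable names and the decision-vector layout are illustrative assumptions.

```python
import numpy as np

def demand_charge_rows(masks, eLoad, h):
    """Rows of A_ub x <= b_ub enforcing d_q >= g_q,i*(-P_bat_i + eLoad_i)
    for every active step i, with x = [P_bat_0..P_bat_{h-1}, d_1..d_N]."""
    N = len(masks)
    A, b = [], []
    for q, g in enumerate(masks):
        for i in range(h):
            if g[i] == 0:
                continue                       # one constraint per non-zero mask element
            row = np.zeros(h + N)
            row[i] = -1.0                      # -P_bat_i
            row[h + q] = -1.0                  # ... - d_q <= -eLoad_i
            A.append(row)
            b.append(-eLoad[i])
    return np.array(A), np.array(b)

h = 24
eLoad = np.full(h, 400.0)                      # kW, made up
g1 = np.ones(h)                                # all-time
g2 = np.zeros(h); g2[12:18] = 1                # on-peak, 12:00-18:00
g3 = np.zeros(h); g3[9:12] = 1; g3[18:22] = 1  # partial-peak
A_ub, b_ub = demand_charge_rows([g1, g2, g3], eLoad, h)
print(A_ub.shape)                              # (37, 27)
```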
Demand charge module910can generate a demand charge mask g_q for each of the three demand charges (i.e., q = 1 . . . 3), where g_q includes an element for each time step of the optimization period (i.e., g_q = [g_{q,1} . . . g_{q,24}]). The three demand charge masks can be defined as follows:

g_{1,i} = 1 \quad \forall i = 1 \ldots 24
g_{2,i} = 1 \quad \forall i = 12 \ldots 18
g_{3,i} = 1 \quad \forall i = 9 \ldots 12, 18 \ldots 22

with all other elements of the demand charge masks equal to zero. In this example, it is evident that more than one demand charge constraint will be active during the hours which overlap with multiple demand charge periods. Also, the weight of each demand charge over the optimization period can vary based on the number of hours the demand charge is active, as previously described.

In some embodiments, demand charge module910considers several different demand charge structures when incorporating multiple demand charges into the cost function J(x) and optimization constraints. Demand charge structures can vary from one utility to another, or the utility may offer several demand charge options. In order to incorporate the multiple demand charges within the optimization framework, a generally-applicable framework can be defined as previously described. Demand charge module910can translate any demand charge structure into this framework. For example, demand charge module910can characterize each demand charge by rates, demand charge period start, demand charge period end, and active hours. Advantageously, this allows demand charge module910to incorporate multiple demand charges in a generally-applicable format.

The following is another example of how demand charge module910can incorporate multiple demand charges into the cost function J(x). Consider, for example, monthly demand charges with all-time, on-peak, partial-peak, and off-peak rates. In this case, there are four demand charge structures, where each demand charge is characterized by twelve monthly rates, twelve demand charge period starts (e.g., the beginning of each month), twelve demand charge period ends (e.g., the end of each month), and an hoursActive vector. The hoursActive is a logical vector where the hours over a year where the demand charge is active are set to one. When running the optimization over a given horizon, demand charge module910can implement the applicable demand charges using the hoursActive mask, the relevant period, and the corresponding rate.

In the case of an annual demand charge, demand charge module910can set the demand charge period start and period end to the beginning and end of a year. For the annual demand charge, demand charge module910can apply a single annual rate. The hoursActive demand charge mask can represent the hours during which the demand charge is active. For an annual demand charge, if there is an all-time, on-peak, partial-peak, and/or off-peak rate, this translates into at most four annual demand charges with the same period start and end, but different hoursActive and different rates.

In the case of a seasonal demand charge (e.g., a demand charge for which the maximum peak is determined over the indicated season period), demand charge module910can represent the demand charge as an annual demand charge. Demand charge module910can set the demand charge period start and end to the beginning and end of a year. Demand charge module910can set the hoursActive to one during the hours which belong to the season and to zero otherwise.
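Building a seasonal hoursActive mask of the kind described above is a simple indexing exercise. The sketch below is illustrative only; the season boundaries and the optional daily peak window are assumed values.

```python
import numpy as np

def seasonal_hours_active(season_start_hour, season_end_hour, peak_hours=None):
    """Build an 8760-element hoursActive mask: 1 for hours inside the season
    (and, optionally, inside a daily peak window), 0 otherwise (illustrative)."""
    mask = np.zeros(8760, dtype=int)
    mask[season_start_hour:season_end_hour] = 1
    if peak_hours is not None:                      # e.g., range(12, 18) for an on-peak variant
        hour_of_day = np.arange(8760) % 24
        mask &= np.isin(hour_of_day, list(peak_hours)).astype(int)
    return mask

# Hypothetical summer season spanning hours 3624-6552 with a 12:00-18:00 on-peak window
summer_on_peak = seasonal_hours_active(3624, 6552, peak_hours=range(12, 18))
print(summer_on_peak.sum())    # 732 active hours in this example
```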
For a seasonal demand charge, if there is an all-time, on-peak, partial-peak, and/or off-peak rate, this translates into at most four seasonal demand charges with the same period start and end, but different hoursActive and different rates. In the case of a demand charge based on the average of the maximum of the current month and the average of the maxima of the eleven previous months, demand charge module910can translate the demand charge structure into a monthly demand charge and an annual demand charge. The rate of the monthly demand charge may be half of the given monthly rate and the annual rate may be the sum of the given monthly rates divided by two. These and other features of demand charge module910are described in greater detail in U.S. patent application Ser. No. 15/405,236 filed Jan. 12, 2017, the entire disclosure of which is incorporated by reference herein.

Incentive Program Incorporation

Referring again toFIG.9, high level optimizer632is shown to include an incentive program module912. Incentive program module912may modify the optimization problem to account for revenue from participating in an incentive-based demand response (IBDR) program. IBDR programs may include any type of incentive-based program that provides revenue in exchange for resources (e.g., electric power) or a reduction in a demand for such resources. For example, energy storage system500may provide electric power to an energy grid or an independent service operator as part of a frequency response program (e.g., PJM frequency response) or a synchronized reserve market. In a frequency response program, a participant contracts with an electrical supplier to maintain reserve power capacity that can be supplied or removed from an energy grid by tracking a supplied signal. The participant is paid based on the amount of power capacity it is required to maintain in reserve. In other types of IBDR programs, energy storage system500may reduce its demand for resources from a utility as part of a load shedding program. It is contemplated that energy storage system500may participate in any number and/or type of IBDR programs.

In some embodiments, incentive program module912modifies the cost function J(x) to include revenue generated from participating in an economic load demand response (ELDR) program. ELDR is a type of IBDR program and similar to frequency regulation. In ELDR, the objective is to maximize the revenue generated by the program, while using the battery to participate in other programs and to perform demand management and energy cost reduction. To account for ELDR program participation, incentive program module912can modify the cost function J(x) to include the following term:

\min_{b_i, P_{bat,i}} \left( -\sum_{i=k}^{k+h-1} b_i \, r_{ELDR,i} \big( adjCBL_i - (eLoad_i - P_{bat,i}) \big) \right)

where b_i is a binary decision variable indicating whether to participate in the ELDR program during time step i, r_{ELDR,i} is the ELDR incentive rate at which participation is compensated, and adjCBL_i is the symmetric additive adjustment (SAA) on the baseline load. The previous expression can be rewritten as:

\min_{b_i, P_{bat,i}} \left( -\sum_{i=k}^{k+h-1} b_i \, r_{ELDR,i} \left( \sum_{l=1}^{4} \frac{e_{l,i}}{4} + \sum_{p=m-4}^{m-2} \frac{1}{3} \left( eLoad_p - P_{bat,p} - \sum_{l=1}^{4} \frac{e_{l,p}}{4} \right) - (eLoad_i - P_{bat,i}) \right) \right)

where e_{l,i} and e_{l,p} are the electric loads at the lth hour of the operating day.

In some embodiments, incentive program module912handles the integration of ELDR into the optimization problem as a bilinear problem with two multiplicative decision variables. In order to linearize the cost function J(x) and customize the ELDR problem to the optimization framework, several assumptions may be made.
For example, incentive program module912can assume that ELDR participation is only in the real-time market, balancing operating reserve charges and make whole payments are ignored, day-ahead prices are used over the horizon, real-time prices are used in calculating the total revenue from ELDR after the decisions are made by the optimization algorithm, and the decision to participate in ELDR is made in advance and passed to the optimization algorithm based on which the battery asset is allocated.

In some embodiments, incentive program module912calculates the participation vector b_i as follows:

b_i = \begin{cases} 1 & \forall i / r_{DA,i} \ge NBT_i \text{ and } i \in S \\ 0 & \text{otherwise} \end{cases}

where r_{DA,i} is the hourly day-ahead price at the ith hour, NBT_i is the net benefits test value corresponding to the month to which the corresponding hour belongs, and S is the set of nonevent days. Nonevent days can be determined for the year by choosing to participate every x number of days with the highest day-ahead prices out of y number of days for a given day type. This approach may ensure that there are nonevent days in the 45 days prior to a given event day when calculating the CBL for the event day.

Given these assumptions and the approach taken by incentive program module912to determine when to participate in ELDR, incentive program module912can adjust the cost function J(x) as follows:

J(x) = -\sum_{i=k}^{k+h-1} r_{e,i} P_{bat,i} - \sum_{i=k}^{k+h-1} r_{FR,i} P_{FR,i} + \sum_{i=k}^{k+h-1} r_{s,i} s_i + w_d r_d d - \sum_{i=k}^{k+h-1} b_i \, r_{DA,i} \left( \sum_{p=m-4}^{m-2} -\frac{1}{3} P_{bat,p} + P_{bat,i} \right)

where b_i and m are known over a given horizon. The resulting term corresponding to ELDR shows that the rates at the ith participation hour are doubled and those corresponding to the SAA are lowered. This means it is expected that high level optimizer632will tend to charge the battery during the SAA hours and discharge the battery during the participation hours. Notably, even though a given hour is set to be an ELDR participation hour, high level optimizer632may not decide to allocate any of the battery asset during that hour. This is due to the fact that it may be more beneficial at that instant to participate in another incentive program or to perform demand management.

Peak Load Contribution Incorporation

Still referring toFIG.9, high level optimizer632is shown to include a peak load contribution module914. Peak load contribution (PLC) is a customer's contribution to regional demand peaks that occur in a geographic area managed by a regional transmission organization (RTO) or independent system operator (ISO) at certain hours within a base period. The regional demand at a given hour may be the summation of the customer's demand during that hour (i.e., the rate at which the customer purchases electricity or another resource from a utility) and the demand of other buildings in the geographic area during that hour. The customer may be billed based on its contribution to the peak regional demand (e.g., $/kW of the customer's PLC) in addition to the energy consumption charges and demand charges previously described.

PLC module914can be configured to modify the cost function J(x) to account for a cost associated with the customer's PLC. By incorporating PLC costs into the cost function J(x), PLC module914enables high level optimizer632to allocate resource consumption and resource purchases to reduce the customer's PLC. High level optimizer632can reduce PLC costs by shifting the customer's load to non-peak times or shaving the customer's peak load.
This can be done, for example, by precooling the building during non-peak times, using thermal energy storage, and/or using electrical energy storage such as a battery asset.

Accounting for the cost associated with the customer's PLC can be more difficult than accounting for energy consumption costs and demand charges. Unlike a demand charge, which is calculated based on the customer's maximum demand during predetermined demand charge periods, the hours over which PLC is calculated may not be known in advance. The hours of peak regional demand (i.e., the coincidental peak (CP) hours) may not be known until the end of the base period over which PLC is calculated. For example, the CP hours for a given base period (e.g., one year) may be determined by an RTO at the end of the base period based on the demand of all the buildings within the geographic area managed by the RTO during the base period (e.g., by selecting the hours with the highest regional demand). The customer's PLC may then be determined based on the customer's demand during the designated CP hours and used to calculate a cost of the customer's PLC. This cost may then be billed to the customer during the next time period (e.g., the next year), referred to as the billing period.

Another difficulty in accounting for PLC costs is that the base period, billing period, CP hours, and other factors used to calculate the PLC cost may differ from one RTO to another. For example, an RTO for the Pennsylvania, New Jersey, and Maryland (PJM) geographic area may define the base period (i.e., the peak-setting period) as June 1st of year Y to May 31st of year Y+1. The billing period (i.e., the delivery period) may be defined as June 1st of year Y+1 to May 31st of year Y+2. PJM may define the CP hours as the five hours with the highest loads over the five highest peak load days across the PJM geographic region.

A customer's PLC in the PJM region may be calculated as the product of the customer's average electric load during the five CP hours and a capacity loss factor (CLF), as shown in the following equation:

PLC_{customer} = CLF \times \frac{\sum_{i=1}^{5} eLoad_{cp,i}}{5}

where PLC_{customer} is the customer's peak load contribution calculated during year Y, CLF is the capacity loss factor (e.g., CLF = 1.05), and eLoad_{cp,i} is the customer's electric load (e.g., kW) during the ith CP hour.

The customer's PLC cost in the PJM region can be calculated as the product of the customer's PLC during year Y and a PLC rate, as shown in the following equation:

PLC_{cost} = r_{PLC} \times PLC_{customer}

where PLC_{cost} is the customer's PLC charge billed over the delivery year Y+1 (e.g., $) and r_{PLC} is the rate at which the customer is charged for its PLC (e.g., $/kW).

An additional complication in the PJM region relates to the interaction between PLC costs and economic load demand response (ELDR) revenue. In some embodiments, a customer participating in ELDR in the PJM region during one of the CP hours may be prohibited from reducing its PLC while earning ELDR revenue at the same time. Accordingly, a customer wishing to reduce its load during an assumed CP hour for the purpose of reducing its capacity, transmission, and/or demand charge costs may be restricted from making a bid for the same assumed CP hour in the ELDR market.

Another example of an organization which imposes PLC costs is the independent electricity system operator (IESO) in Ontario, Canada. Relative to PJM, IESO may use a different base period, billing period, CP hours, and other factors used to calculate the PLC cost.
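Before turning to the IESO variant, the PJM-style PLC arithmetic above can be reproduced in a few lines. The CP-hour loads and the PLC rate below are hypothetical values chosen only for illustration.

```python
def pjm_plc_cost(cp_loads_kw, clf=1.05, r_plc=50.0):
    """Customer PLC = CLF * average load over the five CP hours;
    PLC cost = r_PLC * PLC (the CLF and rate here are assumed, not tariff data)."""
    assert len(cp_loads_kw) == 5, "PJM uses the five coincident peak hours"
    plc = clf * sum(cp_loads_kw) / 5.0
    return plc, r_plc * plc

plc_kw, cost = pjm_plc_cost([1000.0, 1020.0, 980.0, 1010.0, 990.0])
print(round(plc_kw, 1), round(cost, 2))   # ~1050.0 kW and the associated annual charge
```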
For example, IESO may define the base period or peak-setting period as May 1st of year Y to April 30th of year Y+1. The billing period or adjustment period for IESO may be defined as July 1st of year Y+1 to June 30th of year Y+2. IESO may define the CP hours as the five hours with the highest regional demands across the IESO geographic region.

At the end of the base period, IESO may calculate the customer's peak demand factor (θ_PDF). The peak demand factor may be defined as the ratio of the sum of the customer's peak demands to the sum of the region-wide demand peaks during the five CP hours, as shown in the following equation:

\theta_{PDF} = \frac{\sum_{i=1}^{5} eLoad_{cp,i}}{\sum_{i=1}^{5} sysLoad_{cp,i}}

where sysLoad_{cp,i} is the region-wide peak load during the ith CP hour and eLoad_{cp,i} is the customer's peak load during the ith CP hour.

The customer's PLC cost in the IESO region is known as a global adjustment (GA) charge. The GA charge may be imposed as a monthly charge during the billing period. In some embodiments, the GA charge is calculated by multiplying the customer's peak demand factor by the monthly region-wide global adjustment costs, as shown in the following equation:

GA_{cost,month} = \theta_{PDF} \times GA_{total,month}

where GA_{cost,month} is the customer's monthly PLC cost (e.g., $) and GA_{total,month} is the region-wide global adjustment cost (e.g., $). The value of GA_{total,month} may be specified by IESO. In some embodiments, GA_{total,month} has a known value. In other embodiments, the value of GA_{total,month} may not be known until the end of the base period.

In order to incorporate PLC costs into the cost function J(x) and allocate resource consumption/purchases in advance, PLC module914can generate or obtain a projection of the CP hours for an upcoming base period. The projected CP hours can then be used by high level optimizer632as an estimate of the actual CP hours. High level optimizer632can use the projected CP hours to allocate one or more assets (e.g., a battery, thermal energy storage, HVAC equipment, etc.) to minimize the customer's demand during the projected CP hours. These and other features of PLC module914are described in greater detail in U.S. patent application Ser. No. 15/405,234 filed Jan. 12, 2017, the entire disclosure of which is incorporated by reference herein.

Asset Sizing Incorporation

Still referring toFIG.9, high level optimizer632is shown to include an asset sizing module916. Asset sizing module916can be configured to determine the optimal sizes of various assets in a building, group of buildings, or a central plant. Assets can include individual pieces of equipment or groups of equipment. For example, assets can include boilers, chillers, heat recovery chillers, steam generators, electrical generators, thermal energy storage tanks, batteries, air handling units, or other types of equipment in a building or a central plant (e.g., HVAC equipment, BMS equipment, etc.). In some embodiments, assets include collections of equipment which form a subplant of a central plant (e.g., central plant118). For example, assets can include heater subplant521, chiller subplant522, heat recovery chiller subplant523, steam subplant524, electricity subplant525, or any other type of generator subplant520. In some embodiments, assets include hot thermal energy storage531(e.g., one or more hot water storage tanks), cold thermal energy storage532(e.g., one or more cold thermal energy storage tanks), electrical energy storage533(e.g., one or more batteries), or any other type of storage subplant530.
Asset sizes can include a maximum loading of the asset and/or a maximum capacity of the asset. Some assets such as storage subplants530may have both a maximum loading and a maximum capacity. For example, battery assets may have a maximum battery power (e.g., a maximum rate at which the battery can be charged or discharged) and a maximum state-of-charge (e.g., a maximum energy storage of the battery). Similarly, thermal energy storage assets may have a maximum charge/discharge rate and a maximum capacity (e.g., maximum fluid storage, etc.). Other assets such as generator subplants520may have only a maximum loading. For example, a chiller may have a maximum rate at which the chiller can produce cold thermal energy. Similarly, an electric generator may have a maximum rate at which the generator can produce electricity. Asset sizing module916can be configured to determine the maximum loading and/or the maximum capacity of an asset when determining the optimal size of the asset.

In some embodiments, asset sizing module916is implemented as a component of planning tool702. In the planning tool framework, asset sizing module916can determine the optimal size of an asset for a given application. For example, consider the planning problem described with reference toFIGS.7-8in which the high level optimization is solved at a given time instant k over a given time horizon h. With each iteration of the high level optimization, the time horizon h can be shifted forward by a block size equivalent to b time steps and the first b sets of decision variables may be retained. In such a planning problem, the sizes of the assets to be optimally allocated are typically given along with historical load data, utility pricing, and other relevant data. However, there are many cases in which the sizes of the assets to be allocated are unknown. For example, when purchasing a new asset for a given application (e.g., adding thermal energy storage or electrical energy storage to a building or central plant), a user may wish to determine the optimal size of the asset to purchase.

Asset sizing module916can be configured to determine the optimal size of an asset by considering the potential benefits and costs of the asset. Potential benefits can include, for example, reduced energy costs, reduced demand charges, reduced PLC charges, and/or increased revenue from participating in IBDR programs such as frequency regulation (FR) or economic load demand response (ELDR). Potential costs can include fixed costs (e.g., an initial purchase cost of the asset) as well as marginal costs (e.g., ongoing costs of using the asset) over the time horizon. The potential benefits and costs of an asset may vary based on the application of the asset and/or the system in which the asset will be used. For example, a system that participates in FR programs may realize the benefit of increased IBDR revenue, whereas a system that does not participate in any IBDR programs may not realize such a benefit.

Some of the benefits and costs of an asset may be captured by the original cost function J(x). For example, the cost function J(x) may include terms corresponding to energy cost, multiple demand charges, PLC charges, and/or IBDR revenue, as previously described. Adding one or more new assets may affect the values of some or all of these terms in the original cost function J(x). For example, adding a battery asset may increase IBDR revenue and decrease energy cost, demand charges, and PLC charges.
However, the original cost function J(x) may not account for the fixed and marginal costs resulting from new asset purchases. In order to account for these fixed and marginal costs, asset sizing module916may add new terms to the original cost function J(x).

Asset sizing module916can be configured to augment the cost function J(x) with two new terms that correspond to the cost of purchasing the new assets, resulting in an augmented cost function J_a(x). The additional terms are shown in the following equation:

J_a(x) = J(x) + c_f^T v + c_s^T s_a

where J(x) is the original cost function, x is the vector of decision variables of the optimization problem over the horizon, c_f is a vector of fixed costs of buying any size of asset (e.g., one element for each potential asset purchase), v is a vector of binary decision variables that indicate whether the corresponding assets are purchased, c_s is a vector of marginal costs per unit of asset size (e.g., cost per unit loading, cost per unit capacity), and s_a is a vector of continuous decision variables corresponding to the asset sizes. Advantageously, the binary purchase decisions and asset size decisions are treated as decision variables which can be optimized along with the decision variables in the vector x. This allows high level optimizer632to perform a single optimization to determine optimal values for all of the decision variables in the augmented cost function J_a(x).

In some embodiments, asset sizing module916scales the asset purchase costs c_f^T v and c_s^T s_a to the duration of the optimization period h. The cost of purchasing an asset is typically paid over an entire payback period SPP, whereas the operational cost is only incurred over the optimization period h. In order to scale the asset purchase costs to the optimization period, asset sizing module916can multiply the terms c_f^T v and c_s^T s_a by the ratio h/(8760 · SPP), as shown in the following equation:

J_a(x) = J(x) + \frac{h}{8760 \cdot SPP} \left( c_f^T v + c_s^T s_a \right)

where h is the duration of the optimization period in hours, SPP is the duration of the payback period in years, and 8760 is the number of hours in a year.

High level optimizer632can perform an optimization process to determine the optimal values of each of the binary decision variables in the vector v and each of the continuous decision variables in the vector s_a. In some embodiments, high level optimizer632uses linear programming (LP) or mixed integer linear programming (MILP) to optimize a financial metric such as net present value (NPV), simple payback period (SPP), or internal rate of return (IRR). Each element of the vectors c_f, v, c_s, and s_a may correspond to a particular asset and/or a particular asset size. Accordingly, high level optimizer632can determine the optimal assets to purchase and the optimal sizes to purchase by identifying the optimal values of the binary decision variables in the vector v and the continuous decision variables in the vector s_a. These and other features of asset sizing module916are described in greater detail in U.S. patent application Ser. No. 15/426,962 filed Feb. 7, 2017, the entire disclosure of which is incorporated by reference herein.

Maintenance Contracts Incorporation

Still referring toFIG.9, high level optimizer632is shown to include a maintenance contracts module918. Maintenance contracts module918can be configured to modify the cost function J(x) to account for a cost associated with maintenance contracts for various assets in a building, group of buildings, or a central plant.
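Returning to the asset purchase scaling above, a small helper illustrates the h/(8760·SPP) factor. The cost figures below are placeholders, not equipment pricing, and the helper is a sketch rather than the asset sizing module's interface.

```python
def scaled_purchase_cost(fixed_costs, buy_flags, marginal_costs, sizes, h, spp_years):
    """Augmentation term (h / (8760*SPP)) * (c_f^T v + c_s^T s_a) added to J(x)."""
    capital = sum(cf * v for cf, v in zip(fixed_costs, buy_flags)) \
            + sum(cs * s for cs, s in zip(marginal_costs, sizes))
    return h / (8760.0 * spp_years) * capital

# One hypothetical battery: $100k fixed cost plus $300/kWh, sized at 500 kWh,
# scaled to a one-week (168 h) optimization period and a 10-year payback period
term = scaled_purchase_cost([100_000.0], [1], [300.0], [500.0], h=168, spp_years=10)
print(round(term, 2))   # 479.45
```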
Assets can include individual pieces of equipment or groups of equipment. For example, assets can include boilers, chillers, heat recovery chillers, steam generators, electrical generators, thermal energy storage tanks, batteries, air handling units, or other types of equipment in a building or a central plant (e.g., HVAC equipment, BMS equipment, etc.). In some embodiments, assets include collections of equipment which form a subplant of a central plant (e.g., central plant118). For example, assets can include heater subplant521, chiller subplant522, heat recovery chiller subplant523, steam subplant524, electricity subplant525, or any other type of generator subplant520. In some embodiments, assets include hot thermal energy storage531(e.g., one or more hot water storage tanks), cold thermal energy storage532(e.g., one or more cold thermal energy storage tanks), electrical energy storage533(e.g., one or more batteries), or any other type of storage subplant530.

Maintenance contracts module918can be configured to modify the cost function J(x) to include a maintenance cost term. The maintenance cost term may account for an economic or monetary cost (e.g., dollars) of performing maintenance on the assets. An example of a modified cost function J_a(x) which can be generated by maintenance contracts module918is shown in the following equation:

J_a(x) = J(x) + \sum_{k \in horizon} c_{hourly} \, b_k

where J(x) is the original cost function, c_hourly is the hourly cost of maintenance, and b_k is a binary variable representing the on/off state of the asset at hour k of the optimization period. For example, b_k may have a value of b_k = 1 if the asset is on during hour k or a value of b_k = 0 if the asset is off during hour k. The modified cost function can be expressed in matrix form as follows:

J_a(x) = J(x) + C_{hourly} b

where b is a vector of binary variables having an element for each hour k of the optimization period. The binary variables b_k and the vector b may be decision variables to be optimized by high level optimizer632as part of the high level optimization process. The value of c_hourly may be determined by maintenance contracts module918, as described in detail below.

Although c_hourly is described as an hourly cost and each time step k is described as one hour, it should be understood that the time period associated with the cost and the time step can have any duration. For example, c_hourly can be replaced with a daily cost, a monthly cost, a yearly cost, a cost per half hour, a cost per quarter hour, or a cost associated with any other duration. Similarly, the duration of each time step k can be a quarter hour, a half hour, an hour, a day, a month, or any other duration. The same techniques can be used to augment the cost function J(x) with a maintenance cost term regardless of the duration of time step k and the time period associated with c_hourly.

Many assets have a fixed maintenance schedule that is dependent on the number of run hours since the last time the maintenance was performed. For example, the maintenance schedule for a chiller may involve cleaning the chiller tubes every X run hours (e.g., every 5000 run hours). A fixed cost $C may be incurred each time the maintenance is performed. For fixed maintenance schedules, maintenance contracts module918can determine the hourly cost c_hourly by taking the maintenance cost $C and dividing by the number of run hours X between performances of the maintenance (e.g., c_hourly = $C/X).
Maintenance contracts module918can then incorporate the hourly cost of maintenance c_hourly into the cost function J_a(x) as a fixed cost per run hour of the asset.

In some scenarios, an owner of an asset might contract with a maintenance provider to perform maintenance under a fixed contract. In this case, the contract terms may specify a base cost c_base which covers a base number of run hours t_base and a marginal cost c_m for each hour that the asset is operated exceeding the base number of run hours t_base. For example, the contract might stipulate $500,000 for the first 4,000 run hours and an additional $42 per hour exceeding 4,000 hours. The maintenance cost can be expressed as a piecewise-defined function, as shown in the following equation:

AnnualCost = \begin{cases} c_{base} & t < t_{base} \\ c_{base} + c_m (t - t_{base}) & t \ge t_{base} \end{cases}

where AnnualCost is the total maintenance cost per year, c_base is the base cost, t_base is the base number of hours covered by the base cost c_base, c_m is the marginal cost for each run hour exceeding the base number of run hours t_base, and t is the number of run hours of the asset. Incorporating this type of maintenance contract into the optimization algorithm can be significantly more complicated than incorporating a fixed cost per run hour.

Maintenance contracts module918can be configured to determine the value of c_hourly that will yield the optimal solution to the high level optimization problem. The value of c_hourly may reflect the true hourly cost of maintenance when the cost changes from an already sunk cost with no marginal cost to a marginal cost of c_m after the equipment has been used for a stipulated number of run hours. Maintenance contracts module918can use the value of c_hourly to define the maintenance cost term in the modified cost function J_a(x).

In some embodiments, maintenance contracts module918is configured to run in an offline mode (e.g., a planning mode) and an online mode (e.g., an operational mode). In the offline mode, maintenance contracts module918can perform several simulations of the year with assumed loads and utility costs (i.e., a planning run) with different values of c_hourly in the cost function J_a(x) to determine the value of c_hourly that yields the optimal results under the terms of the maintenance contract. In the online mode, maintenance contracts module918can run the plan at different hourly costs c_hourly to determine how to adjust the hourly cost c_hourly at periodic times during the course of the year. This allows maintenance contracts module918to incorporate feedback as to how many of the base hours t_base have actually been used as opposed to how many were expected to be used. Maintenance contracts module918can update the hourly cost c_hourly throughout the year based on the actual number of run hours of the assets covered by the maintenance contracts.

In some embodiments, maintenance contracts module918is configured to account for the cost of maintenance contracts in terms of the total production of the asset rather than run hours. Total production can be defined as the amount of one or more resources produced by an asset. For example, the total production of a chiller may be the amount of chilled water produced by the chiller or the amount of cooling energy (e.g., tons) produced by the chiller over a given time period. Similarly, the total production of a boiler may be the amount of hot water produced by the boiler or the amount of heating energy (e.g., kWh) produced by the boiler over a given time period.
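The offline-mode tuning of c_hourly described above can be sketched as a simple sweep: evaluate the piecewise contract cost for each candidate hourly cost and keep the candidate with the lowest combined cost. The simulate_plan callable and the toy model below are assumptions for illustration, not the module's actual interface.

```python
def maintenance_cost(run_hours, c_base, t_base, c_m):
    """Piecewise contract cost: c_base up to t_base run hours, then c_m per extra hour."""
    return c_base if run_hours < t_base else c_base + c_m * (run_hours - t_base)

def pick_hourly_cost(candidates, simulate_plan, c_base, t_base, c_m):
    """Run the planning simulation at several candidate hourly costs and keep the one
    with the lowest total (operating + contract) cost.  simulate_plan(c_hourly) is a
    stand-in returning (operating_cost, run_hours) for a full planning run."""
    best = None
    for c in candidates:
        operating, hours = simulate_plan(c)
        total = operating + maintenance_cost(hours, c_base, t_base, c_m)
        if best is None or total < best[1]:
            best = (c, total)
    return best

# Toy stand-in: a higher assumed hourly cost discourages run hours somewhat
toy = lambda c: (1_000_000.0 + 300.0 * c, max(3_500.0, 6_000.0 - 60.0 * c))
print(pick_hourly_cost([0.0, 10.0, 42.0, 80.0], toy, c_base=500_000.0, t_base=4_000.0, c_m=42.0))
```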
A maintenance contract may specify a base cost c_base which covers a base amount of production ℓ_base and a marginal cost c_m for each unit of production exceeding the base production ℓ_base. For example, the contract might stipulate $10,000 for the first 100,000 kWh of production and an additional $0.1 per kWh exceeding 100,000 kWh. The maintenance cost can be expressed as a piecewise-defined function, as shown in the following equation:

AnnualCost = \begin{cases} c_{base} & \ell < \ell_{base} \\ c_{base} + c_m (\ell - \ell_{base}) & \ell \ge \ell_{base} \end{cases}

where AnnualCost is the total maintenance cost per year, c_base is the base cost, ℓ_base is the amount of production covered by the base cost c_base, c_m is the marginal cost for each unit of production exceeding the base amount of production ℓ_base, and ℓ is the amount of production of the asset.

To account for the cost of maintenance contracts in terms of total production, maintenance contracts module918can augment the cost function J(x) with a maintenance cost term that defines maintenance cost in terms of the production of the asset. An example of a modified cost function J_a(x) which can be generated by maintenance contracts module918is shown in the following equation:

J_a(x) = J(x) + \sum_{k \in horizon} c_p \, p_k

where J(x) is the original cost function, c_p is the cost of maintenance per unit of production (e.g., $/kWh of energy, $/liter of hot or chilled water, etc.), and p_k is a variable representing the production of the asset at hour k of the optimization period. Maintenance contracts module918can be configured to determine the value of c_p that will yield the optimal solution to the high level optimization problem.

Maintenance contracts module918can use the same techniques to account for the cost of maintenance contracts, regardless of whether the maintenance contracts define maintenance cost in terms of run hours or total production. For example, the variables ℓ, ℓ_base, c_p, and p_k can be used in place of the variables t, t_base, c_hourly, and b_k, respectively, in any of the systems and methods described herein to account for maintenance costs in terms of total production. These and other features of maintenance contracts module918are described in greater detail with reference toFIGS.10-21B.

Subplant Curve Incorporation

Still referring toFIG.9, high level optimizer632is shown to include a subplant curves module930. In the simplest case, it can be assumed that the resource consumption of each subplant is a linear function of the thermal energy load produced by the subplant. However, this assumption may not be true for some subplant equipment, much less for an entire subplant. Subplant curves module930may be configured to modify the high level optimization problem to account for subplants that have a nonlinear relationship between resource consumption and load production.

Subplant curves module930is shown to include a subplant curve updater932, a subplant curves database934, a subplant curve linearizer936, and a subplant curves incorporator938. Subplant curve updater932may be configured to request subplant curves for each of subplants520-530from low level optimizer634. Each subplant curve may indicate an amount of resource consumption by a particular subplant (e.g., electricity use measured in kW, water use measured in L/s, etc.) as a function of the subplant load. In some embodiments, low level optimizer634generates the subplant curves by running the low level optimization process for various combinations of subplant loads and weather conditions to generate multiple data points.
Low level optimizer634may fit a curve to the data points to generate the subplant curves and provide the subplant curves to subplant curve updater932. In other embodiments, low level optimizer634provides the data points to subplant curve updater932and subplant curve updater932generates the subplant curves using the data points. Subplant curve updater932may store the subplant curves in subplant curves database934for use in the high level optimization process. In some embodiments, the subplant curves are generated by combining efficiency curves for individual devices of a subplant. A device efficiency curve may indicate the amount of resource consumption by the device as a function of load. The device efficiency curves may be provided by a device manufacturer or generated using experimental data. In some embodiments, the device efficiency curves are based on an initial efficiency curve provided by a device manufacturer and updated using experimental data. The device efficiency curves may be stored in equipment models618. For some devices, the device efficiency curves may indicate that resource consumption is a U-shaped function of load. Accordingly, when multiple device efficiency curves are combined into a subplant curve for the entire subplant, the resultant subplant curve may be a wavy curve. The waves are caused by a single device loading up before it is more efficient to turn on another device to satisfy the subplant load. Subplant curve linearizer936may be configured to convert the subplant curves into convex curves. A convex curve is a curve for which a line connecting any two points on the curve is always above or along the curve (i.e., not below the curve). Convex curves may be advantageous for use in the high level optimization because they allow for an optimization process that is less computationally expensive relative to an optimization process that uses non-convex functions. Subplant curve linearizer936may be configured to break the subplant curves into piecewise linear segments that combine to form a piecewise-defined convex curve. Subplant curve linearizer936may store the linearized subplant curves in subplant curves database934. Subplant curve incorporator938may be configured to modify the high level optimization problem to incorporate the subplant curves into the optimization. In some embodiments, subplant curve incorporator938modifies the decision variables to include one or more decision vectors representing the resource consumption of each subplant. Subplant curve incorporator938may modify the inequality constraints to ensure that the proper amount of each resource is consumed to serve the predicted thermal energy loads. In some embodiments, subplant curve incorporator938formulates inequality constraints that force the resource usage for each resource to be in the epigraph of the corresponding linearized subplant curve. For example, chiller subplant522may have a linearized subplant curve that indicates the electricity use of chiller subplant522(i.e., input resource in1) as a function of the cold water production of chiller subplant522(i.e., output resource out1). The linearized subplant curve may include a first line segment connecting point [u1, Q1] to point [u2, Q2], a second line segment connecting point [u2, Q2] to point [u3, Q3], and a third line segment connecting point [u3, Q3] to point [u4, Q4]. 
Subplant curve incorporator938may formulate an inequality constraint for each piecewise segment of the subplant curve that constrains the value of the decision variable representing chiller electricity use to be greater than or equal to the amount of electricity use defined by the line segment for the corresponding value of the cold water production. Similar inequality constraints can be formulated for other subplant curves. For example, subplant curve incorporator938may generate a set of inequality constraints for the water consumption of chiller subplant522using the points defining the linearized subplant curve for the water consumption of chiller subplant522as a function of cold water production. In some embodiments, the water consumption of chiller subplant522is equal to the cold water production and the linearized subplant curve for water consumption includes a single line segment connecting point [u5, Q5] to point [u6, Q6]. Subplant curve incorporator938may repeat this process for each subplant curve for chiller subplant522and for the other subplants of the central plant to define a set of inequality constraints for each subplant curve. The inequality constraints generated by subplant curve incorporator938ensure that high level optimizer632keeps the resource consumption above all of the line segments of the corresponding subplant curve. In most situations, there is no reason for high level optimizer632to choose a resource consumption value that lies above the corresponding subplant curve due to the economic cost associated with resource consumption. High level optimizer632can therefore be expected to select resource consumption values that lie on the corresponding subplant curve rather than above it. The exception to this general rule is heat recovery chiller subplant523. The equality constraints for heat recovery chiller subplant523provide that heat recovery chiller subplant523produces hot water at a rate equal to the subplant's cold water production plus the subplant's electricity use. The inequality constraints generated by subplant curve incorporator938for heat recovery chiller subplant523allow high level optimizer632to overuse electricity to make more hot water without increasing the amount of cold water production. This behavior is extremely inefficient and only becomes a realistic possibility when the demand for hot water is high and cannot be met using more efficient techniques. However, this is not how heat recovery chiller subplant523actually operates. To prevent high level optimizer632from overusing electricity, subplant curve incorporator938may check whether the calculated amount of electricity use (determined by the optimization algorithm) for heat recovery chiller subplant523is above the corresponding subplant curve. In some embodiments, the check is performed after each iteration of the optimization algorithm. If the calculated amount of electricity use for heat recovery chiller subplant523is above the subplant curve, subplant curve incorporator938may determine that high level optimizer632is overusing electricity. In response to a determination that high level optimizer632is overusing electricity, subplant curve incorporator938may constrain the production of heat recovery chiller subplant523at its current value and constrain the electricity use of subplant523to the corresponding value on the subplant curve. High level optimizer632may then rerun the optimization with the new equality constraints. 
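The two steps just described, converting a sampled (possibly wavy) subplant curve into a convex piecewise-linear curve and then generating epigraph-style inequality constraints for its segments, can be sketched compactly as follows. The helper names (lower_convex_hull, segment_constraints, in_epigraph) and the sample data are assumptions for illustration only, not the implementation of subplant curve linearizer936or subplant curve incorporator938.

```python
# Sketch: linearize a sampled subplant curve into a convex piecewise-linear curve,
# then express the epigraph constraints u >= slope*Q + intercept for each segment.

from typing import List, Tuple

Point = Tuple[float, float]  # (production Q, resource consumption u)

def lower_convex_hull(points: List[Point]) -> List[Point]:
    """Breakpoints of a convex piecewise-linear curve lying on or below the sampled data."""
    pts = sorted(points)
    hull: List[Point] = []
    for p in pts:
        # Pop the previous breakpoint while the turn is not convex
        # (cross product <= 0 keeps the lower hull).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def segment_constraints(breakpoints: List[Point]) -> List[Tuple[float, float]]:
    """(slope, intercept) pairs so that feasibility requires u >= slope*Q + intercept
    for every segment, i.e. (Q, u) must lie in the epigraph of the convex curve."""
    cons = []
    for (q1, u1), (q2, u2) in zip(breakpoints, breakpoints[1:]):
        slope = (u2 - u1) / (q2 - q1)
        cons.append((slope, u1 - slope * q1))
    return cons

def in_epigraph(q: float, u: float, cons: List[Tuple[float, float]]) -> bool:
    return all(u >= m * q + b for m, b in cons)

# Example: a wavy sampled curve (two devices staging) and two point checks.
samples = [(0, 0), (10, 4), (20, 10), (30, 13), (40, 20), (50, 24)]
hull = lower_convex_hull(samples)      # [(0, 0), (10, 4), (30, 13), (50, 24)]
cons = segment_constraints(hull)
print(in_epigraph(40, 19, cons))       # True: on or above every segment
print(in_epigraph(40, 15, cons))       # False: below the convex curve
```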
These and other features of subplant curves module930are described in greater detail in U.S. patent application Ser. No. 14/634,609 filed Feb. 27, 2015, the entire disclosure of which is incorporated by reference herein. Maintenance Contracts Module Referring now toFIG.10, a block diagram illustrating maintenance contracts module918in greater detail is shown, according to an exemplary embodiment. Maintenance contracts module918can be configured to modify the cost function J(x) to include a maintenance cost term. The maintenance cost term may account for the cost of performing maintenance on one or more assets1004covered by a maintenance contract. An example of a maintenance cost term which can be generated by maintenance contracts module918is:

Σk∈horizon chourly·bk

where chourlyis the hourly cost of maintenance and bkis a binary variable representing the on/off state of the assets1004covered by the maintenance contract at hour k. The binary variables bkfor each hour k may be treated as decision variables to be optimized by high level optimizer632as part of the high level optimization process. The hourly cost of maintenance chourlycan be determined by maintenance contracts module918prior to performing the high level optimization process. Maintenance contracts module918can add the maintenance cost term to the cost function J(x) to generate an augmented cost function Ja(x), as shown in the following equation:

Ja(x) = J(x) + Σk∈horizon chourly·bk

The augmented cost function Ja(x) can be optimized by a cost function optimizer1020to determine optimal values for the binary decision variables bkat each hour k of the optimization period, along with other decision variables in the cost function J(x). As described above, the true maintenance cost may be based on the number of run hours of the covered assets1004. For example, the maintenance cost can be expressed as a piecewise-defined function, as shown in the following equation:

AnnualCost = cbase, if t < tbase
AnnualCost = cbase + cm(t − tbase), if t ≥ tbase

where AnnualCost is the total maintenance cost per year, cbaseis the base cost, tbaseis the base number of hours covered by the base cost cbase, cmis the marginal cost for each run hour exceeding the base number of run hours tbase, and t is the number of run hours of the covered assets1004. Maintenance contracts module918can be configured to determine a value of chourlythat captures the true maintenance cost in a manner that allows high level optimizer632to determine optimal values of the binary decision variables bk. Still referring toFIG.10, maintenance contracts module918is shown to include an hourly cost optimizer1010. Hourly cost optimizer1010can be configured to determine the value of the hourly cost chourlyto include in the maintenance cost term of the augmented cost function Ja(x). Hourly cost optimizer1010can determine a value of chourlythat accounts for an annual cost that is a function of both a base cost cbaseand the number of run hours of the covered assets1004in excess of a base number of run hours tbase. Consider a maintenance contract specifying the annual cost as shown in the previous piecewise-defined function. If the high level optimization were simulated over a year, there are three potential optimal solutions:

1. The optimal solution includes running covered assets1004for less than tbaserun hours.
2. The optimal solution includes running covered assets1004for more than tbaserun hours.
3. The optimal solution includes running covered assets1004for exactly tbaserun hours. 
This trichotomy is specifically called out because each scenario has a particular way of finding the best value for chourly. Scenario 1 If the optimal solution were to run the covered assets1004for less than tbase, then the best value for chourlyis zero. Consider running the high level optimization over a year with the hourly cost set to zero (i.e., chourly=0). If this simulation were to obtain a total run time less than tbase, then this is the optimal solution because the owner of assets1004is in actuality paying zero additional dollars per run hour. In other words, the actual marginal cost cmis equal to the hourly cost chourlyused by the optimization (i.e., chourly=cm=0). The base cost cbaseis already a sunk cost (i.e., an offset) and need not be considered in the optimization. On the other hand, the simulation with chourly=0 may obtain a total run time greater than tbase. In this case, the simulation is unlikely to be optimal because the owner of assets1004is paying cmper run hour above tbase, but high level optimizer632is not penalizing these run hours in any way. Thus, a run hour that provided a positive benefit that is less than the marginal cost cmfor running assets1004would be executed, making the annual run suboptimal. Scenario 2 If the optimal solution were to run the covered assets1004for more than tbase, then the best value for chourlyis cm. Consider running the high level optimization over a year with the hourly cost set to cm(i.e., chourly=cm). If this simulation were to obtain a total run time more than tbase, then this is the optimal solution because high level optimizer632is actually over-penalizing the number of run hours. The base number of run hours tbasebuilt into the contract would be charged a marginal cost of cmby high level optimizer632when in actuality the owner of assets1004does not pay the marginal cost cmfor the run hours within tbase. If the high level optimization still selects a number of run hours greater than tbasedespite the over-penalization, then the selected number of run hours is optimal. On the other hand, the simulation with chourly=cmmay obtain a total run time less than tbase.In this case, the simulation is unlikely to be optimal because the owner of assets1004would pay zero cost for the additional run hours up to tbase, but high level optimizer632is penalizing those run hours at the marginal rate cm. Thus, a run hour within the base number of run hours tbasethat provided a positive benefit that is less than cmfor running assets1004would not be executed, making the annual run suboptimal. Scenario 3 If neither an annual simulation with chourly=0 yields a runtime less than tbasenor an annual simulation with chourly=cmyields a run time of more than tbase(i.e., neither scenario 1 nor scenario 2 is true), then the optimal runtime is exactly tbase. The problem now becomes which of the tbasehours provide the maximum benefit. This can be found by varying the hourly cost chourly. The optimal number of hours is tbaseand the optimal hours to run are those that provide a benefit greater than or equal to chourlythat causes exactly tbaserun hours to be selected by high level optimizer632. Still referring toFIG.10, hourly cost optimizer1010is shown to include a planning mode optimizer1012, an operational mode optimizer1014, and a continuous updater1016. Planning mode optimizer1012can be configured to determine the optimal value of chourlywhen operating in a planning mode (i.e., an offline mode). 
Operational mode optimizer1014can be configured to periodically update the value of chourlywhen operating in an operational mode (i.e., an online mode). Continuous updater1016can be configured to continuously update chourlyafter each hour k when operating in the operational mode. Both the planning mode and the operational mode may revolve around performing a planning simulation with several different hourly costs chourlyand generating a run hour curve as a function of the hourly cost chourly. For example, hourly cost optimizer1010can generate several different values of chourlybetween chourly=0 and chourly=Cm. Each value of chourlycan be provided to maintenance cost term generator1008. Maintenance cost term generator1008can use the values of chourlyto generate several different maintenance cost terms in the augmented cost function Ja(x). Each maintenance cost term may have the form Σk∈horizonchourlybkand may include a different value of chourly. Maintenance cost term generator1008may provide all of the maintenance cost terms to cost function augmenter1006. Cost function augmenter1006may add each of the maintenance cost terms to the original cost function J(x) provided by cost function module902to generate several different augmented cost functions Ja(x). Each augmented cost function Ja(x) may be a linear combination of the original cost function J(x) and one of the maintenance cost terms. Cost function optimizer1020can optimize each augmented cost function Ja(x) to determine optimal values for the binary decision variables bk. Each set of optimized binary decision variables bkmay correspond to one of the values of chourlyand may indicate the planned run hours for assets1004at the corresponding hourly cost value chourly. Cost function optimizer1020can provide the sets of optimized binary decision variables bkand/or the total number of run hours indicated by each set of binary decision variables bkto run hour curve generator1018. Run hour curve generator1018can be configured to generate a run hour curve using the hourly cost values chourlyand the resulting number of run hours determined by cost function optimizer1020. An example of a run hour curve1100which can be generated by run hour curve generator1018is shown inFIG.11. Run hour curve1100identifies the relationship between the estimated number of run hours in the plan (vertical axis) and the hourly cost per run hour (horizontal axis). In some embodiments, run hour curve generator1018generates run hour curve1100by using data points provided by hourly cost optimizer1010and cost function optimizer1020. Each data point may include an hourly cost value chourly(determined by hourly cost optimizer1010) and a corresponding number of run hours (determined by cost function optimizer1020). Run hour curve generator1018can interpolate between the data points to generate a continuous run hour curve1100. Planning Mode Referring now toFIG.12, a flowchart of a process1200illustrating the operations performed by planning mode optimizer1012is shown, according to an exemplary embodiment. Planning mode optimizer1012may receive contract information from a contract information database1002(step1202). The contract information may include the number of base hours tbase, the marginal cost cm, the base cost cbase, and/or other information specified by a maintenance contract for assets1004. Planning mode optimizer1012can set chourlyequal to cmand run the plan (step1204). 
Running the plan may include using the value of chourlyto generate a maintenance cost term, modifying the cost function J(x) to include the maintenance cost term, and optimizing the augmented cost function Ja(x) to determine the corresponding number of run hours tplan. Planning mode optimizer1012may compare the planned number of run hours tplanto the base number of run hours tbase(step1206). If the number of run hours tplanis greater than the base number of run hours tbase(i.e., the result of step1206is “yes”), planning mode optimizer1012may determine that the planned run hours are optimal. However, if the number of run hours tplanis not greater than the base number of run hours tbase(i.e., the result of step1206is “no”), planning mode optimizer1012may proceed to step1208. Planning mode optimizer1012can set chourlyequal to zero and run the plan (step1208). As before, running the plan may include using the value of chourlyto generate a maintenance cost term, modifying the cost function J(x) to include the maintenance cost term, and optimizing the augmented cost function Ja(x) to determine the corresponding number of run hours tplan. Planning mode optimizer1012may compare the planned number of run hours tplanto the base number of run hours tbase(step1210). If the number of run hours tplanis less than the base number of run hours tbase(i.e., the result of step1210is “yes”), planning mode optimizer1012may determine that the planned run hours are optimal. However, if the number of run hours tplanis not less than the base number of run hours tbase(i.e., the result of step1210is “no”), planning mode optimizer1012may proceed to step1212. Planning mode optimizer1012can generate multiple values of Chourlybetween Chourly=0 and Chourly=Cm(step1212). In some embodiments, step1212includes adding a predetermined increment inc to each value of Chourlyto generate the next value of chourly. For example, planning mode optimizer1012can add an increment of inc=5 to the initial value of chourly=0 to generate the value chourly=5. Planning mode optimizer1012can add the increment of inc=5 to the new value of chourly=5 to generate the value chourly=10. This process can be repeated until chourly=cm. In some embodiments, the value of inc is a predetermined percentage of cm(e.g., 1%, 5%, 10%, etc.) such that a fixed number of values of chourlyare generated between chourly=0 and chourly=Cm. Planning mode optimizer1012can run the plan using each of the values of chourlygenerated in step1212. As before, running the plan may include using the value of chourlyto generate a maintenance cost term, modifying the cost function J(x) to include the maintenance cost term, and optimizing the augmented cost function Ja(x) to determine the corresponding number of run hours tplan. By running the plan for each value of chourly, planning mode optimizer1012can generate a corresponding value of tplan. Planning mode optimizer1012can use the values of Chourlyand the corresponding values of tplanto generate a run hour curve (step1214). The run hour curve may define a relationship between tplanand chourly. An example of a run hour curve which can be generated in step1214is shown inFIG.11. Planning mode optimizer1012can use the run hour curve to determine an hourly cost value c* that corresponds to the base number of run hours tbase(step1216). Step1216may include finding a point along the run hour curve that includes the base number of run hours tbaseand identifying the hourly cost value c* of that point. 
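The planning-mode search of process1200can be sketched roughly as follows. Here run_plan stands in for optimizing the augmented cost function Ja(x) with a given hourly cost and returning the planned run hours; the function name, the grid size, and the linear interpolation are assumptions for illustration only, not the patent's implementation.

```python
# Sketch of the planning-mode procedure: check the two boundary cases (scenarios 1-2),
# otherwise sweep hourly costs, build a run hour curve, and interpolate the cost c*
# whose plan selects approximately t_base run hours (scenario 3).

from typing import Callable, List, Tuple

def planning_mode_hourly_cost(run_plan: Callable[[float], float],
                              c_m: float, t_base: float, n_steps: int = 20) -> float:
    if run_plan(c_m) > t_base:      # scenario 2: over-penalized plan still exceeds t_base
        return c_m
    if run_plan(0.0) < t_base:      # scenario 1: free run hours stay under t_base
        return 0.0
    # Scenario 3: build the run hour curve t(c) on a grid of hourly costs.
    curve: List[Tuple[float, float]] = []
    for i in range(n_steps + 1):
        c = c_m * i / n_steps
        curve.append((c, run_plan(c)))
    # Run hours decrease as the hourly cost increases; bracket t_base and interpolate c*.
    for (c1, t1), (c2, t2) in zip(curve, curve[1:]):
        if t2 <= t_base <= t1:
            if t1 == t2:
                return c1
            return c1 + (c2 - c1) * (t1 - t_base) / (t1 - t2)
    return c_m  # fallback; should not be reached once scenarios 1-2 are excluded
```

A finer grid (or a second pass with smaller increments around the interpolated c*) mirrors the refinement described for steps1212-1218.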
Planning mode optimizer1012can set chourlyequal to c* and run the plan (step1218). As before, running the plan may include using the value of chourlyto generate a maintenance cost term, modifying the cost function J(x) to include the maintenance cost term, and optimizing the augmented cost function Ja(x) to determine the corresponding number of run hours tplan. The run hours determined in step1218may be the optimal run hours. In some embodiments, planning mode optimizer1012may compare the number of run hours tplandetermined in step1218to the base number of run hours tbase. If the difference between tplanand tbaseis less than a threshold value, planning mode optimizer1012may determine that the planned run hours determined in step1218are optimal. However, if the difference between tplanand tbaseis not less than the threshold value, planning mode optimizer1012may update the hourly cost value c* to obtain a number of run hours tplancloser to the base number of run hours tbase. For example, planning mode optimizer1012can regenerate a portion of the run hour curve around c* using smaller increments of the hourly cost value chourly. The smaller increments may result in a run hour curve with higher resolution around the hourly cost value c* to allow for more accurate interpolation. Steps1212-1218can be repeated until the number of run hours tplandetermined in step1218is sufficiently close to the base number of run hours tbase(e.g., within a threshold). Planning Mode Examples Consider, for example, a maintenance contract with tbase=3000 hours, cm=50 $/hour, and an asset with the run hour curve1100shown inFIG.11. In this case, the hourly cost chourlywill be set to 50 $/hour in step1204. Running the plan with a value of chourly=50 $/hour may result in 3600 run hours in the plan (i.e., tplan=3600). Because tplan>tbase(i.e., the 3600 plan hours are greater than the 3000 base hours), the plan hours are determined to be optimal in step1206. Consider another maintenance contract with tbase=7100 hours, cm=50 $/hour, and an asset with the run hour curve1100shown inFIG.11. The hourly cost chourlywill still be set to 50 $/hour in step1204. Running the plan with a value of chourly=50 $/hour may result in 3600 run hours in the plan (i.e., tplan=3600). Because tplan<tbase(i.e., the 3600 plan hours are less than the 7100 base hours), the plan hours are not determined to be optimal in step1206and process1200proceeds to step1208. In this case, the hourly cost chourlywill be set to 0 $/hour in step1208. Running the plan with a value of chourly=0 $/hour may result in 7000 run hours in the plan (i.e., tplan=7000). Because tplan<tbase(i.e., the 7000 plan hours are less than the 7100 base hours), the plan hours are determined to be optimal in step1210. Finally, consider another maintenance contract with tbase=5000 hours, cm=50 $/hour, and an asset with the run hour curve1100shown inFIG.11. The tests performed in step1206and step1210will both fail. A sweep of hourly cost values chourlycan be performed in step1212to generate the entire run hour curve1100. From run hour curve1100, it can be seen that the base number of run hours (i.e., tbase=5000 hours) occurs at approximately 21.5 $/hour. Accordingly, the variable c* can be set to c*=21.5 $/hour in step1216and the plan can be run in step1218with an hourly cost value of chourly=c*=21.5 $/hour. For the case of the example, it will be assumed that this run yields 4953 run hours of the device. This is 47 hours away from the optimal number of hours (i.e., 5000−4953=47). 
If this is close enough, planning mode optimizer1012may end process1200. If not, then the hourly cost chourlymay be decreased and step1218can be repeated until some number of run hours closer to 5000 is obtained. Operational Mode Referring now toFIGS.13-14, flowcharts of processes1300and1400illustrating the operations performed by operational mode optimizer1014are shown, according to an exemplary embodiment. Process1300can be performed by operational mode optimizer1014when operating in an offline mode, whereas process1400can be performed by operational mode optimizer1014when operating in an online mode. In some embodiments, the planning mode process1200is performed prior to either of the operational mode processes1300-1400. Thus, a starting value for the hourly cost chourlymay already have been calculated by planning mode optimizer1012, which may be optimal to use given the data in the plan. The hourly cost value chourlycalculated by planning mode optimizer1012can be used as a starting point for operational mode optimizer1014. In some embodiments, operational mode optimizer1014is configured to use feedback regarding the actual run hours of assets1004to update the hourly cost value chourlyat periodic intervals. This is a difference relative to planning mode optimizer1012, which operates entirely offline. Periodically throughout the year, operational mode optimizer1014can recalculate the optimal value of chourlyfor the remaining amount of time in the year. The optimization performed by operational mode optimizer1014may be similar to the optimization performed by planning mode optimizer1012. However, operational mode optimizer1014may use the remaining number of run hours in the maintenance contract tremainingin place of the base number of run hours tbasewhen selecting the value of c*. In some embodiments, operational mode optimizer1014is configured to calculate the remaining number of run hours tremainingusing the following equation: tremaining=tbase−tYTD, where tbaseis the total number of run hours specified by the maintenance contract and tYTDis the number of run hours already used in the year-to-date. At the beginning of the year, the value of tYTDcan be estimated for various times throughout the year based on the estimated run hours determined by planning mode optimizer1012. During the year, feedback can be collected from assets1004which indicates the actual number of run hours of assets1004. The feedback from assets1004can be used to determine the actual value of tYTDfor use in updating tremainingperiodically throughout the year. Referring particularly toFIG.13, a flowchart of a process1300performed by operational mode optimizer1014in the offline mode is shown, according to an exemplary embodiment. In some embodiments, process1300is performed once at the beginning of the year. Operational mode optimizer1014may receive contract information from a contract information database1002(step1302). The contract information may include the number of base hours tbase, the marginal cost cm, the base cost cbase, and/or other information specified by a maintenance contract for assets1004. Operational mode optimizer1014can generate multiple values of chourlybetween chourly=0 and chourly=cm(step1304). In some embodiments, step1304includes adding a predetermined increment inc to each value of chourlyto generate the next value of chourly. For example, operational mode optimizer1014can add an increment of inc=5 to the initial value of chourly=0 to generate the value chourly=5. 
Operational mode optimizer1014can add the increment of inc=5 to the new value of chourly=5 to generate the value chourly=10. This process can be repeated until chourly=cm. In some embodiments, the value of inc is a predetermined percentage of cm(e.g., 1%, 5%, 10%, etc.) such that a fixed number of values of chourlyare generated between chourly=0 and chourly=cm. Operational mode optimizer1014can run the plan using each of the values of chourlygenerated in step1304. Running the plan may include using the value of chourlyto generate a maintenance cost term, modifying the cost function J(x) to include the maintenance cost term, and optimizing the augmented cost function Ja(x) to determine the corresponding number of run hours tplan. By running the plan for each value of chourly, operational mode optimizer1014can generate a corresponding value of tplan. Operational mode optimizer1014can use the values of chourlyand the corresponding values of tplanto generate a run hour curve (step1306). The run hour curve may define a relationship between tplanand chourly. An example of a run hour curve which can be generated in step1306is shown inFIG.15. In step1306, operational mode optimizer1014can generate a run hour curve for each time that chourlyis to be recalculated during the year. Each run hour curve may cover the time period beginning at the time chourlywill be recalculated and ending at the end of the year. For example, if chourlyis to be recalculated every 1.5 months, then a total of 8 run hour curves can be generated in step1306. The first run hour curve may cover the time period from January 1-December 31, the second run hour curve may cover the time period from February 15-December 31, the third run hour curve may cover the time period from April 1-December 31, and so on. Rather than repeating process1300periodically throughout the year, all of the run hour curves generated in step1306can be created at the beginning of the year and saved to perform the periodic recalculations of chourly. The run hour curves generated in step1306(denoted t(c,k) as these curves are now a function of iteration) can be used to perform the periodic recalculation of chourlywith no additional planning simulations at the end of each period. In fact, no additional planning simulations are required in general, as all of the run hour curves for a fixed hourly cost can be generated from a single simulation. Referring particularly toFIG.14, a flowchart of a process1400performed by operational mode optimizer1014in the online mode is shown, according to an exemplary embodiment. In some embodiments, process1400is repeated periodically throughout the year to update chourly. For example, process1400can be performed at the beginning of each of the time periods corresponding to the run hour curves generated by process1300(e.g., February 15, April 1, May 15, etc.). At the beginning of each time period, operational mode optimizer1014may calculate tremaining(step1402). tremainingcan be calculated by subtracting the actual number of run hours tYTDof assets1004that have been used in the year-to-date from the base number of run hours tbasespecified by the maintenance contract (i.e., tremaining=tbase−tYTD). For example, if step1402is performed on April 1, tYTDmay be the number of run hours used in the time period from January 1-March 31. Feedback from assets1004or from a building management system that monitors and controls assets1004can be used to determine the actual number of run hours tYTDused in the year-to-date. 
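The periodic online recalculation of chourlycan be sketched as follows: subtract the actual year-to-date run hours from the contract's base hours, then look up, on the run hour curve t(c,k) for the remaining portion of the year, the hourly cost that corresponds to the remaining run hours. The helper name recalculate_hourly_cost and the list-of-points representation of the run hour curve are assumptions for illustration; a deployed system would use whatever curve representation process1300actually stores.

```python
# Sketch: periodic recalculation of the hourly cost from year-to-date feedback and a
# precomputed run hour curve (points ordered by increasing hourly cost, decreasing hours).

from typing import List, Tuple

def recalculate_hourly_cost(t_base: float, t_ytd: float,
                            run_hour_curve: List[Tuple[float, float]]) -> float:
    t_remaining = max(t_base - t_ytd, 0.0)     # run hours still covered by the contract
    # Find the curve segment whose run-hour range brackets t_remaining and interpolate.
    for (c1, t1), (c2, t2) in zip(run_hour_curve, run_hour_curve[1:]):
        if t2 <= t_remaining <= t1:
            if t1 == t2:
                return c1
            return c1 + (c2 - c1) * (t1 - t_remaining) / (t1 - t2)
    # Outside the sampled curve: fewer covered hours remaining than any plan would use
    # -> highest sampled cost; more remaining than the plan would ever use -> zero cost.
    return run_hour_curve[-1][0] if t_remaining < run_hour_curve[-1][1] else 0.0

# Example: 5,000 base hours, 1,800 hours already used, and a coarse run hour curve.
curve = [(0.0, 7000.0), (25.0, 4500.0), (50.0, 3600.0)]
print(recalculate_hourly_cost(5000.0, 1800.0, curve))
```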
Operational mode optimizer1014can use the run hour curve for the remaining time period to determine an hourly cost value c* that corresponds to the calculated number of remaining run hours tremaining(step1404). Step1404may include finding a point along the run hour curve that includes the remaining number of run hours tremainingand identifying the hourly cost value c* of that point. Operational mode optimizer1014can update chourlyto be equal to the value of c* (step1406). The updated value of chourlycan then be used to run the plan to determine the optimal run hours. As before, running the plan may include using the value of chourlyto generate a maintenance cost term, modifying the cost function J(x) to include the maintenance cost term, and optimizing the augmented cost function Ja(x) to determine the optimal hours at which to run assets1004. Continuous Update Referring now toFIGS.16-19, several graphs1600-1900illustrating the operations performed by continuous updater1016are shown, according to an exemplary embodiment. Continuous updater1016can be configured to continuously update the value of chourlywith each iteration (i.e., after each time step) of the high level optimization process. In some embodiments, continuous updater1016updates the value of chourlyby performing processes1300-1400after every iteration of the optimization. For example, continuous updater1016can generate a run hour curve that corresponds to each time step in the optimization period (e.g., each hour) in process1300and may recalculate chourlyby performing process1400after each time step. In some embodiments, continuous updater1016updates the value of chourlyfrom time step k to time step k+1 using the derivative of the run hour curve t(c,k) with respect to hourly cost:

c*k+1 = c*k + Δt / {∂t/∂c}k,c*

The notation t(c,k) is used to denote the run hour curve for the time period that begins at time step k and ends at the end of the year (i.e., the run hour curve for time step k). As described above, each run hour curve t(c,k) may express the estimated number of run hours t as a function of the hourly cost c. Accordingly, the derivative {∂t/∂c}k,c* may be equivalent to the slope or gradient of the run hour curve t(c,k) with respect to hourly cost chourly. Continuous updater1016can calculate the hourly cost c*k+1at time step k+1 as an iterative function of the hourly cost c*kat time step k. In some embodiments, continuous updater1016calculates c*k+1using the following equation:

c*k+1 = c*k + Δt / {∂t/∂c}k,c*

where c*kis the hourly cost at time step k, {∂t/∂c}k,c* is the derivative of the run hour curve t(c,k) for time step k at the location of the hourly cost c*k, and Δt is the difference between the planned number of run hours during time step k and the actual number of run hours during time step k (i.e., Δt=planned run hours−actual run hours). For example, if assets1004were planned to use five run hours during time step k but did not actually run during time step k (i.e., used zero run hours), the value of Δt would be Δt=5. Similarly, if assets were planned to remain off during time step k (i.e., planned to use zero run hours) but actually ran during time step k using five run hours, the value of Δt would be Δt=−5. Continuous Update Examples Referring particularly toFIG.16, consider an optimization system running online at hour k into the year. Assume that the planning simulation at hour k showed it was optimal to run a given device during hour k. 
In other words, the planned number of run hours for the device during hour k is equal to 1. Moving from the beginning of hour k to the beginning of hour k+1, the remaining number of run hours tremainingis planned to decrease by 1 run hour because 1 run hour would be used during hour k. Accordingly, the run hour curve1604for hour k+1 is shifted downward in graph1600relative to the run hour curve1602for hour k by 1 run hour because the estimated number of run hours remaining in the plan tremainingis planned to decrease by 1 run hour. If the device does not actually run during hour k, the remaining number of run hours tremainingbuilt into the contract does not change. Thus, the actual number of run hours remaining at hour k (i.e., tr,k) will be equal to the actual number of run hours remaining at hour k+1 (i.e., tr,k+1), as shown in graph1600. Continuous updater1016can use the run hour curve1604for hour k+1 and the value of tr,k+1to identify the corresponding value of the hourly cost c*k+1at hour k+1. As shown in graph1600, the value of c*k+1will be less than the value of c*k. The amount of the decrease in the hourly cost value is equal to the inverse of the slope of the run hour curve1602multiplied by the duration of the time step between hour k and hour k+1 (because run hour curves1602and1604are separated by the run time difference between hour k and hour k+1). For the scenario shown in graph1600, the difference between the planned run time during hour k (i.e., 1 hour) and the actual run time during hour k (i.e., 0 hours) is equal to 1 run hour. In other words, Δt=1 hour. Accordingly, continuous updater1016can update the hourly cost value using the equation:

c*k+1 = c*k + Δt / {∂t/∂c}k,c*

where the sign of the derivative ∂t/∂c is negative such that the addition sign in this equation results in a decrease to the hourly cost value c*k. Referring now toFIG.17, consider another optimization system running online at hour k into the year. Assume that the planning simulation at hour k showed it was optimal to run a given device during hour k. In other words, the planned number of run hours for the device during hour k is equal to 1. Moving from the beginning of hour k to the beginning of hour k+1, the remaining number of run hours tremainingis planned to decrease by 1 run hour because 1 run hour would be used during hour k. Accordingly, the run hour curve1704for hour k+1 is shifted downward in graph1700relative to the run hour curve1702for hour k by 1 run hour because the estimated number of run hours remaining in the plan tremainingis planned to decrease by 1 run hour. If the device actually runs during hour k, then the remaining number of run hours tremainingdecreases by the same amount as the planned decrease. In other words, the difference between tr,k+1and tr,kis equivalent to the vertical shift from run hour curve1702to run hour curve1704. Continuous updater1016can use the run hour curve1704for hour k+1 and the value of tr,k+1to identify the corresponding value of the hourly cost c*k+1at hour k+1. As shown in graph1700, the value of c*k+1will be equal to the value of c*kbecause both the run hour curve and remaining number of run hours decrease by the same amount. For the scenario shown in graph1700, the difference between the planned run time during hour k (i.e., 1 hour) and the actual run time during hour k (i.e., 1 hour) is equal to 0 run hours. In other words, Δt=0 hours. 
Accordingly, the equation:

c*k+1 = c*k + Δt / {∂t/∂c}k,c*

will result in an updated hourly cost value of c*k+1=c*kbecause Δt=0. Referring now toFIG.18, consider another optimization system running online at hour k into the year. Assume that the planning simulation at hour k showed it was not optimal to run a given device during hour k. In other words, the planned number of run hours for the device during hour k is equal to 0. Moving from the beginning of hour k to the beginning of hour k+1, the remaining number of run hours tremainingis planned to remain the same because 0 run hours would be used during hour k. Accordingly, the run hour curve1802is not shifted in graph1800and represents both the run hour curve for time step k+1 and the run hour curve for hour k. If the device does not actually run during hour k, the remaining number of run hours tremainingbuilt into the contract does not change. Thus, the actual number of run hours remaining at hour k (i.e., tr,k) will be equal to the actual number of run hours remaining at hour k+1 (i.e., tr,k+1), as shown in graph1800. Continuous updater1016can use the run hour curve1802and the value of tr,k+1to identify the corresponding value of the hourly cost c*k+1at hour k+1. As shown in graph1800, the value of c*k+1will be equal to the value of c*k. For the scenario shown in graph1800, the difference between the planned run time during hour k (i.e., 0 hours) and the actual run time during hour k (i.e., 0 hours) is equal to 0 run hours. In other words, Δt=0 hours. Accordingly, the equation:

c*k+1 = c*k + Δt / {∂t/∂c}k,c*

will result in an updated hourly cost value of c*k+1=c*kbecause Δt=0. Referring now toFIG.19, consider another optimization system running online at hour k into the year. Assume that the planning simulation at hour k showed it was not optimal to run a given device during hour k. In other words, the planned number of run hours for the device during hour k is equal to 0. Moving from the beginning of hour k to the beginning of hour k+1, the remaining number of run hours tremainingis planned to remain the same because 0 run hours would be used during hour k. Accordingly, the run hour curve1902is not shifted in graph1900and represents both the run hour curve for time step k+1 and the run hour curve for hour k. If the device actually runs during hour k, then the remaining number of run hours tremainingdecreases by 1 run hour. In other words, the difference between tr,k+1and tr,kis equal to 1 run hour. Continuous updater1016can use the run hour curve1902to identify the corresponding value of the hourly cost c*k+1at hour k+1. As shown in graph1900, the value of c*k+1will be greater than the value of c*kbecause run hour curve1902has a negative slope. For the scenario shown in graph1900, the difference between the planned run time during hour k (i.e., 0 hours) and the actual run time during hour k (i.e., 1 hour) is equal to −1 run hour. In other words, Δt=−1 hour. Accordingly, continuous updater1016can update the hourly cost value using the equation:

c*k+1 = c*k + Δt / {∂t/∂c}k,c*

where both the derivative ∂t/∂c and Δt are negative such that the addition sign in this equation results in an increase to the hourly cost value c*k. Advantageously, continuous updater1016can use the iterative updating technique described herein to update the value of chourlyafter each time iteration. 
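A minimal sketch of this iterative update, under assumed names, is shown below. dt_dc stands for the (negative) slope of the run hour curve t(c,k) at the current hourly cost, and the four example calls mirror the scenarios of FIGS.16-19; the slope value is illustrative only.

```python
# Sketch of the continuous hourly-cost update: c*_{k+1} = c*_k + delta_t / (dt/dc),
# where delta_t is the planned minus actual run hours for time step k.

def update_hourly_cost(c_k: float, planned_run: float, actual_run: float, dt_dc: float) -> float:
    delta_t = planned_run - actual_run   # +1: planned on but stayed off; -1: ran unplanned
    return c_k + delta_t / dt_dc

# The four scenarios with an example slope of -100 run hours per $/hour:
c = 21.5
print(update_hourly_cost(c, 1, 0, -100.0))  # planned on, stayed off  -> cost decreases (21.49)
print(update_hourly_cost(c, 1, 1, -100.0))  # planned on, ran         -> unchanged
print(update_hourly_cost(c, 0, 0, -100.0))  # planned off, stayed off -> unchanged
print(update_hourly_cost(c, 0, 1, -100.0))  # planned off, ran        -> cost increases (21.51)
```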
All that is required is the derivative ∂t/∂c of the remaining hours in the plan as a function of the current hourly cost c*kat the current iteration, as well as a planning simulation that shows the planned on/off state of the device for every iteration of the year and for every cost. On/Off State Estimation Referring now toFIG.20, a graph2000illustrating a technique for estimating the on/off state of assets1004is shown, according to an exemplary embodiment. Continuous updater1016can be configured to estimate the on/off state of assets1004in order to determine whether the actual run hours of assets1004are equal to the planned run hours of assets1004at each time step. It can be difficult to accurately determine the on/off state at every cost, as the binary function that indicates on/off states may be discontinuous. For example, assets1004may have binary on/off states (i.e., either on or off), represented by points2002having a value of either 0 or 1. Continuous updater1016can approximate the discontinuous function shown in graph2000with a value linearly interpolated between the hourly costs that define the interval where the device transitioned from running to not running for that specific iteration. For example, let the binary on/off function be denoted b(c,k). In this approximation, the function b is allowed to take on intermediate values between 0 and 1. For example, b(c,k) could take on the form shown by line2004. It is possible to approximate b to arbitrary precision by running the planning simulations for more hourly costs. Referring now toFIGS.21A-21B, continuous updater1016can use the linear approximation shown in graph2000to determine that the run hour curve2104at hour k+1 changes by an amount b(c,k) relative to the run hour curve at hour k (as shown in graph2100). The variable b(c*,k) represents the planned change in the remaining runtime between hours k and k+1. The actual number of remaining run hours may change by Δtrbased on whether the device actually runs during hour k.FIG.21Aillustrates the scenario in which the device does not run and therefore Δtr=0 (i.e., tr,k+1=tr,k).FIG.21Billustrates the scenario in which the device does run and therefore Δtr=tr,k+1−tr,k. Continuous updater1016can update the hourly cost chourlyusing the following equation:

c*k+1 = c*k + {b(c*,k) + Δtr} / {∂t/∂c}k,c* or, equivalently, c*k+1 = c*k + Δt / {∂t/∂c}k,c*

where Δt is still equal to the difference between the planned run time (possibly a fraction due to approximation) and the actual run time, b(c*,k) is equal to the planned change in the run time, and Δtris equal to the actual change in the run time (e.g., Δtr=tr,k+1−tr,k). In some embodiments, continuous updater1016finds the derivative ∂t/∂c by sampling the run hour curve2102, representing the run hour curve2102as a linear surface (e.g., each piece represented by a triangle), and using this to evaluate the derivative ∂t/∂c at the point c*k. In some embodiments, continuous updater1016updates the hourly cost chourlyby performing both offline/planning steps and online/operational steps. The offline/planning steps may include running the plan for several hourly costs and using plan data to determine a piecewise linear version of dt(c,k)/dc and b(c,k) for the device under contract. The online/operational steps may include calculating the difference Δt between the planned run time and the operational run time for the device under contract and using the derivative dt(c,k)/dc to update the current hourly cost c*kusing the equation c*k+1 = c*k + Δt / {∂t/∂c}k,c*. 
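The linear interpolation of the planned on/off function b(c,k) described above can be sketched as follows. The function name interpolate_on_off, the sampled costs, and the planned states are assumptions for illustration; in practice the states would come from planning runs at a grid of hourly costs.

```python
# Sketch: approximate the discontinuous planned on/off function b(c, k) with a value
# linearly interpolated across the interval where the plan transitions from on to off.

from typing import Sequence

def interpolate_on_off(c: float, costs: Sequence[float], states: Sequence[int]) -> float:
    """Fractional planned on/off value b(c, k); costs sorted ascending, states binary."""
    for (c1, b1), (c2, b2) in zip(zip(costs, states), zip(costs[1:], states[1:])):
        if c1 <= c <= c2:
            if b1 == b2:
                return float(b1)                          # no transition in this interval
            return b1 + (b2 - b1) * (c - c1) / (c2 - c1)  # linear ramp across the transition
    return float(states[-1]) if c > costs[-1] else float(states[0])

# Example: the plan keeps the device on for costs up to $20/hour and off at $30/hour and above.
costs = [0.0, 10.0, 20.0, 30.0, 40.0]
states = [1, 1, 1, 0, 0]
print(interpolate_on_off(25.0, costs, states))  # 0.5, halfway through the transition interval
```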
CONFIGURATION OF EXEMPLARY EMBODIMENTS The construction and arrangement of the systems and methods as shown in the various exemplary embodiments are illustrative only. Although only a few embodiments have been described in detail in this disclosure, many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.). For example, the position of elements can be reversed or otherwise varied and the nature or number of discrete elements or positions can be altered or varied. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. The order or sequence of any process or method steps can be varied or re-sequenced according to alternative embodiments. Other substitutions, modifications, changes, and omissions can be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure. The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure can be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. Although the figures show a specific order of method steps, the order of the steps may differ from what is depicted. Also two or more steps can be performed concurrently or with partial concurrence. Such variation will depend on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques with rule based logic and other logic to accomplish the various connection steps, processing steps, comparison steps and decision steps.
207,600
11861742
DETAILED DESCRIPTION FIG.1is a diagram of a system for providing bi-directional real-time tab control, according to an example embodiment. The system100is shown schematically in greatly simplified form, with only those components relevant to understanding of one or more embodiments (represented herein) being illustrated. The various components are illustrated and the arrangement of the components is presented for purposes of illustration only. It is to be noted that other arrangements with more or fewer components are possible without departing from the bi-directional real-time tab control presented herein and below. Moreover, various components are illustrated as one or more software modules, which reside in non-transitory storage and/or hardware memory as executable instructions that when executed by one or more hardware processors perform the processing discussed herein and below. The techniques, methods, and systems presented herein and below for providing bi-directional real-time tab control can be implemented in all, or some combination of, the components shown in different hardware computing devices having one or more hardware processors. The system100includes: a tab generator110, a user-operated device120, and one or more POS devices130. The POS device(s) include a tab manager131. The tab generator110can be processed on a server, in a cloud, or on one of the POS devices130. The POS device(s)130can be one or more of: POS servers and POS terminals. The tab manager131is under the control of, or accessible to, at least one of the POS devices130. In an embodiment, the tab generator110and the tab manager131are a same processing module or are located and processed on a same POS device130. The user-operated device120can include any of: a mobile phone, a wearable processing device, a tablet, and a laptop. During operation, the user-operated device120is operated by a user. When the user enters a venue, such as a bar or a restaurant, with the user-operated device120, the user obtains a unique tab number or unique tab token. This can be obtained by the user through the user-operated device120in a number of ways. The user operates the device120to scan a Quick Response (QR) code displayed in the venue (at the entrance, on tables, at the reception desk, etc.). The user-operated device120opens a browser in response to the scanning (by a camera of the mobile device). The website provides a connection to the tab generator110. The tab generator or web page presents a unique tab number and, perhaps, a phone number associated with communications to the tab manager131. The user then texts the unique tab number to the number associated with the tab manager131. This opens an order with the venue and associates the unique number with a tab transaction for the customer. When the text is received by the tab manager131, the number of the device120is known to the tab manager131as well (as the sender of the unique tab number). Alternatively, the user simply texts a blank message to the venue number, which is received by the tab generator110and the tab manager131, and a unique number is returned to the user on the device120as a return text. A tab transaction number is associated therewith by the tab manager131. 
In another instance, the scanned QR code is placed on tables at the venue, such that when the device120is redirected from the scan and a unique tab number is displayed, the tab number includes a table number for the venue, such as an appended table number forming the last predefined number of digits of the unique tab number. In this case, the order number and tab number association is also associated with a particular table within the venue. In still another instance, the waiter provides a tab number to the user upon seating the user and/or the user's party. In this case, as part of the POS software of the venue, when the waiter opens an order for or seats a patron, the POS software interacts with the tab generator110and/or tab manager131. In yet another circumstance, a beacon transmits wireless signals through WiFi, Bluetooth®, or Low Energy Bluetooth®. A customized mobile application operating on the device120detects the signal upon entering the venue and automatically contacts the tab generator110for obtaining the unique tab number and automatically notifies the tab manager131of the unique tab number or instructs the user to text the unique tab number to the tab manager131. Once the unique tab number is known and made available (through automated action, waiter action, or user action), the tab number needs to be associated with the user's order. This can be done by the user communicating the tab number to the waiter, or by the waiter scanning the tab number as a QR code from the display of the device120. In fact, a variety of other manners are possible for the user to obtain the tab number and for the tab manager131to become aware of the tab number. Now when the waiter begins processing ordered items from the venue and enters the orders into the POS ordering system, the tab manager maintains the orders, such that the user can operate the device120to receive updates and a running total for the order through the device120as texts or as information communicated over a website or a customized mobile application. All of these connection and reporting mechanisms interface, in some manner, wirelessly with the tab manager131. The interfaces may also permit the user to place additional orders, issue commands for details on the current tab bill, transfer a portion of the tab to another open tab, set a tab limit that is not to be exceeded, set notifications to close the tab at a predefined time, set notifications to receive when the tab is within a user-defined percentage or amount of a set tab limit, transfer a portion of the tab to a newly created and opened tab, transfer the tab to a different order opened within the venue (such as when the user is in a bar and wants to transfer the tab to the restaurant when his/her seat becomes available for seating in the restaurant), and close out the tab and/or pay for the tab. These interface options are communicated to the tab manager131, which interfaces with the POS software at the venue on the POS devices130for processing the interface options selected by the user. This provides a mechanism for users to control their tabs with venues. The user is no longer a passive participant but becomes an active participant that can control the tab and receive real time notifications of the tab. In an embodiment, when something is placed on the tab through orders communicated to the waiter or through the interface by the users or members of the users' parties, the tab manager131sends a real time text or application notification to the user device120. 
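A highly simplified sketch of a tab manager along these lines is shown below, covering tab number generation, association of the tab number with the texting device's phone number, and a running-total notification with an optional tab limit. All class, method, and field names are illustrative assumptions and do not reflect the patent's implementation or any particular POS API.

```python
# Sketch only: a toy tab manager that opens a tab for a phone number and returns the
# notification text that would be pushed when an item is added to the open order.

import uuid
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Tab:
    tab_number: str
    phone_number: str                                  # sender of the SMS that opened the tab
    items: List[tuple] = field(default_factory=list)   # (description, price)
    limit: Optional[float] = None

    def total(self) -> float:
        return sum(price for _, price in self.items)

class TabManager:
    def __init__(self):
        self.tabs: Dict[str, Tab] = {}

    def generate_tab_number(self, table: Optional[str] = None) -> str:
        # A short token, optionally suffixed with the table number for table-based QR codes.
        token = uuid.uuid4().hex[:6].upper()
        return f"{token}-{table}" if table else token

    def open_tab(self, tab_number: str, phone_number: str) -> Tab:
        tab = Tab(tab_number, phone_number)
        self.tabs[tab_number] = tab
        return tab

    def add_item(self, tab_number: str, description: str, price: float) -> str:
        tab = self.tabs[tab_number]
        tab.items.append((description, price))
        message = f"Added {description} (${price:.2f}); running total ${tab.total():.2f}"
        if tab.limit is not None and tab.total() >= tab.limit:
            message += " - tab limit reached"
        return message   # in practice sent as a text or mobile application notification

manager = TabManager()
number = manager.generate_tab_number(table="12")
tab = manager.open_tab(number, "+15555550123")
tab.limit = 50.0
print(manager.add_item(number, "House IPA", 8.50))
```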
This control is particularly useful for individuals who are responsible for a tab of a group of people, such as parties, where the members are dispersed and ordering at will. The tab control is also useful to quickly close out a tab so the user can exit the venue, which can be particularly problematic in some venues. The information returned to the user through the mobile device120may also include additional information useful to the user, such as a time that an order was placed. This allows the user to demonstrate to the waiter that the length of time it took to receive an order item was excessive. These metrics on the time the order was placed and when it was filled may also be maintained by the tab manager131and used internally within the venue for evaluating the efficiencies of the entire venue with respect to certain days of the week, certain traffic volume of customers, certain times of the day, and the like. So, the metrics can be useful to the user in real time and also useful to the venue for efficient operation of the venue. Interaction for processing the options of the user through the interface can occur through an Application Programming Interface (API) between the tab manager131and the POS transaction and ordering software services. The interface for the user can be an API that monitors and responds to texts or, as stated before, a Web-based set of browser pages for the user to interact with the tab manager131. Also, the API may be between a mobile application on device120and the tab manager131. In an embodiment, preset notifications for the tab manager131are set for a user. The user can change these settings through interaction with the tab manager (mobile application, web browser, and/or texts). In an embodiment, the tab manager131also permits user registration through the interface for venue loyalty points, advertisements, registration of payment methods, and the like. In an embodiment, the tab manager131links an open tab once the user identity is known, such as after a tab is opened and the waiter obtains loyalty information for the order associated with the tab. In an embodiment, the tab manager131links an open tab when the tab is paid and closed based on a payment method being associated with the customer. In an embodiment, the tab manager131links the open tab with the user based on the phone number that is known for the customer and used by the customer as the device120. In an embodiment, the tab manager131creates an anonymous customer account for access to the tab manager131based on the customer's phone number for communication with the tab manager131. In this way, when the customer returns, the tab manager131has previous tabs (accessible through the API between the tab manager131and the POS ordering and transaction software) and can retain preference settings for the customer even when the identity of the customer is unknown. In an embodiment, the tab manager131can provide interface options for the customer to view, through the mobile device120, a previous tab history for the customer. In an embodiment, the tab manager includes interface options for the customer to select and automatically order from selected items of the history. In an embodiment, the user, through any user-operated device120, registers with the system100through a website or mobile application and includes in the registration particulars of the user and user-operated device identifiers (phone numbers) along with, optionally, payment methods of the user, such as credit card, PayPal® accounts, and others. 
This permits automatic recognition of the user through the user-operated device120, allows history to be maintained for tab-based orders, allows automatic payment of bills associated with open orders tied to unique tab numbers and tabs maintained by the user with the venue, allows loyalty rewards from the venue and/or the system100, and allows delivery of targeted marketing to the user.

In an embodiment, the communication between the user-operated device120and the tab manager131can occur through one or more of: a mobile application executing on the device120, SMS texting, other messaging-based systems besides SMS texting (such as social media: FaceBook®, Instagram®, Twitter®, Slack®, and others), and/or automated chat bots that are responsive to user interactions within a particular messaging platform, capable of integrating user interaction from a first messaging platform type across one or more other disparate messaging platform types, or capable of integrating user interaction over a messaging platform with back-end and external services associated with the system100.

In an embodiment, the POS ordering and transaction software sends a notification to the tab manager131when each new item is placed on an open order to which the unique tab number is associated, as one mechanism for integrating the tab manager131with the POS ordering and transaction software.

In an embodiment, an automated chat bot (as described in a previous embodiment above) is a front-end interface to the tab manager131. The user operating the device120interacts with the chat bot to control the open order associated with the user and the unique tab number. The chat bot is accessed by the user through any user-selected messaging platform interface. The chat bot translates interaction between the messaging platform and an API associated with the tab manager131and translates responses from the tab manager API back to the user-selected messaging platform being used by the user on the device120.

The embodiments presented in theFIG.1and other embodiments are now discussed with reference to theFIGS.2-4.

FIG.2is a diagram of a method200for providing bi-directional real-time tab control, according to an example embodiment. The software module(s) that implements the method200is referred to as a "tab manager." The tab manager is implemented as executable instructions programmed and residing within memory and/or a non-transitory computer-readable (processor-readable) storage medium and executed by one or more hardware processors of a hardware computing device. The processors of the device that executes the tab manager are specifically configured and programmed to process the tab manager. The tab manager has access to one or more networks during its processing. The networks can be wired, wireless, or a combination of wired and wireless.

In an embodiment, the device that executes the tab manager is any of the POS devices130. In an embodiment, the device that executes the tab manager is a cloud computing environment. In an embodiment, the device that executes the tab manager is a server. In an embodiment, the tab manager is all of or some combination of the tab generator110and the tab manager131.

At210, the tab manager delivers a unique tab number to a mobile device being operated by a user at an establishment. According to an embodiment, at211, the tab manager delivers the unique tab number in response to receiving an SMS text message.
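The chat-bot front end can be pictured with a small sketch. The following Python is a hedged illustration only: the base URL, endpoint paths, and command keywords are assumptions, not an API disclosed here. It shows a bot translating free-text messages from any messaging platform into calls against a hypothetical tab manager REST interface and turning the responses back into reply text.

```python
# Minimal sketch (endpoint paths and commands are assumptions): a chat-bot
# front end translating platform-agnostic messages into tab manager API calls.

import json
import urllib.request

TAB_API = "https://tabmanager.example.com/api"   # hypothetical base URL

def call_api(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        f"{TAB_API}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def handle_message(tab_number: str, text: str) -> str:
    """Map a chat message to a tab manager action and build the reply text."""
    words = text.strip().lower().split()
    if words[:1] == ["total"]:
        data = call_api("/tabs/total", {"tab": tab_number})
        return f"Your running total is ${data['total']:.2f}."
    if words[:1] == ["limit"] and len(words) == 2:
        call_api("/tabs/limit", {"tab": tab_number, "limit": float(words[1])})
        return f"Tab limit set to ${float(words[1]):.2f}."
    if words[:1] == ["close"]:
        data = call_api("/tabs/close", {"tab": tab_number})
        return f"Tab closed. Final bill: ${data['total']:.2f}."
    return "Try: 'total', 'limit <amount>', or 'close'."
```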
In an embodiment, at212, the tab manager delivers the unique tab number in response to a website page being activated from the mobile device, such as when the mobile device is operated to scan a QR code. In an embodiment, at213, the tab manager delivers the unique tab number in response to a mobile application request from a mobile application processing on the mobile device.

At220, the tab manager assigns the unique tab number to an open order of a user that is operating the mobile device. In an embodiment, at221, the tab manager assigns the unique tab number to the open order in response to receiving an SMS text message from the mobile device with the unique tab number. In an embodiment, at222, the tab manager assigns the unique tab number to an open order in response to receiving an open order number and the unique tab number from a POS interface. In an embodiment, at223, the tab manager assigns the unique tab number to the open order in response to receiving an open order number from the mobile device.

At230, the tab manager provides an interface to the mobile device for bi-directionally controlling the open order in real time. According to an embodiment, at231, the tab manager interacts with a POS interface to provide control to the user through the provided interface to the mobile device. In an embodiment of231and at232, the tab manager provides interface options to the mobile device for one or more of: placing an order for an item, placing a price limit on the open order, receiving user-defined notifications, closing a bill associated with the open order, and paying for the open order. In an embodiment of232and at233, the tab manager provides metadata with some requested information from processing the interface options (such as the time a specific item was ordered on the open order, and the like). In an embodiment of233and at234, the tab manager provides additional interface options to recall previous closed order history for the user. In an embodiment of234and at235, the tab manager provides metrics relevant to the open order during the open order and when the open order is closed.

FIG.3is a diagram of another method300for providing bi-directional real-time tab control, according to an example embodiment. The software module(s) that implements the method300is referred to as a "tab generator." The tab generator is implemented as executable instructions programmed and residing within memory and/or a non-transitory computer-readable (processor-readable) storage medium and executed by one or more hardware processors of a hardware device. The processors of the device that executes the tab generator are specifically configured and programmed to process the tab generator. The tab generator has access to one or more networks during its processing. The networks can be wired, wireless, or a combination of wired and wireless.

The tab generator presents another, and in some ways enhanced, perspective of the method200.

In an embodiment, the tab generator is the tab generator110. In an embodiment, the device that executes the tab generator is any of the POS devices130. In an embodiment, the device that executes the tab generator is a cloud computing device. In an embodiment, the device that executes the tab generator is a server.

At310, the tab generator receives a request for a unique tab number.
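As a hedged sketch of the assignment step (220-223), the following Python keeps a two-way mapping between a delivered tab number and a POS open order number, whichever side supplies the pairing. The class and field names are assumptions for illustration only.

```python
# Minimal sketch (assumed data shapes): associating a delivered unique tab
# number with a POS open order, whether the pairing arrives from the POS
# interface (order number + tab number) or from the mobile device itself.

class TabAssignment:
    def __init__(self):
        self.tab_to_order: dict[str, str] = {}
        self.order_to_tab: dict[str, str] = {}

    def assign(self, tab_number: str, open_order_number: str) -> None:
        """Link the unique tab number to the open order (bi-directional lookup)."""
        self.tab_to_order[tab_number] = open_order_number
        self.order_to_tab[open_order_number] = tab_number

    def order_for_tab(self, tab_number: str) -> str | None:
        return self.tab_to_order.get(tab_number)

    def tab_for_order(self, open_order_number: str) -> str | None:
        return self.order_to_tab.get(open_order_number)

assignments = TabAssignment()
assignments.assign(tab_number="4217-12", open_order_number="POS-000981")
assert assignments.tab_for_order("POS-000981") == "4217-12"
```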
According to an embodiment, at311, the tab generator receives the request in response to one of: a scan code provided from the mobile device, an SMS text message received from the mobile device, and a message sent from a mobile application processing on the mobile device.

At320, the tab generator generates the unique tab number in response to the request. In an embodiment, at321, the tab generator receives with the request a code representing a table number at an establishment and includes the table number in the unique tab number. In an embodiment, at322, the tab generator links the unique tab number to an open order at an establishment. In an embodiment of322and at323, the tab generator acts as an interface between a POS interface handling the open order and the mobile device.

At330, the tab generator provides the unique tab number to a mobile device operated by a user for the user to control the open order in real time via interfaces provided to the mobile device.

FIG.4is a diagram of another system400for bi-directional real-time tab control, according to an example embodiment. The system400includes a variety of hardware components and software components. The software components of the system400are programmed and reside within memory and/or a non-transitory computer-readable medium and execute on one or more hardware processors of a hardware device. The system400communicates over one or more networks, which can be wired, wireless, or a combination of wired and wireless.

In an embodiment, the system400implements all or some combination of the processing discussed above with theFIGS.1-3. In an embodiment, the system400implements, inter alia, the method200of theFIG.2. In an embodiment, the system400implements, inter alia, the method300of theFIG.3.

The system400includes a POS terminal401, and the POS terminal401includes a tab manager402. The tab manager402is configured to: 1) execute on at least one hardware processor of the POS terminal401; 2) assign unique tab numbers to mobile devices operated by users; 3) link each unique tab number to a specific open order associated with a specific user; and 4) provide an interface to the mobile devices for bi-directionally controlling the open orders.

In an embodiment, the tab manager402is further configured to: 5) provide at least one interface option to split items on a particular open order to a different or a newly created open order and transfer the particular open order to a different open order. In an embodiment, the tab manager402is further configured to: 6) interact in real time and act as an intermediary between the interface and a POS ordering and transaction interface of the POS terminal401.

In an embodiment, the tab manager402is the tab manager131. In an embodiment, the tab manager402is the method200. In an embodiment, the tab manager402performs some or all of the processing of the tab generator110and the method300.

It should be appreciated that where software is described in a particular form (such as a component or module) this is merely to aid understanding and is not intended to limit how software that implements those functions may be architected or structured. For example, modules are illustrated as separate modules but may be implemented as homogenous code or as individual components; some, but not all, of these modules may be combined; or the functions may be implemented in software structured in any other convenient manner.
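The generation step (320-321) can be pictured with a minimal sketch: a venue-unique number with the table number appended as the last fixed-width digits when a table code accompanies the request. The exact format shown is an assumption, not the disclosed format.

```python
# Minimal sketch (number format is an assumption): generating a unique tab
# number and, when a table code accompanies the request, appending the table
# number as the last fixed-width digits of the tab number.

import itertools

_counter = itertools.count(1)

def generate_tab_number(table_number: int | None = None, table_digits: int = 3) -> str:
    """Return a venue-unique tab number, optionally suffixed with the table."""
    base = f"{next(_counter):06d}"           # monotonically increasing core
    if table_number is not None:
        return f"{base}{table_number:0{table_digits}d}"
    return base

print(generate_tab_number())                 # e.g., '000001'
print(generate_tab_number(table_number=17))  # e.g., '000002017' -> table 17
```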
Furthermore, although the software modules are illustrated as executing on one piece of hardware, the software may be distributed over multiple processors or in any other convenient manner. The above description is illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of embodiments should therefore be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the foregoing description of the embodiments, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting that the claimed embodiments have more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Description of the Embodiments, with each claim standing on its own as a separate exemplary embodiment.
21,771
11861743
Embodiments of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein showings therein are for purposes of illustrating embodiments of the present disclosure and not for purposes of limiting the same.

DETAILED DESCRIPTION

Provided are methods for communication of orders and payments in a drive through using wireless beacons. Systems suitable for practicing methods of the present disclosure are also provided.

Various merchant locations may provide short range wireless communications with a device, such as through beacons using Bluetooth Low Energy (BLE), LTE Direct, or other communication protocol. These beacons may be set up at a merchant location, such as a merchant's drive through, and communicate with devices to alert users of check-in services through their device. The beacons may provide additional functionality, such as establishing a connection with a device or server entity to complete transactions, including ordering and payment services. The beacons may provide communications to the users' devices directly, including information stored in the beacons. The beacons may also provide communication with a device attached to, or in communication with, the beacon, such as a device of a merchant.

A merchant may offer a drive through at the merchant's location where a user may place and/or pick up an order while the user is in their vehicle. Additionally, the merchant may offer check-in services through one or more short range wireless beacons established in the drive through for the merchant. For example, merchants may correspond to fast food restaurants, banks, pharmacies, etc. These beacons at the merchant may utilize short range wireless communications to communicate with a device of the user. For example, the beacons may be established at an entry to the drive through, in individual lanes of a multiple lane drive through, next to a menu of available items from the merchant, and/or near an ordering intercom for the drive through. The beacons may employ Bluetooth Low Energy (BLE), LTE Direct, or another communication protocol to emit a communication signal receivable by the user's device. The communication may include an identifier for the beacon, the user, the merchant, and/or a payment provider.

The user's device may be set up to passively monitor for BLE communications. When the device detects the signal and verifies the one or more identifiers, both the device and the beacon may ramp up in power and establish a connection, where the connection may further enable the device to communicate with the merchant and/or the payment provider. The beacon may be connected to a networked device at the merchant location, or the beacon may include network functionality to communicate with other devices and/or servers. Thus, the beacon enables the user's device to establish a connection, communicate check-in information (e.g., an identifier for the user), and/or complete a check-in with the merchant. The check-in may be completed automatically when the user's device is in range of the beacon, or may be completed after prompting the user to check in when the user's device is in range of the beacon.
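As a hedged illustration of the identifiers a beacon might advertise and the device-side verification that precedes a connection, the following sketch uses assumed field names and example identifier strings; it is not a BLE stack and not the disclosed implementation.

```python
# Minimal sketch (field names and identifiers are assumptions): the identifiers
# a drive-through beacon might advertise, and the device-side check deciding
# whether to ramp up and connect for a check-in.

from dataclasses import dataclass

@dataclass(frozen=True)
class BeaconAdvertisement:
    beacon_id: str                    # identifies this beacon / lane
    merchant_id: str                  # identifies the merchant location
    payment_provider_id: str | None = None

KNOWN_MERCHANTS = {"merchant-104"}    # merchants the app recognizes
TRUSTED_PROVIDERS = {"provider-170"}  # payment providers it trusts

def should_check_in(adv: BeaconAdvertisement) -> bool:
    """Passive-monitoring decision before establishing a connection."""
    if adv.merchant_id not in KNOWN_MERCHANTS:
        return False
    if adv.payment_provider_id and adv.payment_provider_id not in TRUSTED_PROVIDERS:
        return False
    return True

adv = BeaconAdvertisement("beacon-144-lane-1", "merchant-104", "provider-170")
print(should_check_in(adv))           # True -> proceed to ramp up and connect
```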
Once the merchant has established at least one wireless beacon at the drive through, the wireless beacon(s) may connect to the user's device when the device is in proximity to the wireless beacon(s). For example, a wireless beacon may broadcast the identifier that initiates a check-in within an area around the wireless beacon. Thus, as the user's device enters that area, the device may connect to the wireless beacon and/or initiate a check-in process. The wireless beacons may be range limited to correspond to a specific area of the merchant's drive through, such as an ordering intercom/menu and/or a specific lane of a multilane drive through. This may be done by adjusting the power of the signal emitted by the beacon so that devices outside of a radius surrounding the beacon will not pick up the identifier/check-in request and connect to the beacon. Thus, only devices in a certain range (e.g., a size of coverage for a vehicle detected by the merchant) may connect to the beacon. Moreover, the merchant may implement measures to limit the range of the wireless beacon, including placement of the wireless beacon and construction of the drive through. The beacon may further include directionality such that the beacon may connect to vehicles entering the drive through or a section of the drive through, and disconnect as vehicles exit the drive through.

Once the user's device connects to the beacon, various transactions may be initiated, accessed, and/or completed using the device. For example, if the beacon is near a menu of available items for the merchant, the user may utilize the device to enter and submit an order. The wireless beacon may provide an interface for searching, selecting, and/or viewing the menu of available items and/or services. The device may display the order to the user and may update the order as the user adds, removes, and/or changes items/services in the order. Moreover, if the menu displayed in the drive through includes a nearby ordering intercom, the user may submit items/services for the order using the intercom, which may be reflected on the device. Furthermore, an ordering display device may be established in the drive through to display the order to the user and reflect changes made to the order by the user through the device. The display device may assist the merchant in accurately taking the order from the user. Thus, the order as seen by the merchant and displayed on the ordering display device may be matched with the order displayed to the user on the user's device. This allows the user (or other users in the vehicle) to submit orders using both the intercom and the user's device.

The wireless beacon may also connect with a plurality of users' devices. For example, a vehicle may include more than one user, each having their device (e.g., mobile phone). The vehicle may also have a main device, such as a heads up display or console computing system mounted inside the vehicle. The wireless beacon may display the order on each device and allow each device to edit the order. Thus, each user in the car may submit their own order and customize their items/services to their preferences. Additionally, changes to the orders may be reflected on each user's device as well as the ordering display device to ensure accuracy of the order. To prevent devices in other vehicles or surrounding the vehicle from connecting to the wireless beacon, the merchant may detect a size and/or shape of the vehicle using sensors, weight sensors, cameras, or other devices.
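One way to picture the range limiting described above is a mapping from detected vehicle size to beacon transmit power; the following sketch uses illustrative power values that are assumptions only, not figures from this disclosure.

```python
# Minimal sketch (power values are illustrative assumptions): adjusting a
# beacon's transmit power so its connection radius roughly covers only the
# vehicle detected at the ordering station.

def tx_power_for_vehicle(vehicle_length_m: float) -> int:
    """Map a detected vehicle length to an approximate BLE TX power (dBm).

    Shorter vehicles get a weaker signal so devices in neighboring lanes or
    trailing cars do not receive the check-in request.
    """
    if vehicle_length_m <= 4.5:      # compact car
        return -20
    if vehicle_length_m <= 6.0:      # full-size car / pickup
        return -16
    return -12                       # van, truck, or long vehicle

# Example: a sensor/camera estimate of 5.2 m selects a mid-range power level.
print(tx_power_for_vehicle(5.2))     # -16
```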
The size of the vehicle may affect a range within which the wireless beacon may connect to devices (e.g., a range to transmit identifiers or other check-in requests). Once an order is submitted to the merchant, the user or users may complete payment for the order using the device(s). Payment may be provided using a payment account with a payment provider or other payment instrument (e.g., cash, payment card, etc.). The wireless beacon may display a total to the user(s) through the device, where the user may select the payment instrument. If more than one user is submitting payment for the total (e.g., a split payment), each device for the user may be utilized to submit part of the payment. Moreover, each user may utilize their device to view their respective share of the total, such as by selecting items the user wishes to pay for or receiving their respective share from the merchant through the wireless beacon. Payment may be issued to the merchant through the wireless beacon or may be issued to the merchant over a network connection.

In certain embodiments, the order may be submitted by the user prior to arriving at the drive through. Thus, when the user's device connects to the wireless beacon, the order may be populated on the device and/or ordering display device in the drive through for editing and submission for preparation. In other embodiments, the common and/or past orders of the user may be presented to the user when the user arrives at the drive through so that the user may select to submit their "regular" order. The common and past orders may be determined using an identifier of the user used in the past transactions or through transaction histories in a user/payment account of the user.

FIG.1is a block diagram of a networked system100suitable for implementing the processes described herein, according to an embodiment. As shown, system100may comprise or implement a plurality of devices, servers, and/or software components that operate to perform various methodologies in accordance with the described embodiments. Exemplary devices and servers may include stand-alone and enterprise-class servers, operating an OS such as a MICROSOFT® OS, a UNIX® OS, a LINUX® OS, or other suitable device and/or server based OS. It can be appreciated that the devices and/or servers illustrated inFIG.1may be deployed in other ways and that the operations performed and/or the services provided by such devices and/or servers may be combined or separated for a given embodiment and may be performed by a greater number or fewer number of devices and/or servers. One or more devices and/or servers may be operated and/or maintained by the same or different entities.

System100includes a user102, a merchant104, a device110, a merchant drive through structure140having an ordering display142and a wireless beacon144, a merchant device150, and a payment provider server170in communication over a network180. User102, such as a consumer or other potential purchaser, may arrive at a merchant location for merchant104that has a drive through. Device110may establish a connection with wireless beacon144at the drive through. User102may then submit an order for fulfillment to merchant104using device110over the connection between device110and wireless beacon144. Additionally, payment provider server170may provide payment services between device110and merchant device150.
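The "regular" order determination can be sketched as picking the most frequent order from a transaction history; the history shape and example items below are assumptions used only to illustrate the idea.

```python
# Minimal sketch (assumed transaction-history shape): deriving a user's
# "regular" order from past orders so it can be offered when the device
# connects to the beacon at the drive through.

from collections import Counter

past_orders = [                        # would come from the user's transaction history
    ("cheeseburger", "fries", "cola"),
    ("cheeseburger", "fries", "cola"),
    ("chicken sandwich", "salad"),
    ("cheeseburger", "fries", "cola"),
]

def regular_order(history: list[tuple]) -> tuple:
    """Return the most frequently placed past order."""
    counts = Counter(history)
    order, _ = counts.most_common(1)[0]
    return order

print(regular_order(past_orders))      # ('cheeseburger', 'fries', 'cola')
```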
Device110, ordering display142, wireless beacon144, merchant device150, and payment provider server170may each include one or more processors, memories, and other appropriate components for executing instructions such as program code and/or data stored on one or more computer readable mediums to implement the various applications, data, and steps described herein. For example, such instructions may be stored in one or more computer readable media such as memories or data storage devices internal and/or external to various components of system100, and/or accessible over network180. Device110may be implemented using any appropriate hardware and software configured for wired and/or wireless communication with wireless beacon144, merchant device150, and/or payment provider server170. For example, in one embodiment, device110may be implemented as a personal computer (PC), a smart phone, laptop computer, wristwatch with appropriate computer hardware resources, eyeglasses with appropriate computer hardware (e.g. GOOGLE GLASS®), or other wearable computing device, a computing device mounted within a vehicle (e.g., a console or heads up display computing device in a vehicle), and/or other types of computing devices capable of transmitting and/or receiving data, such as an IPAD® from APPLE®. Although a user device is shown, the user device may be managed or controlled by any suitable processing device. Although only one user device is shown, a plurality of user devices may function similarly. Device110ofFIG.1contains a check-in application112, an ordering application120, a payment application130, other applications114, a database116, and a communication module118. Check-in application112, ordering application120, payment application130, and other applications114may correspond to processes, procedures, and/or applications executable by a hardware processor, for example, a software program. In other embodiments, device110may include additional or different software as required. Check-in application112may be used by user102of device110to establish a connection with wireless beacon144, including a check-in with merchant104. Check-in application112may correspond to a specific application utilized by device110with wireless beacon144and/or merchant device150to complete a check-in for a location corresponding to merchant104. The check-in with merchant device150may correspond to a process to log in to a user account of user102with merchant device150(or payment provider server170if payment provider server170provides check-in services for merchant104). In other embodiments, the check-in may provide and/or verify the identity of user102, including transmission of an identifier for user102and/or device110. The check-in may be completed over network180with merchant device150. In such embodiments, check-in application112may correspond more generally to a browser application configured to communicate with merchant device150over a network connection (e.g., over a connection with network180). In various embodiments, check-in application112may also receive short range wireless communications from wireless beacon144at a location and transmit information to wireless beacon144, including check-in information for a check-in process with merchant device150(or payment provider server170if payment provider server170provides check-in services for merchant104) that associates user102with wireless beacon144. 
For example, wireless beacon144may be located in a drive through for merchant104(e.g., at an entrance to a drive through lane, merchant menu display, ordering display, ordering window/intercom, etc.) where wireless beacon144is set up to communicate with device110when device110is in proximity to wireless beacon144. Thus, wireless beacon144may be range limited to connect only with devices (e.g., device110) within a specified area, such as a radius around wireless beacon144, a distance away from wireless beacon144, and/or a signal direction for wireless beacon144. In various embodiments, wireless beacon144may connect to device110when device110is located in a vehicle that is currently located at a place for ordering from merchant104(e.g., a menu display, ordering display, and/or ordering window/intercom). Wireless beacon144may be set to be range limited using the construction of the drive through and/or the placement of wireless beacon144. Wireless beacon144may also be range limited using the signal strength of wireless beacon144, which may be adjusted as merchant104detects a size of the vehicle that device110is located in. Based on the proximity for connection to wireless beacon144, check-in application112may transmit information to wireless beacon144when user102is nearby wireless beacon144, enabling merchant device150to determine that user102is located in proximity to wireless beacon144(and thus may complete an order and payment to merchant104).

Check-in application112may execute in the background of an operating system of device110and be configured to establish connections, using communication module118of device110, with wireless beacon144. The connection may be established with or without user input from user102. For example, wireless beacon144may broadcast a token, such as a universally unique identifier (UUID), for reception by check-in application112, as will be explained in more detail herein. Check-in application112may utilize communication module118of device110to receive the token from wireless beacon144. If check-in application112acknowledges the UUID as identifying wireless beacon144, merchant device150, and/or payment provider server170(e.g., if check-in application112determines the UUID corresponds to a request to establish a communication channel and/or process and complete a check-in), check-in application112may transmit an identifier corresponding to user102and/or device110back to wireless beacon144. Check-in application112may utilize communication module118of device110to communicate with wireless beacon144(e.g., over near field communication, Bluetooth, Bluetooth Low Energy, radio, infrared, LTE Direct, or other communication protocol).

The identifier from device110may include, be transmitted with, concatenated with, or otherwise bundled with the identifier received from wireless beacon144. In other embodiments, different information may be transmitted to wireless beacon144, such as an identifier for user102, a name or other personal information for user102, an identifier used to recall or determine a previously submitted order by user102, and/or information used to determine previous or common orders for user102. Thus, the information transmitted to wireless beacon144does not need to be utilized to process and/or complete a check-in with merchant device150in all embodiments. Once a connection is established with wireless beacon144, device110may be checked-in with merchant device150if user102has not previously been checked-in.
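The reply that bundles the device's identifier with the received UUID can be sketched as follows; the token layout and the example identifiers are assumptions made for illustration, not a disclosed message format.

```python
# Minimal sketch (token format is an assumption): after the check-in
# application acknowledges a beacon's UUID, it replies with an identifier for
# the user/device bundled with the UUID it received.

import uuid

def build_check_in_response(beacon_uuid: str, user_id: str, device_id: str) -> dict:
    """Bundle the user's identifier with the beacon's UUID for the check-in reply."""
    return {
        "beacon_uuid": beacon_uuid,
        "user_id": user_id,
        "device_id": device_id,
        # concatenated form, for transports that carry a single opaque string
        "token": f"{beacon_uuid}:{user_id}:{device_id}",
    }

beacon_uuid = str(uuid.uuid4())
print(build_check_in_response(beacon_uuid, "user-102", "device-110"))
```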
The check-in process may then associate user102with wireless beacon144used to connect to device110. For example, merchant104may previously have registered wireless beacon144as located in the drive through and at a location used to submit orders for fulfillment by merchant104. Thus, merchant104is informed that user102is in the drive through and ready to order from merchant104. Merchant104may further determine that a vehicle for user102is in the drive through and detect a size for the vehicle, as will be explained in more detail herein. Thus, any other device in proximity to wireless beacon144(e.g., capable of connecting to wireless beacon144) may be determined to be located in the vehicle for user102and associated with user102. Thus, those devices may be further associated with the order submitted to merchant104. As previously discussed, in other embodiments, a check-in need not be processed and/or completed to associate user102with the drive through. Thus, other connections and data transfers to wireless beacon144may be sufficient to associate user102with the drive through. Ordering application120may correspond to, in various embodiments, an application that allows user102to view menu items/services available from merchant104and submit an order for selected items/service to merchant104for fulfillment. Thus, ordering application120may receive information from merchant device150(e.g., from wireless beacon144connected to with merchant device150and/or over network180from merchant device150). Information received from merchant device150may include menu information for items and/or services available from merchant104. For example, where merchant104is a fast food merchant with a drive through having wireless beacon144, menu information may display a list of food and drink items available from merchant104as well as price information, ingredients, nutritional information, options/customizations for the food/drink items, etc. Once device110is connected to wireless beacon144, the menu item may be populated to ordering application120so that user102may select food and drink items for purchase and submit an order for selected items. In other embodiments, the menu may be accessible from merchant device150over network180. The menu may be displayed as an interactive menu allowing user102to browse, navigate and search for items/services available from merchant104while generating, editing, and submitting an order. In various embodiments, menu information may correspond to more general information of items and/or services available from merchant104, such as prescription medication submissions, pick-up times, refills, etc., banking information, balances, etc., and/or available services (e.g., oil change, car wash, etc.). If other users are associated with user102and ordering at the same time (e.g., friends, coworkers, and/or family members in the same vehicle as user102while creating and submitting an order for fulfillment by user102), the other users may also possess devices that include a check-in application, an ordering application, and/or a payment application. Thus, the other users may utilize their respective devices to select, edit, and submit items/services for purchase in the same order with user102. User102may therefore view selected items/services by the other users in the order using ordering application120as well as a total including cost, tax, tip, and/or service charge for those items. 
In various embodiments, user102may add, remove, and/or customize the items/services submitted by the other users. For example, user102and the other users may all be given access rights to the order to generate and edit the order as user102and the other users see fit. In certain embodiments, user102or another user may be given priority access rights to have a final determination of the order, such as a parent in a car full of children. However, in other embodiments, user102may be given no rights to access, edit, and submit items/services selected by another user in an order (e.g., in a car full of coworkers where each coworker has final say in their order). Such access rights may be determined at the time of connection of device110and the other user's device to wireless beacon144or may be set by user account credentials, based on past transactions for each user, or based on user device relationships (e.g., if two devices are associated on a similar plan and one is noted as a device of a child for the parent's device on the plan, if two devices co-located or perform transactions together frequently, etc.). Ordering application120may also display common, regular and/or past orders for user102and/or other users associated with user102(e.g., users in a vehicle with user102while ordering from merchant104, children of user102, friends/family of user102, or other associated users). These regular and/or past orders may be determined using user information for user102and/or the other users, such as a user identifier, name, payment card/account information, etc., as will be explained in more detail herein. The regular/past orders may also be set by the user, such as by user102selecting a “favorite” option or feature when using ordering application120. User102may utilize ordering application120to select, view, edit (e.g., add, remove, and/or customize the items/services in the order), and/or submit one or more of the regular/past orders. Additionally, the other users may similarly select, edit, and submit the regular/past orders as their order using their respective user devices, which may appear to user102on a user interface of ordering application120, as previously discussed. The regular/past orders of the other users may similarly be populated to ordering application120and/or the devices of the other users. In various embodiments, user102may receive rewards, discounts, and/or loyalty benefits for use with merchant104. For example, user102may have discounts, such as 20% off offers, that user102may apply to an order. Such discounts may be entered and/or accessible by user102using ordering application120. Ordering application120may transmit the coupons to merchant device150for application to an order. In other embodiments, user102may also physically present the discount to merchant104and/or merchant device150to receive the discount. In additional embodiments, user102may receive rewards from previous purchases from merchant104. In such embodiments, user102may receive a discount based on a previous purchase, such as a discount incentive to receive further business from user102. Thus, such a reward may correspond to a free soda at a next visit. These rewards may be stored to a loyalty account for user102, and may further be stored and/or accessible by ordering application120. Similarly, ordering application120and/or user102may present the rewards to merchant104when submitting an order for purchase. 
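The access-rights behavior described above (a priority device with final say versus devices limited to their own items) can be sketched as a small policy check; the policy shown is illustrative only, and all names are assumptions.

```python
# Minimal sketch (policy is illustrative): access rights for a shared order,
# where one device may hold priority rights (e.g., a parent) and the others may
# edit only the items they submitted.

from dataclasses import dataclass, field

@dataclass
class SharedOrder:
    priority_device: str | None = None                 # device with final say
    items: list = field(default_factory=list)          # (device_id, item)

    def add_item(self, device_id: str, item: str) -> None:
        self.items.append((device_id, item))

    def remove_item(self, device_id: str, index: int) -> bool:
        owner, _ = self.items[index]
        # priority device may edit anything; others only their own items
        if device_id == self.priority_device or device_id == owner:
            del self.items[index]
            return True
        return False

order = SharedOrder(priority_device="device-110")
order.add_item("device-111", "milkshake")
print(order.remove_item("device-112", 0))   # False: not owner, no priority
print(order.remove_item("device-110", 0))   # True: priority device
```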
In various embodiments, user102may utilize ordering application120to determine an order for pick-up from merchant104's drive through prior to user102arriving at the drive through. For example, user102may select an order for pick-up prior to leaving an office for user102and then drive to the merchant location for merchant104. Thus, menu information for merchant104may be received over network180prior to arrival at the drive through. User102may utilize this menu information to create and submit an order to merchant104, as previously discussed. When user102connects to wireless beacon144, the previously submitted order may be recalled and displayed to user102on a display interface of ordering application120. In various embodiments, user102may review, edit, and approve the order for fulfillment by merchant104. Additionally, user102may provide payment for the order prior to arrival at the merchant location for merchant104(e.g., prior to connecting to wireless beacon144in the drive through), or when approving the order while connected to wireless beacon144. In various embodiments, orders may be submitted to merchant104and order details may also be displayed to user102using ordering display142of merchant drive through structure140, as will be explained in more detail herein. Once an order is submitted to merchant104, payment may be required for the order. Thus, ordering application may also display a total, including tax, tip, and/or service charge, for processing and payment. Additionally, ordering application120may display an amount for each item/service, including the item's pro-rata portion of the tax, tip, and/or service charge. If other users have submitted items/services in an order using their user device, the items/service submitted by each user may be sectionalized to determine those items/services portion of the payment total, including the pro-rata portion of the tax, tip, and/or service charge owed by each user's order. Payment may be provided to merchant104using cash, a payment card, or a payment account with a payment provider. Thus, in various embodiments, payment application130may be utilized to process and provide payment to merchant104. Payment application130may be used, for example, to provide a convenient interface to permit user102to select payment options and provide payment for items and/or services. For example, payment application130may be implemented as an application having a user interface enabling the user to enter payment options for storage by device110, provide payment to merchant104, and complete a transaction for the items and/or services using payment provider server170. Payment application130may be configured to provide payment to merchant104. In this regard, payment application130may correspond to an application that may provide an interface where user102may view an order for items/services submitted by user102. Additionally, user102may generate a payment request for the order to merchant104. The payment request may instruct payment provider server170to provide payment for the order to merchant104. Additionally, the payment request may include identification of a payment instrument that payment provider server170may utilize to provide the payment to merchant104. Payment application130may correspond to a dedicated application for payment provider server170(e.g., a specific device application) or may correspond to a browser application configured to view information available over the Internet or access a website corresponding to a payment provider. 
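The per-user split with pro-rata tax, tip, and service charge can be illustrated with a short calculation; the rates, rounding, and data shapes below are assumptions, not values from this disclosure.

```python
# Minimal sketch (rates and rounding are assumptions): sectionalizing an order
# by the device that submitted each item and computing each user's share of the
# total, including a pro-rata portion of tax and tip.

from collections import defaultdict

def split_totals(items, tax_rate=0.08, tip_rate=0.15):
    """items: list of (device_id, price). Returns device_id -> amount due."""
    subtotal_by_device = defaultdict(float)
    for device_id, price in items:
        subtotal_by_device[device_id] += price
    subtotal = sum(subtotal_by_device.values())
    totals = {}
    for device_id, sub in subtotal_by_device.items():
        share = sub / subtotal if subtotal else 0.0
        extras = (subtotal * tax_rate + subtotal * tip_rate) * share
        totals[device_id] = round(sub + extras, 2)
    return totals

order_items = [("device-110", 8.50), ("device-110", 2.00), ("device-111", 6.25)]
print(split_totals(order_items))   # {'device-110': 12.92, 'device-111': 7.69}
```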
The payment request may correspond to a token generated by payment application130including a payment amount and a selected payment instrument for user102. As previously discussed, the payment amount may correspond to a complete amount for the total for the order or a partial amount of the total for the order. For example, if user102is the only party ordering in one instance from merchant104or user102is providing payment for all parties ordering from merchant104during the instance, the payment amount may include the total due to merchant104. However, in other embodiments, one or more other users may split the total due for the order with user102. Thus, the payment amount may correspond to the amount user102has agreed to pay for the total for the order (e.g., a selected amount or an amount due for the items/services requested by the user). The payment instrument may correspond to an account identifier, payment card, bank account, etc. Once the payment request is generated, user102may authorize the payment request for transmission to payment provider server170in order to effectuate a payment to merchant104. Device110may transmit the payment request to payment provider server170with an identifier for merchant104in order to complete the payment to merchant104. In other embodiments, payment application130may transmit the payment request as a token with a payment instrument and identifier for user102to merchant device150for completion by merchant104. If the payment amount is a partial amount due for the total (e.g., a split of the total with other users in the vehicle with user102), the token may be transmitted to merchant device150or payment provider server170separately from the payment tokens due by the other users or bundled with the payment tokens of the other users.

Payment application130may provide payment for items using a user account with the payment provider, such as payment provider server170. Payment application130may include cross-linking, allowing user102to identify a user account through an identifier for a separate user account (e.g., identifying a user account through a debit card account number and vice versa). Payment application130may further include options to store transaction histories for purchased items, such as receipts, for later use. Thus, payment application130provides an interface enabling user102to provide proof of purchase to merchant104. In various embodiments, one or more features of check-in application112, ordering application120, and/or payment application130may be incorporated in the same application so as to provide their respective features in one application.

Device110includes other applications114as may be desired in particular embodiments to provide features to device110. For example, other applications114may include security applications for implementing client-side security features, programmatic client applications for interfacing with appropriate application programming interfaces (APIs) over network180, or other types of applications. Other applications114may also include email, texting, voice and IM applications that allow a user to send and receive emails, calls, texts, and other notifications through network180. In various embodiments, other applications114may include financial applications, such as banking, online payments, money transfer, or other applications associated with payment provider server170.
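A hedged sketch of the payment request token follows; the field names, serialization, and the omission of any signing step are assumptions for illustration, not the token format used by any particular payment provider.

```python
# Minimal sketch (token fields are assumptions): generating a payment request
# token carrying a payment amount (full or split share) and a selected payment
# instrument, for transmission to the payment provider or merchant.

import json
import time
import uuid

def build_payment_token(payer_id: str, merchant_id: str,
                        amount: float, instrument: str) -> str:
    """Serialize a payment request for transmission (signing omitted here)."""
    token = {
        "token_id": str(uuid.uuid4()),
        "payer_id": payer_id,
        "merchant_id": merchant_id,
        "amount": round(amount, 2),
        "instrument": instrument,        # e.g., account id or card reference
        "created_at": int(time.time()),
    }
    return json.dumps(token)

print(build_payment_token("user-102", "merchant-104", 12.92, "account:primary"))
```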
Other applications114may include browser, social networking, and/or mapping applications, which may also be used in conjunction with check-in application112, ordering application120, and/or payment application130. Other applications114may contain software programs, executable by a processor, including a graphical user interface (GUI) configured to provide an interface to the user. Device110may further include database116which may include, for example, identifiers such as operating system registry entries, cookies associated with check-in application112, ordering application120, payment application130, and/or other applications114, identifiers associated with hardware of device110, or other appropriate identifiers, such as identifiers used for payment/user/device authentication or identification. Identifiers in database116may be used by a payment/credit provider, such as payment provider server170, to associate device110with a particular account maintained by the payment/credit provider. Database116may include user device tokens and/or encryption keys, including an encryption key of wireless beacon144, merchant device150, and/or payment provider server170. Database116may include identifying information for tokens enabling check-in application112to identify wireless beacon144, merchant device150, and/or payment provider server170when receiving a corresponding check-in token. Additionally, database116may include data received by ordering application120and/or payment application130, including menu information, merchant information, and/or payment and transaction history information. Device110includes at least one communication module118adapted to communicate with wireless beacon144, merchant device150, and/or payment provider server170. In various embodiments, communication module118may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices. Communication module118may communicate directly with wireless beacon144using short range communications, such as Bluetooth Low Energy, LTE Direct, WiFi, radio frequency, infrared, Bluetooth, and near field communications. Merchant drive through structure140may be implemented as a physical structure at a merchant location for merchant104, such as located in, near, or corresponding to a drive through for the merchant location. In this regard, merchant drive through structure140may include physical displays having information of items and/or services available from merchant104. Merchant drive through structure140may include a physical display having the menu information for the items/service, prices for the items/service, components of the items/services (e.g., ingredients, contents, etc.), and/or customization options for the items/services. The information displayed by merchant drive through structure140may be utilized by user102to generate, select, and submit an order having items/services for fulfillment by merchant104. Although only one structure is shown, the merchant location may utilize a plurality of similar structures, for example, in separate lanes of a multilane drive through or in separate locations of the drive through. Merchant drive through structure140ofFIG.1further includes an ordering display142and a wireless beacon144. 
Ordering display142and wireless beacon144may include hardware and software necessary to execute the processes and functions as described below. In other embodiments, merchant drive through structure140may include displays, hardware, and/or software as required. Ordering display142may, in various embodiments, correspond to a visual display device such as a CRT, LED, LCD, plasma, or other display device configured to display order details to user102. Thus, ordering display142may further include necessary hardware and/or software to receive order details from device110and/or merchant device150and display the order details to user102on a display screen. As previously discussed, order details may include items/services requested by user102, modifications to the items/services (e.g., customizations including adding and removing ingredients), prices for individual items/services, an overall total for the items/services in the order, and/or other costs (e.g., tax, tip, and/or service charges). Thus, ordering display142may display the aforementioned information included in the order to user102. As the aforementioned information may also be displayed to user102in ordering application120, ordering display142may be synchronized with the order in ordering application120to reflect additions, changes, and deletions from the order. In various embodiments, ordering display142may include an intercom, microphone, or other input/output device or system (including a staff member of merchant104receiving voice requests by user102and utilizing an input/output device to enter the voice requests as requested items/services input for an order) where user102may submit requested items/services. For example, user102may make voice requests at ordering display142that may add, update, change, and/or remove items/services from an order. Such voice requests may be reflected in the order and order details displayed to user102in an application interface of ordering application120. Additionally, the voice requests may update the order and order details displayed on ordering display142. Thus, ordering display142may be utilized to provide orders to merchant104, for example, where user102is driving an older or “classic” car. In such embodiments, ordering application120may not be utilized to provide the order to merchant104, and instead user102may submit the order using the intercom. User device110may also provide payment for the order, such as by receiving a payment request token from merchant device150through wireless beacon144, completing a payment for the payment request in the received token using payment provider server170, and providing proof of payment to merchant104at a check-out and pick-up window. Proof of payment may be provided, in various embodiments, through a transaction history, identification number, or other receipt or payment documentation. Wireless beacon144may be maintained, for example, by merchant104and/or payment provider server170. Wireless beacon144may be implemented using any appropriate hardware and software configured for wireless communication with device110. For example, in one embodiment, wireless beacon144may be implemented as a dongle device including a hardware processor and a communication module, for example, connected to device at the location of merchant104. 
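Keeping the ordering display and the connected devices synchronized, whether an item arrives from the intercom or from a device, can be pictured with a small publish-subscribe sketch; the event model and names are assumptions only.

```python
# Minimal sketch (event model is an assumption): keeping the order shown on the
# drive-through ordering display and on each connected device in sync, whether
# an item arrives from the intercom (staff entry) or from a user's device.

from typing import Callable

class SynchronizedOrder:
    def __init__(self):
        self.items: list[str] = []
        self.subscribers: list[Callable[[list[str]], None]] = []  # displays/devices

    def subscribe(self, callback: Callable[[list[str]], None]) -> None:
        self.subscribers.append(callback)

    def add_item(self, item: str, source: str) -> None:
        """source is 'intercom' or a device id; both feed the same order state."""
        self.items.append(item)
        for notify in self.subscribers:
            notify(list(self.items))      # push the updated order everywhere

order = SynchronizedOrder()
order.subscribe(lambda items: print("ordering display 142:", items))
order.subscribe(lambda items: print("device 110:", items))
order.add_item("medium fries", source="intercom")
order.add_item("iced coffee", source="device-110")
```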
Wireless beacon144may also be implemented as a device incorporated within a personal computer (PC), a smart phone, laptop computer, and/or other types of computing devices capable of transmitting and/or receiving data, such as an IPAD® from APPLE®. Wireless beacon144may also act as a stand-alone device including a processor, communication module, and/or network interface component configured to communicate with device110and/or payment provider server170. Although wireless beacon144is described singly, a plurality of wireless beacons may be set up at a drive through of merchant104, such as in various lanes of a multilane drive through or at various locations in the drive through.

Wireless beacon144may be located at a physical location corresponding to merchant104. A physical location corresponding to merchant104may include a drive through and, more specifically, merchant drive through structure140. For example, wireless beacon144may be established at merchant drive through structure140, including nearby ordering display142. Wireless beacon144may be limited, either by signal range or physical boundaries, to merchant drive through structure140and/or an area corresponding to merchant drive through structure140.

Wireless beacon144ofFIG.1contains processes, procedures, and/or applications executable by a hardware processor, for example, a software program, configured to interact with device110, merchant device150, and/or payment provider server170. Thus, regardless of the implementation of wireless beacon144as discussed above, wireless beacon144may utilize a connection/check-in process and include or be connected to a communication module. In other embodiments, wireless beacon144may include additional or different hardware and software as required.

Wireless beacon144may include an application for transmitting requests to establish a connection between a device (e.g., device110) and wireless beacon144. The requests may be unique to wireless beacon144, thereby identifying wireless beacon144. Wireless beacon144may utilize short range wireless communications of wireless beacon144to transmit the requests to establish a connection, including an identifier such as a Universally Unique Identifier (UUID). If device110receives a request to establish the connection with wireless beacon144and responds with an identifier for user102/device110(potentially including the UUID and other information necessary to effectuate a check-in for user102), wireless beacon144may ramp up in power and create a connection between device110and wireless beacon144.

Wireless beacon144may transmit the request to establish the connection with wireless beacon144as a short range wireless communication (e.g., a BLE protocol communication) including a "wake up" process for check-in application112of device110and/or a token for wireless beacon144transmitting the request. In other embodiments, the request and/or connection may utilize near field communication, radio communication, infrared communication, or Bluetooth communication. Additionally, although wireless beacon144may utilize BLE protocol communications to effectuate an "always on" type service where the UUID and "wake up" process are transmitted continuously, other communication protocols used to provide an "always on" service may include QUALCOMM® LTE Direct or similar device-to-device communication technology.
BLE and LTE Direct may both be utilized to provide discovery of nearby devices to wireless beacon144(e.g., device110and/or merchant device150) and establishment of a connection for data transfers. In other embodiments, wireless beacon144may correspond to other devices, such as WiFi capable devices, near field communication devices, etc.

The request may be specific to device110by including information that is specific to user102, such as a name, identifier, or user device identifier. The information specific to user102may be determined from a user account of user102or other information previously provided to merchant device150and/or payment provider server170(e.g., an identifier for user102provided to merchant device150and/or payment provider server170). Thus, in certain embodiments, only device110will pick up and authenticate the request, for example, if user102has previously submitted an order and merchant104is expecting user102to arrive. In other embodiments, only device110(and devices in the same vehicle as device110) may pick up the request if wireless beacon144is range limited to only transmit the request to devices within an area for a vehicle in proximity to wireless beacon144. The range limitation of wireless beacon144may be fixed or may be determined based on an approximate vehicle size detected by merchant104(e.g., using merchant device150and/or scales, cameras, sensor devices, etc.). For example, one wireless beacon144established at merchant drive through structure140may be limited in range only to connect to device110if device110is located in proximity to merchant drive through structure140.

After wireless beacon144receives an identifier from device110, wireless beacon144may determine user102is in proximity to wireless beacon144. If identifiers are received from other users' devices while wireless beacon144is range limited to an area or vehicle size corresponding to user102, wireless beacon144may further determine those devices are in the same vehicle or area as user102and correspondingly connect to those devices. Wireless beacon144may pass the identifier (and any other device's identifiers where applicable) to merchant device150and/or payment provider server170to associate user102(and the other users where applicable) with the wireless beacon144. By associating user102with wireless beacon144, merchant device150and/or payment provider server170may determine user102(and the other users where applicable) is located at merchant drive through structure140and is ready to generate and submit an order to merchant104.

Wireless beacon144may utilize a communication module to pass the identifier to merchant device150, which may then pass the identifier to payment provider server170. However, in other embodiments, wireless beacon144may utilize a network connection of wireless beacon144to pass the identifier to payment provider server170directly. Thus, wireless beacon144includes a communication module adapted to communicate with device110, merchant device150, and/or payment provider server170. The communication module may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices.
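The grouping of devices that respond while the beacon is range limited to one vehicle can be sketched as a short session window; the window length and the rule used are assumptions for illustration only.

```python
# Minimal sketch (grouping rule is an assumption): while the beacon is range
# limited to the detected vehicle, any device identifiers received during that
# window are treated as belonging to the same vehicle and the same open order.

import time

class VehicleSession:
    def __init__(self, window_seconds: float = 60.0):
        self.window_seconds = window_seconds
        self.opened_at: float | None = None
        self.device_ids: set[str] = set()

    def start(self) -> None:
        """Called when the merchant detects a vehicle at the ordering station."""
        self.opened_at = time.monotonic()
        self.device_ids.clear()

    def register(self, device_id: str) -> bool:
        """Register an identifier received by the range-limited beacon."""
        if self.opened_at is None:
            return False
        if time.monotonic() - self.opened_at > self.window_seconds:
            return False                      # vehicle likely moved on
        self.device_ids.add(device_id)
        return True

session = VehicleSession()
session.start()
session.register("device-110")
session.register("device-111")
print(session.device_ids)   # both devices associated with the same order
```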
The communication module of wireless beacon144may also communicate with device110and/or merchant device150using short range communications, such as Bluetooth Low Energy, LTE Direct, WiFi, radio frequency, infrared, Bluetooth, and near field communications. Merchant device150may correspond to a device used by merchant104to view, process, and complete financial transactions for orders submitted by user102. Thus, merchant device150may be located locally to a merchant location for merchant104, such as at a drive through window or station of a drive through at the merchant location. However, merchant device150may also function remotely to the merchant location and interact with merchant104and/or merchant representatives for merchant104at the merchant location. Merchant device150may be implemented using any appropriate hardware and software configured for wired and/or wireless communication with device110, wireless beacon144, and/or payment provider server170. For example, merchant device150may be implemented as a personal computer (PC), a smart phone, laptop computer, wristwatch with appropriate computer hardware resources, eyeglasses with appropriate computer hardware (e.g. GOOGLE GLASS®), other type of wearable computing device, and/or other types of computing devices capable of transmitting and/or receiving data, such as an IPAD® from APPLE®. Although a merchant device is shown, the merchant device may be managed or controlled by any suitable processing device. Although only one merchant device is shown, a plurality of merchant devices may function similarly. Moreover, in various embodiments, one or more of the applications, processes, and/or features discussed below in reference to merchant device150may be included in payment provider server170(e.g., check-in application152where check-in services are offered to merchant104through payment provider server170), and vice versa. Merchant device150ofFIG.1contains a check-in application152, a merchant sales application160, other applications154, a database156, and a communication module158. Check-in application152, merchant sales application160, and other applications154may correspond to processes, procedures, and/or applications executable by a hardware processor, for example, a software program. In other embodiments, merchant device150may include additional or different software as required. Check-in application152may correspond to processes to complete check-in with device110for a location corresponding to merchant104(e.g., with one or more of wireless beacon144established in a merchant location for merchant104). Thus, check-in application152may correspond to the merchant device side application configured to receive check-in information from device110and complete the check-in. The check-in request may include log in information for a user account with merchant104and/or payment provider server170and thus complete the check-in with user102by verifying the account information. For example, the check-in information may include an identifier or other account information for a user/payment account of user102. However, in embodiments where a user account has not been previously established by user102, check-in application152may receive other information identifying user102, including a user name/identifier, user device identifier, an identifier for an account with another server, or other information. Such information may also be used to identify past transactions of user102with merchant104. 
The check-in information may also be utilized to pull up a previous order submitted by user102and complete a transaction for the order. For example, the check-in information may include an identifier for user102that enables merchant device150to identify a food order, prescription, or other requested order submitted by user102prior to device110connecting to wireless beacon144. The identifier received by check-in application152from device110may also be associated with an order submitted by user102while connected to wireless beacon144, allowing payment and recall of the order when necessary. Once a connection is established and/or a check-in is completed between device110and wireless beacon144, merchant sales application160may be utilized to transmit and receive information between device110and merchant device150. Merchant sales application160may provide information for available items and/or services to device110and receive an order submitted by user102, as previously discussed. Merchant sales application160may also be configured to answer queries for information (in some cases using input by merchant104), provide order limitations, and/or update the information for the available items/services (e.g., menu updates including available menu items/services). Thus, merchant sales application160may be configured to provide menu options to user102based on information available for merchant104. Merchant sales application160may also be utilized to, for example, provide a convenient interface to permit merchant104to view a submitted order, approve the submitted order, and complete a transaction for the submitted order (e.g., receive payment for the order). In this regard, merchant sales application160may display the order to merchant104so that merchant104may confirm the order. If items and/or services cannot be fulfilled in the order, merchant sales application160may also be utilized to notify user102and/or edit the order. Once the order is approved by merchant104, merchant sales application160may be utilized to request payment for the order. Payment for the order may include a request to pay a total for the order, including tax, tip, and/or service charges. As previously discussed, multiple users (e.g., user102and other users) may submit different items/services in an order using their respective devices. Thus, using the identifiers for each device connected to wireless beacon144and the items/services selected by each device, merchant sales application160may keep each user's requested items/services separate and determine their totals for display to each individual user. Merchant sales application160may present the total for the order and any requested split totals for the amounts due by each person. The split amounts due by each person may be displayed to all users on all the devices, or to each specific user on their device using the identifier for the device that is associated with their split total. Moreover, merchant sales application160may also receive different split totals from user102and the other users, for example by each user selecting either a partial amount of the total to pay, or selecting specific items/services in the order to pay. Merchant sales application160may calculate each user's pro-rata portion of tax, tip, and/or service charge, or may accept each user's selection of a partial amount of the tax, tip, and/or service charge to pay.
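The pro-rata split described in the preceding paragraph reduces to simple arithmetic. The sketch below is one assumed way to compute it; the function name, the rates, and the rounding behavior are illustrative choices, not requirements of the disclosure.

```python
# Hypothetical pro-rata split of tax, tip, and service charge across users in a vehicle.

def split_order(items_by_user, tax_rate=0.08, tip_rate=0.15, service_charge=0.00):
    """items_by_user maps a device identifier to a list of (item, price) tuples.
    Each user's share of tax, tip, and service charge is proportional to the
    subtotal of the items that user selected."""
    subtotals = {user: sum(price for _, price in items) for user, items in items_by_user.items()}
    order_subtotal = sum(subtotals.values())
    totals = {}
    for user, subtotal in subtotals.items():
        share = subtotal / order_subtotal if order_subtotal else 0.0
        totals[user] = round(
            subtotal + subtotal * tax_rate + subtotal * tip_rate + service_charge * share, 2
        )
    return totals

items = {
    "device-110": [("hamburger", 3.49), ("soda", 1.99)],
    "device-111": [("salad", 4.99)],
}
print(split_order(items))  # each user's total including their pro-rata tax and tip
```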
Merchant sales application160may receive payment for the total from device110, the other users' devices, and/or payment provider server170. In various embodiments, merchant104may also receive physical payment instruments, such as cash and/or payment cards from user102and/or the other users, in order to pay for partial amounts of the total. Thus, merchant sales application160may also be utilized to run payment cards, complete cash transactions, and/or otherwise complete payment for the order. Once payment for the order is complete, merchant sales application160may be configured to generate a transaction history for the order, including an overall receipt, receipt for partial amounts, and/or confirmation of payment(s). The transaction history and/or receipts may be provided electronically to user102and/or the other users through wireless beacon144and/or network180, or a physical copy of the transaction history and/or receipts may be provided. Merchant device150includes other applications154as may be desired in particular embodiments to provide features to merchant device150. For example, other applications154may include security applications for implementing client-side security features, programmatic client applications for interfacing with appropriate application programming interfaces (APIs) over network180, or other types of applications. In various embodiments, other applications154may include financial applications, such as banking, online payments, money transfer, or other applications associated with payment provider server170. Other applications154may contain other software programs, executable by a processor, including a graphical user interface (GUI) configured to provide an interface to the user. Merchant device150may further include database156which may include, for example, identifiers such as operating system registry entries, cookies associated with check-in application152, merchant sales application160, and/or other applications154, identifiers associated with hardware of merchant device150, or other appropriate identifiers, such as identifiers used for payment/user/device authentication or identification. In one embodiment, identifiers in database156may be used by payment provider server170to associate merchant device150with a particular account maintained by payment provider server170. Database156may also store user102's information, including check-in information, an identifier, etc., for user102, and any other users associated with user102while ordering with user102. Database156may include orders by user102and transaction histories for purchased items by user102to present proof of purchase. Merchant information, such as menu information of available items/services, may also be stored to database156. Merchant device150includes at least one communication module158adapted to communicate with device110, wireless beacon144, and/or payment provider server170. In various embodiments, communication module158may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices. Communication module158may communicate directly with wireless beacon144using short range communications, such as Bluetooth Low Energy, LTE Direct, radio frequency, infrared, Bluetooth, and near field communications. 
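The transaction history and receipts mentioned above could be assembled from the recorded payments in a straightforward way. The following sketch is hypothetical; the field names and the per-payer receipt layout are assumptions made for illustration only.

```python
# Hypothetical transaction-history sketch: once payment completes, build an
# overall record plus a per-payer confirmation for any split payments.
from datetime import datetime, timezone

def build_transaction_history(order_id, payments):
    """payments maps a payer identifier to the amount that payer covered."""
    return {
        "order_id": order_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "total_paid": round(sum(payments.values()), 2),
        "receipts": [
            {"payer": payer, "amount_paid": amount, "status": "paid"}
            for payer, amount in payments.items()
        ],
    }

history = build_transaction_history("A17", {"device-110": 3.77, "device-111": 1.71})
print(history["total_paid"])
for receipt in history["receipts"]:
    print(receipt)
```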
Payment provider server170may be maintained, for example, by an online payment service provider, which may provide payment services and/or processing for financial transactions on behalf of a user. In this regard, payment provider server170includes one or more processing applications which may be configured to interact with device110, wireless beacon144, and/or merchant device150to facilitate payment for a transaction. In one example, payment provider server170may be provided by PAYPAL®, Inc. of San Jose, CA, USA. However, in other embodiments, payment provider server170may be maintained by or include a credit provider, financial services provider, financial data provider, and/or other service provider, which may provide payment services to user102and/or merchant104. Moreover, in various embodiments, one or more of the applications, processes, and/or features discussed below in reference to payment provider server170may be included in merchant device150, and vice versa. Payment provider server170ofFIG.1includes a transaction processing application172, other applications174, a database176, and a network interface component178. Transaction processing application172and other applications174may correspond to processes, procedures, and/or applications executable by a hardware processor, for example, a software program. In other embodiments, payment provider server170may include additional or different software as required, such as a check-in application as discussed in reference to merchant device150, where such check-in processes and features are instead provided by payment provider server170. Transaction processing application172may be configured to receive information from and/or transmit information to device110and/or merchant device150for processing and completion of financial transactions. Transaction processing application172may include one or more applications to process financial transaction information from user102and merchant104by receiving a request to complete transaction for items and/or services offered by merchant104. The request may correspond to a payment from user102to merchant104. The payment may include a user account identifier or other payment information (e.g. a credit/debit card or checking account) for user102and a receiving account for merchant104. Additionally, the payment may include a payment amount and terms of payment. The payment amount may constitute the entire total for an order submitted by user102, or a partial amount of the total during a split payment transaction, as previously discussed. Transaction processing application172may complete the transaction by providing payment to merchant104through merchant104's account/payment information. Additionally, transaction processing application172may provide transaction histories, including receipts, to device110and/or merchant device150for completion and documentation of the financial transaction. For example, a transaction history may be provided to device110and/or merchant device150to allow for merchant104to view the transaction and provide the items and/or services to user102. In various embodiments, payment provider server170includes other applications174as may be desired in particular embodiments to provide features to payment provider server170. 
For example, other applications174may include security applications for implementing server-side security features, programmatic server applications for interfacing with appropriate application programming interfaces (APIs) over network180, or other types of applications. Other applications174may contain software programs, executable by a processor, including a graphical user interface (GUI), configured to provide an interface to a user. Additionally, payment provider server170includes database176. As previously discussed, user102and/or merchant104may establish one or more payment accounts with payment provider server170. User accounts in database176may include merchant/user information, such as name, address, birthdate, payment/funding information, additional user financial information, and/or other desired user data. User102and/or merchant104may link to their respective payment accounts through a user, merchant, and/or device identifier. Thus, when an identifier is transmitted to payment provider server170, e.g. from device110and/or merchant device150, a payment account belonging to user102and/or merchant104may be found. In other embodiments, user102and/or merchant104may not have previously established a payment account and may provide other financial information to payment provider server170to complete financial transactions, as previously discussed. Database176may further include additional information received from device110and/or merchant device150, such as check-in information and identifiers, merchant104's information including menu information, and transaction information for user102and merchant104. In various embodiments, payment provider server170includes at least one network interface component178adapted to communicate device110, wireless beacon144, and/or merchant device150over network180. In various embodiments, network interface component178may comprise a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency (RF), and infrared (IR) communication devices. Network180may be implemented as a single network or a combination of multiple networks. For example, in various embodiments, network180may include the Internet or one or more intranets, landline networks, wireless networks, and/or other appropriate types of networks. Thus, network180may correspond to small scale communication networks, such as a private or local area network, or a larger scale network, such as a wide area network or the Internet, accessible by the various components of system100. FIG.2is an exemplary environment with a user in a vehicle utilizing a wireless beacon to order and pay for items and/or services available from a merchant, according to an embodiment. Environment200ofFIG.2includes a user202and a device210corresponding generally to user102and device110, respectively, ofFIG.1. Additionally, environment200includes a merchant drive through structure240with an ordering display242and a wireless beacon244corresponding generally to merchant drive through structure140, ordering display142, and wireless beacon144, respectively, ofFIG.1. As shown in environment200, user202arrives at a drive through for a merchant (e.g., merchant104, not shown) in a vehicle206. User202further has device210inside vehicle206. 
Device210may correspond to a user device, such as a mobile phone, wearable computing device, tablet computer, etc. Additionally, device210may correspond to a computing device attached or connected to vehicle206, such as a heads up display computing device, console computing device, etc. Thus, when user202arrives at the drive through and device210is within range of wireless beacon244, device210and wireless beacon244may establish a connection for purposes of completing an order and payment to the merchant. As previously discussed, when device210connects to wireless beacon244, the merchant may detect a size, shape, or other approximate area coverage of vehicle206and adjust the connectivity range of wireless beacon244(e.g., the range of signals emitted by wireless beacon244). Therefore, any other devices in vehicle206that also connect to wireless beacon244may be determined to be within vehicle206and may be associated with user202, vehicle206, and device210, as well as the order submitted while device210is within range of wireless beacon244. While located at merchant drive through structure240, user202may view items and/or services offered for sale from the merchant. Thus, merchant drive through structure240includes a menu290listing available food items from the merchant forFIG.2. Items and/or services available under menu290may also populate on device210through communications received over a network connection or over a connection with wireless beacon244. When viewing the items available from the merchant under menu290, user202may form an order and submit the order to the merchant using device210. However, user202may also order, cancel, change, or modify an order (e.g., items and/or services requested in an order) through an intercom292. Intercom292may be connected to a merchant audio or audiovisual device that enables the merchant to receive voice instructions from user202for an order and enter the instructions into the order. Once items and/or services are ordered by user202, they may appear under ordering display242, for example, as order294. Thus, as shown in environment200, user202has ordered a hamburger and a soda, and has a total of $5.48. In various embodiments, order294may also appear on device210to user202so that user202may view order294and make additions, deletions, and changes. FIG.3is an exemplary system environment showing display screens for a user's device and a merchant device interacting through a wireless beacon to complete orders with a merchant, according to an embodiment. Environment300ofFIG.3includes a device310, an ordering display342, a wireless beacon344, and a merchant device350corresponding generally to device110, ordering displays142, wireless beacon144, and merchant device150, respectively, ofFIG.1. User device310displays an ordering application interface320corresponding generally to the processes and features described in reference to ordering application120ofFIG.1. Ordering application interface320includes past orders321, saved orders322, a current order323, and a submit order328option. Ordering application interface320may correspond to an interactive graphical user interface whereby a user (not shown) of user device310may make selections of items and/or services available from a merchant (not shown). Thus, ordering application interface320may display a menu of items/services available from the merchant and/or enable selection, browsing, and/or searching for items/services available from the merchant.
Information displayed in ordering application interface320may be received from wireless beacon344or, in various embodiments, over a network connection of device310(e.g., from a source over the Internet). The user may view past orders321, which may include orders previously submitted and/or fulfilled by the merchant. Past orders321may be determined using a user account for the user of device310. Additionally, the merchant and/or merchant device350may determine past orders321after an identifier is transmitted to wireless beacon344. Once orders previously submitted to the merchant are determined for the user of device310, they may be transmitted to and/or stored by device310for display to the user in ordering application interface320. Similarly, saved orders322may be determined for the user from a user account and/or identifier for the user or device310. Saved orders322may correspond to orders that the user has elected to store for later recall. For example, one or more of past orders321and saved orders322may correspond to a “favorite” or “regular” order that the user may later purchase again. This enables easy recall and selection of an order for submission to merchant device350. Current order323may include a present order for the user of device310that the user currently wishes to submit to the merchant of merchant device350and purchase. Current order323may be determined by selecting items/services available from the merchant while searching/browsing a menu of available items/services. For example, the menu of available items/services may appear under a tab or as an interactive display screen in ordering application interface320. Additionally, a search box or other browsing tools may also be displayed to the user to allow the user to find items/services. Thus, the user may make selections of desired items/services, which may then appear under current order323, allowing the user to create and view an order. In other embodiments, the user may make a selection of one or more orders viewable in past orders321and/or saved orders322. As shown in environment300, current order323includes a hamburger324, a soda325, as well as an add item326option and menu327information. Thus, the user of device310has placed hamburger324and soda325in an order that the user may submit to the merchant of merchant device350. Additionally, the user may edit, add, and/or remove items from current order323. For example, selection of hamburger324and/or soda325may allow the user to edit their ingredients, make special requests, delete the items from the order, or otherwise modify current order323. If the user wishes to add items to current order323, the user may select the option to add item326, which populates information under menu327. Add item326and menu327may correspond to lists, search boxes, interactive menus, or other interfaces enabling the user to add, view, and select items for current order323. As previously discussed, if other users are in a vehicle with the user of device310, items selected to add to current order323on their respective devices may also appear to the user under current order323. In other embodiments, current order323may apply only to the items selected by the user of device310for purchase and not include items selected by other users in the vehicle. Once the user is satisfied with current order323, the user may select a submit order328process to transmit the order to merchant device350(e.g., using wireless beacon344or over a network connection of device310) for fulfillment by the merchant.
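The ordering interface described above (a current order that can be edited, plus saved orders that can be recalled and resubmitted) maps naturally onto a small data model. The sketch below is an assumed illustration; the class and method names are not taken from the disclosure, and the submit step merely stands in for transmission to the merchant device over the beacon or a network connection.

```python
# Hypothetical sketch of an ordering interface's data model: a current order
# that can be edited, plus saved orders that can be recalled with one step.

class Order:
    def __init__(self):
        self.items = []  # list of (name, price)

    def add_item(self, name, price):
        self.items.append((name, price))

    def remove_item(self, name):
        self.items = [(n, p) for n, p in self.items if n != name]

    def total(self):
        return round(sum(price for _, price in self.items), 2)


class OrderingApp:
    def __init__(self, saved_orders=None):
        self.saved_orders = saved_orders or {}  # label -> list of (name, price)
        self.current = Order()

    def reorder(self, label):
        """Recall a saved 'favorite' or 'regular' order into the current order."""
        self.current = Order()
        for name, price in self.saved_orders.get(label, []):
            self.current.add_item(name, price)

    def submit(self):
        # Stand-in for transmitting the order to the merchant device
        # via the wireless beacon or a network connection.
        return {"items": list(self.current.items), "total": self.current.total()}


app = OrderingApp(saved_orders={"regular": [("hamburger", 3.49), ("soda", 1.99)]})
app.reorder("regular")
app.current.add_item("fries", 1.49)
app.current.remove_item("soda")
print(app.submit())
```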
While the user of device310is generating current order323, additions, deletions, and modifications to the order may appear on ordering display342for viewing by the user and other users in the same vehicle as the user. Ordering display342displays an order394and messages346. Information displayed on ordering display342may be received from device310through wireless beacon344or from merchant device350as the order is entered by a merchant for merchant device350. Thus, as the user enters information in current order323or submits the information to the merchant (e.g., through an intercom), order394displays information for the order, including hamburger324, soda325, and a total331. Total331may include a cost for order394, and may further include tax, tip, and/or service charges, in various embodiments. The merchant may also utilize ordering display342to transmit messages346to the user, including queries as to whether the user would like to modify order394, if the user is ready to submit order394, and/or instructions for payment and navigation of a drive through. User device310further includes a payment application interface330corresponding generally to the processes and features described in reference to payment application130ofFIG.1. Payment application interface330includes total331, payment instruments332, and a submit for payment334process. Total331may be imported to the application displaying payment application interface330from the application supporting ordering application interface320. Thus, total331may correspond to a total cost for current order323and order394, as previously discussed. In various embodiments, total331may also correspond to a split amount due for a total cost of current order323/order394. The user may initiate, submit for processing, and/or complete a payment for total331by selecting a payment instrument under payment instruments332, such as payment account333. Payment instruments may include payment cards, payment accounts, banking accounts, gift cards, and/or other payment and financial related information that may be utilized to provide payment to the merchant. Once payment account333is selected, the user may select the submit for payment334process, thereby submitting total331and payment account333to the merchant and/or payment provider for processing. The merchant may utilize merchant device350to view order and payment details. Once an order is submitted to merchant device350, the order may be displayed under order394. Thus, the merchant may view hamburger324and soda325required to be prepared by the merchant. The merchant may view a status361of the items/services in order394, such as a ready362and a ready363status. Status361may inform the merchant whether the items/services are ready for the user, if the items/services can be fulfilled by the merchant, and other relevant information for completing order394for the user. Order394may also be given an order number364that may assist the merchant in tracking order394. Additionally, the merchant may view a payment status for order394under payment status365. In environment300, the user has completed a payment for order394; therefore, the merchant may view payment received366status under payment status365. FIG.4is a flowchart of an exemplary process for communication of orders and payments in a drive through using wireless beacons, according to an embodiment. Note that one or more steps, processes, and methods described herein may be omitted, performed in a different sequence, or combined as desired or appropriate.
At step402, it is determined that a user is in a vehicle in a drive through of a merchant based on a first connection between a device for the user and a wireless beacon, wherein the merchant further detects the vehicle in the drive through. For example, the merchant may detect a vehicle in a drive through using video cameras, imaging equipment, image recognition, sensors, and/or scales. Thus, the merchant may also detect a size of the vehicle. If a device connects to a wireless beacon in proximity to the vehicle, it may be determined that a user with the device is in the vehicle. A range for the wireless beacon may be adjusted based on the size of the vehicle. The connection may use one of near field communication, radio communication, infrared communication, Bluetooth communication, Bluetooth Low Energy (BLE) communication, and LTE Direct communication. Additionally, the device may comprise a mobile phone device, a tablet computing device, and/or a console computing device mounted in the vehicle. Check-in information for the user is accessed, at step404, wherein the check-in information is generated from the connection between the device and the wireless beacon. The check-in information may comprise user account information for the user and/or an identifier for the user. Thus, at step406, an order submitted by the user is accessed using the check-in information. The order may be generated by the user when the user is connected to the wireless beacon or may comprise a pre-existing order generated by the user prior to arriving at the drive through in the vehicle. The order may be displayed to the user on the device of the user and/or on a merchant display device in the drive through. The user may also update the order by adding and/or removing items/services in the order or modifying items/services in the order. The order may also comprise a past order based on a previous visit by the user to the merchant, a transaction history for the user, and/or a user account for the user (e.g., favorites and/or past transactions in a user account). The user may modify the order using the device or may give voice input to an intercom that is entered by the merchant to modify the order. Additionally, if other users are in the vehicle, the other users may utilize their devices to add and/or remove items in the order and/or modify the order as appropriate. For example, it may be determined that a second user is in the vehicle based on a second connection between a second device and the wireless beacon. Thus, check-in information for the second user may be accessed and the order may be communicated to the second device. The second device may be configured to accept changes to the order by the second user. At step408, a payment for the order is processed using the check-in information and the order. The payment may further be processed using a payment account or a payment instrument provided by the user. An identifier or other information in the check-in information may be utilized to identify the payment account or the payment instrument, and may be provided as identification of the user. Additionally, if other users in the vehicle wish to split payment for the order, a first payment request may comprise a first partial payment for the order and a second payment request may be processed for a second partial payment of the order. FIG.5is a block diagram of a computer system suitable for implementing one or more components inFIG.1, according to an embodiment.
In various embodiments, the user device may comprise a personal computing device (e.g., smart phone, a computing tablet, a personal computer, laptop, a wearable computing device such as glasses or a watch, Bluetooth device, key FOB, badge, etc.) capable of communicating with the network. The service provider may utilize a network computing device (e.g., a network server) capable of communicating with the network. It should be appreciated that each of the devices utilized by users and service providers may be implemented as computer system500in a manner as follows. Computer system500includes a bus502or other communication mechanism for communicating information data, signals, and information between various components of computer system500. Components include an input/output (I/O) component504that processes a user action, such as selecting keys from a keypad/keyboard, selecting one or more buttons, image, or links, and/or moving one or more images, etc., and sends a corresponding signal to bus502. I/O component504may also include an output component, such as a display511and a cursor control513(such as a keyboard, keypad, mouse, etc.). An optional audio input/output component505may also be included to allow a user to use voice for inputting information by converting audio signals. Audio I/O component505may allow the user to hear audio. A transceiver or network interface506transmits and receives signals between computer system500and other devices, such as another user device, service device, or a service provider server via network180. In one embodiment, the transmission is wireless, although other transmission mediums and methods may also be suitable. One or more processors512, which can be a micro-controller, digital signal processor (DSP), or other processing component, processes these various signals, such as for display on computer system500or transmission to other devices via a communication link518. Processor(s)512may also control transmission of information, such as cookies or IP addresses, to other devices. Components of computer system500also include a system memory component514(e.g., RAM), a static storage component516(e.g., ROM), and/or a disk drive517. Computer system500performs specific operations by processor(s)512and other components by executing one or more sequences of instructions contained in system memory component514. Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to processor(s)512for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. In various embodiments, non-volatile media includes optical or magnetic disks, volatile media includes dynamic memory, such as system memory component514, and transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus502. In one embodiment, the logic is encoded in non-transitory computer readable medium. In one example, transmission media may take the form of acoustic or light waves, such as those generated during radio wave, optical, and infrared data communications. 
Some common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EEPROM, FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer is adapted to read. In various embodiments of the present disclosure, execution of instruction sequences to practice the present disclosure may be performed by computer system500. In various other embodiments of the present disclosure, a plurality of computer systems500coupled by communication link518to the network (e.g., such as a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the present disclosure in coordination with one another. Where applicable, various embodiments provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from the spirit of the present disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into sub-components comprising software, hardware, or both without departing from the scope of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa. Software, in accordance with the present disclosure, such as program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein. The foregoing disclosure is not intended to limit the present disclosure to the precise forms or particular fields of use disclosed. As such, it is contemplated that various alternate embodiments and/or modifications to the present disclosure, whether explicitly described or implied herein, are possible in light of the disclosure. Having thus described embodiments of the present disclosure, persons of ordinary skill in the art will recognize that changes may be made in form and detail without departing from the scope of the present disclosure. Thus, the present disclosure is limited only by the claims.
82,052
11861744
DETAILED DESCRIPTION OF THE DISCLOSURE The following description of various embodiments of the disclosure, combined with the associated drawings, enables persons of ordinary skill in the art to both practice the preferred embodiments of the disclosure, and to understand related applications and embodiments of the disclosure that may not be specifically set forth, but are encompassed by the specification and claims. Various terms are used to refer to particular system components. Different companies may refer to a component by different names—this document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . .” Also, the term “couple” or “couples” is intended to mean either an indirect or a direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections. The terminology used herein is for the purpose of describing particular example embodiments only, and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed. The terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections; however, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer, or section from another region, layer, or section. Terms such as “first,” “second,” and other numerical terms, when used herein, do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the teachings of the example embodiments. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C. In another example, the phrase “one or more” when used with a list of items means there may be one item or any suitable number of items exceeding one. Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” “top,” “bottom,” and the like, may be used herein. These spatially relative terms can be used for ease of description to describe one element's or feature's relationship to another element(s) or feature(s) as illustrated in the figures. The spatially relative terms may also be intended to encompass different orientations of the device in use, or operation, in addition to the orientation depicted in the figures. 
For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptions used herein interpreted accordingly. While the making and using of various embodiments of the present disclosure are discussed in detail below, it should be appreciated that the present disclosure provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts, goods, or services. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the disclosure and do not delimit the scope of the disclosure. The present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific example embodiments. Subject matter may, however, be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any example embodiments set forth herein; example embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. The following detailed description is, therefore, not intended to be taken in a limiting sense. General Embodiment This disclosure is, in general embodiments, a restaurant location for use at least with a mobile ordering system to provide at least more time-efficient pick up of orders by customers than with traditional ordering and pick up windows. This disclosure is, in some embodiments, a restaurant location for use at least with a mobile ordering system, comprising a restaurant building (which is, in various embodiments, a building, multiple buildings, a food truck, a food stand, some other discrete physical restaurant structure, or a combination thereof) having an order pick up window and associated order pick up area configured only for pick-up of pre-paid orders. In some such embodiments, the restaurant location further comprises a parking lot at least partially around the building, and having a drive-through lane that allows vehicles to pull adjacent to the order pick up window. In some embodiments, the restaurant location also comprises a plurality of queuing spaces, being parallel, single-vehicle, parking spaces in the parking lot, each queuing space opening directly into the drive-through lane such that each queuing space provides independent, non-sequential access to the drive-through lane. In some embodiments, the restaurant location further comprises an order status display positioned and sized to be visible both to customers in the queuing spaces, and to customers in or approaching the drive-through lane but not yet approaching the order pick up window. 
In some embodiments, each queuing space opens directly into at least one drive-through lane such that each queuing space provides independent, non-sequential access to that space's drive-through lane; the order status display is positioned and sized to be visible to customers in the queuing spaces and to customers in or approaching a drive-through lane but not yet approaching the order pick up window; and a merging means is provided to direct vehicles from the several drive-through lanes to approach the order pick up window. In some embodiments, the restaurant location comprises at least one drive-through order placement station separate from and before the order pick up window and accessible from at least one drive-through lane. The present system and methods provide a significant advantage in accurate expected wait times. In particular, it provides an advantage over the typical requirement for customers that desire an estimated wait time to call or walk in to the restaurant location, and receive a mentally calculated estimate from wait staff. The current order system allows customers to receive an estimated order-ready time before placing their order and without having to directly contact the restaurant. Furthermore, it allows a more accurate order ready time, so the customer is neither inconvenienced by an over-ambitious estimate that is too short and requires the customer to wait longer than expected, nor discouraged from ordering by an overly-conservative estimate. In some embodiments, wait staff greet the customer by name, verbally confirm the order, or some combination thereof, before handing the order to the customer. The ordering system and non-sequential pick up do not eliminate human interaction. Instead, they minimize frustrating human interaction (such as trying to accurately place an order over a drive-through microphone), and allow the human interaction to be more positive, such as greeting a customer by name and with a smile, and handing a fresh, accurate order to a customer. This present disclosure is, in general embodiments, an ordering system for use at least with one or more restaurant customers' mobile computing devices to provide at least more time-efficient pick up of an order involving multiple customers than with traditional ordering and pick up practices. This present disclosure is, in one embodiment, a computer-implemented method including receiving, by a computing device, over a network, one or more contact information pertaining to one or more customers; receiving a selection of a restaurant to initiate an order for the one or more customers; transmitting, via the computing device and based on the one or more contact information, one or more notifications to one or more customer computing devices associated with the one or more customers, wherein the one or more notifications comprises a prompt to at least a menu associated with the restaurant; receiving, by the computing device, one or more selections of items ordered from the menu associated with the restaurant; and transmitting, by the computing device, the one or more selections of items included in the order to a computing device associated with the restaurant to cause the items to be prepared. The present system and methods provide a significant advantage in accurate expected wait times.
In particular, it provides an advantage over the typical requirement for customers that desire an estimated wait time to call or walk in to the restaurant location, and receive a mentally calculated estimate from wait staff. The current order system allows customers to receive an estimated order-ready time before placing their order and without having to directly contact the restaurant. Exemplary Advantages Various embodiments of the present disclosure provide a multitude of advantages over current ordering, production, and delivery systems. A non-exhaustive, non-limiting list of example advantages of some embodiments is provided hereafter. Menu Quality In various embodiments, the system and process presented herein provide restaurants the ability to offer customers a combination of a higher quality menu typically associated with greater wait times, and the ordering and pick up convenience typically associated with lower quality fast food menus. Heretofore, restaurants with a high percentage of drive-through orders (typically greater than fifty percent) had to restrict their menu to orders that could be prepared quickly to prevent the line from growing uncontrollably and excessive wait times between order placement and order pick up. The present system and methods allow the longer preparation times, such as are necessary in ‘home-cooking,’ in healthier menus, and in more customized or more variable menus, to be accounted for while preserving customer convenience, by pre-ordering. It also allows the greater variations in preparation times associated with a varied menu to be accounted for by a production timing and slip-logic order system. Accordingly, the restaurant prepares orders more efficiently, eliminating inefficiencies in order preparation time, orders not ready when expected, and orders ready substantially before expected. The greater efficiency allows the effect of longer preparation times to be minimized by removing ‘overhead’ time that was lost in inefficiency in previous systems and methods, thereby reducing the impact of the longer preparation time on the time-to-ready that affects customers. Customer Interaction and Convenience Embodiments of the present disclosure provide further advantages in accommodating customer personalities, moods, etc., and in providing customers with a more relaxed and friendly order placement and pick up experience. In particular, the ability to pre-order on a mobile device, computer, or kiosk, allows a customer to explore the menu at their leisure instead of being pressured to quickly make decisions by other customers waiting behind them, or by a hurried wait staff waiting to receive their order. Indeed, in a prototype of an embodiment of this disclosure using a non-sequential order pick up lane and window, and a mobile ordering system, it was found that mobile orders resulted in an increased engagement of customers with the menu, increased amount of time customers spent creating an order, a greater level of customization, and an increased number of menu options added to orders.
Many customers are uncomfortable with excessive interaction: they may feel awkward, they may fear engaging a new restaurant because they are not familiar with it, they may tend to be introverted and prefer limiting unnecessary engagement with random people, they may have had a stressful day and not feel like the extra effort to engage people at the moment, they may be in a hurry and find it more efficient to interact with a device as it is convenient for them rather than dedicating the time to go place their order in person, etc. Whatever the reason, allowing pre-ordering, especially through a website, mobile device, etc. allows the customer to place an order without a) the stress of interacting with an often hurried order taker, and b) dedicating the time to place an order and wait for order preparation. The combination of slip-logic queuing, accurate estimation of order ready time, and notifying customers when the order is ready, enables pre-ordering to work smoothly and efficiently, without previous problems associated with pre-ordering, such as customers forgetting their order, losing track of time, or having to ‘activate’ their order upon arrival and wait for the order to be prepared. Embodiments of the present disclosure offer distinct advantages to customers in convenience and speed, as referred to elsewhere herein. An order pick up window(s) configured solely for pre-orders, especially mobile orders, alleviates the frustration to a customer of pre-ordering, and then being trapped in line behind non pre-orders. Additionally, pre-ordering through a customer-centric mobile application or website allows convenient re-ordering. For example, if a customer regularly places one or several orders, the customer is able, in some embodiments, to access their account and simply ‘re-order’ instead of having to build the order time and time again, or to verbally dictate their order again and again to an order taker at a drive-through location. Restaurant Advantages Furthermore, embodiments of the present disclosure provide advantages to restaurants in increasing order accuracy, increasing customer service, and improving the working environment for staff, thereby contributing to a better experience for customers. In particular, the removal of microphones from the order pick up lane and window, in combination with mobile and online orders, reduces the stress of understanding customer's verbal orders, increases order accuracy and so decreases customer tension over inaccurate orders, and allows wait staff to greet customers picking up their orders with a friendly, un-harried, smile and greeting. The reduction or elimination of phone calls seeking information and placing orders over the phone (due in part to direct mobile and online orders, and due in part to the ordering system, discussed elsewhere, allowing calls to be taken at a quiet, central location) reduces the stress on wait staff, and allows phone conversations to be in a quiet and calm environment without the background noise of a busy restaurant environment. Mixed Pre-Order and In-Line Ordering Additionally, the present system and methods provide an advantage over various systems and methods that seek to improve upon fast food ordering by taking pre-orders, and then mixing pre-order customers and customers ordering in-line in the same order and pick up line(s). 
In various embodiments, taking orders over a mobile device or other internet-enabled device, calculating an accurate order-ready time, and managing order-prep start time with the slip-logic order management system allows the customer to order when convenient, and pick up when convenient, avoiding extended wait times and making a higher quality menu actually faster for the customer than present fast-food systems and methods. As customers demand higher quality menus, and menus including healthier options, such a system and methods are particularly advantageous to customers and restaurants alike. Similarly, the present system and processes also provide an advantage over systems and methods that take pre-orders, but provide no dynamic order queuing, production timing, or slip-logic, such that large or slow orders may interfere with expected wait times, and small or fast orders may sit abnormally long before the customer picks them up. Again, the present system and methods also provide an advantage over systems and methods that take pre-orders but, in order to maximize order freshness, require the customer to ‘activate’ or ‘confirm’ the order upon reaching the restaurant location, effectively eliminating the advantage of pre-ordering to avoid the wait of order preparation. The present systems and methods, thus, capitalize on the advantages of pre-ordering, rather than effectively putting pre-orders in the same preparation position as if the orders were placed at the window. Non-Sequential Linear Access The present disclosure offers multiple advantages over drive-in restaurants with multiple parallel ordering and pick up spaces. Non-sequential customer access to a drive-through window(s) maximizes efficiency of wait staff, preventing the necessity of constantly carrying orders to a plurality of locations. Additionally, non-sequential customer access to a drive-through window(s) maximizes convenience and time savings for customers, eliminating the need to wait at a particular location for the order to be prepared—an especial advantage over restaurants where the customer places and receives the order at the same window or parking space, and has to wait thereat during preparation. Efficiency Some embodiments have the benefit of improving efficiency. For example, a vendor receiving a plurality of orders that are grouped together in accordance with some embodiments of the present disclosure may determine that each of the plurality of orders need not be ready until the last of the plurality of orders is ready. In this way, some embodiments, may permit vendors to determine when to start cooking items that take less time to prepare. In some examples, one order in the plurality of orders grouped together may take an hour to prepare whereas another order takes just ten minutes; some embodiments may permit the vendor to determine the two orders are grouped together. Therefore, some embodiments will encourage the vendor to begin preparing the order that takes an hour to make immediately but wait fifty minutes to begin preparing the order that takes just ten minutes to prepare. Such embodiments have the benefit of promoting efficiency in the form of improving the quality of the product ultimately delivered to the customer. In some embodiments, such efficiency may take the form of producing a fresher or hotter product to the customer than would otherwise be possible. Additionally, some embodiments of the present disclosure have the benefit of improving efficiency by decreasing deliveries. 
For example, some embodiments may involve delivering the plurality of orders. In such examples, the present disclosure may have the benefit of minimizing deliveries, reducing deliveries, or permitting all orders to be delivered via a single delivery. Additionally, some embodiments of the present disclosure improve efficiency by permitting vendors to focus on higher priority action items that might otherwise not be recognized as higher priority action items. For example, in some embodiments, if a plurality of orders arrives at a vendor and includes one order that takes an hour to make and another order that takes just ten minutes, the vendor's staff need not divert resources and time to the other order until fifty minutes have passed. In such embodiments, the vendor may then spend resources (including time) on matters that truly need to be completed or addressed before the fifty-minute time period has elapsed. Some embodiments of the present disclosure therefore have the benefit of increasing efficiency in the form of setting more accurate expectations regarding the time it will take for products to arrive, providing a fresher, more enjoyable, or higher quality product, and allowing vendors to better allocate their time upon receiving an order grouped in accordance with one or more embodiments of the present disclosure. Coordination Some embodiments have the benefit of improving coordination or reducing friction associated with coordinating between two or more customers. In examples of such embodiments, two or more customers may each individually order from a given vendor rather than having to resort to communicating each of the orders associated with the two or more customers to one of the two or more customers, where the one of the two or more customers then submits the order via their mobile device. Because some embodiments eliminate the requirement that two or more customers all order from a single device, such embodiments improve coordination between the two or more customers. Additionally, because each of the two or more customers purchases their order from their mobile device, some embodiments may eliminate the need to coordinate payment after the order has been delivered. For example, some embodiments may eliminate the need for a single user of the two or more customers to divide an aggregate bill into two or more individualized bills based on what each of the two or more users ordered. Consequently, some embodiments improve coordination by eliminating the need for certain steps to be performed among two or more users that have the potential or likelihood of causing social conflict. Additionally, some embodiments of the present disclosure may have the benefit of improving coordination with a vendor from which the two or more customers purchase one or more orders. For example, if a vendor receives a plurality of orders that are grouped together in accordance with some embodiments of this disclosure, then the vendor may determine a single wait time associated with the plurality of orders based on when the last order in the plurality of orders will be ready. Thus, such embodiments have the benefit of communicating to the two or more customers when their order will most likely arrive, thereby decreasing potential conflicts with the vendor. Retention of Customer Information Some embodiments have the benefit of permitting the retention of customer information. Customer information is a highly sought-after and valuable commodity to a variety of companies.
In some embodiments, the system may more accurately determine whether two or more persons are ordering together by having access to and maintaining customer information as to the two or more persons. This provides a strong incentive, in some embodiments, for customers to agree to allow their customer information to be retained. In turn, some embodiments may include prompting the user to permit the retention of customer information. In examples of such embodiments, the user—when prompted to give permission to the retention of their customer information—may be informed of the benefits that such information provides to some embodiments. Additionally, such permission may include the right to convey, sell, or disclose such customer information to third-parties. In some embodiments, the sale, conveyance, or disclosure of customer information may be based, in part, on the customer information. In such embodiments, the sale, conveyance, or disclosure of customer information to a third-party may be based on a comparison of the customer information with the information retained about the third-party. Some embodiments may utilize machine learning to perform this comparison. In some embodiments, a neural network, with one or more hidden layers, may be used to determine whether a particular third-party in a set of third-parties is the best match among the set of third-parties for the sale, disclosure, or conveyance of the customer information. Therefore, some embodiments have the benefit of encouraging users to consent to the retention of their customer information; in turn, such retention of information has the benefit of improving performance of some embodiments and generating an additional revenue stream for administrators of some embodiments. Versatility Some embodiments have the benefit of allowing increased versatility. Some embodiments may identify whether two or more orders are grouped together using a variety of means. For example, some embodiments may determine that two or more customer orders are grouped together by receiving as an input the IP address from which each of the orders is received and the vendor to which the orders are sent. For example, some embodiments may permit several guests at a social gathering at a single house to place several orders to one vendor and may determine, based on the single IP address used by the several guests to place each of their orders, that the several orders of the several guests should be grouped together. Additionally, two or more customers at diverse locations may decide between themselves to dine in at a particular vendor. Some embodiments of the present disclosure may group orders received from the two or more customers based on the historical tendency of the two or more customers to dine together. Some embodiments may determine that two or more orders ought to be grouped together despite receiving as an input location information indicating that the two or more customers are very far apart from one another. Some embodiments may leverage machine learning models and neural networks for order prediction. In some embodiments, as a quality assurance check, two or more customers may receive a prompt asking each user to verify whether they would like their order to be grouped with the two or more customers identified as potentially part of their group. Thus, some embodiments of the present disclosure permit the identification of order groupings despite the myriad of factual cases in which two or more customers may wish to dine together.
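By way of illustration only, the following minimal Python sketch shows one way the grouping and staggered-start behavior discussed above might be expressed: orders sharing a vendor and an originating IP address are grouped, and shorter orders in a group are delayed so that everything finishes together. The class, function, and field names are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    vendor_id: str
    source_ip: str      # IP address the order was placed from
    prep_minutes: int   # estimated preparation time for this order

def group_orders(orders):
    """Group orders that share a vendor and an originating IP address.

    This is only one of the grouping signals discussed above; an explicit
    group code or historical dining patterns could be added as further keys.
    """
    groups = {}
    for order in orders:
        groups.setdefault((order.vendor_id, order.source_ip), []).append(order)
    return list(groups.values())

def staggered_start_offsets(group):
    """Delay shorter orders so every order in the group finishes together.

    Returns a mapping of order_id -> minutes to wait before starting preparation.
    """
    longest = max(order.prep_minutes for order in group)
    return {order.order_id: longest - order.prep_minutes for order in group}

# Example: a 60-minute order starts immediately; a 10-minute order waits 50 minutes.
orders = [
    Order("A-1", "store-42", "203.0.113.7", 60),
    Order("A-2", "store-42", "203.0.113.7", 10),
]
for group in group_orders(orders):
    print(staggered_start_offsets(group))   # {'A-1': 0, 'A-2': 50}
```

In practice, additional grouping keys (an explicit group code, related-contact profiles, or learned dining patterns) could replace or supplement the IP-address heuristic sketched here.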
System and Method Components In various embodiments, the present system and methods of use thereof include the following components. Ordering System The present disclosure comprises, in various embodiments, a production-timing and slip-logic mobile ordering and order management system. A particularly suitable such system, incorporated in many embodiments herein, is described in patent publication US 2017/0018041 A1, published Jan. 19, 2017, the disclosure of which is incorporated herein by reference for all purposes. Further details regarding the ordering system are provided, as relevant, herein, particularly in relation to the detailed description of the FIGs. Mobile ordering system, as used herein, is defined as a system that provides customers with the ability to place an order via a mobile device. Mobile device, as used herein, encompasses stationary computers that are able to remotely access the system—such as a desktop personal computer connected to the internet. Some mobile devices herein must be literally mobile—such as carried in a hand, vehicle, about the human body, etc.—as indicated expressly or by context, such as for dynamically tracking customer location. Such mobile devices include smartphones, tablets, laptops and notebooks, smart watches, vehicle-integrated processing and connectivity systems, and other suitable devices. Non-Sequential Order Pick Up Window In preferred embodiments herein, a restaurant location comprises at least one non-sequential order pick up window (unless specified otherwise, also referred to herein as "order pick up window," or "pick up window," and sometimes abbreviated when repeatedly used simply as "window"). It should be noted that, as used herein, "order window" is defined as a general term that includes order pick up windows, order payment windows, order placement windows, or any order-related window, unless otherwise indicated explicitly or by context. An order pick-up window is a type of order window. A non-sequential order pick up window is configured for customers to come to the window and pick up their order in the sequence in which the order is ready, and not necessarily in the sequence in which the order was placed. In preferred embodiments, the order pick up window is used to deliver orders to customers that have already both placed and paid for their order online (including through a mobile device). In preferred embodiments, the order pick up window does not have a microphone, and does not accept payment for orders. In particularly preferred embodiments, the restaurant location does not have any microphone for customers in the parking lot to communicate with the restaurant staff. In particularly preferred embodiments, there are no sequential queues in the parking lot. "Non-sequential," as used herein, unless otherwise indicated, is defined as referring to a lane, order pick up window, etc. that: (a) is not necessarily entered or approached in the sequence of order placement, or of customer arrival at the restaurant location but, instead, (b) is accessed in the sequence of order pick up, or at least of attempted order pick up by the customer, which is generally and preferably the sequence determined by the intersection of when the orders are ready and when the associated customers arrive.
In other words, ideally, a non-sequential lane is entered and a non-sequential order pick up window is approached, for example, when: (a) the customer is present at the location, and (b) an order-ready board (or other order-ready notification system) indicates to the customer that their order is ready for pick up. Non-sequential queuing spaces or lanes, and non-sequential parking spaces, however, are typically entered when a customer arrives, allowing the customer to wait conveniently for a notification that their order is ready. Such spaces and lanes are still non-sequential in the sense that they do not block other customers from entering the non-sequential drive-through lane or approaching the non-sequential order pick up window, thereby preserving efficient customer flow and reducing or eliminating unnecessary wait time because of sequential, linear queuing. In some embodiments, no orders are taken at the order pick up window. In some embodiments, orders and payment are received at the order pick up window, but the customer must exit the lane accessing the order pick up window and wait to be notified (such as by the order board) that the order is prepared, before re-approaching the order pick up window. Such embodiments provide service to customers who do not have access to a mobile device, eliminate the need to go inside to place an order (whether for convenience of the customer or because the restaurant has no area inside for receiving orders), while preserving the efficiency of non-sequential access to the order pick up window. In some embodiments, the restaurant location does not have any microphone for customers to communicate with wait staff. Customers place their orders in various embodiments, through a mobile device, through a web site, inside the store, at a kiosk, with an attendant in the parking lot, or—in relatively limited circumstances—at the order pick up window. Eliminating the microphone allows the restaurant to focus on accurate order preparation instead of trying to accurately hear and interpret customer orders, thereby eliminating a potential source of error. Eliminating the microphone encourages pre-ordering by customers, and the associated advantages discussed herein, including convenience and speed for other customers. In some embodiments, order-placement stations are provided in the parking lot, inside the restaurant, at other convenient locations (such as a mall, grocery store, retail store, office building, industrial center, bus stop, food park, school, university, conference center, visitor center, travel center, convenience store, etc.). In some embodiments, the order-placement stations are kiosks, staff, or third-party personnel or equipment. In some embodiments, the order-placement stations also accept payment through at least one of: credit cards, debit cards, automated clearing house system, electronic funds transfer, cash, bitcoin, other electronic funds, or some combination thereof. In some embodiments, customers who approach an order-pick up window to place an order are directed to order online, with a mobile device, inside the restaurant, or at an order-placement station. Order Notifications In various embodiments, customers are provided with various order notifications through the ordering system, a notification system, an order-ready board (discussed hereafter), etc. In preferred embodiments, customers are provided with notifications on their mobile device, in their vehicle, through short message service (SMS), electronic mail (e-mail), etc. 
In particularly preferred embodiments, customers are provided with a mobile device notification, an SMS notification, or both, when at least one of the following occurs: their order is within a given time of being ready, or their order is ready. In some embodiments, customers receive a mobile device notification, an SMS notification, or both, when their order is ready. Order-ready notifications are particularly important to non-sequential order pick up, as they prevent customers from entering the order pick up lane until the precise time their order is ready. This provides increased time convenience for customers, and minimizes the length of customer queues. As discussed elsewhere herein, minimizing the number of customers in a lane at a given time decreases customer wait time, and minimizes required real estate for a restaurant location. Pre-Ordering and Pre-Arrival Production In various embodiments herein, the ordering system allows, encourages, or requires pre-ordering, or placing an order before entering the order pick up lane. In particularly preferred embodiments, the ordering system allows, encourages, or requires offsite (or at least outside of the pick-up lane(s)) placement of orders and payment for orders. Pre-ordering (typically also including pre-payment) allows the restaurant location to begin production before customer arrival, minimizing customer wait time and increasing restaurant and traffic efficiency. In some embodiments, as discussed further elsewhere herein, a location providing only a non-sequential order pick up window, requiring (or at least highly encouraging) pre-ordering and pre-payment, reduces the amount of parking lot space needed, as it can be treated like a pick up window (such as at a laundromat) for planning and relevant city code purposes. For example, in one embodiment of a restaurant location serving a high-quality, highly-customizable casual restaurant menu, in a metropolitan area of approximately two hundred seventy-three thousand (273,000) people, anecdotal observation indicates that there are never more than two (2) or three (3) customers in a row at the order pick up window, even at highly busy times. Furthermore, pre-ordering online, through a mobile device, etc. allows customers to more fully review the menu without time and embarrassment pressures, and to place the order without time and microphone and language or accent constraints. Accordingly, higher levels of customization and drastically reduced order mistakes are simultaneously possible. Order-Ready Board (ORB) Many embodiments comprise, or comprise the use of, an order-ready board (ORB) or similar order status display structure for notifying customers when to advance to a non-sequential pick up area. In various embodiments, the ORB is positioned such that it can be viewed when approaching a non-sequential drive-through pick up lane, from queuing parking spaces and/or lanes, and from at least some portions of the parking lot in general. The ORB presents at least the orders which can be picked up, using some form of identification which may be readily discerned by customers. In some embodiments, it alternatively or additionally presents the orders for which customers should approach the pick-up area(s) (preferably at least one window), even if the order is not actually ready yet. Such an ORB is not a menu, that is, it does not display some or all of a restaurant's menu options from which customers determine what they are going to order.
However, in some embodiments, a single structure combines at least one each of an ORB and a menu. Conversely, in some embodiments, an ORB does not function as a menu in any capacity, and is not combined with any menu. Furthermore, in various embodiments, the ORB is not connected to a microphone, and so cannot broadcast a verbal announcement of an order number, a customer's name, etc. In preferred embodiments, the ORB is provided in combination with a means for SMS messaging, mobile device notifications, e-mail, or another form of electronic messaging notification. In such embodiments, the customers may choose to rely primarily on looking at the ORB to know when to advance to the order pick up area(s), may rely primarily on the electronic messaging notification, or some combination thereof. In preferred embodiments, all pending orders are presented on the ORB, and customers are able to verify that their order is in process by looking at the ORB. In some embodiments, the ORB is configured to protect identity, to combat theft of orders, or both. In some such embodiments, the ORB displays at least one of: an order identification alphanumeric string, a customer-provided 'nickname,' and a customer-provided 'order identification' string. Such embodiments do not display a customer's actual name, phone number, or other sensitive information that may be used to breach the customer's security if publicly displayed. In some such embodiments, an additional piece of information, such as a confirmation string, a name, a phone number, or other information associated with the order and/or customer, and not made public on the board, is required before delivery of the order. Such embodiments prevent an order from being stolen by being seen on the ORB by a passerby and then being picked up. An ORB may also be referred to as an order status display which, in various embodiments, displays orders that are ready, displays the status of all pending orders, or provides other suitable order status display configurations. Group Ordering System In various embodiments, customers are capable of joining a group order based on various input characteristics and contact information associated with the customer. In some embodiments, the customer may input specified codes—which can be for single-use or a recurring group order—that are associated with a group order on the group ordering system. In other embodiments, based on the characteristics, including but not limited to location, place of business, and related contacts, the group ordering system may alert the customer of a potential group order that would be convenient to the customer. In preferred embodiments, customers are able to select menu items and place an individual selection as part of the group order, which is then added to the running tab for the group. In some embodiments, customers in the group order receive notifications leading up to a preset time when the group order will be closed and no additional modifications can be made. In certain embodiments, the customers in the group can receive a notification when the food is in preparation, when the group order has been picked up, and when the group order has been delivered to a convenient destination based on the characteristics of customers in the group order. A group ordering system can be of particular importance when multiple individuals plan to order from a specific restaurant location and dine at the same time.
The group ordering system can allow the individual customer orders to be placed up to a preset time and, so long as the customers in the group order make menu selections prior to the preset time, all of the customers' orders in the group order can be treated as a single order when sent to and executed by the restaurant. The group ordering system is also beneficial to the restaurant preparing the order, as it reduces the number of customers (or delivery drivers assigned to different customer orders) that will enter the pick-up lane, parking lot, and physical restaurant building to wait for multiple different orders. Rather, for the group ordering system, a single delivery driver can be assigned to and handle the entire group order, as all of the food is set to be ready for pickup at the same time and will be delivered to the same location. DETAILED DESCRIPTION OF THE DRAWINGS FIG.1—Ordering Process Referring toFIG.1, representing some exemplary embodiments, the process starts at102where customers access a menu of available items. The menu will generally be available on an interactive website or mobile application (app) that can be accessed by the customer through his/her mobile device (cell phone, computer tablet, computer, and the like). Customers (guests) can place orders from anywhere—allowing them to plan ahead and pick up later. This allows other occupants in a vehicle to place the pick-up order while the vehicle is moving towards the restaurant. The menu display also includes means for the customer to indicate the desired pick up wait time (for example: as soon as possible (ASAP), 10 minutes, tomorrow, etc.). FIGS.2A-2D—Order System Screens In various embodiments, shown inFIGS.2A-2D, when the customer accesses the menu a customer identification (ID) is generated by a data processing unit, or the customer uses an existing unique ID number or code. When the customer selects items from the menu for orders that are wanted ASAP, there is generated and displayed on the customer's mobile device or computer the menu items selected, the price and the preparation time for each item, and the preparation time plus indicated wait time, as illustrated inFIG.2AandFIG.2B. Thus, the customer can see (104display on customer's mobile device, see alsoFIG.2A, or appropriate computer screen) if the preparation for one or more items is excessive for his or her needs and can edit the order accordingly. The system data processor(s) will calculate all the variables. If a particular item on the order could be deleted and allow the order to be produced more quickly, then the system will highlight that item and inform the customer how much time could be "saved" by not ordering that particular item. If no item can be deleted to save preparation time, then no indication will be displayed. The display may also show an appropriate message such as "Your Order's Wait Time will be X minutes. To shorten your Wait Time, remove the highlighted item(s) below." The display may also show "Promised Time". For example, preparation time for pizza may be 7 minutes, and all other items (drinks, sandwiches, bagged chips, and the like) only 1-2 minutes, as illustrated in an example mobile device display inFIG.2A. Thus, the customer may edit the order, for example, delete pizza, and select another item with shorter preparation time (see106to102ofFIG.1), such as is shown inFIG.2B, where pizza is deleted and hamburger is selected. The order is confirmed or edited. If edited (as illustrated onFIG.2B) it may then be confirmed.
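As an illustrative sketch only, the following Python fragment shows one way the "remove the highlighted item to shorten your wait" suggestion described above might be computed, under the simplifying assumption that an order's wait time is driven by its slowest item (as in the pizza example); the function names and that assumption are hypothetical illustrations rather than limitations of any embodiment.

```python
def wait_time(items):
    """Order wait time, assumed here to be driven by the slowest item."""
    return max(prep for _, prep in items) if items else 0

def removal_savings(items):
    """Minutes saved by removing each item, used to decide what to highlight."""
    total = wait_time(items)
    savings = {}
    for i, (name, _) in enumerate(items):
        remaining = items[:i] + items[i + 1:]
        saved = total - wait_time(remaining)
        if saved > 0:
            savings[name] = saved
    return savings

# Example mirroring FIG. 2A: pizza takes 7 minutes, everything else 1-2 minutes.
order = [("pizza", 7), ("drink", 1), ("sandwich", 2), ("chips", 1)]
print(removal_savings(order))   # {'pizza': 5} -> highlight pizza, "save 5 minutes"
```

If no item's removal would shorten the order, the returned mapping is empty and, consistent with the description above, nothing would be highlighted.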
When confirmed,106(decision connector) to108(customer payment module), the order and cost are displayed (FIG.2C) and the customer is asked to make the required payment, as illustrated inFIG.2C. The order status will be periodically or continuously updated (FIG.2D). There will also be an indication on the customer's device screen that the "order is ready." For future planned orders where a customer selects a specific time slot and day, the system will determine whether or not the order can be produced (and delivered) by the time requested, and inform the customer on the customer's device screen. No production times will be displayed. When the customer selects items from the menu, there is generated and displayed on the customer's mobile device or computer the menu items selected, the price and the indicated total wait time. Also displayed next to any one item in the order is the amount of wait time that could be removed from the total wait time by removing that one item (which has a longer production time associated with it) from the order. Thus, the customer can see (104display on customer's mobile device,FIG.2A) if the preparation for one or more items is excessive for his or her needs and can edit the order accordingly. For future "planned" orders the customer selects a specific time slot and day. The system will determine whether or not the order can be produced (and delivered on time) by the time requested. In this scenario, no production times will be displayed because they do not matter. As noted above there is also provided means, in some embodiments, for customers to enter a unique customer ID number or code upon placing an order (for example, to create an account). This identification number or code will facilitate speedier service and allow identification of repeat and frequent customers. The customer's payment types, past orders, and favorites are remembered by the system, thus making reordering quicker and more convenient compared to traditional drive-through windows. The wait time for as soon as possible (ASAP) orders and the time slots allowed for future orders are based on an algorithm that factors in multiple variables. Variables include (but are not limited to):
a. ASAP or promised time(s) of prior orders and the current production progress of each of those prior orders
b. Order size
c. Order item complexity
d. Production staff levels
e. Delivery staff levels
f. Skill levels of staff members
g. Delivery distance of prior and current orders
An illustrative combination of several of these variables is sketched below. The system, with a customer identification, is programmed in some embodiments to allow frequent, loyal, "very important person" (VIP) guests to jump ahead of the line and for their order to receive preferential timing. The system is also, in some embodiments, configured to allow guests to pay an extra fee to receive their order quicker. It also allows, in some embodiments, for a "Free if Late" promotion, other promotions, or some combination thereof. The system contains a management tool for measuring and tracking promised times versus actual fulfilled delivery and pick up times. In preferred embodiments, only electronic payment from a customer's mobile or computer device connected by the internet (or other distributive computing method or connectivity method) is accepted as payment. There are many mobile payment systems available and more are being developed all the time. These include Square Wallet, virtual prepaid cards, Google Pay, Apple Wallet, Android Pay, Dwolla, and the like.
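The variables enumerated above can be combined in many ways; the following Python sketch is one hypothetical, illustrative combination of several of them, with invented field names and placeholder coefficients rather than values taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class KitchenState:
    backlog_minutes: float      # remaining prep work for prior ASAP/promised orders
    prep_staff: int             # production staff currently on shift
    avg_skill: float            # 1.0 = baseline crew speed
    delivery_staff: int

def promised_wait_minutes(order_items, state, vip=False, rush_fee_paid=False,
                          delivery_miles=0.0):
    """Rough promised-time estimate built from the variables discussed above.

    All coefficients here are illustrative placeholders, not values from the disclosure.
    """
    # Work content of this order: size plus a complexity surcharge per item.
    work = sum(item["minutes"] * (1.0 + item.get("complexity", 0.0))
               for item in order_items)
    # Effective crew throughput scales with headcount and skill.
    throughput = max(state.prep_staff * state.avg_skill, 0.5)
    wait = (state.backlog_minutes + work) / throughput
    if delivery_miles and state.delivery_staff:
        wait += delivery_miles * 3.0 / state.delivery_staff   # crude drive-time term
    if vip or rush_fee_paid:
        wait *= 0.8   # preferential timing for VIP or fee-paying guests
    return round(wait, 1)

state = KitchenState(backlog_minutes=24, prep_staff=3, avg_skill=1.1, delivery_staff=1)
items = [{"minutes": 7, "complexity": 0.4}, {"minutes": 2}]
print(promised_wait_minutes(items, state, vip=True))
```

A production implementation would presumably calibrate such coefficients against the management tool's record of promised versus actual fulfillment times described above.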
In some embodiments, there is provided a kiosk at the restaurant location for payment by credit card, cash, other payment means, or combinations thereof. Also, in some embodiments, there is provided a customer service representative (order taker) in the location parking lot who will have a mobile device for taking orders and payments. In addition to displaying the menu or ordered items there is provided, in some embodiments, an interactive sensor (usually a button) on the menu display on the customer's mobile device or computer that will allow nutritional information for the menu or ordered items to be displayed. The nutritional information also contains, in some such embodiments, various diet "points," such as those used by diet services such as, for example, Weight Watchers™, Jenny Craig™, NutriSystem™, and the like. The information is stored in data storage in the Data Processing Unit or other suitable accessible data storage, and is accessed by the appropriate computer program of the computer system. Kitchen orders are prepared by the preparation staff in parallel, not necessarily in linear sequence, as determined and arranged by an algorithm of the data processing unit. The system algorithm determines which order to make next and dictates promised completion times. Simple orders and preferred guests' orders are moved forward in progression. Multiple orders are worked simultaneously. When orders are complete, the customer is notified and the order preparation algorithm is updated to calculate a new preparation display for the preparation kitchen staff. When the order is ready the customer is notified (112) (FIG.2D) by visual display (FIG.2D) or text message on his/her mobile device or computer, or by any other suitable means, and the customer moves to an order pick up window,120. A customer who has placed an order for delivery also receives a text message on his/her mobile device or computer, or by any other suitable means, notifying them that a delivery driver has departed from the restaurant with their order and giving them an updated arrival time based on current traffic conditions available on various internet sites. Thus, there is no sequential waiting for order pick up: the order is picked up when ready and, since the order is prepaid, no wait is necessary for payment and change making. This is facilitated by the arrangement of the location physical layout explained in more detail below. The data processing unit or module,110, is the computing and data generation heart of the system. It contains suitable data storage capacity for menu items, prices, preparation time, customer identification, order details, payment details and the like. It is capable of computing preparation wait time and pre-preparation times from inputted and stored data and causing the resulting information to be displayed on customers' devices and display panels (116), for customers and preparation personnel. The data processing unit and associated data storage will suitably be a computer programmed and running software to perform the functions described. Implementation of such a data processing system is well within the capabilities of those skilled in the art. FIG.3—Ordering System Referring toFIG.3, representing various embodiments, there is a flow process diagram of functions of the data processing unit. InFIG.3,110is the computing data processing unit of the system. It will receive or access data from136fixed data storage and write to the data storage system.
The data, such as menu items, prices, preparation time, and the like, are "fixed" in the sense that they are not immediately variable. The system will have a mechanism allowing a manager to log in and mark any item, such as "Sold Out," so that customers cannot continue ordering an item no longer in stock. The data may be updated as often as needed and there are means for updating the data. This management or administrative unit allows for input, for changes in data storage, and for receiving data output (seeFIG.5). Temporary, calculated, and intermediate calculation values are stored in data storage unit132, and can be accessed and written to by the data processing unit,110. For example, the preparation time is affected by staff levels and skill levels in the algorithm. Customer input is from134and includes ordered item, edits, delay or requested wait time, and the like. Order taker(s) (116) provide input/output. These staff members may be located at a call center, or in a restaurant, and will take orders by phone from customers and enter them into the system. They may be assigned to roam the parking lot of a restaurant location and take orders from customers in cars. Input from the preparation staff is shown as138, and includes start time and order ready information, and may include continuous updates of order status. There is also provided means for the preparation staff and order administration/management to input data on orders that are being processed. These will include, in some embodiments, individual computer tablets, or equivalent or larger display panels, that will have data on customer ID, ordered item, requested delay time, and calculated preparation order sequence. In some embodiments it is not expedient for the preparation staff to use touch or keyboard inputs, but foot-operated input devices are suitable. Voice control input to suitable receivers is preferable in some embodiments. There is an abundance and variety of voice-activated/control technology available which can easily be adapted for use in the system of the disclosure. The data processor unit will calculate the needed information and send it to the appropriate location. In general, communication from the data processing unit to the customer's display and preparation staff will be wireless. The data processing unit and data storage may be a dedicated system or operated by remote shared distributive computing ("the cloud"). A cloud system will generally be preferred. FIG.4—Parking Lot Referring toFIG.4, which is an illustrative restaurant/lot arrangement of an embodiment of the disclosure,202is a restaurant building,212is an order pick up window,204and206are customary parking spaces, and208is a plurality of non-sequential parallel single vehicle parking spaces for cars placing orders and awaiting order ready notification. An access road220has lot entrances221and222. The side by side non-sequential single vehicle parking spaces, with access to a non-sequential drive through lane215leading to the non-sequential pick-up window212, allow customers to: a) not feel rushed when placing an order because no car is behind their vehicle waiting to order; and b) proceed to the pick-up window immediately when their order is indicated as ready without the potential wait that can be caused in a traditional drive-through by other customers' queued vehicles in front of them awaiting their orders' production and completion. This can greatly reduce waiting time and improve the ordering experience.
Having payment prior to pick up also reduces wait time and makes the entire process more efficient. Items224,226and228are lot perimeters. Item214is an ORB, used in some embodiments and not in others, for displaying customers' order ready information. Item218is a payment kiosk provided only in some embodiments. Customers (guests) may enter the lot and park in the customary parking spaces, or in the non-sequential single vehicle slanted spaces, as they desire. Menus are displayed on customers' mobile devices, or computers through the internet or other suitable distributive computing/communication system, and orders are made and processed when customers are at any location. In preferred embodiments, orders are entered before the guest enters the parking lot, scheduled for times in the future, etc. Customers wanting immediate, "as soon as possible" (ASAP), service will generally be located in the parking lot, be prepared to depart their current location for the restaurant, or be headed towards the restaurant lot. The most expedient location will be the slanted non-sequential single vehicle spaces208. Customers without the ability to pay by mobile device may, in some embodiments, drive by the optional kiosk218and pay with cash or credit card or, in some embodiments, may pay via an optional attendant who roams the parking lot to take orders and payment. There is no provision for payment at the order pick up window. FIG.5—Multi-Location System The system, in various embodiments, interconnects more than one restaurant (store) unit into a combined system and connects to the data processing unit with a complete feedback loop to and from each restaurant to provide information of all orders to each location. An embodiment of such a system is illustrated in the flow diagram ofFIG.5. This information is updated when every order in progress is noted as complete. This multi-location system allows routing of order preparation to the most efficient location, where possible. It is especially helpful for scheduling and preparing pre-scheduled and delivery orders. Each restaurant unit,320,322and324, will have its own server (data processor) that will be able to communicate with the central data processor310. The order data starts off on the server (data processor) hosting the website, ordering system, or application, and is then passed to the appropriate store server. One unique feature is that the system data processors are then passing data back and forth at least to the system website or applications to update the "overall order queue" timing of current orders. Each restaurant unit,320,322and324, has the ability to adjust certain timing variables based on current in-unit conditions to increase or decrease wait times displayed to guests currently ordering. The system will access in-unit schedules and staff positions and skill levels to determine the team's productive capacity at any given time interval on any given day. The management display and input unit,312is connected to the data processing unit,310, to allow administrative input and to be able to obtain real time and calculated information of the operations. Inputs include pricing, staff levels at each location, order status, particularly prescheduled and delivery orders, etc. The system also allows for delivery drivers,314, to be re-routed from one store pick up location to another store pick up location by a central logistics control mechanism factoring in variables to shorten the overall wait time for customers,334.
Manager(s) can login and update the system to current staff levels and in store conditions so that adjustments to timing are made; for example, sick staff members, delivery driver in vehicle accident, etc. The system will also allow managers,312, to manually increase or decrease wait timing to slow or speed up order inflow. The system will supply data from future guest orders (for tomorrow, two days out, etc.) to kitchen/bakery production software and vendor inventory ordering software to help better prepare product quantities for future work dates.FIG.5illustrates the system having multiple stores interconnected. The data processing unit310and data storage units332are central (on-site or cloud). The data processing unit receives inputs and provides data and output to each of the interconnected restaurants A (320), B (322) and C (324). It can receive and provide data (directions etc.) to a delivery driver (or drone) and provide order information to Kitchen preparation staff,338, and to the Administration unit312. Order taker(s) are also an optional input/output source,316. These staff members may be located at a call center or in a restaurant, and will take orders via phone, chat, etc. from customers,334, and enter the orders into the system. They will also roam the parking lot and take orders from customers in cars. This will allow orders by those customers who do not wish to use their own mobile devices. It will also allow better customization of future orders, as the order takers will be able to gather and input to the system identification data such as the customer's name, phone number, credit card number, automobile license plate number and the like to make the ordering process more convenient for guests. For example, the customer's vehicle license plate number is saved to the guest's user profile so that staff members will know the guest's name, have stored payment information tied to the account, see the guest's favorites, past orders, etc. as they approach the guest's car. This should make the ordering experience more convenient for the guest. With payment types stored to the user profile, on return visits in the same car, guests will not need to physically provide their credit card. In some embodiments, the restaurant location is provided with at least one license plate reader, camera, or similar technology, to allow restaurant staff, the ordering system, or both to identify the customer or customer's vehicle before the customer arrives at the order pick up window. In some embodiments the disclosure is a system and process for managing and scheduling an order in a restaurant with both pick up business and delivery business. Scheduling take-out orders and delivery orders in the same preparation location (kitchen) is made more efficient while reducing wait time on pick up orders. This management process works basically the same as for pick up orders, except that driver pick up, driver time availability, and various orders' delivery locations proximity to each other will be taken into account in the data process unit to determine the scheduling of preparation and calculate delivery time. This allows proximate orders to be clustered with one driver to speed up overall times. Driver location and arrival time will be displayed on the delivery customer's computer or device in the same manner as for pick up customers. 
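As an illustration only, the following Python sketch shows one hypothetical way the multi-location routing and delivery clustering described above might be approximated: an order is routed to the store projected to finish it soonest, and delivery orders with nearby drop-off points are grouped so that proximate orders ride with a single driver. The field names, the greedy clustering approach, and the distance threshold are all assumptions made for the sketch, not features recited by the disclosure.

```python
from math import dist

def pick_store(order_work_minutes, stores):
    """Route preparation to the store projected to finish the order soonest.

    `stores` maps store_id -> dict with 'backlog_minutes' and 'capacity'
    (effective staff throughput); both are illustrative fields.
    """
    def projected_finish(s):
        return (s["backlog_minutes"] + order_work_minutes) / max(s["capacity"], 0.1)
    return min(stores, key=lambda sid: projected_finish(stores[sid]))

def cluster_deliveries(deliveries, max_gap_miles=1.5):
    """Greedily cluster delivery orders whose drop-off points are close together,
    so proximate orders can be carried by a single driver."""
    clusters = []
    for order_id, point in deliveries:
        for cluster in clusters:
            if any(dist(point, p) <= max_gap_miles for _, p in cluster):
                cluster.append((order_id, point))
                break
        else:
            clusters.append([(order_id, point)])
    return clusters

stores = {"A": {"backlog_minutes": 30, "capacity": 3.0},
          "B": {"backlog_minutes": 10, "capacity": 2.0}}
print(pick_store(12, stores))                      # 'B' is projected to finish sooner
print(cluster_deliveries([("o1", (0, 0)), ("o2", (0.5, 0.4)), ("o3", (5, 5))]))
```

In a fuller system, the projected-finish term could also reflect driver availability and the manager-entered timing adjustments discussed above.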
FIG.6—Another Parking Lot Referring toFIG.6, which is an illustrative restaurant/lot arrangement of an embodiment of the disclosure,202is a restaurant building,212is an order pick up window, and206are customary parking spaces. The restaurant is, in some embodiments, a traditional restaurant with a sequential drive-through order lane236, such as at a menu board with a microphone (not shown), with an additional non-sequential drive-through pick up-only lane238accessing a single order pick up window212. In some embodiments, payments are accepted at window212. In other embodiments, payment is not accepted at window212. Customers enter by lot entrance222and leave by lot exit221. In some embodiments, side by side, non-sequential, single vehicle parking spaces (not shown) are provided, with access to the non-sequential drive through lane238leading to the pick-up window212. Items224,226,228, and230are lot perimeters. Item214is an ORB for displaying customers' order ready information. Item2141is an ORB that is narrower, and vertically-oriented, used alternatively or additionally in some embodiments. Item2142is an alternate ORB location used additionally or alternatively in some embodiments. Pre-order customers (guests) may enter the lot and park in the customary parking spaces, or the non-sequential single vehicle slanted spaces (not shown), if provided, as they desire. Menus are displayed on customers' mobile devices, computers through the internet, or other suitable distributive computing/communication system. Pre-orders are made and processed when customers are at any location. In preferred embodiments, pre-orders are entered before the guest enters the parking lot, scheduled for times in the future, etc. Customers wanting immediate, ASAP, service when pre-ordering will generally be located in the parking lot, be prepared to depart their current location for the restaurant, or be headed towards the restaurant lot. Customers who wish to drive through and have not pre-ordered, and do not wish to place mobile or online orders, are referred to herein as order-in-line customers. Such customers will enter sequential drive-through lane236, and will typically place their order at a microphone-equipped menu board (not shown), prior to stop-go light234,2341, or2342. Stop-go light234indicates to order-in-line customers whether they may proceed in the drive-through lane236to the order pick up window212. Stop-go light2341is alternatively or additionally used in some embodiments. Pre-order customers will enter non-sequential drive-through lane238, preferably when their order is ready, as seen on at least one ORB (214,2141,2142, or some combination thereof). Customer presence detector232detects when a vehicle is present in lane238, and staff inside restaurant location202are notified. In some embodiments, stop-go light2342, stop-go light2341, or both, indicate to non-sequential pre-order customers whether to merge into lane236and approach the order pick up window212. In some embodiments, wait staff are also provided with customer identification, as discussed elsewhere herein—such as a license plate reader, notification from the customer's mobile device, etc.—in order to have the appropriate order already located and waiting at the window. Pre-order customers without the ability to pay by mobile device may, in some embodiments, pay at the order pick up window, similar to sequential order-in-line customers.
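The presence detector and customer-identification features described for FIG.6 could be tied together in many ways; the following Python sketch is a hypothetical illustration in which a detected license plate (or, equivalently, a notification from the guest's mobile device) is used to tell staff which pre-order to stage at the window. The lookup tables, function name, and messages are invented for the example.

```python
# Hypothetical lookup tables; in practice these would come from guest profiles
# and the pending-order queue maintained by the data processing unit.
PLATE_TO_GUEST = {"ABC1234": "guest-17"}
READY_ORDERS = {"guest-17": {"order_id": "A-1", "items": ["salad", "iced tea"]}}

def on_vehicle_detected(lane_id, plate_text=None):
    """Called when the presence detector (232) reports a vehicle in the pick-up lane.

    If a plate reader (or a mobile-device notification) identifies the guest,
    staff are told which order to have waiting at the window.
    """
    guest = PLATE_TO_GUEST.get(plate_text)
    order = READY_ORDERS.get(guest) if guest else None
    if order:
        return f"Lane {lane_id}: stage order {order['order_id']} at the window"
    return f"Lane {lane_id}: vehicle present, guest not identified - confirm at window"

print(on_vehicle_detected(238, plate_text="ABC1234"))
print(on_vehicle_detected(238))
```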
FIG.7—Coordinated Ordering Method Referring toFIG.7, there is shown a method700for coordinating ordering between mobile devices. The method700may be implemented as computer instructions stored on one or more memory devices and executed by one or more processing devices. As shown inFIG.7, method700begins at step702. At step702, one or more pieces of contact information are received over a network. The one or more pieces of contact information pertain to one or more customers. In some embodiments, the one or more pieces of contact information may be stored using a mobile application on the customer's mobile device. In some exemplary embodiments, the one or more customers are associated with each other based on characteristics noted in the one or more pieces of contact information. For example, in some embodiments, the one or more customers can be members of a family. In such an example, the one or more customers may share the same home address, home phone number, or combinations thereof. Additionally, in some embodiments, the contact information received at step702may show that the customers are associated with one or more entities. For example, the one or more pieces of contact information pertaining to the one or more customers may correspond to ordering from a particular physical location, be associated with a particular company, or combinations thereof. At step704, a selection of a restaurant to initiate an order for the one or more customers is received from at least one customer. At step706, one or more notifications to one or more customer computing devices associated with the one or more customers are transmitted. In some embodiments, the transmission can be based on the one or more pieces of contact information. Following this, at step708, the system receives one or more selections of items ordered from the menu associated with the restaurant. For example, in some embodiments, the one or more selections of items can be ordered in parallel by the one or more computing devices associated with the one or more customers. At step710, the method700continues by transmitting the one or more selections of items included in the order to a computing device associated with the restaurant. Responsive to the receipt of the one or more selections, in such an embodiment, the restaurant may begin preparing the items included in the order. For method700, the method may additionally include dynamically repeating at least one of steps702through710responsive to an update of the provisional input from any one of the one or more customers. Such an update of the provisional input may include a change to the selections of items, an update to the payment credentials, or the addition of a selection from another customer associated with the one or more pieces of contact information. In some embodiments, the method700may additionally include a step of receiving a cutoff time for when the order must be placed. In such an embodiment, after receiving a cutoff time, the method700may continue with disabling an ability to modify the order after the cutoff time. Additionally, in another embodiment, method700may include receiving a selection relating to designating the order as a pickup order or a delivery order. For example, if the order is designated as a delivery order, the one or more pieces of contact information may be utilized to determine the delivery location of the order. In certain embodiments, the method700may additionally include a step of determining a property of the one or more customers.
The property of the one or more customers, in some example embodiments, may be determined based on contact information common between the one or more customers. In other embodiments, the determination of the property of the one or more customers may be specified from the provision of additional contact information at the time the order is placed. From this, the method700may also continue with limiting one or more items associated with the menu based on the property of the one or more customers. In certain embodiments, the method700may additionally include a step of receiving a time selection for the items to be prepared. For example, the one or more customers can select a designated pickup time from the restaurant to retrieve the entirety of the selections made by the one or more customers. In response to the time selection, in some embodiments, method700may additionally include a step of transmitting the time selection to the computing device associated with the restaurant to cause the items to be prepared at the time. For certain embodiments pertaining to delivery, the method700may also include a step of transmitting, by the computing device, a location to which the restaurant is going to deliver the items. The transmission, in such an embodiment, may be made to a website, application, customer mobile device, or combinations thereof. Further, in this embodiment, the method700can simultaneously transmit a timeframe at which the restaurant plans to make the delivery. Accordingly, in the example embodiment, the method700may include receiving, from the one or more customer computing devices, one or more confirmation messages indicating the one or more customers will be at the location within the timeframe to pick up the items. FIG.8A-B—Coordinated Ordering Systems Referring toFIGS.8A and8B, representing various embodiments, there is a flow process diagram of functions of the data processing unit. InFIG.8A, the system, in various embodiments, interconnects more than one restaurant (store) unit into a combined system and connects to the data processing unit with a complete feedback loop to and from each restaurant to provide information of all orders to each location. This multi-location system allows routing of order preparation to the most efficient location, where possible. It is especially helpful for coordinating pickup and delivery orders associated with multiple customers, who may order from separate computing devices. In the embodiment ofFIG.8A, one or more customer mobile processors814can be communicatively coupled to an order taker812. The information from the order taker can be accessed through interaction with the data processor810. Further, in such an embodiment, the data processor810may receive or access data from data storage818and write to the data storage system. The data include, for example, menu items, prices, and preparation times. In addition to the stored data, the information, inputs, and real-time conditions may be provided by the kitchen preparation staff816and information stored in the restaurant data processor820. For example, the input from kitchen preparation staff816can include start time and order ready information, and may include continuous updates of order status.
Accordingly, in some embodiments, based on the information received from the order taker812in communication with the customers' mobile processors814and considering the input from the data storage818, kitchen preparation staff816, and restaurant data processor820, the data processor810may communicate optimal pick up information to an assigned delivery driver using a delivery driver processor822. Further, as shown inFIG.8B, each customer unit may contain its own unit (data processor) that will be able to take an order, such as order takers812,824, and826, and then communicate with the central data processor810. The order data starts off on the server (data processor) hosting the website, ordering system, or application, and is then passed to the appropriate store server. One unique feature is that the system data processors are then passing data back and forth at least to the system website or applications to update the "overall order queue" timing of current orders. Immediate changes to customer input—prior to the retention in the provisional storage828—may come from order takers812,824,826. The change information coming from order takers812,824, and826can include changes to ordered items, edits, delay or requested times for arrival at the dine-in restaurant location, and the like. In some embodiments, the inputs from the kitchen preparation staff816, data storage818(based on information from provisional input storage828and fixed data storage830), and restaurant data processor820work in conjunction with data processor unit810for storage of data and rules to help optimize the order completion time being determined by the data processor810in the system for coordinated item preparation. The data processor unit will calculate the needed information and send it to the appropriate location. In general, communication from the data processing unit to the customer's display and preparation staff will be wireless. The data processing unit and data storage may be a dedicated system or operated by remote shared distributive computing ("the cloud"). A cloud system will generally be preferred. FIG.9—Coordinated Ordering Process Referring toFIG.9, which represents some exemplary embodiments, there is depicted a flow diagram of a process starting at block902where customers access a menu of available items and select an order based on available options on the menu. In some embodiments, the menu can be available on an interactive website or mobile application (app) that can be accessed by the customer through his/her mobile device (cell phone, computer tablet, computer, and the like). Multiple customers, in the embodiment depicted inFIG.9, can place orders from anywhere, even separate from the other members of the group that the customers plan to dine with, and nonetheless still pick up (or receive delivery) at the same time as the rest of the group. As shown inFIG.9, the process continues from order selection to the identification of the order at block904. For example, in some embodiments, prior to and during the placement of the order, the customer may select or confirm one or more pieces of customer information that relate to the customer making the order selection. Based upon the one or more pieces of customer information, at block904, the order is identified as belonging to a particular group associated with the customer. In some embodiments, as shown inFIG.9, once the order is selected in block902and identified in block904, the customer will have the ability to modify the order at block906.
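As a minimal, hypothetical sketch of the coordination described for FIGS.8A,8B, and the group-identification and modification blocks ofFIG.9, the following Python fragment shows how a central data processor might merge provisional inputs from several customer devices into a single group ticket and report one predicted completion time for the whole group, assuming items are prepared in parallel; the class and field names are illustrative only and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class GroupTicket:
    group_code: str
    selections: dict = field(default_factory=dict)   # customer_id -> list of (item, prep_minutes)
    finalized: bool = False

    def add_provisional_input(self, customer_id, items):
        """Provisional inputs may be revised until the group order is finalized."""
        if not self.finalized:
            self.selections[customer_id] = items

    def predicted_completion(self, now=None):
        """Single completion time for the whole group: driven by the slowest
        selection, on the assumption that items are prepared in parallel."""
        now = now or datetime.now()
        longest = max((prep for items in self.selections.values()
                       for _, prep in items), default=0)
        return now + timedelta(minutes=longest)

ticket = GroupTicket("LUNCH-0412")
ticket.add_provisional_input("cust-1", [("pizza", 12)])
ticket.add_provisional_input("cust-2", [("salad", 4)])
print(ticket.predicted_completion(datetime(2024, 5, 1, 12, 0)))  # one 12:12 time for the group
```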
The steps between blocks902and906can repeat through multiple iterations of order selection. Based on the order selection and determination of the group that the customer is ordering with, the process can involve sending a customer notification918. For example, the customer notification at block918may notify members of the group that an order has been initiated, identified, modified, or combinations thereof. Responsive to receiving a notification at block918, a new customer sharing one or more pieces of contact information with the original customer may begin the process starting at902. In some embodiments, the notification arises when customers have input a particular code associated with an anticipated group order. At block908, the order can be finalized. In some embodiments, there can be a pre-set or pre-determined time at which the order is finalized. In other embodiments, the customer can select when to finalize the order in block908. Once the order is finalized at block908, the information may proceed to a processing unit910. Following from block910, on the customer facing side, the customer can in some embodiments be prompted to select and/or provide payment information in block912. Following from block910, on the restaurant facing side of the process depicted inFIG.9, the restaurant can begin order preparation in block914. After the order is prepared in block914, the order may be set for retrieval in block916. In certain embodiments, the retrieval can encompass a delivery driver picking up the order to bring to a pre-selected location at a pre-determined time. In other embodiments, the order retrieval at block916can occur from a customer—such as the customer making the order selection in block902—picking up the food directly from the restaurant. In some embodiments, the delivery is determined and optimized based on geolocation of the user devices involved in the group order. Further, in some embodiments, the customer managing the group order can provide details to the delivery driver about arrival time, location, preferences, or combinations thereof. In some embodiments, and as shown inFIG.9, the customer payment in block912and the order retrieval at block916may initiate a customer notification in block918. In some embodiments, the customer notification pertaining to payment, pickup, and order selections can be distributed to each of the customers in the group associated with the order being placed through the process. FIG.10A-C—Contact Information Mobile Display Screen FIGS.10A-Cdepict a schematic view of a possible mobile device display of an embodiment of the present disclosure relating to contact information of customers using an application to order menu items from a coordinated menu system. In some embodiments, the displays shown inFIGS.10A-Ccan be shown on a user device10000, which can be a mobile device, a personal computer, a tablet, a desktop computer, or combinations thereof. FIG.10Ashows an example screen on the user device10000for inputting one or more pieces of contact information. In some embodiments, the customer may select a picture identifier10002for use in regard to the profile that the contact information is associated with. In some embodiments, the user can input his or her name10004, addresses10006, phone numbers10008, and email addresses10010. In some embodiments, the customer can identify related contacts10012with whom the customer anticipates making a group order.
The addresses10006provided by the customer can correspond to home address, work address, common business addresses, or combinations thereof. FIG.10Bdepicts an exemplary customer's profile on the user device10000after inputting the information requested onFIG.10A. The example profile, in some embodiments, can show the picture identifier10002and name10004. In some embodiments, the customer can have the option to update, change, correct, or add new information relating to pieces of contact information by selecting an editing button10014. Additionally, in some embodiments, the information can be displayed along with the saved address information10016and saved personal contact information10018. Further, in some embodiments and as shown inFIG.10B, the related contact profiles10020can be listed on the customer's profile. In some embodiments, the customer can select the related contact profiles10020and link the profiles together under shared entities (such as work, family, or interest groups) or events. FIG.10Cdepicts a schematic view that a customer may see on a user device10000before selecting a restaurant for a group order. In some embodiments, the screen for selecting a restaurant may include restaurant identifiers10026, as well as real-time information about the restaurant10022. The real-time information about the restaurant10022may include, as shown inFIG.10C, the wait time and the maximum party size for dine-in customers. Additionally, in some embodiments, the real-time information about the restaurant10022can include customer reviews, delivery/pick-up options, details about special pricing or menu features, or combinations thereof. In some embodiments, the screen will include an order button10024that the customer can select to initiate a group order at the particular restaurant corresponding with the order button10024. FIG.11A-B—Ordering Mobile Display Screen FIGS.11A and11Bdepict a schematic view of a possible mobile device display of an embodiment of the present disclosure before placing an order (shown inFIG.11A) and after placing an order (shown inFIG.11B). In some embodiments, the displays shown inFIGS.11A and11Bcan be shown on a user device, which can be a mobile device, a personal computer, a tablet, a desktop computer, or combinations thereof. FIG.11Ashows an example view that a customer ordering in a coordinated ordering system may view after selecting a restaurant and making preliminary ordering determinations. In some embodiments, a restaurant identifier10026may be present on the screen to confirm for the group that the order is being selected at one particular restaurant location. In various embodiments, shown inFIGS.11A-B, when the customer accesses the menu a customer identification (ID), in the form of an order number11002, is generated by a data processing unit (e.g., processing device), or the customer uses an existing unique ID number or code. When the customer selects items from the menu for group ordering purposes, there is generated and displayed on the customer's mobile device or computer the menu items selected, the price for each item, and the option to set a drive to the dine-in restaurant, as illustrated inFIG.2A. Thus, the customer can see (104display on customer's mobile device, see alsoFIG.11Aor appropriate computer screen) which menu items to expect when arriving at the dine-in location and can edit the order accordingly. The system data processor(s) will calculate all the variables.
Thus, the customer may edit the order, for example, delete pizza, and select another item with shorter preparation time (see106to102ofFIG.1, which may be used in conjunction with the screen ofFIG.8A). In some embodiments, after making menu selections, the customer may see the summary of their particular order along with the associated information for the rest of the group order. For example, as shown inFIG.11A, the display may include a customer identifier11004. In some embodiments, the customer identifier11004can include one or more pieces of contact information pertaining to the customer. For example, the customer identifier11004can include the phone number, email, or relation to the rest of the group pertaining to the specified customer. Moreover, in some embodiments, the customer identifier11004may include information regarding whether the customer has already paid for the selected menu items. In addition to the customer information, the screen may also show the price for the items selected for order under a specified price heading11006. For each customer order under the customer identifier11004, the selected menu items11008can be displayed along with the associated selected menu item prices11010. Moreover, in some embodiments, when there are multiple customers involved in the group order, the display can provide a pending order status11012to highlight which members of the group still need to place an order. In certain embodiments, the customers that are a part of the group order may have an option to select a finalize order button11014. In certain embodiments, the finalize order button11014may only be available to select members of the group order. For example, in the situation where a company is organizing a group order for multiple employees (and the company is financing the order), the company representative may elect to be the sole customer in the group that can finalize the order to ensure that the order is not prematurely finalized. In some embodiments, the order system can include an automatic finalization button to allow the restaurant sufficient time to prepare the order. As illustrated onFIG.11B, once the order is placed and at least a first provisional input has been sent to the restaurant, the order summary may be displayed on the computing device10000. Moreover, the provisional input may be changed through a selection to edit until the order finalization has been initiated. In some embodiments, after the order is placed, the entire group—each customer that is a part of the group order—may review a summary of the order. The summary of the order can, in some embodiments, provide the order number11002to help the group confirm that the order pertains to their multiple orders. Additionally, the summary of the order, as shown inFIG.11B, can include a listing of the customers11016that ordered as a part of the group, the type of order11018, the status of the order11020, and the predicted order completion time11022. For example, the listing of the customers11016can include the expected members of the group based on the one or more pieces of contact information that were received and specified prior to the order. In some embodiments, the listing of the customers11016can highlight which expected customers placed an order and which expected customers did not order with the group.
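A short sketch may clarify two points from FIGS.11A and11B: the pending order status that highlights group members who have not yet ordered, and a finalize permission restricted to designated organizers. The function names, the organizers set, and the use of Python sets are assumptions made for illustration only.

# Hypothetical sketch of the pending-order status (11012) and organizer-only finalization.
from typing import Dict, List, Set


def pending_members(expected: Set[str], selections: Dict[str, List[str]]) -> Set[str]:
    """Members expected in the group who have not yet placed a selection."""
    return expected - {m for m, items in selections.items() if items}


def can_finalize(member: str, organizers: Set[str]) -> bool:
    """Only designated organizers (e.g., the company representative) may finalize."""
    return member in organizers


if __name__ == "__main__":
    expected = {"alice", "bob", "carol"}
    selections = {"alice": ["burrito"], "bob": []}
    print(sorted(pending_members(expected, selections)))   # ['bob', 'carol']
    print(can_finalize("alice", organizers={"alice"}))      # True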
Additionally, the type of order11018, in some embodiments, can identify whether the order is going to be picked up or delivered through a third party. In certain embodiments, the type of order11018may specify exactly which group member is anticipated to pick up the order. In some embodiments, the status of the order11020can show each item ordered by the group and whether such item is ready or being prepared. In certain embodiments, the order completion time11022may be provided by the restaurant. In such an embodiment, the order completion time11022pertains to the entire group order, which can be picked up as a single order even though it was placed by multiple different customers on multiple different devices. In some embodiments, the display after the order selection can also include a payment button11024, which can take the customer to a payment screen. In some embodiments, the payment button11024can allow a customer to pay for the entire group order. In other embodiments, the payment button11024can allow a customer to pay for his or her particular order without paying for the other orders in the group.
FIG.12—Payment Mobile Display Screen
FIG.12provides a schematic view of a possible mobile device display of an embodiment of the present disclosure showing a payment screen for a group order. In some embodiments, the payment process for a group order is pre-determined by the customer responsible for organizing or initiating the group order. For example, if a customer begins a group order, the customer may select that he or she will personally pay for the entire group order. In such an embodiment, the customer will next be prompted to select payment type and input payment information, which will then be charged after the time for modification is closed, as is further discussed and shown inFIG.16. Further, in some embodiments, the customer that is organizing a group order may be brought to the payment selection screen on the user device10000directly after initiating the group order. In this embodiment, the customer would provide payment information before the other customers in the group order begin selection of menu items. Accordingly, in such an embodiment, the customer organizing the group order may provide payment information before the other customers input their contact information (corresponding to the screen shown inFIG.10B). In other embodiments, the customer organizing a group order can pre-select that each guest will pay for his or her own meal. In such an embodiment, the individual customers may be brought to a payment screen on the user device10000, as illustrated onFIG.12. In such an embodiment, while the customer may select options for payment using their mobile device or may select to pay at the store12006, the customers (apart from the customer organizing the group order) will not be able to adjust whether the payment is for the individual or the group. For example, in such an embodiment, an individual payment selection12002or a group payment selection12004would not be active on the user device10000screen. In some embodiments, if the customer selects a payment option, the customer is asked to make the required payment, as illustrated inFIG.12. When the customer selects to make a payment, the user device10000may depict a screen that confirms the transaction pertains to the customer's group order based on the provision of the restaurant identifier10026and the order number11002.
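The individual-versus-group payment choice discussed above and shown inFIG.12can be reduced to a small calculation. The sketch below is illustrative only; the Selections type, the function names, and the flat per-item prices are assumptions.

# Sketch of the individual (12002) versus group (12004) payment amounts.
from typing import Dict, List, Tuple

Selections = Dict[str, List[Tuple[str, float]]]   # member -> [(item, price)]


def individual_total(selections: Selections, member: str) -> float:
    """Amount due if the member pays only for his or her own items."""
    return round(sum(price for _, price in selections.get(member, [])), 2)


def group_total(selections: Selections) -> float:
    """Amount due if one member pays for the entire group order."""
    return round(sum(price for items in selections.values() for _, price in items), 2)


if __name__ == "__main__":
    selections = {"alice": [("pizza", 12.50)], "bob": [("salad", 9.00), ("tea", 2.25)]}
    print(individual_total(selections, "bob"))  # 11.25
    print(group_total(selections))              # 23.75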
The customer can then, in some embodiments, have the choice to pay for the order through an individual payment selection12002or a group payment selection12004. For example, if the customer selects the individual payment selection12002, then the customer would only be charged for the portion of the group order that includes menu items that the particular customer selected. Whereas, in an embodiment where the customer selects the group payment selection12004, the customer can pay for all of the menu selections in the group order. Further, in such an embodiment, the customer may be able to select options for payment using their mobile device or may select to pay at the store12006. In certain embodiments, only electronic payment from a customer's mobile or computer device connected by the internet (or other distributive computing method or connectivity method) is accepted as payment. There are many mobile payment systems available and more are being developed all the time. These include Square Wallet, virtual prepaid cards, Google Pay, Apple Wallet, Android Pay, Dwolla, and the like. In some embodiments, there is provided a kiosk at the restaurant location for payment by credit card, cash, other payment means, or combinations thereof. Also, in some embodiments, there is provided a customer service representative (order taker) in the location parking lot that will have a mobile device for taking orders and payments. FIG.13—Confirmation Mobile Display Screen Referring toFIG.13, which is a schematic view of a possible mobile device display of an embodiment of the present disclosure showing a notification sent to one or more corresponding customers,10000is a user device showing a group order notification13002. In some embodiments, the group order notification13002can alert a customer that a group that shares contact information with the customer has begun an order. For example, if a business is ordering for its employees, the group order notification13002may be sent out to each employee's user device10000to ensure that the employees are aware of the group order. In some embodiments, the group order notification13002can alert customers to update regarding the group order. In certain embodiments, the group order notification13002can relate to payments made in respect to the group order. FIG.14—Delivery Information Mobile Display Screen Referring toFIG.14, which is a schematic view of a possible mobile device display of an embodiment of the present disclosure showing a confirmation message sent to a customer,10000is a user device showing an order placement confirmation message14002and an order creation confirmation message14004. In some embodiments, the customer that begins the group order can select to receive one or more confirmation messages to ensure that the group order is being handled by the coordinated system. For example, a customer that begins a group order can select to receive a confirmation—in the form of an order creation confirmation message14004—when the group order is begun and the rest of the group is notified. In some embodiments, a customer that begins a group order can select to receive confirmation when each other member in the group places an order through an order placement confirmation message14002. In some embodiments, the selection to receive a confirmation message can be present for each of the customers involved in the group order. 
FIG.15—Status Mobile Display Screen
Referring toFIG.15, which is a schematic view of a possible mobile device display of an embodiment of the present disclosure showing delivery information,10000is a user device showing the delivery details for a particular order based on the order number11002. In some embodiments, the customers can view the delivery details once the group order has been placed. In particular, in such an embodiment, each customer involved in the group order may view the progress on the delivery. In some embodiments, the real-time information regarding the delivery driver may be tracked and displayed on a real-time map15002. Further, in some embodiments, the display can also include delivery information15004, including, but not limited to, the status of the food that was ordered by the group, the customers in the group order, and the expected time and place of delivery. In some embodiments, the delivery driver may also view the delivery information screen including the delivery information15004. In such an embodiment, the delivery driver can use the delivery information to confirm that the group order is delivered to the corresponding group that placed the order.
FIG.16—Final Order Mobile Display Screen
FIG.16depicts a schematic view of a possible mobile device display of an embodiment of the present disclosure after a coordinated order is closed for modification. As shown inFIG.16, in some embodiments, when the order has been placed for a particular amount of time, the order can be closed for modification. In some embodiments, the particular amount of time may be determined by the customer beginning the group order. In another embodiment, the particular amount of time before modifications are closed may be set by the restaurant. When the order can no longer be modified, as shown inFIG.16, the user device10000can present a screen summarizing the final order. In some embodiments, the final order can be presented with confirmation information, including for example the order number11002, to allow the customers in the group to ensure that the summary relates to the expected group order. In some embodiments, the final order summary can explicitly show that modifications are closed through the inclusion of a modification closure notification16002. In some embodiments, the final order can be organized alphabetically based on the customers in the group order. In some embodiments, once the modifications are closed, the summary presents the customer identifiers11004for those customers in the group that placed orders along with their final order. Additionally, in some embodiments, once the modifications are closed, the remaining selection is a payment button16004, which can take a user to the screen on the user device10000depicted inFIG.12.
FIG.17—Restaurant Visual Display Screen
In various embodiments, shown inFIG.17, the restaurant can utilize displays that present the information collected and determined by a system for coordinating ordering from the restaurant location. As shown inFIG.17, at the restaurant location, a screen17000can be present to display information to the restaurant employees—including cooks and wait staff—that allows for item preparation of the coordinated order. In some embodiments, the screen may list the orders that are currently received from dine-in customers. Additionally, as displayed as an exemplary embodiment inFIG.17, the orders may be listed as a queue that is sequenced with the earliest order fire time being displayed first in the queue.
As shown on screen17000, the orders can be grouped by order number. Accordingly, in such an embodiment, the restaurant can prepare a group order—placed by different customers on different devices—as a single order that is treated no differently than if the order were placed by one customer from a single device. The screen17000in the restaurant location, including that shown inFIG.17, can be communicatively coupled with the systems that are processing the customer orders from various devices. The data processing unit and associated data storage will suitably be a computer programmed and running software to perform the functions described. Implementation of such a data processing system is well within the capabilities of those skilled in the art.
FIG.18—Customer-Facing Visual Display at Restaurant
Beyond the staff-facing visual display shown inFIG.17, the dine-in restaurant location can include customer-facing visual displays18000, such as the display shown inFIG.18, for use in the dining and pick-up area of the restaurant. In some embodiments including such customer-facing visual displays, as shown in the illustrative embodiment ofFIG.18, the particular order number that is associated with a group order is specified. In such embodiments, the customers may utilize the customer-facing visual display18000to determine whether the food orders are ready for pickup, delivery, or dine-in. By allowing the customers to locate their order, the customer-facing visual display18000shown inFIG.18can help to reduce wait time at the restaurant location. In some embodiments, the customer-facing visual display shown inFIG.18includes whether food and drinks have already been packaged, are in the process of being prepared, or have yet to be prepared. For example, the customer-facing visual display may show that drinks are ready for pickup (and can be brought to a car) and that the food is about to be ready for pickup.
Further Components and Variations
Presence Detection and Approaching Customer Identification
In various embodiments, the restaurant location is provided with presence detection means (as discussed elsewhere herein), vehicle identification means (as discussed elsewhere herein), or both. In some such embodiments, the vehicle is identified as it approaches the order pick up window, and the identification is provided to wait staff. In some such embodiments, the wait staff select the order for the customer approaching the window, stage the prepared orders in the sequence in which customers are approaching the window in the order pick up lane, or both. In some embodiments, a customer's license plate is associated with their order. The license plate identification sequence (‘number’), in various embodiments and in various situations, is entered automatically by a license plate reader apparatus, is entered by the customer placing the order, is entered by a staff member taking the order, or is entered by other suitable means. In various embodiments, another identification means is used alternatively or in combination, including an image of the vehicle, an order number, a color and make of the vehicle, a one-dimensional or multi-dimensional scan code (such as a barcode, QR code, etc.), a store-provided order device (such as a device with a unique number that alerts the customer when an order is ready, and can also be identified by the restaurant location to direct customers when to merge), a mobile device (mediated, in some embodiments, by an application), etc.
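One way to picture the vehicle-identification flow described above is a small lookup that associates license plates with orders and records the sequence in which vehicles are detected approaching the window. The class name PickupQueue, the simulated plate-reader calls, and the order identifiers are assumptions for illustration only.

# Illustrative sketch of matching an approaching vehicle to its order so that staff
# can stage prepared orders in arrival sequence; the plate-reader input is simulated.
from typing import Dict, List, Optional


class PickupQueue:
    def __init__(self) -> None:
        self._orders_by_plate: Dict[str, str] = {}   # license plate -> order id
        self._arrival_sequence: List[str] = []       # order ids as vehicles appear

    def register_order(self, plate: str, order_id: str) -> None:
        """Associate a license plate with a placed order."""
        self._orders_by_plate[plate.upper()] = order_id

    def vehicle_detected(self, plate: str) -> Optional[str]:
        """Called by a plate reader (or staff entry) as a vehicle approaches."""
        order_id = self._orders_by_plate.get(plate.upper())
        if order_id is not None:
            self._arrival_sequence.append(order_id)
        return order_id

    def staging_sequence(self) -> List[str]:
        """Orders in the sequence in which customers are approaching the window."""
        return list(self._arrival_sequence)


if __name__ == "__main__":
    queue = PickupQueue()
    queue.register_order("ABC123", "order-42")
    queue.register_order("XYZ789", "order-43")
    queue.vehicle_detected("xyz789")
    queue.vehicle_detected("ABC123")
    print(queue.staging_sequence())  # ['order-43', 'order-42']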
In various embodiments, the order processing system automatically notifies customers to merge when the order is marked ready, at a specific time (e.g. a pre-determined amount of time before the calculated order ready time) or event (e.g. a trigger time, a status change to ‘ready’ of the order directly ahead of the customer's order in the queue, etc.), another suitable trigger, or combinations thereof.
Lane Merging
In some embodiments, the restaurant location offers both a drive-through order lane(s) and a non-sequential order pick up window(s). Such embodiments are particularly advantageous for restaurant locations with existing traditional drive-through order lanes, or with a significant customer base that wishes to preserve a traditional drive-through ordering experience. In some such embodiments, an ordering lane is provided with a linear, sequential-access drive-through lane, where customers enter the lane, approach the ordering window, and place their order. At this point, customers do not wait at the window to receive their order, which would hold up other customers waiting to place their order. In some embodiments, they directly merge into a non-sequential access drive-through lane as their order is ready. In some embodiments, they enter a waiting area, in common with people who have placed mobile or online orders, and enter a non-sequential drive-through lane as their order is ready (e.g. when they are notified by an order-ready board, by a text message, by a notification device handed to them at the order placement window and returned to the restaurant at the pick-up window, etc.). In some such embodiments, the customer is directed to circle the restaurant building and enter a waiting area, such as non-sequential waiting spaces, non-sequential queue lanes, parking spaces, etc. The restaurant location can, thus, offer drive-through ordering and payment, while still preserving the convenience and time advantages to customers who have pre-ordered. Customers who wish to order at the location (order-in-line customers) can do so, while customers who wish to pre-order can pick up their order as it is ready, without being trapped in line behind order-in-line (non pre-order) customers. Some embodiments merging a sequential drive-through order line with a non-sequential order pick up line are provided with merging control means to control the flow of traffic from multiple lanes into one (or at least into fewer) non-sequential order pick up lane. In some such embodiments, the merging control means comprises one or more presence detectors, such as a magnetic loop embedded in the road, ultrasonic sensor, video sensor, radar sensor, or other suitable apparatus. The merging control means, in some embodiments, further comprises signaling means to direct traffic from various lanes when to enter the non-sequential pick up lane. In some such embodiments, the signaling means comprises a light signaling system for merging, such as having a red and a green light (or other suitable colors) for each lane. When a customer is to enter the non-sequential pick up lane, the light for their lane turns green. In some such embodiments, direct access (as opposed to access from the ordering/payment lane(s)) to the pick-up lane (such as from the parking lot, from queuing spaces and/or lanes, etc.) is ‘green’ (for go/enter) by default, while access from the ordering lane is ‘red’ (for stop/do not enter) by default.
When the next order in line in the ordering/payment lane is ready, the direct access lane signal switches to ‘red’ (or other ‘stop’ signal), and the access from the ordering lane switches to ‘green’ (or other ‘go’ signal). In various embodiments, other appropriate signaling is used, such as words, rotating signs, audible signals, text messaging, etc. Embodiments with an ordering lane merging directly into a pick up lane preserve the advantages of non-sequential access to order pick up based on order ready time, thereby preserving efficiency for pre-orders, and preserving order pick up time accuracy (e.g. not unnecessarily extending order pick up time by forcing customers to wait on orders being placed, prepared, and delivered in sequence)—customers are enabled to ‘jump the line’ at the restaurant by pre-ordering. Such embodiments are particularly useful for locations that are presently relatively traditional, sequential access ordering/payment/pick up locations, allowing them to add a ‘jump the line’ feature for non-sequential order pick up to incentivize customers who prefer the advantages of pre-ordering instead of waiting in line. In some embodiments, such as some referenced above, a further advantage is added by extending the benefit of non-sequential order pick up to drive-through ordering customers. Such embodiments include those in which traditional drive-through order/payment/pick up locations are converted into non-sequential pick up locations by providing a non-window ordering and payment station. Such embodiments are particularly useful for locations that do not have the capability for two or more windows, or merging lanes together. Locations with only one window will, in some embodiments, convert their window into a non-sequential pick up only window. Various such embodiments are provided with at least one of: an ordering station with a microphone, separate from the flow of the non-sequential pick up window; an ordering kiosk without a microphone; an ordering kiosk with a touchscreen with or without a microphone; one or more attendants with mobile ordering and payment stations (such as a tablet) in the parking lot; or other suitable means for taking orders. All such order and payment taking means, at least when used in these embodiments, are placed outside of the flow of the non-sequential order pick up window lane(s), thereby preserving customer access to the order pick up window when their order is ready. All such order taking means preferably accept payment as well. Some are capable of accepting cash, checks, or both, while others only accept electronic forms of payment (such as at least one of debit and credit cards, Apple Pay, Paypal, Google Pay, Venmo, Bitcoin, etc.). In some embodiments extending the benefit of non-sequential order pick up to drive-through ordering customers, at least one ordering lane is provided. The ordering lane provides access to a plurality of queuing parking spaces, enabling a customer to place an order and pay for it, and then move to a queuing space and wait to enter the non-sequential order pick up window just like pre-order customers. Such embodiments can, in a measure, provide the ‘best of both worlds’ for pre-order and drive-through-ordering customers, allowing both to order in their preferred way, while also allowing both pre-order and drive-through-ordering customers to pick up their order in a non-sequential manner according to the order-ready time.
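The default-red/default-green signaling scheme described above for merging the ordering lane into the non-sequential pick up lane can be sketched as a simple state toggle. The class and method names below are assumptions, and a real installation would drive physical signals from presence-detector inputs rather than in-memory state.

# Sketch of the merge-signal logic: direct access defaults to green and the
# ordering lane to red, switching when the next in-line order becomes ready.
from dataclasses import dataclass


@dataclass
class MergeSignals:
    direct_access: str = "green"   # queuing spaces / parking lot entry
    ordering_lane: str = "red"     # sequential ordering/payment lane

    def next_ordering_lane_order_ready(self) -> None:
        """Give the ordering lane the right of way into the pick up lane."""
        self.direct_access, self.ordering_lane = "red", "green"

    def ordering_lane_vehicle_merged(self) -> None:
        """Presence detector reports the vehicle has merged; restore defaults."""
        self.direct_access, self.ordering_lane = "green", "red"


if __name__ == "__main__":
    signals = MergeSignals()
    signals.next_ordering_lane_order_ready()
    print(signals)   # ordering lane now has the green light
    signals.ordering_lane_vehicle_merged()
    print(signals)   # back to the defaults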
Multiple Order Channels
In some embodiments, the restaurant merges orders from multiple order-receiving channels, and distributes the orders after preparation back out to the proper channel. Channels include, in various embodiments, at least one of: custom mobile phone application, custom website, third-party app, third-party website, or food services (such as GrubHub, Favor, FourDoor, Dash, etc.). In some such embodiments, the restaurant provides a separate pick up area, a separate order pick up window, or both, for delivery services. In such embodiments, a third-party delivery driver comes to a designated pick up area/window, while a direct customer comes to a different pick up area/window. In some embodiments merging orders from multiple order-receiving channels, the order-receiving channels are reduced by restricting all orders from the order pick up window, and re-directing them to another channel (such as mobile ordering).
Multiple Locations
In some embodiments, a common ordering system is provided across multiple restaurant locations. In some such embodiments, calls for multiple locations are routed to a common call center (or regional call centers). Such embodiments allow a single call center for multiple stores, which provides advantages to customers and restaurant staff. Restaurant staff are calmer—wait staff do not have to juggle phone calls, or at least not the same frequency of phone calls, and wait staff do not have problems with hearing customers due to the background noise of a busy restaurant. Customers receive calmer, more focused service, less background noise, and more accuracy in their orders. The ability to call is useful, for example, for people who want to pre-order but are not comfortable with mobile or online ordering, for larger orders that are inconvenient or impossible to place through the online or mobile ordering system, and for questions regarding policies, menu, service, billing issues, etc.
Order Selection Accuracy
In many embodiments, the system and methods are optimized for accurate order delivery (including handoff at an order pick up window) to the customer. When the orders are prepared and waiting for pick up, there is always a risk that the wrong order is handed off to the wrong customer. This is particularly challenging when there is more than one order pick up window. It is also particularly challenging when there are multiple customers with the same name, if the name is used to identify the order. In many embodiments, order accuracy is increased—as well as operational efficiency—by enabling a single order pick up window to be used, because the customers in the drive through at any given time are greatly reduced by non-sequential access as the orders become ready (as discussed elsewhere herein). In some embodiments, the order system displays order information on screens (such as a tablet, a computer screen, an order display in the production area, etc.). In some embodiments, the order system prints off a sheet for each order, or a sheet with multiple orders, with the relevant details of the order for wait staff to use in production and delivery. In some embodiments, as discussed elsewhere herein, the staff are notified as a customer is approaching the order pick up window, and given identifying information on the customer (such as a license plate, an order ID obtained by identifying a mobile device in the customer's vehicle, etc.), giving staff time to locate and double-check the order before the customer appears at the pick-up window.
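A minimal sketch may help illustrate the multiple-order-channel handling described above: orders from several receiving channels are merged into one production queue, and each finished order is routed back to the pick up area appropriate to its channel. The ChannelOrder structure, the channel names, and the routing rule are assumptions for illustration only.

# Sketch of merging orders from several receiving channels into one production
# queue and routing each finished order back to its originating channel.
from dataclasses import dataclass
from typing import List


@dataclass
class ChannelOrder:
    order_id: str
    channel: str      # e.g. "mobile_app", "website", "third_party_delivery"
    items: List[str]


def merge_channels(*channel_feeds: List[ChannelOrder]) -> List[ChannelOrder]:
    """Combine per-channel feeds into a single production queue."""
    merged: List[ChannelOrder] = []
    for feed in channel_feeds:
        merged.extend(feed)
    return merged


def pickup_point(order: ChannelOrder) -> str:
    """Route finished orders: third-party delivery drivers go to a dedicated window."""
    return "driver window" if order.channel == "third_party_delivery" else "customer window"


if __name__ == "__main__":
    mobile = [ChannelOrder("m1", "mobile_app", ["taco"])]
    delivery = [ChannelOrder("d1", "third_party_delivery", ["pizza"])]
    for order in merge_channels(mobile, delivery):
        print(order.order_id, "->", pickup_point(order))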
In some embodiments, the order system highlights ‘doubled names’ to alert staff that there are multiple orders with the same or similar identifying information. Various embodiments highlight doubled identifying information—customer-input order ID, vehicle physical characteristics, customer name, etc. In some embodiments, a unique ID is provided to each order that prevents doubling.
Customer Locating
The order system is also capable, in some embodiments, of acquiring the location of a customer by receiving information from a global positioning system (GPS) system in the customer's mobile device or computer. GPS coordinates of the ordering customer are received from their mobile device and sent to the ordering system, or to a locating service or system connected to the ordering system, to aid in calculating travel time to better estimate a “future” pick up time. This is especially helpful for a restaurant along a highway. Potential customers can search down their travel route for a suitable restaurant, order using their mobile device, and have the system tell them how much time is required to reach the destination pick up location. Operation of a similar GPS system for ordering is disclosed in US published application U.S. 2006/0293971, the relevant disclosures of which are incorporated herein by reference. U.S. Pat. No. 8,059,029 discloses a GPS tracking system with helpful information on the ways and means to set up an appropriate GPS ordering system. The relevant disclosures of U.S. Pat. No. 8,059,029 are incorporated herein by reference. In another embodiment, the same GPS tracking is used to enable drone delivery or any other delivery method to static locations or moving vehicles while in transit. For the example above, the customer may wish to order, but not stop, preferring to have a drone meet the moving vehicle with the food order.
Mapping
In some embodiments, the ordering system is provided with, or connected to, mapping software. In some such embodiments, the ordering system is provided with internal maps with delivery-time zones, used to calculate delivery time to the customer's location. In some embodiments for multi-location restaurants, the ordering system is further provided with store-delivery-range zones. The customer's location is determined, and the order is routed to the appropriate location to make and deliver, based on store-delivery-range zones. In some such embodiments, some locations provide delivery service, and some do not; in such embodiments the ordering system routes delivery orders only to locations providing delivery service. The ordering system takes delivery time into account in queuing the order and providing an estimated order delivery time, as discussed elsewhere. It is a particular advantage to accurately estimate delivery time for the customer and for production timing and slip-logic, as for many restaurants and locales, delivery time is greater, and often much greater, than production time. Accordingly, providing accurate timing to the customer, and efficient production, relies heavily on reasonably accurate delivery timing and estimation. In some embodiments, the restaurant is listed with at least one mapping service or app (such as Google Maps, Apple Maps, Bing Maps, OpenStreetMaps, MapQuest, Yahoo! Maps, Wikimapia, etc.), travel service or app (such as Tripit, Airbnb, Roadtrippers, TripAdvisor, etc.), or other such service or app. A customer is preparing a trip, or is on the road, and searches restaurants near a given location.
When the restaurant appears on the search, and is selected by the customer, the ordering system (an app, website, or other suitable means) receives a customer's location from the app or service (such as through attributes of the URL passing location (such as a ‘GET’ method), variables passed through an application programming interface (such as a ‘POST’ method), permission to access the current location of the customer from the device directly, etc.). The order system highlights menu options that will be ready by the time the customer arrives, restricts items that will not be ready, or some combination thereof. The customer places the order as discussed elsewhere herein, and the restaurant prepares the order likewise. The customer can, in such embodiments, have a meal ready for them—potentially higher quality than fast food in terms of taste, options, health, etc.—with minimal delay in their trip. In various embodiments, the order system estimates the time from the customer's current location to the restaurant location by at least one of: receiving an estimated travel time from the mapping service or app, receiving a distance from the mapping service or app and calculating an estimated travel time therefrom, receiving a current customer location from the mapping service or app and using a third-party mapping service to estimate travel time, receiving a current customer location from the mapping service or app and using an internal mapping algorithm to estimate travel time, other appropriate means, or some combination thereof. In some embodiments, at least part of the ordering system is provided by a third party, and individual restaurants or restaurant chains have the option of subscribing to or otherwise participating in this multi-vendor ordering system. In some such embodiments, the multi-vendor ordering system integrates with one or more mapping services or apps—which in various embodiments are third-party or are directly incorporated into the software. In such embodiments, customers can search a map for restaurants near a given location, or along a given route. The customer can filter for restaurants participating in the multi-vendor ordering system (or the search is restricted only to participating restaurants), and then can place an order seamlessly. The multi-vendor ordering system presents ordering information to the customer, and sends the order to the restaurant. In some embodiments, the multi-vendor ordering system at least handles all interaction of the customer with the ordering system, such that the customer never has to leave the unified interface, and may order from one or more restaurants directly from the interface. In some embodiments, the multi-vendor ordering system allows restaurants to customize the look and feel of the menu on their ordering system, within general system or app parameters. Various embodiments using mapping and order systems allows the customer to order through virtual assistants, such as Cortana, Siri, Alexa, Google, etc., using voice commands. It is said that the most common restaurant internet (including mobile) search is “restaurants near me.” It may well become “restaurants on my route.” The present disclosure provides advantages, for example, in convenience and increased choice to customers. It also provides advantages, for example, in efficiency and increased customer engagement and potential customer base for restaurants. 
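The arrival-time-aware menu handling described in this Mapping discussion can be approximated with a rough travel-time estimate and a preparation-time filter. The sketch below uses a straight-line (haversine) distance and a fixed average speed purely as simplifying assumptions; a deployed system would instead use a mapping service's travel-time estimate as described above, and the function names and sample coordinates are placeholders.

# Sketch of arrival-time-aware menu filtering: items whose preparation time fits
# within the estimated travel time are offered, others are restricted.
import math
from typing import Dict, List, Tuple


def estimated_travel_minutes(customer: Tuple[float, float],
                             restaurant: Tuple[float, float],
                             avg_speed_kmh: float = 50.0) -> float:
    """Rough travel time from straight-line (haversine) distance."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*customer, *restaurant))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_km = 2 * 6371 * math.asin(math.sqrt(a))
    return 60.0 * distance_km / avg_speed_kmh


def ready_on_arrival(menu_prep_minutes: Dict[str, float],
                     travel_minutes: float) -> List[str]:
    """Menu items that can be prepared before the customer arrives."""
    return [item for item, prep in menu_prep_minutes.items() if prep <= travel_minutes]


if __name__ == "__main__":
    travel = estimated_travel_minutes((30.27, -97.74), (30.40, -97.72))
    menu = {"salad": 5, "pizza": 18, "smoked brisket": 90}
    print(round(travel, 1), "minutes of travel")
    print("offer:", ready_on_arrival(menu, travel))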
Delivery In some embodiments, the restaurant provides external order delivery service, actually delivering the order to a customer-specified location. In some embodiments, the restaurant provides external order delivery through at least one third-party delivery service (such as GrubHub, Favor, FourDoor, Dash, etc). In some embodiments, the restaurant provides external order delivery at least through restaurant-specific delivery personnel, whether employees or contractors. In some embodiments, the external order delivery personnel pick up orders to deliver to customers at the order pick up window. Such embodiments provide easy integration of delivery drivers; the non-sequential access prevents delivery personnel from unduly interfering with the flow of customers. In some embodiments, the personnel pick up orders to deliver to customers at a separate location, such as a dedicated delivery-personnel order pick up window. Such embodiments are especially useful for locations with relatively high volumes and/or relatively high percentages of external delivery orders, separating the delivery personnel from the flow of normal customer pick up traffic, and preventing delivery personnel traffic from slowing down the flow of customer pick-ups, particularly when the drivers are picking up multiple orders to deliver. Such embodiments having a dedicated delivery personnel order pick up window preserve the efficiency of non-sequential access for drivers, especially by eliminating the need for drivers to find a parking space and enter the restaurant. In some embodiments, a customer is provided with the option to convert their pick up order to an external delivery order. For example, a customer may have placed a pick up order, received an order-ready time (for example, of 15 minutes), and planned to leave the office in 10 minutes, drive for 5 minutes, and pick up the order when ready. If the customer then became engaged in a meeting, phone call, car refused to start, etc., the customer can access the order again (e.g., on a mobile device, computer with online access, telephone, etc.) and request that the order be converted to a delivery order. The order is assigned to a delivery driver, and the order-delivery time is then calculated. No interruption of restaurant workflow is caused, and the customer can still conveniently receive their order. Such an embodiment works particularly smoothly in a location with an order pick up window for customers and delivery drivers, where the order is simply picked up by a delivery driver instead of the customer. The order system is updated to indicate pick up by the delivery driver, and the restaurant wait staff can verify the order pick up person accordingly. In embodiments having a separate delivery driver window, the order can be transferred to the delivery driver pick up area in the restaurant, or a shared area having access to the customer order pick up window(s) and the delivery driver order pick up window(s). In some embodiments, the order system allows the customer to designate another person, such as a family member or third-party delivery driver, to pick up the order. The customer can, in various such embodiments, designate another person by email, phone number, name, etc. In some cases, in some embodiments, the ordering system allows the customer to send the order, or certain data regarding the order, directly to the designated person. 
The order system updates the information associated with the order, and wait staff at the restaurant location can validate the person picking up the order against the information in the order system.
Location-Triggered Order Preparation
In some embodiments, the ordering system is optimized for fresh-cooked food, which is particularly advantageous for restaurant locations that specialize in food being prepared just as the customer receives it. In some such embodiments, the ordering system is provided with internal or external maps, and the capability of estimating travel time to the restaurant location(s), either internally, or through connection with an external module or system. Customers may have mobile devices with location abilities (such as devices equipped to communicate with the global positioning system (GPS), GLONASS, etc.). The mobile device (potentially embedded in a vehicle) runs software (such as a mobile app) that conveys the customer's location to the order system. The order system monitors the customer's location, calculates the time required to arrive, and triggers order preparation to start once the calculated time from the customer's current location is approximately the same as the order preparation time. In various embodiments, positive or negative buffers are added to increase the likelihood of the order being ready when the customer arrives (positive buffer—time is added), or to make sure that the order is being completed as the customer arrives (negative buffer—time is subtracted), such as for a restaurant that completes and serves an order in the presence of the customer. In some embodiments, the system does not continually calculate the time from the customer's current location to the restaurant location, but instead is provided with a pre-determined distance range from the restaurant location: when the customer enters that range, the order system triggers preparation of the order. In some embodiments, mobile devices, vehicles, etc. have software reading at least one inertial sensor (such as an accelerometer, gyro, etc.), and the customer inputs the location from which they will be departing. Once the software (such as a mobile app) detects steady motion of the mobile device or vehicle at a rate indicating the customer is driving, the software notifies the restaurant location. The restaurant location calculates (or has previously calculated) the distance from the customer's location to the restaurant location, as well as the time required for order preparation, and begins preparation of the order in time for it to be finished at or about the time the customer arrives.
EXAMPLE EMBODIMENTS
Example 1
In some embodiments of the present disclosure, a coffee shop with in-store service and a typical drive-through window is adapted for non-sequential order pick up. In some such embodiments, the window is converted to order pick up only, or an additional order pick up only window is added. In either case, the order pick up window has direct access, and no microphone, and is designed to not be blocked by traffic that is ordering. In some alternative embodiments, an ordering station is provided, such as by adapting the lanes to provide independent access to the pick-up window, and to the previous ordering station.
The previous ordering station is converted to an independent order placement station, at which customers may place (and, in some embodiments, pay for) their order, and then exit the ordering station and lane, and enter parking or queuing spaces until their order is ready, at which point they enter a non-sequential drive-through lane to approach the order pick up window. Example 2 In some embodiments, a primarily dine-in restaurant utilizes the ordering system of the present disclosure. In some such embodiments, customers place at least some portion of their order, including a desired dining time, via an internet-connected device or mobile device, by phone, etc., and receives an expected dining time. The ordering system provides the expected dining time by taking into account the current number of tables and seating available, current and expected number of customers, wait staff levels, etc. The restaurant prepares the order, sets the table, and is ready for the customers when they arrive at or near the expected dining time. More than just ‘reserving’ a table, the ordering system allows the table to be reserved easily, without having to call or stop by the restaurant. It also allows the restaurant to maximize usage of available seating, tables, staff, etc. by reserving for a more accurate time and providing an accurate expected dining time. It reduces the inconvenience and annoyance to customers of standing in line waiting to be seated, by providing them an accurate expected dining time. Example 3 Some embodiments of the present disclosure comprise a convenience store or travel center that serves food, such as sandwiches, hot dogs, breakfast pastries, tacos, hamburgers, desserts, etc. Customers can pre-order a menu item, or at least choose from a subset of the menu provided in the store. In some embodiments, the store adds a non-sequential order pick up window and associated lane. In some embodiments, the customer picks the order up in-store at a dedicated non-sequential pick up area. Accordingly, the customer can use time during travel to place the order, and minimize time waiting for a hot meal at the travel center, convenience store, etc. Example 4 Some embodiments of the present disclosure comprise a restaurant offering a customizable build-your-own entree—such as build-your-own sandwiches, burritos, tacos, pizzas, hamburgers, salads, etc. The restaurant accepts pre-orders at least online or through a mobile device, including all or a subset of available customizations. Customers are able to place an order online, including their customizations, and receive an order-ready time (depending on various factors, including whether the order is placed with a desired pick up time or as an ASAP order). The restaurant provides a dedicated non-sequential order pick up area, non-sequential order pick up window, or both, where customers can pick up their order without waiting in line. This provides an especial advantage for customers and restaurants in such locations, where the line typically moves more slowly because of the many choices customers must make during customization. Additionally, customers are easily able to distinguish when placing their order between free and add-on customizations, and the price of add-on customizations, without the annoyance of repeatedly asking restaurant staff or searching a menu or menu board. 
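The free-versus-priced customization pricing described in Example 4 can be illustrated with a short calculation; the item names, prices, and function signature below are placeholders rather than part of the disclosure.

# Sketch of Example 4 pricing: free customizations versus priced add-ons,
# displayed to the customer at ordering time. Prices are placeholders.
from typing import Dict, List


def entree_price(base_price: float, chosen: List[str],
                 addon_prices: Dict[str, float]) -> float:
    """Base price plus priced add-ons; customizations not in addon_prices are free."""
    return round(base_price + sum(addon_prices.get(c, 0.0) for c in chosen), 2)


if __name__ == "__main__":
    addons = {"extra cheese": 1.50, "avocado": 2.00}      # priced add-ons
    chosen = ["lettuce", "avocado", "extra cheese"]       # 'lettuce' is free
    print(entree_price(8.00, chosen, addons))             # 11.5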
Example 5
Some embodiments of the present disclosure comprise a restaurant offering delivery of the order to the customer's desired location through at least one third-party food delivery service, either in combination with, or in place of, restaurant delivery staff. In some such embodiments, the customer requests delivery (versus pick-up) when placing the order, or at some point after placing the order. The order system queues the order as discussed elsewhere herein, and schedules a driver to make the delivery with a third-party food delivery service (such as Favor, GrubHub, etc.). The driver comes to a non-sequential order pick up area (such as a common pick up window for both drivers and customers, or a dedicated driver pick up window), picks up the order, and delivers it to the customer's desired location. In various embodiments, the order is initially placed through the restaurant's order system, or through a third party order system (such as for a food delivery service) and then transferred to the restaurant order system. In some embodiments, the restaurant is a food delivery service only location, having a pick up window or area (preferably a drive-through window) with non-sequential access for food delivery service drivers. In some embodiments, an additional calculated time—driver summons—is provided that is calculated at least based on available drivers, time required for drivers to arrive at the restaurant, and order preparation time. In some such embodiments, the order system obtains information on present driver availability and location through at least one connection to food delivery service systems (such as through an application programming interface). In some such embodiments, the order system does not use or calculate the driver summons time. In some embodiments, the order system queues the order, and reserves a pick up time with a driver. In some embodiments, the order system calculates driver summons time based on the likelihood of a driver being available within a given driving distance (or time, or both), and triggers a summons of a driver when the driver summons time is reached. The driver summons time may be before order production begins, or afterwards, depending on the calculated production time of the order, and the estimated time for a driver to arrive.
Example 6
Some embodiments of the present disclosure comprise a ‘fast-food’ type restaurant that traditionally does not have a drive-through option, such as many quick-preparation or pre-prepared pizza locations. Such restaurant locations can add a non-sequential access order pick up window (in some such embodiments, the window having no microphone and no provision for placing or paying for an order) and mobile/online ordering, such that customers can order online, and pick up their order at a pick up window. While many such locations would not be able to add a traditional drive through window due to space constraints, the present disclosure, as discussed elsewhere herein, allows the addition of a drive-through pick up window with minimal impacts on available space.
Example 7
The present disclosure is advantageous in various embodiments for locations with restricted space insufficient for current requirements for traditional drive-through order and pick up lines.
For example, a restaurant seeking to utilize a location on a corner lot that is ideal for a fast casual food drive-through pick up location due to proximity to target clientele, but that is prevented from doing so by having a lot too small for the required number of vehicles in a sequential access drive-through lane, can apply an embodiment of the present disclosure in order to utilize the location for drive-through pick up. In one particular such situation, a location was being used for both customer sit down and inside customer pick up, as well as for in-store delivery driver pick up. Customer drive-through order pick up was planned to be added to the location, but the lot size, surrounding development, and city requirements prevented a standard, sequential drive-through lane and window from being added, because the length of the lane required to accommodate the number of vehicles at one time required by the city (in order to prevent the wait line from spilling onto the road or adjoining businesses) was too large for the lot. The location incorporated a non-sequential drive-through order pick up window configured only for pick-up of previously placed orders, successfully eliminating the need for a long, space-inefficient sequential drive-through lane. Additionally, the location offers the convenience and speed advantages of the non-sequential order pick up lane and window to its customers, offering the convenience of picking up orders without exiting the vehicle, and the speed of entering the pick-up lane and approaching the window only when the order is ready, avoiding trapping customers in a lane and requiring them to wait on slow order placement or preparation of large orders.
Example 8
Some embodiments of the present disclosure comprise a plurality of food trucks utilizing one or more ordering systems, the ordering system(s) having a common customer interface. Customers order online, through a mobile device, or at a kiosk, at least by selecting the food truck, and then placing an order with that food truck. The common customer interface passes the order to the individual food truck's ordering system for production queuing, and provides the customer an order-ready time for pick up. The customer can then go to the specified food truck to pick up their food in a non-sequential pick up manner at a given time. Food trucks particularly lend themselves to providing a dedicated pre-order pick up area (such as a window, or part of a large window or bay), as they are typically not drive-through. In some embodiments having food trucks that move from place to place, the common customer interface provides the customer with the location of the food truck at the time the order is to be picked up. In such embodiments, customers are able to more fully engage the offerings of food trucks with greater convenience, by not having to find the food truck and peruse the menu at any given time. Instead, the customer can access the food truck's menu electronically, place the order, and then pick up the order at the present location of the food truck. This is particularly useful in crowded cities and areas where food trucks are often popular. Additionally, such embodiments are useful to food trucks to extend their customer base to people who do not have the time to track down the food truck, place an order, and wait for preparation.
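The common customer interface of Example 8 can be sketched as a thin routing layer that hands the order to the selected food truck's own queue and returns an order-ready time and pickup location. The FoodTruck class, the per-item preparation timing, and the location strings are assumptions for illustration only.

# Sketch of the Example 8 common customer interface: the order is passed to the
# selected food truck's own queue, returning an order-ready time and location.
from datetime import datetime, timedelta
from typing import Dict, List, Tuple


class FoodTruck:
    def __init__(self, name: str, location: str, prep_minutes: int) -> None:
        self.name, self.location, self.prep_minutes = name, location, prep_minutes
        self.queue: List[List[str]] = []

    def accept(self, items: List[str], now: datetime) -> Tuple[datetime, str]:
        """Queue the order and return (order-ready time, pickup location)."""
        self.queue.append(items)
        ready = now + timedelta(minutes=self.prep_minutes * len(self.queue))
        return ready, self.location


def place_order(trucks: Dict[str, FoodTruck], truck_name: str,
                items: List[str]) -> Tuple[datetime, str]:
    """Common interface: route the order to the chosen truck's ordering system."""
    return trucks[truck_name].accept(items, datetime.now())


if __name__ == "__main__":
    trucks = {"Taco Cart": FoodTruck("Taco Cart", "5th & Main", prep_minutes=8)}
    ready_time, where = place_order(trucks, "Taco Cart", ["al pastor", "horchata"])
    print(f"pick up at {where} around {ready_time:%H:%M}")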
Example 9
Some embodiments of the present disclosure comprise a travel center, visitor's bureau, university campus, library, employee lounge, or other common area, having a kiosk, a guest wi-fi with a landing web page, or other such commonly accessed interface. The interface provides a selection of local restaurants to choose from, each having ordering systems providing production timing and slip-logic control of orders, and providing non-sequential order pick up. The ordering systems have a common user interface, or an application programming interface that is used by the commonly accessed interface (CAI). A user selects a restaurant on the CAI, and places an order, as discussed elsewhere herein. The CAI provides an order ready (or order delivery, if delivery is chosen) timing estimate which, in preferred embodiments, is generated by the restaurant's ordering system and passed to the CAI to display to the customer. If order pick up is chosen, the CAI provides the customer the restaurant location for pick up. Such embodiments are particularly useful for customers who wish to quickly access restaurants serving a common area, without having to filter through internet search results, a phonebook, or the like, for a reasonable driving, walking, or delivery time. Additionally, it provides an excellent marketing opportunity for restaurants in an area to make their location and menu accessible to a relatively large, targeted customer base.
Example 10
Some embodiments of the present disclosure may receive as an input the fact that multiple orders are being received from one physical location. Examples for determining that two or more customers are ordering from the same physical location can include identifying the IP address from which each order is made, requesting location data associated with each device placing an order, and/or examining the destination address listed for delivery. Some embodiments, upon receiving such information, may make a determination, based on this information or historical information regarding the ordering habits of the two or more customers, as to whether the orders ought to be grouped together. Additionally, some embodiments may leverage machine-learning algorithms, such as machine learning models or neural networks having one or more hidden layers, to perform the determining process that results in orders being grouped together. In such embodiments, the orders may be assigned the same order code subsequent to determining that the one or more orders ought to be grouped together.
Example 11
In some embodiments, multiple customers may provide other customers in the same ordering group a link or code. The link or code, when entered into a mobile application, may be associated with a pre-grouped order to which each of the customers may add themselves. For example, a pharmaceutical representative wishing to treat the staff of a target doctor's office to lunch may provide a link associated with a URL or application; each member of the staff may then click on the link and place an order with the restaurant associated with the link. In some embodiments, the link may not originally be associated with any particular restaurant; instead, upon clicking the link, the customers may be prompted with a poll asking them to vote on a pre-selected restaurant. Such voting may be ranked-choice voting, first-past-the-post voting, or any other like voting system.
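The location-based grouping of Example 10 above can be illustrated with a simple heuristic that assigns a shared order code to orders arriving with the same apparent location key. The LocationGrouper class, the choice of delivery address or IP address as the key, and the code format are assumptions; a deployed system could instead rely on the historical or machine-learned signals mentioned in Example 10.

# Sketch of the Example 10 heuristic: orders from the same apparent physical
# location (shared IP address or delivery address) receive a common order code.
from typing import Dict, Optional
from uuid import uuid4


class LocationGrouper:
    def __init__(self) -> None:
        self._codes: Dict[str, str] = {}   # location key -> shared order code

    def assign_code(self, ip_address: Optional[str],
                    delivery_address: Optional[str]) -> str:
        """Reuse the group code for a known location key, otherwise create one."""
        key = (delivery_address or ip_address or "").strip().lower()
        if not key:
            return uuid4().hex[:8]            # no grouping signal; standalone order
        if key not in self._codes:
            self._codes[key] = uuid4().hex[:8]
        return self._codes[key]


if __name__ == "__main__":
    grouper = LocationGrouper()
    a = grouper.assign_code("203.0.113.7", "100 Congress Ave, Suite 500")
    b = grouper.assign_code("203.0.113.7", "100 Congress Ave, Suite 500")
    print(a == b)  # True: both orders join the same group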
In other embodiments, the list of restaurants may not be pre-selected; instead, the list of restaurants may be generated (or procedurally generated) based on restaurants that are particularly popular that day, that are particularly fast that day, that have the highest rate of accurately completing orders, etc. In some embodiments, the organizer may predefine how much money can be spent per meal. In some embodiments, such information may be displayed to each user. In some embodiments, the cap on how much each of the two or more customers may spend may not be equal among each of the two or more customers. Where an organizer is planning a meal for a birthday party or a celebration with a guest of honor, the organizer may designate a higher threshold amount for the guest of honor than for each of the other two or more customers.
Example 12
Some embodiments of the present disclosure may include marketing incentive opportunities. For example, system administrators may market to a building, such as an office building or apartment complex, as a means of establishing recurring order patterns in an area. In some embodiments, such marketing may involve financially incentivizing one or more customers to order by reducing the overall cost of the order by a fixed amount, a fixed percentage of the overall price, or a fixed percentage of the overall price up to a certain amount. As another example, in some embodiments, the present disclosure can involve giving a company a deadline for group ordering along with an associated code. In such an embodiment, if the company uses the code for a group order by the deadline, the restaurant providing the marketing incentive can offer perks, such as free delivery, based on the use of a group ordering feature.
Example 13
Some embodiments of the present disclosure comprise a group ordering system to be used at a business center, multi-office building, skyscraper, structure that houses multiple independent entities, or other such entity-hosting structure. In such an embodiment, the group order functionality can be utilized by a variety of customers, all located in the multiple businesses or entities that are located in or at the same approximate address. The employees, agents, workers, staff, or other customers operating out of the location can input contact information and customer characteristics pertaining to their location. After doing so, in some embodiments, the customer may receive a notification of the existence of a pre-set group order for the area. If the customer so chooses, the customer can join the preset group. In some embodiments, the customer will have until a pre-determined time in the day to place an order as a part of the group order. In such an embodiment, once the pre-determined time arrives, the group order will be closed for modification or additional orders. One illustration of the present example is that each of the customers in an office building may input a recurring code that is linked to a group order to be delivered to the office building, even if the customers work for separate companies with offices in the office building. In this example, the input of the recurring code allows the customer to join a single building-wide group order. Accordingly, when the customer places an order for food, it is prepared, picked up, and delivered with the other menu selections made by other customers in the group order.
When the group order is placed, even though the customer can pay for his or her meal individually, the restaurant can receive the group order information as a single order. In this embodiment, the restaurant can execute the order as if it were placed from a single device. By joining the group order, the customer can avoid paying separate delivery fees, as a single delivery driver can bring over all of the orders in the group order to a designated drop off location that is convenient for the customers in the group. In some embodiments, the customer that joins the group order for the office building can avoid paying delivery fees altogether. Further, in this example, the customer will know exactly when and where his or her food will arrive each day.

Example 14

Some embodiments of the present disclosure comprise a group ordering system to be used at apartment complexes, dormitories, multi-family homes, or other living structures that house more than one individual who may place a mobile order. In some embodiments, the residents living in the specific building can input contact information and customer characteristics indicating to the system that the customer lives at or near a particular address associated with a group order. After doing so, in some embodiments, the customer may receive a notification of the existence of a pre-set group order for the location. If the customer so chooses, the customer can join the preset group. In some embodiments, a management group or university system may be able to utilize the group order system to host events for the residents. In some embodiments, the residents may be able to join a group order that delivers food to one particular location in the building, complex, community, or dormitory at a preset time. Similar to Example 13, by joining the group order, the customer can avoid paying separate delivery fees. In some embodiments, the customer that joins the group order for the residential community can avoid paying delivery fees altogether. Further, in this example, the customer will know exactly when and where his or her food will arrive each day.

CONCLUSION

The disclosure claimed has been herein disclosed sufficiently for persons skilled in the art to comprehend and practice. While the disclosure may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. The various embodiments, examples, and illustrations disclosed herein, while representing the best and various alternative modes of carrying out the disclosure as currently contemplated by the inventors, are by no means limiting or exhaustive, but serve as an aid to comprehending the full nature and scope of the disclosure. It should be understood that the disclosure is not intended to be limited to the particular forms disclosed. Rather, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure as defined by the following appended claims. Various other embodiments will become apparent which fall within the scope of this disclosure and claims. It should be noted that section titles or headers are provided for convenience only, and are not to be taken as limiting the scope of the descriptions thereunder. The various aspects, embodiments, implementations or features of the described embodiments can be used separately or in any combination.
Various aspects of the described embodiments can be implemented by software, hardware, or a combination of hardware and software. The described embodiments can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, hard disk drives, solid-state drives, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. Consistent with the above disclosure, the examples of systems and methods enumerated in the following clauses are specifically contemplated and are intended as a non-limiting set of examples.

Clause 1. A computer-implemented method including receiving, by a computing device, over a network, one or more contact information pertaining to one or more customers; receiving a selection of a restaurant to initiate an order for the one or more customers; transmitting, via the computing device and based on the one or more contact information, one or more notifications to one or more customer computing devices associated with the one or more customers, where the one or more notifications comprises a prompt to at least a menu associated with the restaurant; receiving, by the computing device, one or more selections of items ordered from the menu associated with the restaurant; and transmitting, by the computing device, the one or more selections of items included in the order to a computing device associated with the restaurant to cause the items to be prepared.

Clause 2. The computer-implemented method of any foregoing clause, where the one or more selections of items are ordered in parallel by the one or more computing devices associated with the one or more customers.

Clause 3. The computer-implemented method of any foregoing clause, further including receiving a cutoff time for when the order must be placed; and disabling an ability to modify the order after the cutoff time.

Clause 4. The computer-implemented method of any foregoing clause, further including receiving a selection relating to designating the order as a pickup order or a delivery order.

Clause 5. The computer-implemented method of any foregoing clause, further including determining a property of the one or more customers; and limiting one or more items associated with the menu based on the property of the one or more customers.

Clause 6. The computer-implemented method of any foregoing clause, further including receiving a time selection for the items to be prepared; and transmitting the time selection to the computing device associated with the restaurant to cause the items to be prepared at the time.

Clause 7. The computer-implemented method of any foregoing clause, further including transmitting, by the computing device, a location and timeframe at which the restaurant is going to deliver the items to a website or an application; and receiving, from the one or more customer computing devices, one or more confirmation messages indicating the one or more customers will be at the location and the timeframe to pick up the items.

Clause 8. The computer-implemented method of any foregoing clause, where the one or more customers are members of a family.
Clause 9. The computer-implemented method of any foregoing clause, where the one or more customers are associated with one or more entities.

Clause 10. A non-transitory, computer-readable medium storing instructions that, when executed, cause a processing device to: receive, over a network, one or more contact information pertaining to one or more customers; receive a selection of a restaurant to initiate an order for the one or more customers; transmit, based on the one or more contact information, one or more notifications to one or more customer computing devices associated with the one or more customers, wherein the one or more notifications comprises a prompt to at least a menu associated with the restaurant; receive one or more selections of items ordered from the menu associated with the restaurant; and transmit the one or more selections of items included in the order to a computing device associated with the restaurant to cause the items to be prepared.

Clause 11. The computer-readable medium of any foregoing clause, where the one or more selections of items are ordered in parallel by the one or more computing devices associated with the one or more customers.

Clause 12. The computer-readable medium of any foregoing clause, where the processing device is further configured to: receive a cutoff time for when the order must be placed; and disable an ability to modify the order after the cutoff time.

Clause 13. The computer-readable medium of any foregoing clause, where the processing device is further configured to receive a selection relating to designating the order as a pickup order or a delivery order.

Clause 14. The computer-readable medium of any foregoing clause, where the processing device is further configured to: determine a property of the one or more customers; and limit one or more items associated with the menu based on the property of the one or more customers.

Clause 15. The computer-readable medium of any foregoing clause, where the processing device is further configured to: receive a time selection for the items to be prepared; and transmit the time selection to the computing device associated with the restaurant to cause the items to be prepared at the time.

Clause 16. The computer-readable medium of any foregoing clause, where the processing device is further configured to: transmit a location and timeframe at which the restaurant is going to deliver the items to a website or an application; and receive, from the one or more customer computing devices, one or more confirmation messages indicating the one or more customers will be at the location and the timeframe to pick up the items.

Clause 17. A system including a memory device storing instructions; a processing device communicatively coupled to the memory device, wherein the processing device executes the instructions to: receive, over a network, one or more contact information pertaining to one or more customers; receive a selection of a restaurant to initiate an order for the one or more customers; transmit, based on the one or more contact information, one or more notifications to one or more customer computing devices associated with the one or more customers, wherein the one or more notifications comprises a prompt to at least a menu associated with the restaurant; receive one or more selections of items ordered from the menu associated with the restaurant; and transmit the one or more selections of items included in the order to a computing device associated with the restaurant to cause the items to be prepared.
Clause 18. The system of any foregoing clause, where the one or more selections of items are ordered in parallel by the one or more computing devices associated with the one or more customers.

Clause 19. The system of any foregoing clause, where the processing device is further configured to: receive a cutoff time for when the order must be placed; and disable an ability to modify the order after the cutoff time.

Clause 20. The system of any foregoing clause, where the processing device is further configured to receive a selection relating to designating the order as a pickup order or a delivery order.
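As a rough, end-to-end illustration of the computer-implemented method recited in Clause 1, the sketch below walks through the same steps with in-memory stand-ins for the network calls; the GroupOrderSession class, the send_notification and send_to_restaurant callables, and all other names are assumptions made purely for illustration and are not the claimed implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class GroupOrderSession:
    restaurant_id: str
    menu_url: str
    contacts: Dict[str, str]                     # customer id -> contact information
    selections: Dict[str, List[str]] = field(default_factory=dict)

    def notify_customers(self, send_notification: Callable[[str, str], None]) -> None:
        """Transmit a notification with a prompt to the restaurant's menu
        to each customer computing device (the transmitting step of Clause 1)."""
        for customer_id, contact in self.contacts.items():
            send_notification(contact, f"Add your items to the group order: {self.menu_url}")

    def record_selection(self, customer_id: str, items: List[str]) -> None:
        """Receive a customer's selections from the menu (the receiving step of Clause 1)."""
        self.selections.setdefault(customer_id, []).extend(items)

    def submit(self, send_to_restaurant: Callable[[str, List[str]], None]) -> None:
        """Transmit all selections to the restaurant so the items are prepared."""
        all_items = [item for items in self.selections.values() for item in items]
        send_to_restaurant(self.restaurant_id, all_items)


if __name__ == "__main__":
    session = GroupOrderSession(
        restaurant_id="rest-42",
        menu_url="https://example.com/menu/rest-42",
        contacts={"alice": "alice@example.com", "bob": "+1-555-0100"},
    )
    session.notify_customers(lambda contact, msg: print(f"notify {contact}: {msg}"))
    session.record_selection("alice", ["veggie wrap"])
    session.record_selection("bob", ["club sandwich", "iced tea"])
    session.submit(lambda rid, items: print(f"order for {rid}: {items}"))
```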
152,048
11861745
Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

This document describes methods, systems, devices, and computer readable media that facilitate creation of client-initiated segments, including client-initiated conditional segments. As used throughout this document, a segment refers to a flight between an origin and a destination. The term segment refers to any type of flight that carries clients, including shuttles (e.g., a flight between a set of locations specified by the service provider) and charters (e.g., a flight between two locations freely specified by a client). As described in detail below, the segment can be initiated by a client (e.g., a member of a service and/or a user of an application that facilitates creation of the segment), and made available to other clients, for example, by way of a native mobile application (or another appropriate interactive environment, such as a web interface). The aircraft used to travel between the origin and destination is typically a non-commercial aircraft (e.g., a private jet). While any appropriate type of aircraft (e.g., a propeller aircraft, a jet aircraft, or a rotorcraft) can be used, they will be collectively referred to using the term “jet” for brevity. A conditional segment is a segment that is initiated by a client and contingent on at least a specified number of spots being claimed by one or more other clients (or the client) before the segment will be confirmed. A spot refers to a seat or other appropriate area of occupancy for a client on a jet that is used for the segment. A segment service provider may require that at least a specified minimum number of spots are claimed before confirming the segment for the clients. The ability to initiate a conditional segment enables a client to create a segment without being responsible for all of the minimum number of spots. Instead, the segment service provider can notify other clients that the conditional segment is available and allow other clients an opportunity to claim a spot on the segment. If the clients claim at least the minimum number of spots on the segment, the segment service provider can convert the conditional segment to a confirmed segment that will be scheduled for the clients that claimed a spot on the segment. If the segment includes additional spots above the minimum number that have not been claimed, the segment service provider can make those spots available to other clients as well. In some implementations, the minimum number of spots required to convert a conditional segment can be based on a duration of time between the time at which the conditional segment is initiated and a departure time for the conditional segment. For example, the minimum number of spots can be greater for shorter durations of time as this leaves a segment service provider less time to fill the spots on the segment and less time to secure a private jet for the segment. The minimum number of spots can vary dynamically over time while the conditional jet segment remains conditional rather than confirmed. For example, as the time gets closer to the departure time, the minimum number of spots can increase for similar reasons. In some implementations, the creator may be required to claim a minimum number of spots to create a conditional segment. Similarly, this minimum number of spots can be based on the duration of time between the time at which the conditional segment is initiated and a departure time for the conditional segment.
For example, the minimum number of spots can be greater for shorter durations of time as this leaves a segment service provider less time to fill the spots on the segment and less time to secure a private jet for the segment. In some implementations, clients that create or claim one or more spots on a conditional segment can enter and leave the conditional segment up until the conditional segment is confirmed. In particular, the clients can give up their claimed spots on the conditional segment if the clients no longer want to travel on the conditional segment. The clients can also reclaim spots on the conditional segment if the spots are available. After the conditional segment is confirmed, the clients that claimed a spot on the segment and no longer want to travel on the segment can offer their spots to other clients. If another client claims these spots, the client that originally claimed the spots can be refunded or not required to pay for the spots. In some implementations, the client that originally claimed the spot can specify the cost for those spots, which may be, e.g., the same as, less than, or more than the amount the client agreed to pay for the spots. This amount can be presented to the other clients, e.g., in a calendar interface, as described below. FIG.1is a block diagram of an example environment100in which a segment management system110enables clients to initiate segments, including conditional segments. The example environment100includes a network150, such as a local area network (LAN), a wide area network (WAN), the Internet, a mobile network, or a combination thereof. The network150connects client devices130(e.g., client device A130-A and client device B130-B) of clients, the segment management system110, and operator systems142of operators140. The example environment100may include many different client devices130and operators140. The segment management system110, which can be operated and maintained by a segment service provider, allows users to arrange transportation on segments provided by the segment service provider. The segment service provider can provide scheduled segments between origins and destinations. The segment service provider can also allow clients (e.g., members of the segment service provided by the segment service provider) to initiate segments with custom attributes (e.g., custom departure date, origin, destination, and/or type of jet). For example, a client may want to travel from an origin to a destination on a day in which the scheduled segment(s) from the origin to the destination are full or on a day that no segments are scheduled from the origin to the destination. In this example, the client can initiate a segment from the origin to the destination on the day and/or time that the client wants to travel. A client that initiates a segment is also referred to as a creator. Clients can initiate confirmed segments and conditional segments. As described above, a conditional segment is a segment that is contingent on at least a specified minimum number of spots being claimed by the client and optionally one or more other clients. A confirmed segment is a segment that the service provider will guarantee when the creator provides an amount (e.g., a payment) for at least the specified number of spots at the time of creation. For example, the segment service provider may require that a minimum of three spots be claimed on a segment between Miami and Dallas. In this example, the creator can create a confirmed segment in which the creator claims at least three spots.
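As noted above, the specified minimum number of spots can grow as the departure time approaches. A minimal sketch of one such rule follows; the thresholds, the base_minimum and jet_capacity parameters, and the function name are illustrative assumptions only.

```python
from datetime import datetime, timedelta


def minimum_spots_required(now: datetime, departure: datetime,
                           base_minimum: int = 3, jet_capacity: int = 8) -> int:
    """Return the minimum number of claimed spots needed to confirm a
    conditional segment, increasing as the departure time approaches.

    The thresholds below are illustrative only: shorter lead time leaves the
    service provider less time to fill spots and source a jet, so the
    requirement grows.
    """
    lead = departure - now
    if lead >= timedelta(days=14):
        required = base_minimum
    elif lead >= timedelta(days=7):
        required = base_minimum + 1
    elif lead >= timedelta(days=3):
        required = base_minimum + 2
    else:
        required = base_minimum + 3
    return min(required, jet_capacity)   # never require more spots than the jet has


if __name__ == "__main__":
    departure = datetime(2024, 6, 1, 9, 0)
    for days_out in (20, 10, 5, 1):
        now = departure - timedelta(days=days_out)
        print(days_out, "days out ->", minimum_spots_required(now, departure))
```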
As described below, the creator can open some of the creator's claimed spots on a confirmed segment to other clients, e.g., for flight credit or a monetary amount. If the creator only wants one or two seats and does not want to provide the amount for all three spots, the creator can create a conditional segment and allow other clients to claim a spot. If at least the minimum number of spots are claimed by the other clients, the segment service provider can convert the conditional segment to a confirmed segment. If not, the segment service provider can cancel the conditional segment (e.g., at the expiration time). The minimum number of spots can vary between different conditional segments. For example, the segment service provider can require that at least three spots be claimed on one conditional segment while requiring at least five spots be claimed on a different conditional segment. The minimum number of spots can vary based on the total number of spots on the segment, the amount of time between the time at which the segment is initiated and the departure time for the segment, an estimated amount for the segment, and/or other appropriate criteria. A client can initiate and manage segments, claim a spot on a segment, manage other travel arrangements with the segment management system110, and/or perform other appropriate tasks related to the segment management system110using a client-side application132executed on the client's client device130. The application132can transmit data to, and receive data from, the segment management system110over the network150. The application132can be implemented as a native application developed for a particular platform or a particular device, a web browser that provides a web interface, or another appropriate type of application. The application132can present and detect user interactions with various interfaces that allow the client to initiate conditional segments, manage conditional segments, and/or claim a spot on segments. Some example interfaces generated and presented by the application132are illustrated inFIGS.4A-5Eand described in detail below. The segment management system110includes one or more front-end servers112and one or more back-end servers114. The front-end servers112can transmit data to, and receive data from, client devices130, e.g., client device A130-A and client device B130-B, and operator systems142of operators140over the network150. For example, the front-end servers112can provide, to the application132of a client's client device130, interfaces or data for presentation with the interfaces. The front-end servers112can also receive data specifying user interactions with the interfaces of the application132, such as attributes of a conditional segment initiated by the client. As described in more detail below, the front-end servers112can update the interfaces, provide new interfaces, and/or update the data presented by the interfaces based on user interactions with the application132. The front-end servers112can also communicate with the back-end servers114. For example, the front-end servers112can identify data that is to be processed by the back-end servers114, e.g., data specifying attributes of a conditional segment, and provide the identified data to the back-end servers114. The front-end servers112can also receive, from the back-end servers114, data for a particular client and transmit the data to the client device130of the particular client over the network150.
The back-end servers114include a segment scheduling engine116, a spot assessment engine118, and a segment sourcing engine120. The segment scheduling engine116manages the creation, confirmation, and/or cancellation of segments including conditional segments. The segment scheduling engine116can receive data specifying attributes of a segment initiated by a client and create the segment within the segment management system110. For example, a client that uses client device A130-A can interact with the application132to initiate a segment and specify attributes of the segment. The attributes can include a departure geographic identifier (e.g., an origin city or airport code), a destination geographic identifier (e.g., a destination city or airport code), a departure date (which can include a date and/or time) at which the segment will depart from the origin, a type of jet (e.g., light, midsize, heavy, propeller, rotorcraft, etc.), a number of spots the client wants to claim on the segment, and/or other appropriate attributes. As used herein, the term engine refers to a data processing apparatus that performs a set of tasks. The application132can generate a segment request134and cause the client device A130-A to transmit the segment request134to the segment management system110over the network150. The segment request134can include one or more of the client-specified attributes. In some implementations, the segment request134can include all of the attributes. For example, the application132can cause the client device A130-A to transmit the segment request134after all of the appropriate attributes have been obtained from the client. As described in more detail below, the application132can prompt the client for the attributes using multiple interfaces. In some implementations, the segment request134includes only a portion of the attributes (e.g., less than all of the attributes required by the segment service provider). For example, the segment scheduling engine116can cause the application132to prompt the client for additional attributes or other information based on initial attributes received in the conditional segment request134. In a particular example, the segment request134can include the departure geographic identifier, destination geographic identifier, and departure date. The segment scheduling engine116can receive these attributes, identify what types of jets are available for travel from the origin to the destination, and provide data specifying the available types of jets to the client device A130-A for presentation by the application132to the client. The client can then select from the available types of jets and the application132can cause the client device A130-A to transmit data specifying the selected type of jet to the segment management system110. The application132can also allow the client to designate the segment as a confirmed segment or a conditional segment. For example, when a client creates a segment, the application132can present an interface control that allows the client to specify whether the segment should be a confirmed segment or a conditional segment. The application132can detect user interaction with the user interface control, determine the type of segment indicated by the user interaction, and provide the data to the segment management system110. The segment scheduling engine116can receive the data and create the appropriate type of segment within the segment management system110based on the data and the attributes received from the client device A130-A.
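To make the staged exchange just described concrete, where an initial request carries only the origin, destination, and departure date and the engine replies with the jet types available for that route, the sketch below uses a small hypothetical in-memory availability table; the table contents, airport codes, and function names are assumptions for illustration only.

```python
from datetime import date
from typing import Dict, List, Tuple

# Hypothetical availability table: (origin, destination) -> jet types that can fly the route.
ROUTE_JET_TYPES: Dict[Tuple[str, str], List[str]] = {
    ("JFK", "SFO"): ["super midsize", "heavy"],
    ("MIA", "DAL"): ["light", "midsize"],
}


def available_jet_types(origin: str, destination: str, departure: date) -> List[str]:
    """Given the initial attributes of a segment request (origin, destination,
    departure date), return the jet types the client can choose from."""
    # A production system would also consult operator schedules for the date;
    # here the date parameter is accepted but only the route table is used.
    return ROUTE_JET_TYPES.get((origin, destination), [])


def complete_segment_request(origin: str, destination: str, departure: date,
                             chosen_type: str, spots_claimed: int) -> Dict[str, object]:
    """Assemble the full set of attributes once the client has picked a jet type."""
    if chosen_type not in available_jet_types(origin, destination, departure):
        raise ValueError(f"{chosen_type!r} is not offered for {origin}->{destination}")
    return {
        "origin": origin,
        "destination": destination,
        "departure_date": departure.isoformat(),
        "jet_type": chosen_type,
        "spots_claimed": spots_claimed,
    }


if __name__ == "__main__":
    print(available_jet_types("JFK", "SFO", date(2024, 6, 1)))
    print(complete_segment_request("JFK", "SFO", date(2024, 6, 1), "heavy", 2))
```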
The segment scheduling engine116can also store the data for the created segment in a schedule data storage unit124. The schedule data storage unit124can include one or more databases (or other appropriate data storage structures) stored in one or more non-transitory data storage media (e.g., hard drive(s), flash memory, etc.). The segment data storage unit124can store data for each segment that is provided by the segment service provider. For example, the segment data storage unit124can store data for each scheduled and each client-initiated segment. The data for each segment can include the departure geographic identifier for the segment, the destination geographic identifier for the segment, the departure date for the segment, the type of jet and/or an identifier of the actual jet being used for the segment, an identifier for each client that has claimed a spot on the segment, and/or other appropriate segment data. The data for each segment can also include data specifying whether the segment is a client-initiated segment and whether the segment is confirmed or conditional. The segment scheduling engine116can notify other clients of the created segment. In some implementations, clients can view the various segments from an origin to a destination. For example, the application132can present segments from an origin to a destination using a calendar interface. The calendar interface can include, for each date, zero or more segment indicators, one for each segment scheduled (or conditional) to travel from the origin to the destination on that date. For example, each segment indicator may be a dot under the date in the calendar. In this example, after the segment is created, a segment indicator will be presented under the departure date for the segment to represent the created segment. If a different client is viewing the calendar interface for flights from the same origin and to the same destination as the created segment, the client can see the dot for the created segment and interact with the dot or the date (e.g., by selecting the dot or the date) to view more information about the created segment and/or claim a spot on the created segment. In some implementations, the segment scheduling engine116notifies clients using push segment notifications136(and/or through client navigation of the application, as discussed in more detail below). For example, the segment scheduling engine116may send messages (e.g., within the application132, via text messaging, and/or via e-mail) to the clients to notify the clients of the created segment. The messages can include a link to an application page within the application132(or to a web page in a web interface) to claim a spot on the segment. In some implementations, the segment scheduling engine116sends the notifications136to clients that are likely to be interested in the created segment, e.g., based on previous segments on which the clients were passengers, the location of the clients, and/or favorite locations specified by the clients (e.g., using the application132). Data regarding previous segments on which clients were passengers is stored in a historical data storage unit122, e.g., as part of data regarding each previously operated segment. A client, e.g., the client associated with (e.g., logged into the application132on) the client device B130-B, can request a spot on a conditional segment using the application132.
For example, the client can interact with a segment indicator for the segment in a calendar interface, select a link in a push notification, or use another appropriate mechanism. The application132can then generate a request for a segment spot137and transmit the request for the segment spot137to the segment management system110. The request for the segment spot137can include data specifying the client that submitted the request and an identifier of the segment on which the client is requesting a spot. The segment scheduling engine116can receive the request for the segment spot137and determine whether there is still a spot available on the conditional segment. For example, there may not be a spot available if the conditional segment has expired or if other clients have claimed all of the available spots. If there is still a spot available on the conditional segment, the segment scheduling engine116can add the client that submitted the request for the segment spot137to the conditional segment. If not, the segment scheduling engine116can send a notification to the client that the segment is no longer available, e.g., on an interface of the application132. If the segment scheduling engine116adds the client to the conditional segment, the segment scheduling engine116can determine whether to convert the conditional segment to a confirmed segment. For example, the segment scheduling engine116can compare the number of spots that have been claimed on the conditional segment to the specified minimum number of spots that have to be claimed to convert the conditional segment to a confirmed segment. If the number of spots that have been claimed on the conditional segment meets or exceeds the minimum number of spots, the segment scheduling engine116can convert the conditional segment to a confirmed segment. If not, the segment scheduling engine116can maintain the conditional segment as a conditional segment until the specified number of spots on the segment are claimed or the expiration time elapses. The segment scheduling engine116can provide segment status notifications135to the client that created the conditional segment and/or the clients that claimed a spot on the conditional segment. For example, the segment scheduling engine116can provide a segment status notification135in response to the conditional segment either being cancelled (e.g., to notify the clients of the cancellation) or being confirmed (e.g., to notify the clients that the segment has been confirmed). Each client that claimed a spot on the segment may have provided a respective amount to claim the spot on the segment. If the conditional segment is cancelled, the segment management system110can return the respective amount to each client. The segment scheduling engine116can enable clients to claim spots and cancel claimed spots freely. For example, if a client that claimed a spot no longer needs to travel on that segment, the client can cancel the claimed spot. In some implementations, the segment scheduling engine116can limit this functionality to conditional segments that have not yet been confirmed, as confirmed segments are scheduled by the segment scheduling engine116and a jet is procured for the confirmed segment. However, the segment scheduling engine116can enable clients that claimed a spot on the now-confirmed segment to offer their spots to other clients. If a client offers a spot, the segment scheduling engine116can notify other clients of the available spots.
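A compact sketch of the spot-claiming flow just described appears below, assuming a hypothetical ConditionalSegment record: availability is checked, the claiming client is recorded, and the segment is converted to confirmed once the claimed count reaches the required minimum. The class, field names, and capacity figures are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ConditionalSegment:
    segment_id: str
    total_spots: int
    min_spots_to_confirm: int
    claimed: Dict[str, int] = field(default_factory=dict)   # client id -> spots held
    status: str = "conditional"                              # or "confirmed" / "cancelled"

    def spots_claimed(self) -> int:
        return sum(self.claimed.values())

    def claim_spot(self, client_id: str, count: int = 1) -> bool:
        """Handle a request for a segment spot: reject it if the segment is no
        longer open or is full, otherwise record the claim and convert the
        segment to confirmed once the minimum is met."""
        if self.status == "cancelled":
            return False
        if self.spots_claimed() + count > self.total_spots:
            return False                                      # no spot available
        self.claimed[client_id] = self.claimed.get(client_id, 0) + count
        if self.status == "conditional" and self.spots_claimed() >= self.min_spots_to_confirm:
            self.status = "confirmed"
        return True

    def release_spot(self, client_id: str) -> None:
        """Clients may give up claimed spots while the segment is still conditional."""
        if self.status == "conditional":
            self.claimed.pop(client_id, None)


if __name__ == "__main__":
    seg = ConditionalSegment("seg-1", total_spots=8, min_spots_to_confirm=3)
    seg.claim_spot("creator", 1)
    seg.claim_spot("client-b", 1)
    print(seg.status)                 # conditional
    seg.claim_spot("client-c", 1)
    print(seg.status)                 # confirmed
```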
In some implementations, the segment scheduling engine116can allow the clients offering a claimed spot to specify the amount for another client to claim the spot. This amount can be any amount, e.g., the same as, more than, or less than the amount the client previously agreed to provide for the claimed spot. This provides the client flexibility in recouping some of the amount if it is close to the departure date and no other clients have claimed the spot. After converting a conditional segment to a confirmed segment, the segment scheduling engine116can allow other clients to claim a spot on the confirmed segment if there are additional spot(s) available. For example, the minimum number of spots required to convert the conditional segment to a confirmed segment may be less than all of the spots on the actual jet being used for the segment. The segment scheduling engine116can continue presenting a segment indicator (e.g., dot) for the segment in the calendar interface and/or send notifications to clients that spots are available on the confirmed segment. As described in more detail below, the color of the segment indicator for a confirmed segment may be different from the color of the segment indicator for a conditional segment to allow a client to quickly and easily determine the types of segments available on each date without requiring client interaction with the presented date or otherwise navigating to another user interface that provides more detailed information about the segments available on each date. The segment management system110also includes a spot assessment engine118. The spot assessment engine118can determine an amount that clients are required to submit for each spot on a segment. The amount can be based on various factors, such as the type of jet, the departure and destination geographic identifiers, the departure date, the duration of time between the time the segment is initiated and the departure date, the type of segment (e.g., conditional or confirmed), and/or other appropriate factors. The application132can present the amount for a spot at the interfaces that enable the clients to initiate a segment and/or to claim a spot on a segment. For example, the application132can request current amounts for a particular segment from the spot assessment engine118in response to a client selecting to view information about the particular segment. In another example, the application132can request the information before the client selection, e.g., when the client opens the application132, when the client selects the same origin and destination as the particular segment, or periodically. In this way, the latency in presenting the information can be reduced. The segment management system110also includes a segment sourcing engine120. The segment sourcing engine120can interact with the operator systems142of the operators140to select jets for the segments and obtain information about the jets (e.g., number of spots on the jet, costs for particular segments, range, etc.). For example, when the segment scheduling engine116creates a segment (e.g., a confirmed segment), the segment sourcing engine120can interact with the operator systems142to identify a jet of the same type as the created segment that can be used for the segment. For example, the segment sourcing engine120can submit a request to each operator system142for a jet of the type of the created segment.
In response to receiving a request, the operator systems142can obtain data regarding available jets from their respective operator data storage units144and provide, to the segment sourcing engine120, the information about any available jets that the operator140would be willing to operate for the created segment (e.g., number of spots on the jet, costs for particular segments, range, etc.). If multiple operators140have an available jet, the segment sourcing engine120can select a jet for the created segment based on the information provided by the operator systems142. In some implementations, the front-end servers112of the segment management system110communicate with the operator systems using application programming interfaces (APIs). The use of the APIs requires computational power to communicate data. To reduce the amount of computational power used by the APIs, the segment management system110may identify a subset (e.g., less than all) of operators140for a particular segment and provide the request to only the operators in the subset. The segment management system110can identify the subset of operators based on the departure geographic identifier (e.g., identify operators that operate jets in the geographic area from which the segment will depart), previous segments provided by the operators (e.g., previously provided a jet for the same origin and destination), types of jets that the operator operates, and/or other appropriate criteria. In some implementations, communications with operators can be carried out using other communication means (e.g., phone). FIG.2is a swim lane diagram that illustrates an example process200for creating and either confirming or cancelling a conditional segment. The operations of the process200are described with reference to the segment management system110, the client device A130-A, client device B130-B, and the operator system142ofFIG.1. The client device A130-A sends attributes of a conditional segment to the segment management system110(202). For example, a client associated with the client device A130-A can initiate a conditional segment using the application132. The application132can enable the client to specify attributes of the conditional segment. As described above, the attributes can include a departure geographic identifier, a destination geographic identifier, a departure date, a type of jet, a number of spots the client is claiming on the segment, and/or other appropriate attributes. The segment management system110creates the conditional segment based on the attributes received from the client device A130-A (204). The segment management system110can create the conditional segment within the system so that other clients can view information about the segment and/or claim a spot on the segment. For example, when the application132executing on other clients' devices requests information about segments, the segment management system110can provide information about the created conditional segment for presentation by the application132to the client. The segment management system110notifies other clients about the conditional segment (206). For example, the segment management system110can send information about the conditional segment to other clients that may be interested in claiming a spot on the conditional segment. The segment scheduling engine116may send messages (e.g., within the application132, via text messaging, and/or via e-mail) to the clients to notify the clients of the conditional segment.
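One way to decide which clients to notify in operation (206), based on the interest signals mentioned earlier (previous segments on which a client was a passenger, the client's location, and favorite locations), is sketched below; the ClientProfile fields, the matching rule, and the sample data are illustrative assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple


@dataclass
class ClientProfile:
    client_id: str
    home_city: str
    favorite_cities: Set[str] = field(default_factory=set)
    past_routes: Set[Tuple[str, str]] = field(default_factory=set)  # (origin, destination)


def clients_to_notify(clients: List[ClientProfile], origin: str,
                      destination: str) -> List[str]:
    """Pick the clients most likely to be interested in a new segment:
    those who flew the route before, or whose home or favorite cities
    match the segment's origin or destination."""
    interested = []
    for client in clients:
        flew_route = (origin, destination) in client.past_routes
        city_match = {origin, destination} & ({client.home_city} | client.favorite_cities)
        if flew_route or city_match:
            interested.append(client.client_id)
    return interested


if __name__ == "__main__":
    roster = [
        ClientProfile("c1", "New York", {"San Francisco"}, {("New York", "San Francisco")}),
        ClientProfile("c2", "Dallas", {"Miami"}, set()),
        ClientProfile("c3", "San Francisco", set(), set()),
    ]
    print(clients_to_notify(roster, "New York", "San Francisco"))   # ['c1', 'c3']
```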
The client device B130-B receives the notification and presents information about the conditional segment to the client associated with the client device B130-B (208). If the notification is in the form of a message within the application132, the application132executing on the client device B130-B can present a message indicator that indicates that a new message has been received. The client can interact with the message indicator (e.g., select the message indicator) to open the message and view information about the conditional segment. The notification can also be in the form of a segment indicator on a calendar interface that presents segments that are to travel from a particular origin to a particular destination. The client can interact with the segment indicator (e.g., select the segment indicator) to view information about the conditional segment. The application132executing on the client device B130-B detects a user interaction requesting a spot on the conditional segment (210). For example, the client can select a spot on the segment while viewing information about the segment. Example interfaces that show the selection of a spot on a segment are illustrated inFIGS.5A-5Eand described below. The client device B130-B sends the request for the spot to the segment management system110(212). The segment management system110receives the request and updates the conditional segment (214). The segment management system110can update the conditional segment to reflect the spot claimed by the client associated with the client device B130-B. For example, the segment management system110can update the list of clients that are to travel on the segment and the number of spots claimed by the clients. The segment management system110determines whether to convert the segment to a confirmed segment (216). As described above, the segment service provider may require that at least a specified minimum number of spots on a conditional segment be claimed to convert the conditional segment to a confirmed segment. The segment management system110can compare the number of spots that have been claimed to the specified number of spots for the conditional segment. If the number of spots that have been claimed does not meet or exceed the specified number of spots, the segment management system110can determine to not convert the conditional segment to a confirmed segment at that time. Instead, the segment management system110can maintain the segment as a conditional segment until the specified number of spots on the segment are claimed. In another example, the segment management system110can cancel the conditional segment if no spots on the conditional segment are claimed, e.g., based on the client that initiated the conditional segment leaving the conditional segment and any other clients that claimed a spot on the conditional segment leaving the conditional segment. If the number of spots that have been claimed does not meet or exceed the specified number of spots, the segment management system110determines whether the expiration time has elapsed (218). If not, the segment management system110can continue maintaining the segment as a conditional segment until the expiration time elapses. The segment management system110can also notify other clients of the conditional segment so that other clients can claim a spot on the segment to enable the segment to be converted to a confirmed segment.
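The decision points (216) and (218) just described, together with the cancellation that follows when the expiration time elapses, can be summarized in a small helper such as the one sketched below; the function name, its inputs, and the returned strings are illustrative assumptions.

```python
from datetime import datetime


def evaluate_conditional_segment(spots_claimed: int, min_spots: int,
                                 expiration: datetime, now: datetime) -> str:
    """Decide what the segment management system should do with a conditional
    segment: confirm it, cancel it, or keep waiting for more claims."""
    if spots_claimed >= min_spots:
        return "convert to confirmed"
    if spots_claimed == 0:
        return "cancel (all claimed spots were relinquished)"
    if now >= expiration:
        return "cancel (expiration time elapsed)"
    return "keep conditional and continue notifying clients"


if __name__ == "__main__":
    exp = datetime(2024, 5, 28, 12, 0)
    print(evaluate_conditional_segment(3, 3, exp, datetime(2024, 5, 20)))   # convert
    print(evaluate_conditional_segment(1, 3, exp, datetime(2024, 5, 20)))   # keep waiting
    print(evaluate_conditional_segment(1, 3, exp, datetime(2024, 5, 29)))   # cancel
```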
If the expiration time has elapsed, the segment management system110cancels the conditional segment (220). The segment management system110notifies the client that created the conditional segment and the client(s) that claimed a spot on the conditional segment that the conditional segment has been cancelled (222). For example, the segment management system110can send messages to the clients notifying the clients of the cancellation. The application132executing on the client device A130-A presents the notification to the client that created the segment (226). Similarly, the application132executing on the client device B130-B presents the notification to the client associated with the client device B130-B. In some implementations, the operations218and220are optional. For example, there may not be an expiration time. Instead, the minimum number of spots required to confirm the conditional segment can be based on a time duration between the time at which the conditional segment is created and the departure time for the conditional segment. Returning to operation (216), if the number of spots that have been claimed does meet or exceed the specified number of spots, the segment management system110converts the conditional segment to a confirmed segment and notifies the client that created the conditional segment and the client(s) that claimed a spot on the segment (236). For example, the segment management system110can send messages to the clients notifying the clients of the confirmation. The application132executing on the client device A130-A presents the notification to the client that created the segment (240). Similarly, the application132executing on the client device B130-B presents the notification to the client associated with the client device B130-B. After the conditional segment is confirmed, the clients that have claimed spots on the segment may no longer be able to give up their spots on the segment. However, the clients may be able to allow other clients to claim their spots, e.g., for a flight credit or monetary amount. For example, the segment management system110can enable a client that has one or more spots on a confirmed segment to make the spot(s) available to other clients. The segment management system110can update the calendar interface or send push notifications to other clients notifying them of the available spots. If another client claims a spot made available by the client, the segment management system110can release the client from the confirmed segment and assign the other client to the spot(s) claimed by the other client. The segment management system110also requests a jet for the segment (228). For example, the segment management system110can submit a request to the operator system142of one or more operators140for one of the operators to source a jet (and its crew) for the segment. The one or more operators can include operators that operate jets in the same geographic area (e.g., same city, state, or other appropriate geographic area) as the area from which the segment is scheduled to depart. The request can specify information about the segment, e.g., the type of jet requested, a minimum number of spots requested, the departure and destination identifiers for the segment, and/or other appropriate information that can be used by the operator system142to identify an appropriate jet for the segment. Each operator system that receives the request identifies a jet for the segment (230).
For example, the operator system142can access information about the operator's jets and compare the information in the request to the information about the operator's jets to identify a jet that is appropriate for the segment (e.g., a jet that is of the same type as specified by the request and includes the minimum number of spots). The operator system142can also access availability or scheduling information for the jets to ensure that the appropriate jet is available for the segment based on the departure date for the segment. If the operator system142identifies an available jet that is appropriate for the segment, the operator system142provides information about the jet to the segment management system110(232). The information about the jet can include the number of spots on the jet, a cost for the segment, range of the jet, and/or other appropriate information. If operator systems for multiple operators provide information about an available jet, the segment management system110can select a jet for the segment based on the information provided by the operator systems142(234). After selecting a jet, the segment management system110can notify the operator140of the selected jet. In turn, the operator system142of the operator140can provide a confirmed itinerary for the segment. The segment management system110can forward the itinerary to the client that initiated the segment and the client(s) that claimed a spot on the segment, e.g., using a message within the application132. As described above, if there are any available spots on the segment, the segment management system110can make those spots available to other clients, e.g., via the application132. FIG.3is a flow chart of an example process300for creating and confirming a conditional segment. Operations of the process300can be performed, for example, by one or more data processing apparatus, such as the segment management system110. Operations of the process300can also be implemented as instructions stored on a non-transitory computer readable medium. Execution of the instructions causes one or more data processing apparatus to perform operations of the process300. Attributes of a segment created by a creator are received (302). The creator can create the segment through an interface of the creator's device. During the creation process, the creator can specify attributes of the segment using the interface. The creator can also specify how many spots the creator is claiming on the segment. A determination is made that the attributes specified by the creator include a departure geographic identifier, a destination geographic identifier, a departure date at which the segment will depart, and a type of jet selected for the segment (304). The interface of the device can prompt the creator to specify these attributes or present interface controls that allow the creator to select from multiple options of one or more of the attributes. Interaction with an interface control that designates the segment as a conditional segment is detected based on data received from the device (306). For example, the interface can present an interface control that allows the creator to select between confirmed and conditional segments. The device can detect the selection made by the creator and provide data specifying the selection. The data can be used to detect the type of segment created by the creator, e.g., either confirmed or conditional.
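For operations (304) and (306) just described, a minimal attribute check and segment-type detection might look like the sketch below; the attribute keys, the conditional_control_selected flag, and the function names are hypothetical and used only for illustration.

```python
from typing import Dict, List

REQUIRED_ATTRIBUTES = ["departure_geo", "destination_geo", "departure_date", "jet_type"]


def missing_attributes(request: Dict[str, object]) -> List[str]:
    """Check that the creator supplied the attributes listed in operation (304)."""
    return [name for name in REQUIRED_ATTRIBUTES if not request.get(name)]


def detect_segment_type(request: Dict[str, object]) -> str:
    """Map the interface-control selection (operation 306) to a segment type."""
    return "conditional" if request.get("conditional_control_selected") else "confirmed"


if __name__ == "__main__":
    req = {
        "departure_geo": "JFK",
        "destination_geo": "SFO",
        "departure_date": "2024-06-01",
        "jet_type": "heavy",
        "conditional_control_selected": True,
    }
    print(missing_attributes(req))     # []
    print(detect_segment_type(req))    # conditional
```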
In some implementations, the creator is required to claim at least a minimum number of spots on the conditional segment in order to create the conditional segment. As described above, the minimum number of spots can be based on the duration of time between the time at which the creator created the conditional segment and the departure time for the conditional segment. Clients are notified that the conditional segment is available (310). In some implementations, the interfaces of the clients' devices are updated to present information about the conditional segment. For example, a calendar interface may be updated to present a segment indicator for the conditional segment on the departure date of the conditional segment. The calendar interface for a particular client may be updated the next time the client navigates to the calendar interface that would present the segment indicator for the conditional segment, e.g., a calendar interface that presents segments from the same origin and to the same destination as the conditional segment. In some implementations, push notifications are sent to the devices of the other clients. For example, a message can be sent to the devices of clients that may be interested in claiming a spot on the conditional segment. The notifications can specify how many more spots need to be claimed to convert the conditional segment to a confirmed segment. Clients are enabled to claim a spot on the conditional segment through a client-side application (312). A client can interact with an interface of the application to claim a spot on the conditional segment. For example, the client can interact with (e.g., select) the segment indicator or the departure date for the segment on the calendar interface to view additional information about the conditional segment. The interface that presents the additional information can also include an interface control (e.g., a selectable button or icon) that enables the client to claim a spot on the conditional segment. Similarly, a push notification sent to the client's device can include a link or other interface control that, when selected, presents an interface that enables the client to claim a spot on the conditional segment. A determination is made whether to convert the conditional segment to a confirmed segment (314). The determination can be based on the number of spots claimed on the conditional segment. For example, the segment service provider may require that at least a specified number of spots be claimed on the segment to convert the conditional segment to a confirmed segment. As such, the segment management system110can continually (or periodically) monitor the number of spots that have been claimed, compare the number of spots claimed to the specified number of spots, and determine whether to convert the conditional segment to a confirmed segment based on the comparison. While the conditional segment is pending, the creator and/or other clients that claimed a spot on the conditional segment can leave, e.g., relinquish their spots, without penalty until the conditional segment is confirmed. The same clients can reclaim a spot on the conditional segment if a spot is still available. If the number of spots claimed on the conditional segment does not meet or exceed the specified number of spots for the conditional segment when the expiration time lapses, the conditional segment is cancelled (316).
If the number of spots claimed on the conditional segment meets or exceeds the specified number of spots for the conditional segment, the conditional segment is converted to a confirmed segment (318). In some implementations, the number of spots claimed by the creator is compared to a specified number of spots that are required to be claimed to convert the conditional segment to a confirmed segment after the creator designates the segment as a conditional segment. If the creator has claimed at least the specified number of spots, the conditional segment may be converted to a confirmed segment, e.g., without obtaining the expiration time or notifying other clients of the conditional segment. In some situations, the creator may originally claim fewer than the specified number of spots, but then return to the application to claim additional spots at a later time. In this example, the creator can still specify the expiration time (or the system can specify the expiration time), and other clients will be notified of the conditional segment, but the segment can be converted to a confirmed segment based on the creator proceeding to claim additional spots, thereby bringing the total number of spots claimed by the creator to meet or exceed the specified number of spots. FIGS.4A-4Iare screenshots of example graphical interfaces for creating a conditional segment. In this particular example, the screenshots illustrated inFIGS.4A-4Iand described below show interfaces for creating a conditional shuttle. However, similar interfaces and techniques can be used to create other types of conditional segments, including conditional charters or other conditional flights. The interfaces can be presented by an application executing on a client device. FIG.4Ais a screenshot of an example route selection interface410that enables a client to select a route between two geographic indicators (e.g., two cities). The route selection interface410presents routes for which shuttles are available. The routes can be grouped based on various factors, e.g., geography, upcoming events, etc. In this example, a first group of routes416for coast-to-coast shuttles in the U.S. is presented at the top of the route selection interface410. A second group of routes417for shuttles to and from cities that will be hosting upcoming events is presented below the first group of routes416. A third group of routes418for routes between northeastern U.S. cities is presented below the second group of routes417. There may be additional groups of routes below the third group of routes. For example, the route selection interface410can allow a client to scroll down and view additional groups of routes, e.g., by swiping a touchscreen of the device presenting the interface upwards. The routes can be grouped based on the client viewing the route selection interface410and/or the groups may be ordered based on the client viewing the route selection interface410. For example, if the client is located in New York, groups of routes that include New York City can be presented at the top of the route selection interface. Similarly, if a different client has most often been a passenger on shuttles to and from San Francisco, groups of routes that include San Francisco can be presented at the top of the route selection interface410. The route selection interface410includes, for each route, a route reversal element419that, when selected, switches the origin and destination of the route.
For example, the city on the left of each route may be the origin and the city on the right of each route may be the destination. In this example, the top route between New York and San Francisco may represent shuttles from New York to San Francisco. The application can reverse the route to represent shuttles from San Francisco to New York in response to selection of the route reversal element419. The route selection interface410includes a shuttle selector element411that, when selected, causes the application to present the route selection interface410. The route selection interface410also includes a charter selector element412that, when selected, causes the application to present an interface that enables a client to initiate a charter from an origin to a destination. The route selection interface410also includes a specials selector element413that, when selected, causes the application to present current specials. The route selection interface410also includes a messenger selector element414that, when selected, causes the application to present a messenger interface to the client. The messenger interface allows the client to view messages sent to the client and/or send messages to other clients or administrators of the shuttle service provider. For example, the messages can include notifications of conditional shuttles on which the client may be interested in claiming a spot. The route selection interface410also includes a profile selector element415that, when selected, causes the application to present a profile interface that allows the client to manage the client's profile. FIG.4Bis a screenshot of an example calendar interface420that presents a month view421of a calendar and shuttle indicators422and423for shuttles. The application can present the calendar interface420in response to user interaction with (e.g., a selection of) a route presented in the route selection interface410. In this example, the calendar interface420is presented in response to a selection of the route from New York to San Francisco, as shown in a route identifier element425. Similar to the route selection interface410, the calendar interface420includes the shuttle selector element411, the charter selector element412, the specials selector element413, the messenger selector element414, and the profile selector element415. The calendar interface420includes zero or more shuttle indicators for each day of the month. In this example, each shuttle indicator is a dot that represents a particular shuttle for the route identified by the route identifier element425. The shuttle indicator for a particular shuttle is presented under the date that corresponds to the departure date for the particular shuttle. For example, the shuttle indicator422represents a shuttle that has a departure date of Mar. 25, 2018, an origin of New York, and a destination of San Francisco. If there is no shuttle indicator under a date, then there are no shuttles scheduled for that date from New York to San Francisco. The color of the shuttle indicator represents the status of the shuttle. For example, a white shuttle indicator (or another color) can represent a confirmed shuttle that has one or more available spots that a client can claim. A red shuttle indicator (or another color that differs from the color used for the confirmed shuttle) can represent a sold out shuttle that does not have any available spots.
A gold shuttle indicator (or another color that differs from both of the colors used for the confirmed shuttle and the sold out shuttle) can represent a conditional shuttle that has at least one spot that needs to be claimed to convert the shuttle to a confirmed shuttle. The shuttle indicators423for Mar. 29, 2018 represent one conditional shuttle, two confirmed shuttles with availability, and five sold out shuttles based on the different colors assigned to the dots. When the shuttle is converted from a conditional shuttle to a confirmed shuttle, the application can update the color of the shuttle indicator for the shuttle, for example, from gold to white or red depending on whether there are any remaining spots on the now confirmed shuttle. If the confirmed shuttle has at least one spot available, the application can update the color of the shuttle indicator, for example, from gold to white to indicate that the shuttle is confirmed and that there is at least one spot left on the shuttle. If one or more clients claim the remaining spots on the confirmed shuttle, the application can update the color of the shuttle indicator from white to red to indicate that the shuttle is no longer available. The use of color-coded dots in combination with a calendar interface allows users to quickly and easily identify shuttles from an origin to a destination during a relevant time period. This also reduces the number of interactions with the interface for a client to obtain the information needed, which reduces the processing load imposed on the client device's processor by the application. For example, absent this interface design, a client would have to select each date to view the shuttles on those dates and then select each shuttle to determine its status. This interface provides a quick overview of the availability and types of shuttles for a particular route during a particular time period. Note that specific colors are used for purposes of illustration, but that other colors or other types of visual indicators can be used to differentiate between different types of shuttles. The shuttle indicators or the dates in the month view421can be selectable interface controls. For example, the application can present an interface that enables a client to claim a spot on a shuttle in response to selection of the shuttle indicator for the shuttle. If a date has multiple shuttle indicators, selection of the date or shuttle indicators under the date can cause the application to present an interface that enables the client to select between the shuttles and then claim a spot on one of the shuttles. Example user interfaces for claiming a spot on a shuttle are illustrated inFIGS.5A-5Eand described below. The route identifier element includes a route reversal element419. If the client selects the route reversal element419, the application can update the calendar interface420to present shuttle indicators for shuttles from San Francisco to New York in the month view421. In addition, the client can navigate between months by interacting with the calendar interface420. For example, if the client swipes from right to left across the month view421, the application can update the calendar interface420to present the next month (e.g., April, 2018 in this example) and shuttle indicators for shuttles with departure dates during the next month. 
If the client swipes from left to right, the application can update the calendar interface to present the previous month (e.g., February, 2018 in this example) and shuttle indicators for the previous month. The calendar interface420also includes a shuttle creation interface control424that enables a client to switch between finding shuttles that have already been scheduled or created and creating a shuttle. A client can switch between finding a shuttle and creating a shuttle by sliding a slider bar426of the shuttle creation interface control424from one side to the other. If the client slides the slider bar426to the "create your own" side of the shuttle creation interface control424to switch to creating a shuttle, the application can update the calendar interface420to present the calendar interface430ofFIG.4C. The calendar interface430ofFIG.4Cincludes similar elements as the calendar interface420ofFIG.4B, including the shuttle selector element411, the charter selector element412, the specials selector element413, the messenger selector element414, the profile selector element415, the route identifier element425, and the shuttle creation interface control424. The calendar interface430includes a month view431that presents an amount432for each day of the month. The amount reflects the amount per spot for a shuttle that is created by the client to travel from the origin (New York) to the destination (San Francisco) on that date. As described above, a client that creates a segment is also referred to as a creator. As shown inFIG.4C, the amount may vary based on the departure date for the shuttle. The amounts can be determined by the spot assessment engine118and provided to the creator's device, e.g., in response to the creator interacting with the shuttle creation interface control424to create a shuttle. In another example, the spot assessment engine118can provide the amounts prior to the interaction, e.g., in response to selection of the route in the interface410. In this way, the amounts can be stored locally at the creator's device and presented with less latency than requesting the amounts in response to the interaction with the shuttle creation interface control424. The creator can select a date in the month view431to create a shuttle on that date. In response to the selection of (or other interaction with) a date, the application can update the interface to present a jet selection interface, e.g., the jet selection interface440ofFIG.4D. The jet selection interface440presents jet information elements441and445. Assume for the purposes of this example that the client selected a departure date of Mar. 10, 2018. In this example, there are two types of jets available for the shuttle and a respective jet information element for each type of jet. If more than two types of jets are available for the shuttle, more jet information elements could be presented in the jet selection interface440or the jet selection interface could allow the creator to scroll down to view additional jet selection elements. The jet selection element441is for a super midsize jet and includes a representative image442of a super midsize jet (e.g., not an image of the actual jet that would be used for the shuttle) and information443about super midsize jets. The information443includes a number of spots on the super midsize jet, an amount for the first three spots on the super midsize jet, and an amount for each additional spot.
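As a non-limiting illustration of the per-spot pricing just described, in which a bundled amount covers the first three spots and each additional spot is priced separately, the following is a minimal Python sketch. The shuttle_price function and the example dollar amounts are assumptions introduced for this sketch and are not taken from the description.

def shuttle_price(spots_claimed: int, base_amount_first_three: float,
                  amount_per_additional_spot: float) -> float:
    # Total amount for a creator claiming spots_claimed spots, where the first
    # three spots are covered by a bundled base amount and each spot beyond the
    # third is priced individually. Amounts are illustrative.
    if spots_claimed <= 0:
        return 0.0
    additional = max(0, spots_claimed - 3)
    return base_amount_first_three + additional * amount_per_additional_spot

print(shuttle_price(3, base_amount_first_three=9000.0, amount_per_additional_spot=2500.0))  # 9000.0
print(shuttle_price(5, base_amount_first_three=9000.0, amount_per_additional_spot=2500.0))  # 14000.0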
Similarly, the jet selection element445is for a heavy jet and includes a representative image446of a heavy jet and information447about heavy jets. The information447includes a number of spots on the heavy jet, an amount for the first three spots on the heavy jet, and an amount for each additional spot. If the creator selects one of the jet selection elements441or445, the application can update the interface to present a shuttle customization interface, e.g., the shuttle customization interface450ofFIG.4E. In this example, the shuttle customization interface450ofFIG.4Eenables the creator to customize a super midsize jet for a shuttle from New York to San Francisco in response to a selection of the jet selection element441. The shuttle customization interface450includes a representative image452of a super midsize jet and information452about super midsize jets, similar to the jet selection element441. The shuttle customization interface450also includes a conditional shuttle interface control453that allows the creator to select between a confirmed shuttle (e.g., guaranteed shuttle) and a conditional shuttle (e.g., pending shuttle). The creator can designate the shuttle as a conditional shuttle by sliding a slider bar458of the conditional shuttle interface control453to the "pending" side of the conditional shuttle interface control453. Similarly, the creator can designate the shuttle as a confirmed shuttle by sliding the slider bar458of the conditional shuttle interface control453to the "guaranteed" side of the conditional shuttle interface control453. Note that the described sliding can be initiated through a tap or other user interaction with the slider bar458. The shuttle customization interface450also includes tips454on customizing confirmed shuttles when the slider bar458of the conditional shuttle interface control453is on the "guaranteed" side of the conditional shuttle interface control453. The shuttle customization interface450also includes a seat selection interface control455that presents a spot element456(e.g., seat icon) for each spot (e.g., seat) on the shuttle. For a confirmed shuttle that is guaranteed at creation, the shuttle service provider can require the creator to claim a specified number of the spots. In this example, the shuttle service provider requires the creator to claim three spots. The three required spots are represented by included spot elements457that have a label "included" to indicate that the three spots are included for the creator. The remaining five spots are represented by available spot elements458that include an amount for the available spot. The creator can claim more spots by interacting with (e.g., selecting) the available spot elements458. For example, if the creator wants to claim a total of five spots, the creator can select two of the available spot elements458. In response, the application can convert the selected available spot elements to included spot elements in the shuttle customization interface450. The included spot elements457can be presented in a different color than the available spot elements458. For example, the included spot elements457can be presented in red while the available spot elements458are presented in grey. In some implementations, the creator can offer one or more of the required spots to other clients, e.g., by selecting the included spot element457for the spot. If the creator offers an included spot, the application can enable other clients to claim the spot.
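The following is a minimal Python sketch of the spot-element states just described: the required spots are included for the creator, the remaining spots are available, selecting an available spot claims it, and the creator may offer an included spot to other clients. The function names and the simple string states are assumptions introduced for this sketch rather than elements of the described interfaces.

def build_spot_elements(total_spots: int, required_spots: int) -> list:
    # Initial states for a guaranteed shuttle: the first required_spots are
    # "included" for the creator, the rest are "available" to be claimed.
    return (["included"] * required_spots) + (["available"] * (total_spots - required_spots))

def claim_available_spot(spots: list, index: int) -> None:
    # Selecting an available spot element converts it to an included spot.
    if spots[index] == "available":
        spots[index] = "included"

def offer_included_spot(spots: list, index: int) -> None:
    # The creator may offer a required/included spot to other clients.
    if spots[index] == "included":
        spots[index] = "offered"

spots = build_spot_elements(total_spots=8, required_spots=3)
claim_available_spot(spots, 3)    # the creator claims a fourth spot
offer_included_spot(spots, 0)     # the creator offers one included spot to others
print(spots)  # ['offered', 'included', 'included', 'included', 'available', 'available', 'available', 'available']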
For example, if the creator has offered an included spot, when another client selects the shuttle, e.g., from a calendar interface, the offered spot may be presented as an available spot that can be claimed by the other client. If the creator offers a required spot to other clients, the application can update the shuttle customization interface450to present the spot element for the spot in a different color, e.g., in blue. If a client claims one of the offered spots, the shuttle service provider can provide a flight credit (or other credit) to the creator for the spot. The creator may still be responsible for paying the amount for the claimed spot. If the creator designates the shuttle as a conditional shuttle using the conditional shuttle interface control453, the application can update the interface to present a conditional shuttle customization interface, e.g., the conditional shuttle customization interface460ofFIG.4F. In the conditional shuttle customization interface460, the application converts the spot elements457for some of the required spots to pending spot elements459. The pending spot elements459represent the number of spots that other clients would be required to claim to convert the conditional shuttle to a confirmed shuttle. In this example, the shuttle service provider requires a total of three claimed spots and the creator is required to claim a minimum of one spot. Initially, the application may convert all but one of the included spots to pending spots and convert all but one of the included spot elements457to pending spot elements459to reflect the conversion. The creator can then interact with (e.g., select) a pending spot element459to reclaim the pending spot, e.g., for someone traveling with the creator. Each pending spot not claimed by the creator can then be made available to other clients. The pending spot elements459can be presented in a different color than the included spot elements457for the included spots and the available spot elements458. For example, the pending spot elements459can be presented in gold to match the conditional shuttles in the calendar interfaces. The conditional shuttle customization interface460also includes tips464on customizing conditional shuttles when the slider bar458of the conditional shuttle interface control453is on the "pending" side of the conditional shuttle interface control453. If the creator slides the slider bar458back to the "guaranteed" side of the conditional shuttle interface control453, the application can update the interface to present the shuttle customization interface450ofFIG.4E. For example, the application can convert the pending spots back to included spots and convert the pending spot elements459back to included spot elements457. In some implementations, the application can update the interface to present an expiration time selection interface470ofFIG.4G. This interface is optional and may not be included in implementations in which the creator is required to claim at least a minimum number of spots on the conditional segment. In some implementations, the expiration time selection interface470is presented in response to the creator swiping a touchscreen of the device upwards to scroll down or otherwise scrolling down in the conditional shuttle customization interface460. For example, the top portion of the expiration time selection interface470includes the bottom portion of the conditional shuttle customization interface460in which the spot elements456are presented. The expiration time selection interface470includes an expiration time interface control471.
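Before the expiration time interface control471is described in more detail, the following is a minimal Python sketch of the included-to-pending conversion just described, in which all but the creator's minimum claimed spots become pending and the creator can reclaim a pending spot. The to_conditional and reclaim_pending functions and the string states are assumptions introduced for this sketch.

def to_conditional(spots: list, required_total: int, creator_minimum: int) -> list:
    # When the creator switches to a conditional (pending) shuttle, keep the
    # creator's minimum as "included" and mark the rest of the required spots
    # as "pending" for other clients to claim.
    pending_needed = required_total - creator_minimum
    converted = list(spots)
    for i in range(creator_minimum, creator_minimum + pending_needed):
        converted[i] = "pending"
    return converted

def reclaim_pending(spots: list, index: int) -> None:
    # The creator can select a pending spot element to reclaim it,
    # e.g., for someone traveling with the creator.
    if spots[index] == "pending":
        spots[index] = "included"

spots = ["included", "included", "included", "available", "available"]
spots = to_conditional(spots, required_total=3, creator_minimum=1)
print(spots)             # ['included', 'pending', 'pending', 'available', 'available']
reclaim_pending(spots, 1)
print(spots)             # ['included', 'included', 'pending', 'available', 'available']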
The expiration time interface control471enables the creator to specify an expiration time for the conditional shuttle. The expiration time interface control471includes an expiration time indicator472that presents the currently specified expiration time. The expiration time interface control471also includes a first row of numbers473that represents days and a second row of numbers474that represents hours. The number of days and the number of hours in the two rows can be limited based on the maximum time period to which the expiration time is limited for the shuttle. The second row of numbers can include repeating sets of the numbers zero through twenty-three. The expiration time interface control471also includes a line475on which (or near) the creator can align a number in the first row of numbers and a number in the second row of numbers to set the expiration time. In the illustrated position, the expiration time is one day and twelve hours based on the number one (one day) in the first row of numbers473being to the left of the line475(without the number two being on or to the left of the line475) and the number twelve (twelve hours) in the second row of numbers474being on the line475. The creator can cause each row of numbers to move, e.g., by swiping the rows of numbers to the left or right. For example, swiping to the right within the expiration time interface control471can cause the second row of numbers to move to the right going from higher numbers to lower numbers. The first row of numbers can move in the same direction but by an amount that corresponds to the number of hours the second row of numbers has moved. For example, if the creator causes the second row of numbers to move from the twelve to the next zero by swiping to the right, the first row of numbers may move to the right by an amount that corresponds to half a day. In other examples, the creator can move each row of numbers individually to specify the expiration time. Of course, other interface controls, such as text entry boxes, clock interfaces, etc. could also be used to specify the expiration time. In this example, the creator has specified an expiration time of zero days and five hours, as shown in the expiration time selection interface480ofFIG.4H. The expiration time selection interfaces470and480include a shuttle information element476that presents information about the conditional shuttle, e.g., the departure geographic identifier, the destination geographic identifier, and the departure time. This information can be updated by the creator, e.g., by interacting with the shuttle information element476. For example, the creator can select the departure geographic identifier to change the departure city or airport for the conditional shuttle. The expiration time selection interfaces470and480also include a confirmation element475that enables the user to complete the creation of the conditional shuttle, e.g., after specifying the expiration time for the conditional shuttle. If the creator selects the confirmation element475, the application transmits the attributes of the conditional shuttle to the shuttle management system110ofFIG.1. In response, the shuttle management system110can notify other clients of the conditional shuttle, as described above. For example, when other clients navigate to the calendar interface420for the month of March, 2018 and for shuttles from New York to San Francisco, a gold shuttle indicator (e.g., dot) will be presented under the departure date (Mar.
10, 2018) for the conditional shuttle, as shown inFIG.4I.FIG.4Iis a screenshot of an example calendar interface490. The calendar interface490includes many of the same elements as the calendar interface420ofFIG.4B. However, the calendar interface490has been updated relative to the calendar interface420ofFIG.4Bto include a shuttle indicator491for the client-initiated conditional shuttle under the departure date of Mar. 10, 2018. The clients can interact with the shuttle indicator491or the departure date to view information about the conditional shuttle and/or claim a spot on the conditional shuttle. If the conditional shuttle is converted to a confirmed shuttle, the color of the shuttle indicator491can be updated to white. If all of the available spots on the shuttle are claimed, the color of the shuttle indicator491can be updated to red. In some implementations, when the shuttle management system110ofFIG.1converts a conditional shuttle to a confirmed shuttle, the shuttle management system110may remove the shuttle indicator491for the shuttle from the calendar interface until a jet is sourced from an operator for the shuttle. Once a jet is confirmed from an operator, the shuttle management system110can add an updated shuttle indicator (e.g., a white shuttle indicator) for the confirmed shuttle back to the calendar interface to allow other clients to claim a spot on the shuttle. This allows the shuttle management system110to determine whether the actual jet will have available spots beyond the number of spots claimed by the creator and the other clients. If so, the shuttle management system110can add the white shuttle indicator to the calendar interface. If not, the shuttle management system110can add a red shuttle indicator to the calendar interface to indicate that there is a shuttle, but that the shuttle is full. FIGS.5A-5Eare screenshots of example graphical interfaces for claiming a spot on a conditional segment. In this particular example, the screenshots illustrated inFIGS.5A-5Eand described below show interfaces for claiming a spot on a conditional shuttle. However, similar interfaces and techniques can be used to claim a spot on other types of conditional segments, including conditional charters or other conditional flights. The interfaces can be presented by an application executing on a client device. FIG.5Ais a screenshot of an example route selection interface510that enables a client to select a route between two geographic indicators (e.g., two cities). The route selection interface510is similar to the route selection interface410ofFIG.4A. For example, the route selection interface510includes three groups of routes516-518, a shuttle selector element511, a charter selector element512, a specials selector element513, a messenger selector element514, and a profile selector element515. If the client selects one of the routes, the application can update the interface to present a calendar interface, e.g., the calendar interface520ofFIG.5B. In this example, the calendar interface520is presented in response to a selection of the route from South Florida to New York, as shown in a route identifier element525. Similar to the route selection interface510, the calendar interface520includes the shuttle selector element511, the charter selector element512, the specials selector element513, the messenger selector element514, and the profile selector element515.
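Returning to the indicator handling described above, the following is a minimal Python sketch of removing a converted shuttle's calendar indicator while a jet is sourced from an operator and then re-adding it as white or red depending on whether the sourced jet has spots beyond those already claimed. The function names, status strings, and specific colors are assumptions introduced for this sketch; the description notes that other visual indicators could be used.

def indicator_after_sourcing(jet_capacity: int, claimed_spots: int) -> str:
    # Once an operator's jet is confirmed, re-add the calendar indicator: white
    # if the jet has spots beyond those already claimed, red if it is full.
    return "white" if jet_capacity > claimed_spots else "red"

def calendar_indicator(shuttle_status: str, jet_sourced: bool,
                       jet_capacity: int, claimed_spots: int):
    if shuttle_status == "confirmed" and not jet_sourced:
        return None    # indicator temporarily removed while a jet is sourced
    if shuttle_status == "confirmed":
        return indicator_after_sourcing(jet_capacity, claimed_spots)
    return "gold"      # still a conditional shuttle

print(calendar_indicator("confirmed", jet_sourced=False, jet_capacity=0, claimed_spots=3))  # None
print(calendar_indicator("confirmed", jet_sourced=True, jet_capacity=8, claimed_spots=3))   # white
print(calendar_indicator("confirmed", jet_sourced=True, jet_capacity=3, claimed_spots=3))   # red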
The calendar interface520includes zero or more shuttle indicators for each day of the month. In this example, each shuttle indicator is a dot that represents a particular shuttle for the route identified by the route identifier element525. The shuttle indicator for a particular shuttle is presented under the date that corresponds to the departure date for the particular shuttle. For example, the shuttle indicator522represents a shuttle that has a departure date of Apr. 1, 2018, an origin of South Florida, and a destination of New York. If there is no shuttle indicator under a date, then there are no shuttles scheduled for that date from South Florida to New York. As described above, the color of the shuttle indicator represents the status of the shuttle. For example, a white shuttle indicator can represent a confirmed shuttle that has one or more available spots that a client can claim. A red shuttle indicator can represent a sold out shuttle that does not have any available spots. A gold shuttle indicator can represent a conditional shuttle that has at least one spot that needs to be claimed to convert the shuttle to a confirmed shuttle. The shuttle indicators523for Apr. 27, 2018 represent one conditional shuttle and one confirmed shuttle with availability. The calendar interface520also includes a shuttle creation interface control524, similar to the shuttle creation interface control424ofFIG.4B. To claim a spot on one of the shuttles, the client can select one of the shuttle indicators or dates. For example,FIG.5Cis a screenshot of a calendar interface530in which a client has selected a date. The calendar interface530is similar to the calendar interface520ofFIG.5B. However, the calendar interface530has been updated to highlight the client's selection of the date Apr. 27, 2018 using a highlight box526. The highlight box526provides feedback to the client indicating the date that the application detected as being selected by the client. In response to the selection of a date, the application can present a shuttle selection interface, e.g., the shuttle selection interface540ofFIG.5D. The shuttle selection interface allows the client to select from multiple shuttles available on the selected date. If only one shuttle is available on the selected date, then the shuttle selection interface540may present information about the available shuttle only. The shuttle selection interface540includes two shuttle information elements541and544. Each shuttle information element541and544includes information for a respective available shuttle. The shuttle information element541includes a representative image542of the type of jet selected for the shuttle and information543about the shuttle. The information543indicates that the shuttle is a custom shuttle created by a creator, that the shuttle is conditional (e.g., pending), and that the shuttle expires in ten hours. The shuttle information element541can also show the number of minutes and/or seconds until the shuttle expires. The color of portions of the shuttle information element541can be based on the status of the shuttle. For example, as the shuttle is conditional, the background color of the shuttle information element where the pending status and expiration time are presented may be gold to indicate the conditional status of the shuttle. If the shuttle is confirmed, the color may be blue to indicate that the shuttle is confirmed.
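For illustration, the following minimal Python sketch resolves a selected calendar date to the shuttles presented in the shuttle selection interface, as described above. The shuttles_for_date function and the example records are assumptions introduced for this sketch.

from datetime import date

def shuttles_for_date(shuttles: list, selected: date) -> list:
    # Return the shuttles departing on the selected calendar date, in the order
    # they should appear in the shuttle selection interface.
    return [s for s in shuttles if s["departure_date"] == selected]

shuttles = [
    {"id": "A", "departure_date": date(2018, 4, 27), "status": "conditional"},
    {"id": "B", "departure_date": date(2018, 4, 27), "status": "confirmed"},
    {"id": "C", "departure_date": date(2018, 4, 1),  "status": "confirmed"},
]
print([s["id"] for s in shuttles_for_date(shuttles, date(2018, 4, 27))])  # ['A', 'B']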
The shuttle information element541also includes a spot indicator549that indicates the number of spots claimed (with a checkmark in the spot) and the number of spots remaining to be claimed (without a checkmark in the spot) to convert the conditional shuttle to a confirmed shuttle. In this example, there is one spot with a checkmark and one spot without a checkmark. Thus, one spot has been claimed and one spot is still required to be claimed before the conditional shuttle is converted to a confirmed shuttle. The shuttle information element544includes a representative image545of the type of jet selected for the shuttle and information546about the shuttle. The information546indicates that the shuttle is a confirmed shuttle. The information does not indicate that the shuttle is a custom shuttle, so the shuttle may be a regularly scheduled shuttle offered by the shuttle service provider. As the shuttle is confirmed, the background color of portions of the shuttle information element may be blue. The client can select one of the shuttle information elements541or544to claim a spot on the shuttle represented by the selected shuttle information element. In response to the selection, the application can present a confirmation interface, e.g., the confirmation interface550ofFIG.5E. In this example, the client has selected the conditional shuttle represented by the shuttle information element541. The confirmation interface550includes a representative image551of the type of jet selected for the shuttle, an information element552with information about the shuttle (similar to the information element543ofFIG.5D), an estimated itinerary element553with information about the estimated itinerary for the shuttle, and a confirmation element554. The client can claim a spot on the shuttle by selecting the confirmation element554. Embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources. The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures. A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. 
Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser. Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks). The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server. 
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
85,201
11861746
DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments. It will be evident, however, to those skilled in the art, that embodiments may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. Users typically spend a great deal of time and effort making travel arrangements, such as looking for hotels, airfares, and car rentals that are within their budget. Existing travel sites allow a user to input various travel criteria, such as travel dates and destinations, to search for all the available hotels, airfares, and car rentals that match the travel criteria. The existing travel sites also allow the user to sort and filter results based on a specified cost. While such existing travel sites generally work well for making travel arrangements, there is often a large disparity among cost and availability for the same or similar hotels, airfares, and car rentals across different travel sites. This ends up burdening the user as the user has to search through the available options across multiple travel sites to make sure he or she gets the best price within their budget. Searching through multiple travel sites takes a great deal of time and effort and forces the user to navigate through multiple pages of information and manually compare results to make travel arrangements. Even still, the travel arrangements the user finally settles on may not provide the user with the best available options for their budget. The disclosed embodiments improve the efficiency of using an electronic device by providing a better way for users to conduct travel, such as by making travel arrangements using a subscription-based travel service. The subscription-based travel service, according to the disclosed embodiments, allows a user to search for travel services and make reservations for travel services (e.g., such as hotels, rental cars, airfares, homes/residences, experiential travel, guided tours, cruises, train fares, private aviation, “glamping,” bespoke travel, event-based travel, and/or space travel) for a fixed annual or monthly subscription fee. Specifically, the user can pay a monthly or annual subscription fee and make an unlimited number of reservations for travel services without having to consider budgetary constraints. Namely, the travel service automatically identifies, curates, and generates a predetermined list of all of the best available travel service options for a specified travel period and destinations from which the user can select based on the user's subscription value as a function of the booking start date (e.g., the date on which the user views and selects to reserve a given travel service) and the travel date (e.g., the date on which the selected travel service begins, such as the first night at the hotel). In this way, the amount of time and effort the user has to spend searching for travel services that meet the user's budget are significantly reduced. 
Also, by providing a single interface and travel site for making travel arrangements that automatically take into account various travel service costs in providing travel services options to the user, the number of steps, pages, and interfaces the user has to navigate through to make travel arrangements are reduced. This provides a better way for a user to consume travel. Namely, the user does not need to search through multiple travel sites and pages of information to find travel arrangements that satisfy the user's needs. To automatically provide the user with the list of available travel services, the disclosed embodiments receive travel information (e.g., automatically, before the user requests to view a curated list of travel service options or in response to the user specifying the travel information) with a travel date and, in response, compute one or more subscription values as a function of a booking date, the travel date, and one or more transportation criteria. The transportation criteria represents a level of difficulty experienced by the user in reaching a given travel service from a geographical location of the user and includes at least one of expense, number of stops, duration of travel, mode of transportation, and/or level of agony. Specifically, the disclosed embodiments compute a likelihood of consumption value for the user based on the one or more transportation criteria associated with arriving at a given travel service in the list of travel services (e.g., arriving at a particular hotel) from the geographical location of the user. A travel cost value associated with the given travel service is adjusted (increased or decreased) in response to determining that the likelihood of consumption value for the user is less than or greater than the threshold. The disclosed embodiments generate for display, in a graphical user interface to the user, an interactive visual representation of the given travel service that represents the travel cost value and which the user can select to reserve the given travel service. In some cases, the one or more subscription values each includes an accumulated value portion, representing the total amount the user will have paid for the subscription from the booking date to the travel date, and an amortized value portion, representing an estimate of the total amount of the subscription the user will have paid from the booking date to a given time period, such as a week, relative to the annual subscription cost. In particular, a first subscription value is computed as a function of a booking date, the travel date, and a first set of transportation criteria associated with arriving at each of a first set of travel services in the list of travel services, and a second subscription value is computed as a function of the booking date, the travel date, and a second set of transportation criteria associated with arriving at each of a second set of travel services in the list of travel services. In some cases, the second set of transportation criteria represents a higher level of travel difficulty than the first set of transportation criteria. In some implementations, the first subscription value includes a first maximum purchase amount that is computed by applying a first weight to a function (e.g., an average) of the amortized value portion and the accumulated value portion of the first subscription value. 
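The following is a minimal Python sketch of the weighted computation just described, in which a maximum purchase amount is obtained by applying a weight to a function (here, the average) of the accumulated value portion and the amortized value portion, with a larger weight for the set of transportation criteria representing greater difficulty. The function name, the specific weights, and the dollar amounts are assumptions introduced for this sketch.

def max_purchase_amount(accumulated_value: float, amortized_value: float,
                        weight: float) -> float:
    # Apply a weight to a function (the average) of the accumulated and
    # amortized subscription value portions.
    return weight * ((accumulated_value + amortized_value) / 2.0)

# A higher weight for the harder-to-reach (second) set of transportation criteria
# yields a higher maximum purchase amount for the same subscription values.
first_max = max_purchase_amount(accumulated_value=400.0, amortized_value=350.0, weight=1.0)
second_max = max_purchase_amount(accumulated_value=400.0, amortized_value=350.0, weight=1.3)
print(first_max, second_max)   # 375.0 487.5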
The second subscription value includes a second maximum purchase amount that is computed by applying a second weight to a function (e.g., an average) of the amortized value portion and the accumulated value portion of the second subscription value. The weights that are applied are determined based on the set of transportation criteria associated with each subscription value. In this way, the second maximum purchase amount can be computed as a higher number than the first maximum purchase amount if the second set of transportation criteria associated with the second subscription value represents a greater level of difficulty of arriving at the destination from the geographical location of the user than the first set of transportation criteria associated with the first subscription value. A list of travel services that are available on the travel date is searched to identify candidate travel services that each has a first cost (e.g., a cost available through a publicly available database or travel site) that exceeds a previously computed minimum travel value of each of the subscription values of a given user. Then, a subset of the candidate travel services that each have a second cost (e.g., a cost available exclusively to subscribers of the travel service via the subscription travel services system database) that is less than the first and second maximum purchase amounts is selected and generated for display to the user in a graphical user interface using one or more interactive visual representations. Particularly, the transportation criteria associated with reaching each travel service in the list of travel services from the geographical location of the user is determined and used as a basis to compare against the first or second maximum purchase amounts. Namely, second costs of travel services that are associated with the first set of transportation criteria (e.g., that are within a first distance to the geographical location of the user) are compared against the first maximum purchase amount and second costs of travel services that are associated with the second set of transportation criteria (e.g., that are within a second distance to the geographical location of the user that is greater than the first distance) are compared against the greater second maximum purchase amount. As a result, the travel cost value of the travel services associated with the second transportation criteria is increased. The user can select a given one of the displayed visual representations to instruct the system to automatically reserve the travel service associated with the selected visual representation. According to the disclosed embodiments, users who are geographically located in disparate places are presented different sets of available travel services based on the likelihood that the respective users will consume the available travel services. The likelihood is determined based on at least one of the transportation criteria associated with arriving at the available travel services from the geographical location of the user and/or the time between when the travel service is booked and the time when the travel service is planned to be consumed. 
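As a non-limiting illustration of adjusting the travel cost value based on the likelihood of consumption, the following minimal Python sketch follows the example given in this description, in which a user with a high likelihood of consuming a travel service is provided less value than a user with a low likelihood. The threshold and the size of the adjustment are assumptions introduced for this sketch.

def adjust_travel_cost(travel_cost: float, likelihood_of_consumption: float,
                       threshold: float = 0.5, adjustment: float = 0.15) -> float:
    # Increase the travel cost value offered when the likelihood of consumption
    # is below the threshold, and decrease it when the likelihood is above.
    if likelihood_of_consumption < threshold:
        return travel_cost * (1.0 + adjustment)
    if likelihood_of_consumption > threshold:
        return travel_cost * (1.0 - adjustment)
    return travel_cost

print(adjust_travel_cost(400.0, likelihood_of_consumption=0.2))  # 460.0 (hard to reach, low likelihood)
print(adjust_travel_cost(400.0, likelihood_of_consumption=0.9))  # 340.0 (easy to reach, high likelihood)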
As an example, a user who is geographically proximate to a given travel service (e.g., is associated with a set of travel criteria that has a low level of difficulty in arriving at the given travel service) is provided less value for consuming the given travel service (e.g., because the likelihood of the user consuming the given travel service is high) than another user who is far away from the given travel service (e.g., is associated with a set of travel criteria that has a high level of difficulty in arriving at the given travel service). Specifically, two users who are searching for the same hotel in a particular region may be provided different values for that hotel (e.g., different lengths of stays) based on how likely each user is to reserve a stay at the hotel. As such, a user who has an easy time arriving at the hotel (e.g., because the user is within a threshold distance to the hotel and/or because the user's trip to the hotel takes less than a certain amount of time) is provided a shorter duration of stay as an option to reserve the hotel than another user who has a difficult time arriving at the same hotel (e.g., because the user is further than the threshold distance to the hotel and/or because the user's trip to the hotel takes more than a certain amount of time). FIG.1is a block diagram illustrating a networked system100for a subscription based travel service, according to some example embodiments. The system100includes one or more client devices such as client device110. The client device110comprises, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDA), smart phone, tablet, ultrabook, netbook, laptop, multi-processor system, microprocessor-based or programmable consumer electronic, game console, set-top box, computer in a vehicle, or any other communication device that a user may utilize to access the networked system100. In some embodiments, the client device110comprises a display module to display information (e.g., in the form of graphical user interfaces). In further embodiments, the client device110comprises one or more of touch screens, accelerometers, gyroscopes, cameras, microphones, global positioning system (GPS) devices, and so forth. The client device110may be a device of a user that is used to access and utilize subscription based travel services via a travel services system124implemented by an application server102. For example, the client device110may be used by a user to navigate to a website of the travel services system124. In some embodiments, the client device110may include a dedicated travel services system124application with the same or similar functionality as the website. After accessing the website, the user inputs personal information (e.g., name, address, phone number, payment information, geographical location, home address, and so forth) to subscribe to the travel services system124. In some embodiments, the subscription fee is paid monthly but can be paid on any other periodic interval (e.g., weekly, daily, every other month, annually, lifetime, and so forth). After subscribing to the travel services system124, the user is provided with login credentials that can be used to navigate and browse available travel services on the travel services system124. For example, the user can access the travel services system124to browse hotel rooms available in various luxury categories in a selected geographical location on a particular date or range of dates. 
In some embodiments, the client device110presents a graphical user interface with data entry regions allowing the user to select from a predefined list of travel destinations (e.g., at various geographical locations) available on a travel date input by the user. In some embodiments, the graphical user interface allows the user to manually type in a name of a desired geographical location destination and the desired travel date (e.g., the date the user plans to take the trip and consume the travel service). As the user types in the name of the desired geographical location destination, the travel services system124searches through travel destinations available on the travel date that are at the desired geographical location(s) and presents the available travel destinations to the user for selection. In some cases, the list of travel services that are presented to the user are selected based on a likelihood that the user will consume (e.g., reserve or book) the travel services. The likelihood may be determined based on one or more travel criteria associated with reaching the travel services from the geographical location of the user and/or an amount of time between when the travel services are presented to the user for booking and the travel date on which the travel services will be consumed. In some cases, the likelihood is determined based on an output of a machine learning model. In response to receiving a user selection of one or more of the travel destinations, prior to or during selection of the destination, the client device110presents a data entry region for the user to input a specific travel start date (e.g., an arrival date at the hotel) and a number of days for the trip. In some embodiments, the list of available travel services are automatically searched for on a daily basis without receiving the user selection of the travel destination and/or travel start date. The travel services system124retrieves subscription information for the user specifying the amount the user pays on a monthly or other periodic basis. Using the subscription information, the travel services system124computes one or more subscription values as a function of the booking date and the travel date and a corresponding set of one or more transportation criteria associated with reaching a given set of travel services. The booking date may be computed based on the current date on which the user selection of the travel destination is received and/or the current date on which a list of travel services is searched and curated. The travel services system124utilizes the subscription values and a value guard to search for travel services that satisfy the subscription values and the value guard. The value guard is used as a filter of travel services to ensure that the travel service options presented to the user have a cost and/or value that satisfies a minimum travel value amount and does not exceed a maximum purchase amount corresponding to the subscription value. As an example, a first subscription value is computed as a function of the booking date, the travel date, and a first set of transportation criteria associated with arriving at each travel service of a first set of travel services, and a second subscription value is computed as a function of the booking date, the travel date, and a second set of transportation criteria associated with arriving at each travel service of a second set of travel services. 
Transportation criteria associated with reaching each of a plurality of travel services from the geographical location of the user is determined. The cost for each travel service is compared to either the first or the second subscription value based on the transportation criteria determined for the travel service. For example, a first travel service may be determined to be associated with the first set of transportation criteria. As such, the cost of the first travel service may be compared to the first maximum purchase amount rather than the second greater maximum purchase amount to determine whether to include the first travel service in the list presented to the user. If the cost of the first travel service is less than the first maximum purchase amount, then the first travel service is included in the list presented to the user but if the cost of the first travel service is greater than the first maximum purchase amount and less than the second maximum purchase amount, then the first travel service is excluded from the list. As another example, a second travel service may be determined to be associated with the second set of transportation criteria. As such, the cost of the second travel service may be compared to the second maximum purchase amount rather than the first maximum purchase amount to determine whether to include the second travel service in the list presented to the user. If the cost of the second travel service is less than the second maximum purchase amount, then the second travel service is included in the list presented to the user but if the cost of the second travel service is greater than the second maximum purchase amount, then the second travel service is excluded from the list. In some cases, the second set of transportation criteria represents a higher level of travel difficulty than the first set of transportation criteria. For example, the second set of transportation criteria represents a duration of travel for reaching a travel service from the geographical location of the user that exceeds a threshold and the first set of transportation criteria represents a duration of travel for reaching a travel service from the geographical location of the user that is less than the threshold. As another example, the second set of transportation criteria represents a number of stops, agony, and/or types of transportation modes needed to reach a travel service from the geographical location of the user that is greater than a number of stops, agony and/or types of transportation modes needed to reach a travel service from the geographical location of the user associated with the first set of transportation criteria. In some implementations, transportation criteria of each travel service may be determined by searching various travel sites to identify a set of transportation criteria that is the quickest, cheapest, and with the least agony for the user to reach the respective travel service from the geographical location of the user. In some implementations, each travel service is associated with different predetermined lists of transportation criteria, with each criteria associated with different geographical locations of users from which transportation originates. The travel services system124provides matching results to the client device110for presentation of the results in the graphical user interface using one or more interactive visual representations. 
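For illustration, the following minimal Python sketch applies the comparison just described: a difficulty score stands in for the set of transportation criteria determined for a travel service, and the service's subscriber cost is compared against the maximum purchase amount matching that set, after the publicly available cost is checked against the minimum travel value. The weights, thresholds, and amounts are assumptions introduced for this sketch, and the agony_score input stands in for the level of agony.

def transport_difficulty(duration_hours: float, num_stops: int, agony_score: float) -> float:
    # Collapse the transportation criteria for reaching a travel service from
    # the user's geographical location into a single difficulty score.
    return duration_hours + 2.0 * num_stops + 5.0 * agony_score

def include_in_results(public_cost: float, subscriber_cost: float, difficulty: float,
                       minimum_travel_value: float, first_max: float, second_max: float,
                       difficulty_threshold: float = 6.0) -> bool:
    # Value guard: the publicly available (first) cost must exceed the minimum
    # travel value, and the subscriber (second) cost is compared to the maximum
    # purchase amount matching the service's transportation-criteria set.
    if public_cost <= minimum_travel_value:
        return False
    maximum = second_max if difficulty > difficulty_threshold else first_max
    return subscriber_cost < maximum

# The same $450 service is excluded for a nearby (easy-to-reach) user but
# included for a user for whom it is difficult to reach.
easy = transport_difficulty(1.0, 0, 0.1)    # 1.5 -> first set of criteria
hard = transport_difficulty(7.0, 2, 0.8)    # 15.0 -> second set of criteria
print(include_in_results(600.0, 450.0, easy, minimum_travel_value=300.0,
                         first_max=375.0, second_max=487.5))   # False
print(include_in_results(600.0, 450.0, hard, minimum_travel_value=300.0,
                         first_max=375.0, second_max=487.5))   # True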
The graphical user interface of the client device110may be utilized to access reviews, comments, and additional information for each of the travel services represented by the interactive visual representations. In some cases, travel services that are presented to the user on the client device110may be ranked or sorted based on the transportation criteria associated with each travel service for the user. As an example, a first travel service that is associated with a first set of transportation criteria representing a low level of difficulty in reaching the first travel service from the geographical location of the user may be ranked lower (and presented at a lower position in the list or not at all) than a second travel service. The second travel service may be associated with a second set of transportation criteria representing a high level of difficulty (and a low likelihood of consumption) and accordingly may be ranked higher than the first travel service in the list. In this way, two different users may be presented the same set of results of travel services but in different ways (e.g., different rankings or organization) based on the geographical locations of the users and the different sets of transportation criteria for the users to reach the travel services. Namely, the first travel service may be determined to be associated with a first set of transportation criteria for a first user based on a first geographical location of the first user while the first travel service may be determined to be associated with a second set of transportation criteria for a second user based on a second geographical location of the second user. Accordingly, if the second set of transportation criteria represents a high level of difficulty in reaching the first travel service, the first travel service may be presented at the top of the list of travel service results presented to the second user and may be positioned at the bottom of the list of travel services presented to the first user, or may not be presented at all to the first user. The client device110receives a user input selecting one of the interactive visual representations for a travel service and communicates the selection to the travel services system124. The travel services system124automatically reserves the travel service (e.g., holds and pays for a room at a hotel) corresponding to the selected interactive visual representation. The client device110may present a confirmation page to the user informing the user of the travel service that has been reserved and the travel start date. In some implementations, the travel services system124may limit the number of concurrent travel services the user can reserve. For example, the travel services system124may allow the user to select only one travel service reservation at a time, such that the user is prevented from searching for and/or reserving additional travel services until the currently selected travel service that has been reserved expires or is canceled. As another example, the travel services system124may only allow the user to reserve three travel services at a time, such that when one of the three travel services expires, the user can reserve an additional travel service. Namely, after the start and end dates for the travel service elapse indicating that the user has utilized the travel service, the client device110may allow the user to search for additional travel services to reserve in a similar manner as before. 
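The following is a minimal Python sketch of the concurrent-reservation limit just described, under which a user holding the maximum number of active reservations is prevented from reserving additional travel services until one expires or is canceled. The function name and the example limits are assumptions introduced for this sketch.

def can_reserve_another(active_reservations: int, max_concurrent: int) -> bool:
    # Block searching for or reserving additional travel services once the user
    # already holds the maximum number of concurrent reservations.
    return active_reservations < max_concurrent

# With a limit of one, the user must wait until the current reservation expires
# or is canceled; with a limit of three, a slot frees up when one of the three ends.
print(can_reserve_another(active_reservations=1, max_concurrent=1))  # False
print(can_reserve_another(active_reservations=2, max_concurrent=3))  # True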
Alternatively, the user can navigate to a cancelation page or graphical user interface using the client device110and cancel any reservations previously selected within a cancelation window (e.g., within 72 hours prior to the travel start date). In response to receiving a user request to cancel the travel service, the travel service system124may cancel the reservation and the client device110may allow the user to search for a new travel service in a similar manner as before. In some embodiments, the travel services system124provides an improved way for users to consume travel. The travel services system124performs such improved techniques in three phases or steps. In the first phase or step, the travel services system124generates an inventory of travel services by searching travel destinations across a range of dates or specific dates throughout the year. The travel destinations are searched from publicly available information sources (e.g., databases of other travel sites available to non-subscribers of the travel services system124), by direct access to a predetermined set of travel services, third party sources, proprietary sources, and travel services that have direct relationships and contracts for travel services with the travel services system124. The travel destinations are searched periodically (e.g., nightly or weekly) using various combinations of travel dates and destinations. The search returns travel services available at various dates throughout the world and includes the total cost for consuming the travel services on the particular combination of dates along with the cancelation policy of each travel service. The cancelation policy may indicate the fee for canceling the travel service once booked which may be free or a nominal charge. A list of transportation criteria is determined for each travel service based on different origin geographical locations of users. Namely, each travel service may be associated with a database that includes different sets of transportation criteria for different geographical locations. As a result, the output of phase one or step one is a collection or database of tens of millions of combinations of travel services (and travel service types), at different ranges of travel start dates, with corresponding prices or costs, corresponding transportation criteria for different geographical locations, and with corresponding cancelation policies. After the first phase or step, the travel services system124performs a second step or phase. In the second step or phase, the collection of the travel services identified in the first phase is curated or filtered in accordance with one or more rules. Specifically, any, all, or a combination of the information associated with each travel service (e.g., the travel start dates, the prices, the travel service type, the destination, the transportation criteria, and the cancelation policy) is analyzed and compared with the one or more rules to exclude and select a list of candidate travel services. In an embodiment, the rules include various criteria (e.g., the booking date or date on which the reservation for a given travel service is made or requested, the price with taxes and fees (cost of the reservation), transportation criteria for a given user's geographical location, and the cancelation fee or policy), which are used to curate or filter the collection of travel services. 
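By way of illustration, one possible shape for an entry in the phase-one collection is sketched below; all field names and values are hypothetical and merely reflect the information enumerated above (travel dates, cost, cancelation policy, and per-origin transportation criteria).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryRecord:
    """One entry in the phase-one collection: a travel service offered over a specific date
    range, with its cost, cancelation policy, and per-origin transportation criteria."""
    service_id: str
    service_type: str                 # e.g., "hotel", "cruise", "home"
    destination: str
    start_date: date
    end_date: date
    total_cost: float                 # cost with taxes and fees for this date range
    cancelation_fee: float            # 0.0 if cancelation is free
    transportation_criteria: dict = field(default_factory=dict)  # origin location -> criteria set

record = InventoryRecord(
    service_id="HTL-123", service_type="hotel", destination="Lisbon",
    start_date=date(2023, 6, 1), end_date=date(2023, 6, 7),
    total_cost=5400.0, cancelation_fee=0.0,
    transportation_criteria={"New York": {"stops": 0, "hours": 7},
                             "Denver": {"stops": 1, "hours": 11}},
)
```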
The rules may vary between users of the travel services system124as different users are in geographically disparate locations meaning they are associated with different transportation criteria for reaching the travel services. Specifically, the rules consider how much the travel services system124is willing or allowed (e.g., the maximum purchase amount) to spend for a given travel service which is leveraged against how far in advance the reservation is being made (e.g., the difference between the booking date and the travel start date). The maximum purchase amount may be computed based on various factors including payments received (e.g., the amount a subscriber will actually end up paying from the booking date to the travel date and an amortized amount by week of the subscriber's subscription cost) and the transportation criteria associated with a given user reaching the travel service. Namely, one or more maximum purchase amounts may be computed to be used as a basis for filtering the travel services based on cost. The one or more maximum purchase amounts may each be associated with different sets of transportation criteria, such that a second maximum purchase amount may be greater than a first maximum purchase amount because the second maximum purchase amount is associated with a second set of transportation criteria that represents a greater level of difficulty of arriving at a given travel service than a first set of transportation criteria associated with the first maximum purchase amount. In this way, different maximum purchase amounts are computed for different users based on the geographical locations of the different users and the transportation criteria associated with each of the users in reaching a particular travel service. In some cases, the amount the subscriber will actually end up paying may be computed by determining how many subscription cycles or how many payments will be collected between the booking date and the travel start date. For example, the subscriber pays monthly on the first day of every month and the booking date is in the middle of a given month and the travel start date is 2 months from the booking date. In such cases, the subscriber will end up paying 2 cycles of subscription fees—two monthly payments—by the time the trip starts. The amortized amount is less granular and represents on a repeated time interval (e.g., daily, monthly, weekly, hourly) basis how much the subscriber would end up paying. The maximum purchase amount is then offset by a margin (weight) which may be positive or negative. The margin (weight) may vary based on how far in advance the reservation is being made (e.g., the difference between the booking date and the travel start date) and/or based on the transportation criteria associated with the given user. In some cases, the margin may be greater for a second set of transportation criteria than the margin for a first set of transportation criteria that is associated with a lower level of difficulty of arriving at a given travel service from a geographical location of a user than the second set of transportation criteria. The margin may vary based on the type of travel service being booked or reserved. For example, the margin may be greater for travel services that include or relate to cruises and smaller for travel services that include or relate to homes/residences. The travel services system124computes a minimum travel value representing the maximum a given user would be willing to pay for the travel service. 
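The counting of subscription cycles between the booking date and the travel start date could be sketched as follows, assuming, as in the example above, that payments are due on the first day of every month; the function name is hypothetical.

```python
from datetime import date

def monthly_payments_between(booking_date, travel_start):
    """Count how many monthly subscription payments (due on the first of each month)
    fall after the booking date and on or before the travel start date."""
    count, year, month = 0, booking_date.year, booking_date.month
    while True:
        month += 1
        if month > 12:
            month, year = 1, year + 1
        due = date(year, month, 1)
        if due > travel_start:
            break
        count += 1
    return count

# Booking in the middle of a month with a travel start date two months out:
# two monthly payments are collected before the trip begins.
print(monthly_payments_between(date(2023, 3, 15), date(2023, 5, 15)))  # 2
```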
This may be computed as a percentage (e.g., 80%) of the amount the subscriber would have paid by the time the trip begins. Specifically, the amount is a percentage of the number of subscription cycle payments the subscriber would have made by the travel start date starting from the booking date. This amount is used to remove any travel services that have a cost that is less than the minimum travel value as the subscriber can shop those travel services independently of being a subscriber to the travel services system124. The travel services system124eliminates any duplicates from the travel services and maintains those travel services that have a maximum duration of travel dates. For example, if the travel services system124identifies the same hotel having 2, 3 and 5 night stay options in the same time period, the travel services system124selects only the 5 night option and removes or filters out the 2 and 3 night stay options during the same time period. The travel services system124searches the actual price or cost of the various travel services and applies a margin to the cost of each travel service. The margin may be positive or negative and may depend on how far in advance the travel date is relative to the booking date. The travel services system124filters any travel service that has a cost that exceeds the maximum purchase amount associated with the transportation criteria of the travel service and filters any travel service that has a cost that is below the minimum travel value. For example, travel services system124determines that a first travel service is associated with a first set of transportation criteria for reaching the first travel service from a location of the user. Accordingly, the travel services system124retrieves a first maximum purchase amount associated with the first set of transportation criteria to determine whether the first travel service has a cost that exceeds the first maximum purchase amount. The first travel service is filtered from the list of travel services presented to the user in response to the travel services system124determining that the first travel service has a cost that exceeds the first maximum purchase amount. Similarly, travel services system124determines that a second travel service is associated with a second set of transportation criteria for reaching the second travel service from a location of the user. In this case, the travel services system124retrieves a second maximum purchase amount associated with the second set of transportation criteria to determine whether the second travel service has a cost that exceeds the second maximum purchase amount. Namely, the travel services system124compares the cost of the second travel service to the second maximum purchase amount rather than the first maximum purchase amount because the second travel service is associated with the second transportation criteria. In such circumstances, the second travel service may have a cost that exceeds the first maximum purchase amount but the second travel service is not removed or filtered from the list presented to the user because the second travel service may have a cost that does not exceed the second maximum purchase amount. 
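A minimal sketch of the duplicate-removal step described above is shown below, assuming that candidate stays are keyed by hotel and start date as a simplification of "the same time period"; the identifiers are hypothetical.

```python
def deduplicate_keep_longest(options):
    """Given candidate stays as (hotel_id, start_date, nights) tuples, keep only the
    longest-duration option for each hotel within the same time period."""
    longest = {}
    for hotel_id, start, nights in options:
        key = (hotel_id, start)          # same hotel in the same time period
        if key not in longest or nights > longest[key][2]:
            longest[key] = (hotel_id, start, nights)
    return list(longest.values())

options = [("hotel-9", "2023-06-01", 2),
           ("hotel-9", "2023-06-01", 3),
           ("hotel-9", "2023-06-01", 5)]
print(deduplicate_keep_longest(options))  # only the 5-night option survives
```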
This results in greater or increased travel cost value being provided to the user for the second travel service because there is a lower likelihood that the user will consume or reserve the second travel service, given that the second set of transportation criteria indicates it is comparatively difficult for the user to reach the second travel service. The travel services system124applies an additional filter that excludes travel services whose cancelation policies do not satisfy given cancelation policy criteria. In some cases, based on a subscription type of a given user, only one maximum purchase amount is computed for the given user, which may be independent of the transportation criteria of the user reaching the travel service from the geographical location of the user. For example, travel services system124may determine that the subscription type of the user allows the user to book travel for one or more other users in different geographical locations than the given user. In such cases, the travel services system124may not adjust the maximum purchase amount to create multiple maximum purchase amounts based on the transportation criteria. Namely, the travel services system124may compute a single maximum purchase amount based on the amount a subscriber will actually end up paying from the booking date to the travel date and an amortized amount by week of the subscriber's subscription cost. This single maximum purchase amount is used to filter out travel services that have costs that exceed the maximum purchase amount. In some embodiments, the travel services system124presents the filtered list of travel services as options for the user or subscriber to select to make a reservation. The user can further filter the list based on various criteria (e.g., travel dates, travel destinations, etc.). In some embodiments, the travel services system124presents to a user a comparison of each travel service that is presented against what is available for the same travel service on a publicly available or other travel site. Specifically, the travel services system124presents, next to each travel service or next to a portion of the travel services, an identification of another booking travel site that offers the same travel service and the cost for booking that same travel service on the other booking travel site. The cost that is presented for comparison may be retrieved from storage based on what is in the collection that is analyzed and filtered to generate the list and/or may be determined automatically by accessing the other travel site, executing a search for the particular travel service and the particular range of travel dates, and retrieving the cost presented on the other travel site based on the executed search. Each of the one or more users may be a person, a machine, or another means of interacting with the client device110. In example embodiments, the user may not be part of the system100but may interact with the system100via the client device110or other means. For instance, the user may provide input (e.g., touch screen input or alphanumeric input) to the client device110and the input may be communicated to other entities in the system100(e.g., third-party servers130, server system108, etc.) via a network104. In this instance, the other entities in the system100, in response to receiving the input from the user, may communicate information to the client device110via the network104to be presented to the user. In this way, the user interacts with the various entities in the system100using the client device110. 
The system100further includes a network104. One or more portions of network104may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the public switched telephone network (PSTN), a cellular telephone network, a wireless network, a WiFi network, a WiMax network, another type of network, or a combination of two or more such networks. The client device110may access the various data and applications provided by other entities in the system100via web client112(e.g., a browser, such as the Internet Explorer® browser developed by Microsoft® Corporation of Redmond, Washington State) or one or more client applications114. The client device110may include one or more client applications114(also referred to as "apps") such as, but not limited to, a web browser, messaging application, electronic mail (email) application, an e-commerce site application, a mapping or location application, an online home buying and selling application, a travel services application, a real estate application, and the like. In some embodiments, one or more client applications114are included in a given one of the client device110and configured to locally provide the user interface and at least some of the functionalities, with the client application114configured to communicate with other entities in the system100(e.g., third-party servers130, server system108, etc.), on an as-needed basis, for data and/or processing capabilities not locally available (e.g., to access location information, to access travel services information, such as cost and availability, to authenticate a user, to verify a method of payment, etc.). Conversely, one or more applications114may not be included in the client device110, and then the client device110may use its web browser to access the one or more applications hosted on other entities in the system100(e.g., third-party servers130, server system108, etc.). A server system108provides server-side functionality via the network104(e.g., the Internet or WAN) to one or more third-party servers130and/or one or more client devices110. The server system108includes an application server102that implements an application program interface (API) server120, a web server122, and a travel services system124, which may be communicatively coupled with one or more databases128. The one or more databases128may be storage devices that store data related to users of the system108, applications associated with the system108, cloud services, travel services data, one or more machine learning techniques and so forth. The one or more databases128may further store information related to third-party servers130, third-party applications132, client devices110, client applications114, users, and so forth. In one example, the one or more databases128may be cloud-based storage. The one or more databases128may store subscription information for one or more users of the travel services system124. The subscription information may identify users of the travel services system124, the subscription start dates of the users, the subscription fee of the users, the geographical locations of the users, the total amount paid-to-date for a subscription of the users, and one or more travel services activities of the users. 
The travel services activities may include any combination of the number of reservations made in a given time period (e.g., within a given subscription year) by each user, the subscription duration (e.g., measured from the subscription start date to the present date) of each user, the booking duration (e.g., measured from the booking date to the travel date) of each user, the distance to the travel destination of each user (e.g., measured from an address of the user and the location of reserved travel destinations), the margin amount (e.g., how much profit was made in aggregate during the course of the subscription) for each user, the cancelation frequency (e.g., how often the user cancels a reservation made), and/or the reservation frequency (e.g., how much time elapses on average between the end of one reservation and the start of another). The one or more databases128may store the reservations (e.g., the destination and the travel start date and/or duration) of travel services of each user or subscriber of the travel services system124. The one or more databases128may store a list of all the available, or a selected set, of travel services in one or more geographical regions or destinations along with reviews and/or detailed information about the travel services. The one or more databases128may store first and second costs on a nightly basis or on some other periodic interval (e.g., per 6 night basis) for each travel service. The first cost that is stored in the one or more databases128may represent the cost for the travel service that is provided to non-subscribers of the travel services system124and is available by directly making the reservation through a dedicated server of the travel service and/or by making the reservation through an existing travel service search interface. The one or more databases128may access a dedicated existing travel service search interface on a periodic basis (e.g., nightly or weekly) to obtain and download the first cost of each, or a selected set, of travel services. The first cost may be computed by selecting a specified travel duration (e.g., 6 nights) and multiplying the per night rate (provided by the travel service) by the specified travel duration. The second cost of each travel service may be a dedicated cost that is changed on an annual or monthly basis and is provided by contract between the travel services system124and the corresponding travel service. The second cost may only be available to users who subscribe to the travel services system124. The second cost of each travel service may represent the cost for consuming the travel service during a specified travel duration (e.g., 6 nights). The one or more databases128may store the cancelation policy of each travel service indicating how much time in advance of the reservation start date at a given travel service the travel service reservation can be canceled without penalty (e.g., to receive a full refund). The one or more databases128may store the cost for canceling a given travel service outside of the cancelation policy. The one or more databases128may store an expected margin on a per user basis. The expected margin may increase over time (e.g., for subscribers classified as very active) or decrease over time (e.g., for subscribers classified as not very active). The expected margin may increase or decrease based on the transportation criteria associated with reaching a given travel service from a geographical location of the user. 
In this way, different travel services or destinations may be associated with different expected margins on the basis of the transportation criteria or level of difficulty each respective user experiences in reaching the respective travel service or destination. The expected margin may change by a predetermined factor based on a difference between a booking date and a travel start date (e.g., the margin may change based on how far in advance a user is making the reservation). This may be used to reduce the maximum purchase amount by a first factor if the reservation is made less than a predetermined number of days in advance of the travel date. Conversely, this may be used to increase the maximum purchase amount by a second factor if the reservation is made more than a predetermined number of days in advance of the travel date. The server system108may be a cloud computing environment, according to some example embodiments. The server system108, and any servers associated with the server system108, may be associated with a cloud-based application, in one example embodiment. The server system108includes a travel services system124. The travel services system124includes one or more modules, storage devices, and databases. The storage devices in the travel services system124store various travel services activities for each user, travel services activities training data, and one or more machine learning techniques for classifying users of the travel services system124. The modules in travel services system124are configured to compute components of a subscription value, compute value guards, and search for available travel services to provide to the client device110in response to receiving a request for travel services at a given destination and time frame. The modules in travel services system124are configured to receive a user selection of one of the travel services matching the request and reserve the selected travel service for the user. The modules in travel services system124are configured to determine whether the number of pending reservations for a given user exceeds an allowable number of pending reservations (e.g., more than one, or more than three) and, in response, prevent the user from making further reservations until the number of pending reservations is below the allowable number (e.g., by canceling a pending reservation or waiting for the reservation to expire). The modules in travel services system124are configured to train a machine learning technique to classify a given user or subscriber using the travel services activities of the user or subscriber by establishing relationships between known travel services activities and known or manually assigned classifications of users or subscribers. The modules in travel services system124are configured to filter the available travel services provided to a given client device110based on the classification of the user of the client device110, transportation criteria, and/or cancelation policies of the various travel services. The details of the travel services system124are provided below in connection withFIG.2. The system100further includes one or more third-party servers130. The one or more third-party servers130may include one or more third-party application(s)132. The one or more third-party application(s)132, executing on third-party server(s)130, may interact with the server system108via the API server120using a programmatic interface provided by the API server120. 
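By way of illustration only, the booking-window adjustment described above could be sketched as follows; the threshold of 30 days and the factors of 0.9 and 1.25 are placeholders, not values used by the described system.

```python
def adjust_max_purchase(base_amount, days_in_advance, threshold_days=30,
                        reduce_factor=0.9, increase_factor=1.25):
    """Reduce the maximum purchase amount for reservations made close to the travel date and
    increase it for reservations made well in advance; all numbers here are placeholders."""
    if days_in_advance < threshold_days:
        return base_amount * reduce_factor
    return base_amount * increase_factor

print(adjust_max_purchase(7000, days_in_advance=10))  # 6300.0 (booked less than 30 days out)
print(adjust_max_purchase(7000, days_in_advance=90))  # 8750.0 (booked well in advance)
```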
For example, one or more of the third-party applications132may request and utilize information from the server system108via the API server120to support one or more features or functions on a website hosted by the third party or an application hosted by the third party. The third-party website or application132, for example, may provide software version analysis functionality that is supported by relevant functionality and data in the server system108. Third-party servers130may include an existing non-subscription based travel service. Such non-subscription based travel services can be used to search for travel services at a first cost available to non-subscribers of the travel services system124. The travel services system124may query the third-party servers130on a periodic basis to obtain the first costs for the travel services provided by the travel services system124. The first costs may represent a per-night rate of the travel services multiplied by a predetermined number of nights (e.g., 6 nights). FIG.2illustrates a travel services system124, according to some example embodiments. The travel services system124includes a travel services training data module210, a machine learning technique training module220, a trained machine learning technique module230, a new travel service request module240, a subscription value module250, a travel services search module260, and a travel value guard module252. In some implementations, some modules of the travel services system124may be implemented on server system108and others may be implemented on third party servers130or client device110. In some implementations, all of the modules of the travel services system124are implemented on server system108, on third party servers130, or on client device110. In such cases, server system108communicates information to third party servers130based on the modules implemented and vice versa. The new travel service request module240may communicate with the client device110to receive parameters and criteria for a new travel service request. For example, via the graphical user interface of the client device110, the user can select a travel destination or geographical location and can, optionally, input the desired trip start date, end date, and/or trip length. The new travel service request module240may communicate this user selection to the travel services search module260to identify a list of available travel services. The new travel service request module240may communicate an identifier of the user of the client device110to the subscription value module250. In some embodiments, the parameters are automatically determined and computed on a nightly basis and used to curate a list of travel services for a given user. In such cases, the user may enter a travel destination and the curated list is presented with automatically selected travel dates (e.g., travel dates not inputted or selected by the user). In such cases, the new travel service request module240may, on a periodic basis (e.g., nightly), retrieve subscription values for one or more users, computed based on transportation criteria associated with each of the users in reaching the various travel destinations. The new travel service request module240may also retrieve one or more travel destinations along with the corresponding transportation criteria associated with the geographical location of the user. 
The new travel service request module240provides the subscription values and the travel destinations, together with the corresponding transportation criteria, as the selection to the travel services search module260. In this way, the travel services search module260identifies available travel services across a range of dates for one or more users and curates such a list for subsequent presentation to the user. The user can simply enter a desired destination, and the available and curated list of travel services at the destination, together with the available travel dates, are presented to the user. The travel services search module260may communicate with the subscription value module250to obtain the subscription values for the user of the client device110. The subscription value module250may communicate with the databases128to obtain the booking date and the subscription cost of the identified user along with a geographical location of the user. The booking date may be the current date indicating when the search module260conducts the search for available travel services and/or the date on which the user requests to view available travel services is received from the new travel service request module240. The subscription value module250may compute the subscription values based on various parameters: an aggregated subscription cost parameter, an amortized subscription cost parameter, and a set of transportation criteria. In an example, the subscription value module250computes a first subscription value as an average of the aggregated and the amortized subscription cost parameters and a first set of transportation criteria and computes a second subscription value as an average of the aggregated and the amortized subscription cost parameters and a second set of transportation criteria. For example, based on the data provided by the user, the subscription value module250may determine that the trip is scheduled to start 10 weeks from the present time. In such cases, the subscription value module250computes an estimate of the total amount the user will pay for the subscription by aggregating the total amount that will be paid from the present time until 10 weeks from the present time. Namely, the subscription value module250assumes the user will continue paying for the subscription until the travel start date from the booking date and estimates how much the user would have paid for the subscription from the current booking date until the future travel start date. As an example, if the subscription costs $2500 per month, the subscription value module250may determine that the trip will start 10 weeks from the present day and, in the next 10 weeks, three months' worth of subscription fees (e.g., $7500) will be paid (assuming the fee is paid on the first day of every month). Accordingly, the subscription value module250may compute $7500 as the aggregated subscription cost parameter of the subscription value that will be paid from present time (the booking date) until the trip start time. The subscription value module250may also compute as the subscription value an amortized amount of the subscription cost over an annual basis. For example, the subscription value module250may determine $30,000 as the total cost of the subscription for the entire year (e.g., by multiplying the number of months in a year, 12, by the monthly subscription fee, $2500). 
The subscription value module250may amortize the yearly subscription cost on a specified repeated period (e.g., daily, monthly, hourly, weekly) basis to determine the amount of the subscription paid from the booking date until the travel start date. For example, if the trip is planned to start in 10 weeks, the subscription value module250computes $5,769 as the amortized subscription cost parameter of the subscription value, which is a total of 10 weeks' worth of the weekly subscription cost (e.g., annual subscription fee $30,000 divided by 52 weeks per year and multiplied by 10 weeks). The subscription value module250may compute the first and second subscription values as a function of the aggregate subscription cost expected to have been paid by the time the trip starts and the amortized subscription cost by the time the trip starts as measured from the booking date and based on different transportation criteria. For example, if the user plans the trip to start in 10 weeks from today (the booking date), the subscription value module250computes an average of $7,500 and $5,769. Then, the subscription value module250computes the first subscription value by applying a first weight (e.g., multiplying) to the average and computes the second subscription value by applying a second weight to the average. The value of the first and second weights may be based on the transportation criteria associated with the different subscription values. For example, transportation criteria associated with a first level of travel difficulty or agony in reaching a given destination may be used to compute a first weight value (e.g., 1.0) and transportation criteria associated with a second level of travel difficulty or agony (that is greater or more difficult than the first level of travel difficulty or agony) in reaching a given destination may be used to compute a second weight value (e.g., 1.3). In this way, the second subscription value may be 30% greater (resulting in 30% more travel cost value being provided to a user) than the first subscription value. The subscription value module250provides the parameters of the subscription values to the travel value guard module252. The travel value guard module252is configured to compute a guard range having a minimum travel value and a maximum purchase amount based on the subscription values. The guard range ensures that the travel services identified by the travel services search module260satisfy minimum parameters that ensure a subscriber receives a better deal or bargain than making the same reservation for the travel service through another travel service system (e.g., a travel service system provided by the third-party servers130). The guard range also ensures that the travel services identified by the travel services search module260satisfy a margin amount that provides a positive or negative level of profitability to the travel services system124. The margin amount may be computed based on a difference between the booking date and the travel date, such that the margin is greater when the difference is smaller than a threshold and is lower when the difference is greater than a threshold. Namely, the minimum travel value is used to ensure that travel service results provided to the user have a value, as determined by the first cost associated with the travel services, that is greater than the minimum travel value. 
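The worked example above can be reproduced with the following sketch, assuming the weights of 1.0 and 1.3 given in the text; the function name is hypothetical.

```python
def subscription_values(monthly_fee, months_paid_before_trip, weeks_until_trip,
                        first_weight=1.0, second_weight=1.3):
    """Reproduce the worked example: aggregate the payments collected before the trip,
    amortize the annual fee by week, average the two parameters, and apply the
    criteria-specific weights."""
    aggregated = monthly_fee * months_paid_before_trip        # e.g., $7,500
    amortized = monthly_fee * 12 / 52 * weeks_until_trip      # e.g., ~$5,769
    average = (aggregated + amortized) / 2
    return average * first_weight, average * second_weight

first_value, second_value = subscription_values(monthly_fee=2500,
                                                months_paid_before_trip=3,
                                                weeks_until_trip=10)
print(round(first_value, 2), round(second_value, 2))  # 6634.62 8625.0 (30% greater)
```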
Also, the maximum purchase amounts of each subscription value are used to ensure that the travel service results provided to the user are not valued greater than the respective maximum purchase amount, as determined by the second cost associated with the travel services. In some cases, the first and second costs may be the same values and in other cases they are different values. As an example, the travel value guard module252computes the minimum travel value as a function of the aggregated (or accumulated) subscription cost parameter of the subscription value. Specifically, the travel value guard module252computes the minimum travel value as 80 percent of the aggregated (or accumulated) subscription cost parameter. Accordingly, if the aggregated subscription cost is determined to be $7,500, the minimum travel value is computed to be $6,000 (e.g., 80 percent of $7,500). As an example, the travel value guard module252computes the maximum purchase amounts for each of the first and second subscription values as a function of an adjusted average of the aggregated and amortized subscription cost parameters and the corresponding first and second weights. The average may be adjusted based on a margin amount or value associated with the user that is retrieved by the travel value guard module252from the databases128. Specifically, the travel value guard module252computes a first maximum purchase amount as an average of the aggregated (or accumulated) subscription cost parameter and the amortized subscription cost parameter offset by the retrieved margin and the first weight and computes a second maximum purchase amount as an average of the aggregated (or accumulated) subscription cost parameter and the amortized subscription cost parameter offset by the retrieved margin and the second weight. The travel services search module260receives the guard range from the travel value guard module252and searches for travel services that fall within the guard range and that satisfy the travel criteria (optionally) supplied by the user and received from the new travel service request module240. As an example, the travel services search module260first searches for all of the travel services that are available during the travel date range (e.g., the travel start date and the travel duration) received from the client device110and/or received automatically by the new travel service request module240. The travel services search module260restricts or limits the search to those travel services that are within a specified range (e.g., 25 miles) of the travel destination or geographical region received from the client device110and/or received automatically by the new travel service request module240. In some cases, the travel services search module260accesses a predefined list of travel destinations and searches all of the travel services available in 6-day periods (or other defined periods) during the course of the entire year. The travel services search module260searches various combinations of travel dates and destinations to generate millions of combinations of possible travel destinations at various periods. 
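A minimal sketch of the guard range computation is shown below; it assumes the margin is applied as an additive offset before the criteria-specific weight is applied, which is only one way the offset described above could be implemented, and the margin of -$500 is hypothetical.

```python
def guard_range(aggregated, amortized, margin, first_weight=1.0, second_weight=1.3):
    """Minimum travel value is 80 percent of the aggregated subscription cost parameter;
    each maximum purchase amount is the average of the two cost parameters, offset by the
    margin and scaled by the weight tied to a set of transportation criteria."""
    minimum_travel_value = 0.80 * aggregated
    average = (aggregated + amortized) / 2
    first_max = (average + margin) * first_weight
    second_max = (average + margin) * second_weight
    return minimum_travel_value, first_max, second_max

# Figures from the example above with a hypothetical margin of -$500.
minimum_value, first_max, second_max = guard_range(aggregated=7500, amortized=5769, margin=-500)
print(minimum_value, first_max, round(second_max, 2))  # 6000.0 6134.5 7974.85
```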
After travel services search module260identifies the list of travel services that are available on the travel start date and that meet the travel destination or geographical region parameters, the travel services search module260obtains first and second costs associated with each of the travel destinations from the databases128and determines one or more transportation criteria for a given user to reach each of the travel destinations. The travel services search module260compares the first or second costs of each of the identified travel services to the minimum travel value received from the travel value guard module252. The travel services search module260removes or filters from the list any travel service that has a first or second cost that is below the minimum travel value. The travel services search module260may also filter out and remove any travel destination that has a cancelation policy that fails to satisfy cancelation policy criteria. The travel services search module260removes or filters from the list any travel service that has a second cost that is above the maximum purchase amount corresponding to the determined transportation criteria of the travel service received from the travel value guard module252. Specifically, travel services search module260obtains transportation criteria determined for a first travel service and retrieves the maximum purchase amount associated with the obtained transportation criteria. For example, if the first travel service is determined to be associated with a first set of transportation criteria for the geographical location of the user, the travel services search module260obtains the first maximum purchase amount. In such cases, the first travel service is filtered or removed from the list of travel services presented to the user if the first travel service has a cost that exceeds the first maximum purchase amount. On the other hand, if the first travel service is determined to be associated with a second set of transportation criteria for the geographical location of the user, the travel services search module260obtains the second maximum purchase amount. In such cases, the first travel service is filtered or removed from the list of travel services presented to the user if the first travel service has a cost that exceeds the second maximum purchase amount rather than the first maximum purchase amount. The second maximum purchase amount may be greater than the first maximum purchase amount if the second transportation criteria represents a greater level of difficulty or agony in reaching a given destination than the first transportation criteria. In some embodiments, to determine the first or second cost, the travel services search module260may multiply a nightly first and/or second cost of each travel service during the travel period by the number of days in the travel service request. In some cases, the travel services search module260communicates with the trained machine learning technique module230to obtain a classification for the user making the travel request and further filters or removes travel services based on the classification of the user. The travel services search module260provides the filtered list of travel services back to the new travel service request module240for provision to the client device110and presentation to the user for selection and requesting to make a reservation. To classify users, the trained machine learning technique module230is initially trained based on training data. 
Specifically, the travel services training data module210includes a list of travel services activities associated with various subscribers of the travel services system124. The travel services activities are obtained by the travel services training data module210from database128and/or from third party server130. For example, the travel services training data module210obtains the number of reservations made by a user from database128and obtains the cancelation frequency from third party server130. The travel services training data module210may access training data including the number of reservations made by each user, transportation criteria of each user that is typically experienced or used by the user booking a reservation, the subscription duration of each user, the distance to travel destination of each user, the margin amount of each user, the reservation frequency of each user, the cancelation frequency of each user, and an assigned classification of each user. The classification may represent a level of activity of each user from not active, to medium active, to very active. The classification is used to control and filter the types and quantity of travel services provided to different users. This can be used as a measure to ensure that users who are not very active are provided a greater quantity of a better type of travel services than a very active user to incentivize the non-active user to utilize the travel services system124. In some embodiments, machine learning technique training module220is trained to predict a classification for a subscriber of the travel services system124by establishing a relationship between one or more known travel services activities of other users provided by travel services training data module210and the corresponding known classification of the other users provided by the travel services training data module210. In some embodiments, machine learning technique training module220is trained to predict a likelihood of consumption of a given travel service for a subscriber of the travel services system124by establishing a relationship between one or more known travel services activities of other users (e.g., destinations the other users booked) and the locations of the other users provided by travel services training data module210and the corresponding transportation criteria such users experienced in reaching the destinations. Namely, the machine learning technique is trained to predict the types of transportation criteria a given user is willing to experience in reaching a destination. Machine learning is a field of study that gives computers the ability to learn without being explicitly programmed. Machine learning explores the study and construction of algorithms, also referred to herein as tools, that may learn from existing data and make predictions about new data. Such machine-learning tools operate by building a model from example training data (e.g., travel services activity information) in order to make data-driven predictions or decisions expressed as outputs or assessments. Although example embodiments are presented with respect to a few machine-learning tools, the principles presented herein may be applied to other machine-learning tools. In some example embodiments, different machine-learning tools may be used. 
For example, Logistic Regression (LR), Naive-Bayes, Random Forest (RF), neural networks (NN), matrix factorization, and Support Vector Machines (SVM) tools may be used for classifying a given user based on travel activities of the user. The machine-learning algorithms utilize features (e.g., various combinations of travel services activities performed by other users in interacting and making reservations with the travel services system124) for analyzing the data to generate assessments (e.g., a classification of the users). A feature is an individual measurable property of a phenomenon being observed. The concept of a feature is related to that of an explanatory variable used in statistical techniques such as linear regression. Choosing informative, discriminating, and independent features is important for effective operation of pattern recognition, classification, and regression. Features may be of different types, such as numeric features, strings, and graphs. In one example embodiment, the features may be of different types and may include one or more of a number of reservations made by each user, the subscription duration of each user, the distance to travel destination of each user, the margin amount of each user, the transportation criteria experienced or used by the users in reaching destinations, geographical locations of the users and the destinations, the reservation frequency of each user, and the cancelation frequency of each user. Transportation criteria experienced or used by users in reaching destinations include any combination of total duration of travel in reaching the destination from the geographical location of the user, modes of transportation used in reaching the destination from the geographical location of the user, number of stops along the trip that were taken in reaching the destination from the geographical location of the user, and the like. The machine-learning algorithms utilize the training data to find correlations among the identified features that affect the outcome or assessment (e.g., the known or assigned classification of each user). In some example embodiments, the training data includes labeled data, which is known data for one or more identified features and one or more outcomes, such as the assigned classification of the user. Once the training data are collected and processed, a model for the machine learning technique training module220can be built using either statistical learning or machine learning techniques. In one embodiment, regression analysis can be used to build the model for the machine learning technique training module220. Regression analysis is a statistical process for estimating the relationships among variables. There are a number of known methods to perform regression analysis. Linear regression or ordinary least squares regression, among others, are "parametric" in that the regression function is defined in terms of a finite number of unknown model parameters that can be estimated from training data. For predicting the user classification, a regression model (e.g., Equation 1) can be defined, for example, as: H≈f(X,β),  (Equation 1) where "H" denotes the known classification for a set of users, "X" denotes a vector of input variables (e.g., any one of the travel services activities associated with the set of users), and "β" denotes a vector of unknown parameters to be determined or trained for the regression model. 
The training data that include travel services activities of various users and the corresponding classification (which can be manually or automatically specified for each user) provide a set of known H values (e.g., the classification of a user) having corresponding X values (e.g., feature vectors extracted from the travel services activities). Using these data, the model parameter β can be computed using data fitting techniques such as least squares, maximum likelihood, or the like. Once β is estimated, the model can then compute H (e.g., a user travel services classification) for a new set of X values (e.g., feature vectors extracted from a new set of travel services activities). As another example, the training data that include travel services activities of various users and the corresponding classification (which can be manually or automatically specified for each user) provide a set of known H values (e.g., the likelihood of consumption of a given travel service (based on geographical location of the travel service and a user) and/or the transportation criteria a given user is willing to experience) having corresponding X values (e.g., feature vectors extracted from the travel services activities). Using these data, the model parameter β can be computed using data fitting techniques such as least squares, maximum likelihood, or the like. Once β is estimated, the model can then compute H (e.g., a user travel services classification) for a new set of X values (e.g., feature vectors extracted from a new set of travel services activities). Machine learning techniques train models to accurately make predictions on data fed into the models (e.g., what was said by a user in a given utterance; whether a noun is a person, place, or thing; what the weather will be like tomorrow). During a learning phase, the models are developed against a training dataset of inputs to optimize the models to correctly predict the output for a given input. Generally, the learning phase may be supervised, semi-supervised, or unsupervised, indicating a decreasing level to which the “correct” outputs are provided in correspondence to the training inputs. In a supervised learning phase, all of the outputs are provided to the model and the model is directed to develop a general rule or algorithm that maps the input to the output. In contrast, in an unsupervised learning phase, the desired output is not provided for the inputs so that the model may develop its own rules to discover relationships within the training dataset. In a semi-supervised learning phase, an incompletely labeled training set is provided, with some of the outputs known and some unknown for the training dataset. Models may be run against a training dataset for several epochs (e.g., iterations), in which the training dataset is repeatedly fed into the model to refine its results. For example, in a supervised learning phase, a model is developed to predict the output for a given set of inputs and is evaluated over several epochs to more reliably provide the output that is specified as corresponding to the given input for the greatest number of inputs for the training dataset. In another example, for an unsupervised learning phase, a model is developed to cluster the dataset into n groups and is evaluated over several epochs as to how consistently it places a given input into a given group and how reliably it produces the n desired clusters across each epoch. 
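By way of illustration, Equation 1 could be fit with ordinary least squares, one of the data fitting techniques mentioned above; the feature vectors and assigned classifications in this sketch are fabricated solely for illustration and f is taken to be linear.

```python
import numpy as np

# Each row of X is a feature vector extracted from a user's travel services activities
# (e.g., number of reservations, reservation frequency, cancelation frequency).
X = np.array([[12, 1.0, 0.1],
              [ 2, 0.2, 0.6],
              [ 7, 0.6, 0.2],
              [ 1, 0.1, 0.8]], dtype=float)
H = np.array([2.0, 0.0, 1.0, 0.0])  # e.g., 0 = not active, 1 = medium active, 2 = very active

# Estimate beta in H ~ f(X, beta), with f taken to be linear, using ordinary least squares.
beta, residuals, rank, singular_values = np.linalg.lstsq(X, H, rcond=None)

# Compute H for a new set of X values (a feature vector extracted for a new user).
new_user_features = np.array([9.0, 0.8, 0.15])
predicted_classification = float(new_user_features @ beta)
print(predicted_classification)
```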
Once an epoch is run, the models are evaluated and the values of their variables are adjusted to attempt to better refine the model in an iterative fashion. In various aspects, the evaluations are biased against false negatives, biased against false positives, or evenly biased with respect to the overall accuracy of the model. The values may be adjusted in several ways depending on the machine learning technique used. For example, in a genetic or evolutionary algorithm, the values for the models that are most successful in predicting the desired outputs are used to develop values for models to use during the subsequent epoch, which may include random variation/mutation to provide additional data points. One of ordinary skill in the art will be familiar with several other machine learning algorithms that may be applied with the present disclosure, including linear regression, random forests, decision tree learning, neural networks, deep neural networks, and so forth. Each model develops a rule or algorithm over several epochs by varying the values of one or more variables affecting the inputs to more closely map to a desired result, but as the training dataset may be varied, and is preferably very large, perfect accuracy and precision may not be achievable. A number of epochs that make up a learning phase, therefore, may be set as a given number of trials or a fixed time/computing budget, or may be terminated before that number/budget is reached when the accuracy of a given model is high enough or low enough or an accuracy plateau has been reached. For example, if the training phase is designed to run n epochs and produce a model with at least 95% accuracy, and such a model is produced before the nth epoch, the learning phase may end early and use the produced model satisfying the end-goal accuracy threshold. Similarly, if a given model is inaccurate enough to satisfy a random chance threshold (e.g., the model is only 55% accurate in determining true/false outputs for given inputs), the learning phase for that model may be terminated early, although other models in the learning phase may continue training. Similarly, when a given model continues to provide similar accuracy or vacillate in its results across multiple epochs—having reached a performance plateau—the learning phase for the given model may terminate before the epoch number/computing budget is reached. Once the learning phase is complete, the models are finalized. In some example embodiments, models that are finalized are evaluated against testing criteria. In a first example, a testing dataset that includes known outputs for its inputs is fed into the finalized models to determine an accuracy of the model in handling data on which it has not been trained. In a second example, a false positive rate or false negative rate may be used to evaluate the models after finalization. In a third example, a delineation between data clusterings is used to select a model that produces the clearest bounds for its clusters of data. In some embodiments, the machine learning technique training module220is trained to establish a relationship to classify a user based on a logistic regression of one or more features (e.g., training data received from travel services training data module210). After being trained, the machine learning technique is provided to trained machine learning technique module230. 
In one example, the coefficient values of the machine learning technique (e.g., the linear model) are stored in a storage of trained machine learning technique module230. Trained machine learning technique module230is configured to receive new travel services activities of a new user from new travel service request module240. For example, the new travel service request module240receives a user input that identifies a desired travel destination and travel dates and accesses previously stored interaction information for the user (e.g., the number of prior reservations made by the user and the distance traveled by the user from the user's home address to the travel destinations). The new travel service request module240accesses database128and/or server130to obtain the travel services activities for the new user. For example, new travel service request module240obtains the number of reservations previously made by the user, the subscription duration of the user, the transportation criteria the user experienced in reaching destinations, the distance traveled by the user to the destinations, the margin amount stored for the user, the reservation frequency of the user, and/or the cancelation frequency of the user. The new travel service request module240instructs the trained machine learning technique module230to apply the trained machine learning technique using the previously computed coefficients to the data provided by the new travel service request module240. Trained machine learning technique module230provides a classification for the new user based on the data provided by the new travel service request module240. In another example, trained machine learning technique module230provides a likelihood of consumption for each travel service and/or transportation criteria for the new user based on the data provided by the new travel service request module240. In one example, after being trained, the machine learning technique training module220determines that a new user has a low likelihood of consumption (e.g., a likelihood of consumption that is below a threshold) for a given travel service. This is because the machine learning technique training module220determines that the new user is willing to experience a first set of transportation criteria in reaching a destination and that the given travel service is associated with a second set of transportation criteria. As a result, the system increases a travel cost value for the given travel service for the new user based on the determination provided by the machine learning technique training module220. Alternatively, after being trained, the machine learning technique training module220determines that a new user has a high likelihood of consumption for a given travel service (e.g., a likelihood of consumption that exceeds the threshold). This is because the machine learning technique training module220determines that the new user is willing to experience a first set of transportation criteria in reaching a destination and that the given travel service is associated with the first set of transportation criteria. As a result, the system decreases or maintains a travel cost value for the given travel service for the new user based on the determination provided by the machine learning technique training module220. FIGS.3-4illustrate flow diagrams of processes of the travel services system124, according to some example embodiments. 
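By way of illustration only, the following sketch shows how a predicted tolerance for transportation criteria might translate into a likelihood of consumption and an adjusted travel cost value; the scoring function, the 0.5 threshold, and the 1.3 boost are hypothetical and stand in for the trained technique rather than implementing it.

```python
def likelihood_of_consumption(tolerated, required):
    """Toy score in [0, 1]: the larger the gap between the transportation criteria the user
    is predicted to tolerate and the criteria the service requires, the lower the likelihood."""
    gap = (required["stops"] - tolerated["stops"]) + (required["hours"] - tolerated["hours"]) / 10.0
    return max(0.0, min(1.0, 1.0 - 0.25 * gap))

def adjusted_travel_cost_value(base_value, likelihood, threshold=0.5, boost=1.3):
    """Increase the travel cost value when the likelihood of consumption falls below the threshold."""
    return base_value * boost if likelihood < threshold else base_value

tolerated = {"stops": 1, "hours": 2}     # criteria the trained technique predicts the user accepts
easy_service = {"stops": 0, "hours": 2}  # matches the tolerated criteria -> high likelihood
hard_service = {"stops": 3, "hours": 9}  # much harder to reach -> low likelihood

for service in (easy_service, hard_service):
    p = likelihood_of_consumption(tolerated, service)
    # The easy-to-reach service keeps its base value; the hard-to-reach one gets a 30% higher value.
    print(round(p, 2), adjusted_travel_cost_value(6000, p))
```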
The processes300,400may be embodied in computer-readable instructions for execution by one or more processors such that the operations of the processes300,400may be performed in part or in whole by the functional components of the server system108; accordingly, the processes300,400are described below by way of example with reference thereto. However, in other embodiments at least some of the operations of the processes300,400may be deployed on various other hardware configurations. The processes300,400are therefore not intended to be limited to the server system108and can be implemented in whole, or in part, by any other component. Any operation in the processes300,400can be performed in any order or entirely omitted and skipped. At operation301, a computing system (e.g., server system108) receives travel service information representing different types of travel activities performed by users of the travel service. For example, travel services training data module210obtains, from database128and/or server130, travel services activities of various types associated with users of the travel services system124(e.g., number of reservations made, subscription duration, distance to travel destinations, margin amount, reservation frequency, transportation criteria, cancelation frequency, etc.). At operation302, the computing system determines, for each of the users and for each of the travel services (or range of travel destination locations), a likelihood of consumption value, based on the different types of travel activities. At operation303, the computing system trains a machine learning technique to establish a relationship between the different types of travel activities and the determined likelihood of consumption value. For example, travel services training data module210provides the known travel activities of the users and the transportation criteria experienced by each of the users in reaching destinations booked by the users to machine learning technique training module220. Machine learning technique training module220inputs the received data into a linear model (e.g., a log odds model) to estimate or compute coefficients associated with each activity. In some implementations, machine learning technique training module220performs a regression technique to estimate the coefficients of the model. At operation304, the computing system applies the trained machine learning technique to travel activities associated with a new user to compute a likelihood of consumption value for each travel service for the new user based on the types of transportation criteria the new user is estimated to be willing to experience. For example, new travel service request module240obtains a travel request from a user, via a graphical user interface on a client device110, and obtains from database128travel activities previously performed by the user. The trained machine learning technique module230is applied to the information provided by the new travel service request module240to estimate a likelihood of consumption value for each travel service for the new user and/or new travel request. At operation305, the computing system filters a candidate list of travel services presented to the new user based on the likelihood of consumption value of each travel service for the new user. For example, the travel services search module260receives the user classification and filters identified travel services that match a travel search request based on the likelihood of consumption. 
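The following Python sketch is a minimal, non-limiting illustration of operation303, fitting a log odds (logistic) model by gradient ascent so that coefficients relate travel activities and transportation criteria to observed consumption; the feature names, learning rate, and training rows are hypothetical, and a standard regression library could be used instead.

import math

def fit_log_odds_model(rows, feature_names, epochs=500, lr=0.01):
    """Estimate coefficients relating travel activities to consumption.
    rows: list of (feature_dict, consumed) pairs where consumed is 0 or 1."""
    weights = {name: 0.0 for name in feature_names}
    bias = 0.0
    for _ in range(epochs):
        for features, consumed in rows:
            z = bias + sum(weights[n] * features[n] for n in feature_names)
            p = 1.0 / (1.0 + math.exp(-z))
            error = consumed - p                  # gradient of the log-likelihood
            bias += lr * error
            for n in feature_names:
                weights[n] += lr * error * features[n]
    return weights, bias

# Hypothetical training data drawn from prior reservations and transportation criteria.
rows = [({"stopovers": 0, "hours": 2.0}, 1), ({"stopovers": 3, "hours": 9.0}, 0)]
weights, bias = fit_log_odds_model(rows, ["stopovers", "hours"])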
In some cases, travel services with a low likelihood of consumption value are not filtered and are ranked higher in a list that is provided to a user than other travel services with a high likelihood of consumption value. At operation401, the computing system determines a geographical location of a first user of a subscription-based travel service. At operation402, the computing system searches a list of travel services that are available for the first user to consume on a travel date to identify candidate travel services. At operation403, the computing system computes a likelihood of consumption value for the first user based on one or more transportation criteria associated with arriving at a given travel service in the list of travel services from the geographical location of the first user. For example, the computing system determines that the first user is willing to experience a first set of transportation criteria in reaching a given travel service which is less difficult or has a lower level of agony than a specified amount. The computing system may compare the first set of transportation criteria to the one or more transportation criteria associated with arriving at the given travel service. The difference or deviation that results from the comparison reflects the likelihood of consumption value, with a larger difference corresponding to a lower likelihood of consumption. For example, the first set of transportation criteria may include 1 stop over on a flight and the one or more transportation criteria associated with arriving at the given travel service may specify 3 stop overs on a flight. In such cases, the difference is 2 stops. As another example, the first set of transportation criteria may include 2 hours travel time from the user's location to a given travel service and the one or more transportation criteria associated with arriving at the given travel service may specify 4 hours travel time from the user's location to the location of the given travel service. In such cases, the difference is 2 hours. At operation404, the computing system determines that a likelihood of consumption value for the first user is less than a threshold. For example, the threshold may correspond to a tolerated difference of 1 stop; because the computed difference of 2 stops exceeds that tolerated difference, the likelihood of consumption value does not exceed the threshold. As another example, if the one or more transportation criteria associated with arriving at the given travel service include 0 stops, the difference between 1 stop and 0 stops may be −1; because the given travel service is easier to reach than the criteria the first user is willing to experience, the likelihood of consumption value is greater than the threshold. As another example, the threshold may correspond to a tolerated difference of 5 hours; a computed difference of 2 hours does not exceed that tolerated difference, in which case the likelihood of consumption value is not less than the threshold. At operation405, the computing system increases a travel cost value associated with the given travel service in response to determining that the likelihood of consumption value for the first user is less than the threshold. For example, the computing system may apply a higher maximum purchase amount to determine whether to filter the given travel service based on a cost of the travel service. Namely, rather than applying a first maximum purchase amount which is exceeded by the cost of the travel service, the computing system applies a higher second maximum purchase amount which is not exceeded by the cost of the travel service. As a result, the given travel service is not filtered from the list of travel services presented to the user. At operation406, the computing system generates, for display in a graphical user interface to the first user, an interactive visual representation of the given travel service that represents the travel cost value.
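As a non-limiting illustration of operations403-405, the following Python sketch derives a likelihood of consumption value from the deviation between the transportation criteria a user is willing to experience and the criteria required to reach a travel service, and raises the travel cost value when that likelihood falls below a threshold; the linear penalty, threshold, and markup factor are hypothetical choices made only for readability.

def deviation(tolerated, required):
    """Difference between the criteria a user is willing to experience and the
    criteria required to reach a travel service (operation403)."""
    return {k: required[k] - tolerated[k] for k in tolerated}

def adjust_travel_cost(base_cost, tolerated, required, likelihood_threshold=0.5, markup=1.2):
    """Lower the likelihood as the deviation grows, then raise the travel cost
    value when the likelihood falls below the threshold (operations404-405)."""
    dev = deviation(tolerated, required)
    # Larger positive deviations (a harder trip than tolerated) reduce the likelihood.
    penalty = sum(max(0.0, d) for d in dev.values())
    likelihood = max(0.0, 1.0 - 0.2 * penalty)
    if likelihood < likelihood_threshold:
        return base_cost * markup, likelihood   # e.g., apply a higher maximum purchase amount
    return base_cost, likelihood

# Hypothetical example: the user tolerates 1 stop and 2 hours; the service needs 3 stops and 4 hours.
cost, likelihood = adjust_travel_cost(3500, {"stops": 1, "hours": 2}, {"stops": 3, "hours": 4})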
For example, the travel services search module260provides the list of search results that satisfy the travel criteria and the guard range (and optionally are filtered based on the likelihood of consumption value for each travel service provided by the trained machine learning technique module230) to the new travel service request module240. The new travel service request module240presents the search results back to the user at the client device110in a graphical user interface. FIGS.5A and5Bare illustrative graphical user interfaces of the travel services system124, according to some example embodiments. As shown inFIG.5A, a first user (Julie) can input travel search criteria501. These travel criteria may include various parameters502including a travel destination, distance to the destination, start date of the travel, end date of the travel, number of days in the trip, quality of the travel services, and/or any combination thereof. The travel services system124processes the travel search criteria and automatically generates a list of matching travel services for presentation using one or more interactive visual representations503. In some cases, the travel services system124processes the travel search criteria and automatically selects one of a plurality of previously generated and curated lists of travel services for presentation using one or more interactive visual representations503. A user can select any one of the interactive visual representations503to instruct the travel services system124to complete a reservation for the corresponding travel service associated with the selected visual representation. In some embodiments, the travel services in the graphical user interface ofFIG.5Aare generated using individualized travel service lists for the user based on travel behaviors, geographical location, transportation criteria, demographics, or a margin target for the user. For example, the travel services system124may further filter or reorganize the list of available travel services presented to the first user inFIG.5Abased on a profile of the first user that indicates various attributes of the user (e.g., travel behaviors, geographical location, demographics, cancelation frequency, number of reservations made in a given time interval, or a margin target specific to the user or classification of the user). The graphical user interface ofFIG.5Amay include an option enabling the user to sort or filter the travel services presented in the graphical user interface based on a predicted likelihood that the user will reserve the travel services using a preference technique or a recommendation technique based on transportation criteria associated with each travel service. In such cases, those travel services that have a greater likelihood may then be presented at the bottom of the list (or omitted entirely), as the user is already likely to be interested in reserving those travel services, and those with a lower likelihood are presented at the top of the list. As an example, after receiving the search criteria from the first user (Julie), the travel services system124may obtain a list of travel services that are available and that match the search criteria. The travel services system124generates one or more subscription values for the first user (Julie) that are each associated with different sets of transportation criteria.
A first of the subscription values may be associated with a first set of transportation criteria and a second of the subscription values may be associated with a second set of transportation criteria. The second of the subscription values may be greater than the first of the subscription values. The travel services system124determines transportation criteria for each travel service in the list of travel services based on a geographical location of the first user. The travel services system124compares a cost of each travel service with a respective one of the subscription values that corresponds to the determined transportation criteria of the travel service. Based on this comparison, the travel services system124generates a filtered or curated list of travel services and presents the list to the user inFIG.5A. Specifically, the travel services system124provides a message510indicating to the first user (Julie) that the list of travel services the user can book are shown. As an example, the travel services system124may determine that, based on the first user's location, a given hotel is likely to be consumed by the first user because the user is in Georgia and the given travel service is nearby in Miami, Florida. For this hotel, the travel services system124may determine that the first set of transportation criteria applies for the first user and so uses a first subscription value to determine whether or not to include the hotel in the list. In this case, the hotel520with 3 rooms and valued at $3500 is determined to cost less than the first subscription value and is presented to the first user. The same hotel may have 5 rooms available for $5500 but this amount exceeds the first subscription value and so is not presented as an available travel service to the first user. In some implementations, as shown inFIG.5B, a second user (Alex) may provide the same travel search criteria as the first user and at substantially the same time or date. The second user (Alex) may be in a different geographical location (e.g., California). As such, the travel services system124may determine that, based on the second user's location, the same hotel is unlikely to be consumed by the second user because the second user is in California, which is far away from the given travel service in Miami, Florida. For this hotel, the travel services system124may determine that the second set of transportation criteria for the second user applies and so uses a second subscription value to determine whether or not to include the hotel in the list. In this case, because the second subscription value is greater than the first subscription value, the same hotel550with 5 rooms available for $5500 (which was excluded from the list presented to the first user inFIG.5A) may be determined to cost less than the second subscription value. Accordingly, the travel services system124provides a message540indicating to the second user (Alex) that the list of travel services the user can book are shown and includes the hotel550with 5 rooms available. This is an example of how the second user is provided with more travel cost value for the travel service (hotel550) than the first user in response to determining that the second user is unlikely to consume or book the hotel (e.g., has a likelihood of consumption value for the hotel that is less than a threshold).
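The following Python sketch is a non-limiting illustration of the subscription-value comparison described in the example above: a travel service is retained when its cost does not exceed the subscription value matched to the transportation criteria for reaching it from the user's location. The service names, costs, subscription values, and the criteria-selection callable are hypothetical.

def filter_travel_services(services, user_location, subscription_values, criteria_for):
    """Keep a travel service when its cost does not exceed the subscription value
    matched to the transportation criteria for reaching it from the user's location."""
    curated = []
    for service in services:
        criteria_set = criteria_for(service, user_location)   # e.g., "first" or "second"
        if service["cost"] <= subscription_values[criteria_set]:
            curated.append(service)
    return curated

# Hypothetical data mirroring the example: nearby users map to the first (lower)
# subscription value, distant users to the second (higher) one.
services = [{"name": "hotel, 3 rooms", "cost": 3500}, {"name": "hotel, 5 rooms", "cost": 5500}]
subscription_values = {"first": 4000, "second": 6000}
nearby_list = filter_travel_services(services, "GA", subscription_values, lambda s, loc: "first")
distant_list = filter_travel_services(services, "CA", subscription_values, lambda s, loc: "second")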
In some embodiments, the list presented to the first and second users inFIGS.5A and5Bmay be sorted or organized based on the transportation criteria associated with each travel service. Specifically, the list may be sorted such that travel services associated with a second set of transportation criteria (or those unlikely to be consumed by the users), which may represent a greater difficulty in arriving at the travel service, are presented higher or are ranked higher in the list than other travel services associated with a first set of transportation criteria (or those likely to be consumed by the users), which may represent a lower difficulty in arriving at the travel service. For example, travel services that are likely to be consumed may be excluded from the lists presented to the users or may be positioned lower in the presented lists than other travel services. As an example, the hotel550may be presented to the second user (Alex) at the top of the list but may be excluded from presentation (or be presented at the bottom of the list) to the first user (Julie). This is because the first user is associated with a first set of transportation criteria for reaching the hotel550, which represents lower difficulty than a second set of transportation criteria for the second user to reach the same hotel550. FIG.6is a block diagram illustrating software architecture606, which can be installed on any one or more of the devices described above. For example, in various embodiments, client devices110and servers and systems130,108,120,122, and124may be implemented using some or all of the elements of software architecture606.FIG.6is merely a non-limiting example of a software architecture, and it will be appreciated that many other architectures can be implemented to facilitate the functionality described herein. In various embodiments, the software architecture606is implemented by hardware (including a hardware layer652with processing unit654, memory/storage656, and other hardware658) such as machine800ofFIG.7that includes processors804, memory/storage806, and input/output (I/O) components818. As explained below, the processing unit654is configured to execute instructions604that are stored in memory/storage656. In this example, the software architecture606can be conceptualized as a stack of layers where each layer may provide a particular functionality. For example, the software architecture606includes layers such as an operating system602, presentation layer614, libraries620, frameworks618, and applications616. Operationally, the applications616invoke API calls608through the software stack and receive messages612in response to the API calls608, consistent with some embodiments. In various implementations, the operating system602manages hardware resources and provides common services. The operating system602includes, for example, a kernel622, services624, and drivers626. The kernel622acts as an abstraction layer between the hardware and the other software layers, consistent with some embodiments. For example, the kernel622provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services624can provide other common services for the other software layers. The drivers626are responsible for controlling or interfacing with the underlying hardware, according to some embodiments. 
For instance, the drivers626can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. In some embodiments, the libraries620provide a low-level common infrastructure utilized by the applications616. The libraries620can include system libraries644(e.g., C standard library) that can provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries620can include API libraries646such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render in two dimensions (2D) and in three dimensions (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries620can also include a wide variety of other libraries648to provide many other APIs to the applications616. The frameworks618provide a high-level common infrastructure that can be utilized by the applications616, according to some embodiments. For example, the frameworks618provide various graphic user interface functions, high-level resource management, high-level location services, and so forth. The frameworks618can provide a broad spectrum of other APIs that can be utilized by the applications616, some of which may be specific to a particular operating system602or platform. In an example embodiment, the applications616include built-in applications638including any one or more of a home application, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, a game application, and a broad assortment of other applications such as a third-party application640. According to some embodiments, the applications616are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications616, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application640(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application640can invoke the API calls608provided by the operating system602to facilitate functionality described herein. Some embodiments may particularly include a subscription based travel services application. In certain embodiments, this may be a stand-alone application that operates to manage communications with a server system such as third-party servers130or server system108. 
In other embodiments, this functionality may be integrated with another application. The subscription based travel services application may request and display various data related to subscription based travel services and may provide the capability for a user to input data related to the objects via a touch interface, a keyboard, or a camera device of machine800, as well as the capability for communication with a server system and for receipt and storage of object data in a memory/storage device. Presentation of information and user inputs associated with the information may be managed by the subscription based travel services application using different frameworks618, library620elements, or operating system602elements operating on a machine800. FIG.7is a block diagram illustrating components of a machine800, according to some embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.7shows a diagrammatic representation of the machine800in the example form of a computer system, within which instructions810(e.g., software, a program, an application616, an applet, an app, or other executable code) for causing the machine800to perform any one or more of the methodologies discussed herein can be executed. In alternative embodiments, the machine800operates as a standalone device or can be coupled (e.g., networked) to other machines. In a networked deployment, the machine800may operate in the capacity of a server machine130,108,120,122,124, and the like, as a client device110in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine800can comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions810, sequentially or otherwise, that specify actions to be taken by the machine800. Further, while only a single machine800is illustrated, the term “machine” shall also be taken to include a collection of machines800that individually or jointly execute the instructions810to perform any one or more of the methodologies discussed herein. In various embodiments, the machine800comprises processors804, memory814, and I/O components818, which can be configured to communicate with each other via a bus802. In an example embodiment, the processors804(e.g., a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a radio-frequency integrated circuit (RFIC), another processor, or any suitable combination thereof) include, for example, a processor808and a processor812that may execute the instructions810. The term “processor” is intended to include multi-core processors804that may comprise two or more independent processors804(also referred to as “cores”) that can execute instructions810contemporaneously.
AlthoughFIG.7shows multiple processors804, the machine800may include a single processor804with a single core, a single processor804with multiple cores (e.g., a multi-core processor804), multiple processors804with a single core, multiple processors804with multiple cores, or any combination thereof. The memory/storage806comprises a main memory814, a static memory, and a storage unit816accessible to the processors804via the bus802, according to some embodiments. The storage unit816can include a machine-readable medium on which are stored the instructions810embodying any one or more of the methodologies or functions described herein. The instructions810can also reside, completely or at least partially, within the main memory814, within the static memory, within at least one of the processors804(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine800. Accordingly, in various embodiments, the main memory814, the static memory, and the processors804are considered machine-readable media. As used herein, the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium is shown, in an example embodiment, to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store the instructions810. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., instructions810) for execution by a machine (e.g., machine800), such that the instructions810, when executed by one or more processors of the machine800(e.g., processors804), cause the machine800to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more data repositories in the form of a solid-state memory (e.g., flash memory), an optical medium, a magnetic medium, other non-volatile memory (e.g., erasable programmable read-only memory (EPROM)), or any suitable combination thereof. The term “machine-readable medium” specifically excludes non-statutory signals per se. The I/O components818include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. In general, it will be appreciated that the I/O components818can include many other components that are not shown inFIG.7. The I/O components818are grouped according to functionality merely for simplifying the following discussion, and the grouping is in no way limiting. In various example embodiments, the I/O components818include output components826and input components828.
The output components826include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor), other signal generators, and so forth. The input components828include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In some further example embodiments, the I/O components818include biometric components830, motion components834, environmental components836, or position components838, among a wide array of other components. For example, the biometric components830include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. The motion components834include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components836include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensor components (e.g., machine olfaction detection sensors, gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components838include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication can be implemented using a wide variety of technologies. The I/O components818may include communication components840operable to couple the machine800to a network832or devices820via a coupling824and a coupling822, respectively. For example, the communication components840include a network interface component or another suitable device to interface with the network832. In further examples, communication components840include wired communication components, wireless communication components, cellular communication components, near field communication (NFC) components, BLUETOOTH® components (e.g., BLUETOOTH® Low Energy), WI-FI® components, and other communication components to provide communication via other modalities. 
The devices820may be another machine800or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, in some embodiments, the communication components840detect identifiers or include components operable to detect identifiers. For example, the communication components840include radio frequency identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as a Universal Product Code (UPC) bar code, multi-dimensional bar codes such as a Quick Response (QR) code, Aztec Code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, Uniform Commercial Code Reduced Space Symbology (UCC RSS)-2D bar codes, and other optical codes), acoustic detection components (e.g., microphones to identify tagged audio signals), or any suitable combination thereof. In addition, a variety of information can be derived via the communication components840, such as location via Internet Protocol (IP) geo-location, location via WI-FI® signal triangulation, location via detecting a BLUETOOTH® or NFC beacon signal that may indicate a particular location, and so forth. In various example embodiments, one or more portions of the network832can be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a WI-FI® network, another type of network, or a combination of two or more such networks. For example, the network832or a portion of the network832may include a wireless or cellular network, and the coupling824may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling824can implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology. In example embodiments, the instructions810are transmitted or received over the network832using a transmission medium via a network interface device (e.g., a network interface component included in the communication components840) and utilizing any one of a number of well-known transfer protocols (e.g., Hypertext Transfer Protocol (HTTP)). Similarly, in other example embodiments, the instructions810are transmitted or received using a transmission medium via the coupling822(e.g., a peer-to-peer coupling) to the devices820. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions810for execution by the machine800, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
Furthermore, the machine-readable medium is non-transitory (in other words, not having any transitory signals) in that it does not embody a propagating signal. However, labeling the machine-readable medium “non-transitory” should not be construed to mean that the medium is incapable of movement; the medium should be considered as being transportable from one physical location to another. Additionally, since the machine-readable medium is tangible, the medium may be considered to be a machine-readable device. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
DETAILED DESCRIPTION Overview The inventors have determined that, in many roles, it can be useful to be able to accurately determine the value of residential real estate properties (“homes”) and to accurately predict the likelihood that homes will be sold and the length of time homes will remain on the market when listed for sale at various prices. As examples, by using accurate time on market and likelihood of sale predictions for homes: sellers and their agents can optimally set listing prices; buyers and their agents can determine offer timing strategies and appropriate offer amounts; and analysts can gauge market trends and assess the health of real estate markets. Accordingly, the inventors have recognized that a new approach to valuing houses and estimating time on market and likelihood of sale that is more universally accurate, less expensive, and more convenient would have significant utility. A software and/or hardware facility for automatically determining a current value for a home or other property, estimating the length of time a home or other property will be on the market at a listing price, and/or predicting the likelihood of sale of a home at a listing price (“the facility”) is described. Though the following discussion liberally employs the words “home,” “house,” and “housing” to refer to the property being valued, those skilled in the art will appreciate that the facility may be straightforwardly applied to properties of other types. In some embodiments, the facility estimates a probability that a home will be sold if listed at a particular price. For example, the facility might estimate the probability that a home will be sold within some period (e.g., three months) if listed at a particular price, or estimate a range of probabilities that a home will be sold within some period. In some embodiments, the facility estimates the number of days that a home will remain on the market before sale if initially listed at a particular price. For example, the facility might estimate the probable length of time a home will remain unsold up to some maximum (e.g., >180 days) if listed at a particular price, or estimate a range of durations that a home will remain on the market at a particular listing price. To generate an estimate of the likelihood of sale for a home or an estimate of time on the market until sale for a home, the facility applies, in various embodiments, one or more probability distribution models. In some embodiments, the facility employs a parametric estimation model, e.g., linear regression. In some embodiments, the facility employs a random forest regression model. In some embodiments, the facility employs a multilevel hierarchical model. In some embodiments, the facility employs survival analysis to estimate time on market. In some embodiments, the facility employs a probabilistic model or logistic regression, e.g., binomial regression, to estimate probability of sale. Such models use independent variables including a particular price and, e.g., a home valuation, home attribute values, and relevant market data. 
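As a non-limiting illustration only, the following Python sketch expresses a simple logistic (binomial regression) model of the probability that a home sells within a period when listed at a particular price, using the relative premium of the listing price over a valuation as the sole independent variable; the coefficient and intercept are hypothetical, and a trained model would use the additional variables discussed below.

import math

def probability_of_sale(listing_price, estimated_value, coef_rel_premium=-8.0, intercept=1.5):
    """Estimate the probability a home sells within the period when listed at
    listing_price, given a valuation, using a simple logistic model."""
    relative_premium = (listing_price - estimated_value) / estimated_value
    z = intercept + coef_rel_premium * relative_premium
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical example: a home valued at $245,000 listed at $260,000.
p = probability_of_sale(listing_price=260000, estimated_value=245000)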
In various embodiments, a model for estimating a likelihood of sale or a model for estimating time on market, for example, produces estimations based on independent variables including one or more of, e.g., the difference between a valuation of the home and the selected listing price, the difference between an estimated listing price for the home and the selected listing price, values of the home's attributes, market conditions in the home's geographic area, and the difference between the selected listing price and the median price of homes listed or sold in the home's market. In some embodiments, such independent variables include synthetic home attributes (e.g., a valuation ascribed to the home by a model, or imputed home information in place of missing data), or, for a home that is or has recently been on the market, previous listing price and duration information and cumulative days on the market. Before applying a model to produce estimates based on home data, the facility trains or fits the model and tests or validates the trained or fitted model. To train and test the model, the facility uses listing and sales transaction data describing home listing events associated with homes in a geographic area, with which home attribute values and real estate market data are also associated. Each home listing event comprises, e.g., a listing price, a listing date, and either a sale price and date (for homes that were sold while listed at the listing price) or a date that the listing price was changed or that the listing was removed (for homes that were not sold while listed at the listing price). An example of such recent listing and sales transaction data is discussed in further detail below in connection withFIG.17. In some embodiments, the facility applies a model multiple times (e.g., as discussed in further detail below in connection withFIG.19) to generate and display a set of estimates of a home's likelihood of sale at various listing prices and/or a set of estimates of the length of time a home will remain unsold on the market at various listing prices. For example, the listing prices to which the model is applied may be based on offsets from a home valuation (e.g., prices higher or lower than the valuation by some dollar amount or percentage). In some embodiments, the facility enables a user to select a range of prices, and estimates probabilities that a home will be sold and/or probable time on market if listed at prices within that range. In some embodiments, the facility displays predictions on a two-axis graph in which, e.g., the horizontal axis represents listing prices and the vertical axis represents probabilities of sale or numbers of days on the market. Examples of such graphs are discussed in further detail below in connection withFIGS.20and21. In some embodiments, both predictions are displayed in one graph (e.g., using multiple vertical scales). An example of such a multi-prediction graph is discussed in further detail below in connection withFIG.22. In time on market and likelihood of sale estimate graphs, the facility may approximate values between and/or beyond calculated predictions to generate a visual representation such as a substantially continuous line chart or smooth prediction curve or band (e.g., using linear or polynomial interpolation, regression estimation, or other curve fitting). In some embodiments, the facility displays one or more estimations or prediction graphs on a Web page corresponding to the subject home. 
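The following Python sketch is a non-limiting illustration of applying such a model at several listing prices offset from a home valuation to produce points for a probability-of-sale graph; the offsets and the placeholder sale_model are hypothetical, and interpolation or curve fitting could be applied to the returned points as described above.

import math

def sale_probability_curve(estimated_value, sale_model, offsets_pct=(-10, -5, 0, 5, 10)):
    """Apply a probability-of-sale model at listing prices offset from the home
    valuation and return (listing_price, probability) points suitable for graphing."""
    points = []
    for pct in offsets_pct:
        price = estimated_value * (1 + pct / 100.0)
        points.append((round(price), sale_model(price, estimated_value)))
    return points

# Hypothetical logistic sale model; any trained estimator could be substituted.
sale_model = lambda price, value: 1.0 / (1.0 + math.exp(-(1.5 - 8.0 * (price - value) / value)))
curve = sale_probability_curve(245000, sale_model)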
In some embodiments, the facility determines listing prices associated with a range of probabilities of sale or a range of time-on-market durations selected by a user. For example, the facility may estimate probability of sale and/or time on market for various listing prices for a home, and identify listing prices that produce the estimates within the selected range. In some embodiments, the facility determines a listing price to obtain a user-specified threshold estimated probability that a home will be sold or estimated duration of time within which a home will be sold (e.g., for a seller interested in a quick sale, the highest listing price at which the home is estimated to have a 95% likelihood of sale, the highest listing price at which the home is estimated to sell within 15 days on the market, or the highest listing price at which the home is estimated to have a 90% likelihood of sale within 30 days). In some embodiments, the facility establishes, for each of a number of geographic regions, a model of housing prices in that region. This model transforms inputs corresponding to home attribute values into an output constituting a predicted current value of a home in the corresponding geographic area having those attributes. In order to determine the current value of a particular home, the facility selects the model for a geographic region containing the home, and subjects the home's attribute values to the selected model. In some embodiments, the model used by the facility to value homes is a complex model made up of (a) a number of different sub-models each producing a valuation based on values of the attributes of a home, together with (b) a meta-model that uses values of attributes of the home to determine a way to combine the sub-model valuations to obtain a valuation of the home by the complex model, such as by determining a relative weighting of the sub-model valuations. In some embodiments, one or more sub-model valuations can be based on other sub-model valuations as well as values of the attributes of a home. In some embodiments, among the sub-models of the complex model is a listing price model that generates an estimated listing price for a home based on information about the home. An estimated listing price is an estimate of the listing price that would be attributed to a home if its owner listed it for sale. The meta-model combines home attributes, valuation inputs from various valuation models, and a listing price from a listing price model in producing an overall valuation. In some embodiments, the facility constructs and/or applies housing price models or sub-models each constituting a forest of classifying decision trees. In some such embodiments, the facility uses a data table that identifies, for each of a number of homes recently sold in the geographic region to which the forest corresponds, attributes of the home and its selling price. For each of the trees comprising the forest, the facility randomly selects a fraction of homes identified in the table, as well as a fraction of the attributes identified in the table. The facility uses the selected attributes of the selected homes, together with the selling prices of the selected homes, to construct a decision tree in which each non-leaf node represents a basis for differentiating selected homes based upon one of the selected attributes.
For example, where number of bedrooms is a selected attribute, a non-leaf node may represent the test “number of bedrooms≤4.” This node defines two subtrees in the tree: one representing the selected homes having four or fewer bedrooms, the other representing the selected homes having five or more bedrooms. Each leaf node of the tree represents all of the selected homes having attributes matching the ranges of attribute values corresponding to the path from the tree's root node to the leaf node. The facility stores in each leaf node a list of the selling prices of the selected homes represented by the leaf node or assigns each leaf node a value corresponding to an average (e.g., the mean) of the selling prices of the selected homes represented by the leaf node. In some embodiments, one or more of the models or sub-models is trained using data in the data table that identifies homes listed for sale and synthetic sales prices based on their listing prices, either together with or instead of data identifying recently sold homes and their selling prices. A listing price adjustment model generates these synthetic sales prices from attributes of homes that have been listed for sale and their listing prices. In a geographic area or other set of homes for which the number of recently sold homes is very small or zero but some homes have been listed for sale, home valuations may be estimated solely on the basis of such a listing price adjustment model. The listing price adjustment model is trained using data including the listing prices, selling prices, and attributes of sold homes. In order to weight the trees of the forest, the facility further tests the usefulness of each tree by applying the tree to homes in the table other than the homes that were selected to construct the tree, and, for each such home, comparing the value indicated for the home by the decision tree (i.e., the value of the leaf node into which the tree classifies the home) to its selling price. The closer the values indicated by the tree to the selling prices, the higher the rating for the tree. In order to value a home using such a forest of trees model, the facility uses the attributes of the home to traverse each tree of the forest to a leaf node of the tree. In some embodiments, the facility then concatenates the selling prices from all of the traversed-to leaf nodes, and selects a robust statistic (e.g., the median) of the selling prices from the concatenated list as the valuation of the home. This approach is sometimes referred to as using a “quantile regression forest.” In some embodiments, the values in each leaf node are weighted according to the rating for the tree. In most cases, it is possible to determine the attribute values of a home to be valued. For example, they can often be obtained from existing tax or sales records maintained by local governments. Alternatively, a home's attributes may be inputted by a person familiar with them, such as the owner, a listing agent, or a person that derives the information from the owner or listing agent. In order to determine a value for a home whose attributes are known, the facility applies all of the trees of the forest to the home, so that each tree indicates a value for the home. The facility then calculates an average of these values, each weighted by the rating for its tree, to obtain a value for the home.
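As a non-limiting illustration of the quantile-regression-forest valuation described above, the following Python sketch traverses each tree of a small hypothetical forest to a leaf node, concatenates the selling prices stored in the traversed-to leaves, and takes the median as the valuation; tree-rating weights are omitted for brevity, and the tree encoding is an assumption made only for this sketch.

import statistics

def traverse(tree, home):
    """Follow attribute tests from the root to a leaf and return the leaf's
    list of selling prices."""
    node = tree
    while "split" in node:                    # internal node: (attribute, threshold)
        attribute, threshold = node["split"]
        node = node["left"] if home[attribute] <= threshold else node["right"]
    return node["prices"]

def forest_valuation(forest, home):
    """Concatenate selling prices from every traversed-to leaf and take the median."""
    prices = []
    for tree in forest:
        prices.extend(traverse(tree, home))
    return statistics.median(prices)

# Hypothetical two-tree forest; "view" is encoded numerically (0 = no, 1 = yes).
forest = [
    {"split": ("bedrooms", 2),
     "left": {"prices": [140000, 255000]},
     "right": {"prices": [245000, 283000, 320000]}},
    {"split": ("view", 0),
     "left": {"prices": [230000, 250000]},
     "right": {"prices": [310000]}},
]
value = forest_valuation(forest, {"bedrooms": 4, "view": 0})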
In various embodiments, the facility presents this value to the owner of the home, a prospective buyer of the home, a real estate agent, or another person interested in the value of the home or the value of a group of homes including the home. In some areas of the country, home selling prices are not public records, and may be difficult or impossible to obtain. Accordingly, in some embodiments, the facility estimates the selling price of a home in such an area based upon loan values associated with its sale and an estimated loan-to-value ratio. In some embodiments, the facility uses a decision tree to impute attribute values for a home that are missing from attribute values obtained for the home. In some embodiments, the facility employs a variety of heuristics for identifying “outlier” homes, listings, and/or sales transactions and other kinds of data undesirable for training a model and excluding them from data used by the facility to construct valuation models. For example, in some embodiments, the facility filters out data describing listings or sales of distressed homes in a geographic area, e.g., homes that have been foreclosed on or homes whose mortgages are in default. In some embodiments, the facility identifies such listings by, e.g., locating keywords in a property sale description. In some embodiments, the facility also excludes listings created by real estate agents who have been identified for creating listings with inaccurate information or priced outside a predetermined tolerance of expected or median listing prices (i.e., agents seen as having a large degree of data error or pricing error), or listings associated with brokers seen as having a large degree of error. In some embodiments, the facility maintains a list of such agents and/or brokers. Those skilled in the art will appreciate that a variety of other filters could be used. In some embodiments, the facility regularly applies its model to the attributes of a large percentage of homes in a geographic area to obtain and convey an average home value for the homes in that area. In some embodiments, the facility periodically determines an average home value for the homes in a geographic area, and uses them as a basis for determining and conveying a home value index for the geographic area. Because the approach employed by the facility to determine the value of a home does not rely on the home having recently been sold, it can be used to accurately value virtually any home whose attributes are known or can be determined. Further, because this approach does not require the services of a professional appraiser, it can typically determine a home's value quickly and inexpensively, in a manner generally free from subjective bias. Additionally, by supplementing valuation models that rely on actual home sale transactions with models incorporating synthetic sale transactions for homes that have been listed for sale, the sizes of training and testing data sets can be increased and the accuracy of the facility's valuation estimates can be improved. DESCRIPTION OF FIGURES FIG.1is a block diagram showing some of the components typically incorporated in at least some of the computer systems and other devices on which the facility executes. 
These computer systems and devices100may include one or more central processing units (“CPUs”)101for executing computer programs; a computer memory102for storing programs and data—including data structures, database tables, other data tables, etc.—while they are being used; a persistent storage device103, such as a hard drive, for persistently storing programs and data; a computer-readable media drive104, such as a CD-ROM drive, for reading programs and data stored on a computer-readable medium; and a network connection105for connecting the computer system to other computer systems, such as via the Internet, to exchange programs and/or data—including data structures. In various embodiments, the facility can be accessed by any suitable user interface including Web services calls to suitable APIs. While computer systems configured as described above are typically used to support the operation of the facility, one of ordinary skill in the art will appreciate that the facility may be implemented using devices of various types and configurations, and having various components. FIG.2is a table diagram showing sample contents of a recent listings table. The recent listings table200is made up of rows201-215, each representing a home listing that occurred in a recent period of time, such as the preceding 60 days. Each row is divided into the following columns: an identifier column221containing an identifier for the listing; an address column222containing the address of the listed home; a square foot column223containing the floor area of the home; a bedrooms column224containing the number of bedrooms in the home; a bathrooms column225containing the number of bathrooms in the home; a floors column226containing the number of floors in the home; a view column227indicating whether the home has a view; a year column228showing the year in which the home was constructed; a listing price column229containing the listing price at which the home was listed; and a date column230showing the date on which the home was listed. For example, row201indicates that listing number 1, of the home at 1611 Coleman Drive, Gloucester, VA 23189 having a floor area of 2280 square feet, 4 bedrooms, 3 bathrooms, 2 floors, no view, built in 1995, was for $245,000, and occurred on Jul. 30, 2012. Though the contents of recent listings table200are included to present a comprehensible example, those skilled in the art will appreciate that the facility can use a recent listings table having columns corresponding to different and/or a larger number of attributes, as well as a larger number of rows. Attributes that may be used include, for example, construction materials, cooling technology, structure type, fireplace type, parking structure, driveway, heating technology, swimming pool type, roofing material, occupancy type, home design type, view type, view quality, lot size and dimensions, number of rooms, number of stories, school district, longitude and latitude, neighborhood or subdivision, tax assessment, attic and other storage, etc. For a variety of reasons, certain values may be omitted from the recent listings table. In some embodiments, the facility imputes missing values using the median value in the same column for continuous variables, or the mode (i.e., most frequent) value for categorical values. 
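The following Python sketch is a non-limiting illustration of the imputation rule described above, filling omitted values with the column median for continuous attributes and the most frequent value for categorical attributes; the column names and rows are hypothetical.

from statistics import median, mode

def impute_missing(rows, continuous, categorical, missing=None):
    """Fill omitted values with the column median (continuous attributes) or the
    most frequent value (categorical attributes)."""
    filled = [dict(row) for row in rows]
    for col in continuous + categorical:
        observed = [r[col] for r in rows if r[col] is not missing]
        default = median(observed) if col in continuous else mode(observed)
        for r in filled:
            if r[col] is missing:
                r[col] = default
    return filled

# Hypothetical rows drawn from a recent listings table.
rows = [{"sqft": 2280, "view": "no"}, {"sqft": None, "view": "no"},
        {"sqft": 1840, "view": "yes"}, {"sqft": 2550, "view": None}]
clean = impute_missing(rows, continuous=["sqft"], categorical=["view"])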
ThoughFIG.2and each of the table diagrams discussed below show a table whose contents and organization are designed to make them more comprehensible by a human reader, those skilled in the art will appreciate that actual data structures used by the facility to store this information may differ from the table shown, in that they, for example, may be organized in a different manner; may contain more or less information than shown; may be compressed and/or encrypted; etc. FIG.3is a flow diagram showing steps typically performed by the facility in some embodiments in order to prepare a model to be able to predict listing prices for homes in a geographic area by creating and training a forest of listing-price-estimating decision trees. In various embodiments, the facility performs these steps for one or more geographic areas of one or more different granularities, including neighborhood, city, county, state, country, etc. In some embodiments these steps are performed periodically for each geographic area, such as daily. In some embodiments, the facility constructs and applies random forest valuation models using an R mathematical software package available at cran.r-project.org/ and described at cran.r-project.org/web/packages/randomForest/randomForest.pdf. In step301, the facility accesses recent listing transactions occurring in the geographic area. The facility may use listings data obtained from a variety of public or private sources. In some embodiments, the facility filters the listings data to exclude listings such as outlier listings and unreliable listings as described in greater detail above. An example of such listings data is the table shown inFIG.2. In step302, the facility begins with a first tree and carries out steps303-310for each tree to be created in the forest. The number of trees, such as 100, is configurable, with larger numbers typically yielding better results but requiring the application of greater computing resources. In step303, the facility randomly selects a fraction of the recent listings in the geographic area to which the tree corresponds, as well as a fraction of the available attributes including listing price, as a basis for training the tree. FIG.4is a table diagram showing sample contents of a table containing a training set comprising the selected listings and selected attributes to be used for training a tree. Tree1training table400contains rows randomly selected from the recent listings table200, here rows201,202,208,209,211,213, and215. The table further includes the identifier column221, address column222, and listing price column229from the recent listings table, as well as randomly selected columns for two available attributes: a bedrooms column224and a view column227. In various embodiments, the facility selects various fractions of the listing data rows and attribute columns of the recent listings table for inclusion in the training set data for training the tree. Returning toFIG.3, in step304, the facility creates a root node for the tree that represents all of the listings contained in tree1training table400and the full range of each of the attributes in the table. FIG.5is a tree diagram showing a single-node tree500comprising a root node corresponding to tree1training table400. The root node501represents the listings having identifiers 1, 2, 8, 9, 11, 13, and 15 (the entire training set); values of the bedrooms attribute from 0 to ∞; and values of the view attribute of yes and no. 
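The random selection in step 303, whose result is a training table like that of FIG. 4, might be sketched as follows; the sampling fractions and function name are illustrative assumptions, not parameters disclosed by the facility.

    import random

    def sample_training_set(listings, attribute_cols, row_fraction=0.5, col_fraction=0.5):
        """Randomly pick a subset of listings and a subset of attributes for one tree;
        the listing price column is always kept as the value to be estimated."""
        rows = random.sample(listings, max(1, int(len(listings) * row_fraction)))
        cols = random.sample(attribute_cols, max(1, int(len(attribute_cols) * col_fraction)))
        return rows, cols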
Returning toFIG.3, in steps305-310, the facility iterates through each node of the tree, including the root node created in step304and any additional nodes added to the tree in step307. In step306, if it is possible to “split” the node, i.e., create two children of the node each representing a different subrange of an attribute value range represented by the node, then the facility continues in step307, else the facility continues in step308. Further details describing steps typically performed by the facility in order to determine whether and how to split a node of a tree may be found in U.S. patent application Ser. No. 13/417,804, entitled “Automatically Determining a Current Value for a Home,” filed Mar. 12, 2012, which is fully incorporated herein by reference. In step307, where the facility has determined that the node should be split on the values of some attribute, the facility creates a pair of children for the node. Each child represents one of the subranges of the attribute for splitting identified in step306and the node's full range of other attributes. Each child represents all training set listings whose attributes satisfy the attribute ranges represented by the child. Step307is discussed in greater detail below in connection withFIG.6. In step308, because the node will not be split to two children, it will be a leaf node. The facility determines an estimated listing price based on the listing prices of the training set listings represented by the node. In some embodiments, the estimated listing price is determined by taking an average (e.g., mean or median) of the listing prices of the home listings represented by the node. In step309, the estimated listing price is stored in connection with the leaf node. In some embodiments, the set of listing prices represented by the leaf node is stored in connection with the leaf node. In some embodiments, the facility stores an estimated listing price in a separate data structure or by reference to the underlying listings data. In step310, the facility processes the next node of the tree. After step310, no more nodes will be split and the tree is fully constructed, so the facility continues in step311to construct and train another tree until a forest containing the desired number of trees has been constructed and trained. Those skilled in the art will appreciate that the steps shown inFIG.3and in each of the flow diagrams discussed below may be altered in a variety of ways. For example, the order of the steps may be rearranged; some steps may be performed in parallel; shown steps may be omitted, or other steps may be included; etc. FIG.6is a tree diagram showing a completed version of the sample tree. It can be seen that the facility added child nodes602and603to root node501, corresponding to the subranges defined by a split on the bedrooms attribute. Node602represents listings whose bedrooms attribute is less than or equal to 2, that is, between 0 and 2, as well as the full range of view attribute values represented by node501. Accordingly, node602represents training set listings 13 and 15, having listing prices $255,000 and $140,000. Node602is a leaf node. Node603represents listings with bedrooms attribute values greater than 2, that is, 3-∞. Node603further represents the full range of view attributes values for node501. Accordingly, node603represents training set listings 1, 2, 8, 9, and 11. 
Node603is a branch node with two child nodes604and605, indicating that the facility proceeded to identify an attribute for splitting node603, in this case the view attribute. Accordingly, child node604represents attribute value ranges of 3 or more bedrooms and no view, and concomitantly listings 1 and 9, each having 3 or more bedrooms and no view, with listing prices $245,000 and $185,000. Node605represents attribute value ranges of 3 or more bedrooms and a view (i.e., for the attribute of whether the home has a view, the value “yes”), to which listings 2, 8, and 11 correspond, having listing prices $266,500, $245,000, and $140,000. In order to apply the completed tree600shown inFIG.6to obtain an estimated listing price for a distinguished home, the facility accesses the home's attributes. As an example, consider a home having attribute values bedrooms: 5 and view: yes. The facility begins at root node501. Because node501is not a leaf node, the facility proceeds along one of its branches to a child of node501. In the example, among the available edges611and612, the facility traverses the one whose condition is satisfied by the attributes of the home. Because the value of the bedrooms attribute for the home is 5, the facility traverses edge612to node603. In order to proceed from branch node603, the facility determines, among edges613and614, which edge's condition is satisfied. Because the home's value of the view attribute is yes, the facility traverses edge614to leaf node605. Having reached a leaf node, the facility here, by way of example, takes an average of the listing prices associated with node605and estimates a listing price of $217,000 for the distinguished home. If tree600is one tree in a forest of decision trees, the facility in some embodiments aggregates the listing prices represented by leaf node605of tree600with listing prices represented by the leaf nodes representing the distinguished home by the other trees of the forest, and selects the median as the forest's estimated listing price for the distinguished home. Those skilled in the art will appreciate that the tree shown inFIG.6may not be representative in all respects of trees constructed by the facility. For example, such trees may have a larger number of nodes, a larger depth, and/or a larger branching factor. Also, though not shown in this tree, a single attribute may be split multiple times, i.e., in multiple levels of the tree. FIG.7is a flow diagram showing steps typically performed by the facility in some embodiments in evaluating the efficacy of trees in the forest and assigning corresponding relative weights to the trees. Once a forest of trees has been constructed and trained with a first set of recent listings (a training set) as described above in connection withFIGS.3-6, the facility in step701accesses a distinct second set of listings (a test set) to gauge the accuracy of predictions of each tree in the forest. The facility loops through each tree in the forest in step702, typically initializing in step703a data structure such as a list or array for collecting error measures for the tree's listing price estimations for each home listing in the test set. In steps704-705, the facility loops through each home listing in the test set and for each home accesses the home's attribute values and actual listing price. In step706, the facility applies the home's attribute values to the tree in order to reach a leaf node of the tree corresponding to the home and an estimated listing price associated with that leaf node. 
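The construction of a tree in steps 304-310 and the traversal just described for tree 600 (which is also the application performed in step 706 below) can be illustrated with a minimal Python sketch. This is one possible implementation under simplifying assumptions: attributes are numeric or boolean, and a hypothetical choose_split callable stands in for the split-selection criterion described in the incorporated application.

    from statistics import mean, median

    class Node:
        def __init__(self, listings):
            self.listings = listings   # training-set listings represented by this node
            self.split = None          # (attribute, threshold) for a branch node
            self.children = None       # (low_child, high_child) for a branch node
            self.estimate = None       # estimated listing price for a leaf node

    def build_tree(listings, attributes, choose_split, min_leaf=2):
        """Recursively grow one listing-price-estimating tree (steps 304-310).
        choose_split is a hypothetical callable returning (attribute, threshold),
        or None when the node should not be split."""
        node = Node(listings)
        split = choose_split(listings, attributes) if len(listings) > min_leaf else None
        if split is not None:
            attr, threshold = split
            low = [r for r in listings if r[attr] <= threshold]
            high = [r for r in listings if r[attr] > threshold]
            if low and high:
                node.split = split
                node.children = (build_tree(low, attributes, choose_split, min_leaf),
                                 build_tree(high, attributes, choose_split, min_leaf))
                return node
        # Leaf: estimate the listing price from the represented listings (steps 308-309).
        node.estimate = mean(r["listing_price"] for r in listings)   # or median
        return node

    def apply_tree(node, home):
        """Traverse from the root to the leaf whose attribute ranges the home satisfies,
        as in the walk through tree 600 described above."""
        while node.children is not None:
            attr, threshold = node.split
            node = node.children[0] if home[attr] <= threshold else node.children[1]
        return node.estimate

    def apply_forest(trees, home):
        """One unweighted way to combine a forest: the median of the per-tree estimates."""
        return median(apply_tree(tree, home) for tree in trees)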
Steps705-706are the same steps the facility would use to apply a tree (such as tree600shown inFIG.6) to the attribute values of a distinguished home to obtain an estimated listing price for the home. In step707, the facility compares the estimated listing price for the home determined from the tree's leaf node with the actual listing price for the home accessed in step705. In some embodiments, the comparison determines the absolute value of the difference between the estimated listing price and the actual listing price, and calculates the magnitude of the estimation's error in relation to the actual listing price by dividing the difference by the actual listing price. In step708, the resulting error measure for the tree's listing price estimation for the home is added to the list of error measures for the tree, and in step709the process is repeated until error measures for the tree's estimations have been collected for each home in the test set. In step710, the facility obtains an overall error measure for the tree based on the collected error measures for the test set homes. In some embodiments, the overall error measure for the tree is determined by taking an average (e.g., the median value) of the individual error measures calculated from the tree's estimations for the homes in the test set. In step711, steps703-710are repeated for each tree in the forest, resulting in the facility assigning an overall error measure to each tree. In step712, the facility accords a relative weight to each tree that is inversely related to the overall error measure for the tree. In this manner, trees that provided more accurate listing price estimates over the test set may be attributed increased likelihood of producing correct estimates. In some embodiments, to determine a particular tree's weighting the facility generates an accuracy metric for each tree by subtracting its median error value from 1, and dividing the tree's accuracy measure by the sum of all of the trees' accuracy measures. In various embodiments, the facility uses a variety of different approaches to determine a rating that is negatively correlated with the tree's overall error measure. FIG.8is a table diagram showing sample results for testing a tree. Tree1testing table800tests tree600based upon the contents of recent listings table200. More particularly, testing is performed using recent listings that were not used to train the tree. The testing table is thus made up of rows203,204,205,206,207,210,212, and214of recent listings table200. It also contains the following columns from recent listings table200: identifier column221, address column222, bedrooms column224, view column227, and actual listing price column229. The testing table further contains an estimated listing price column811containing the estimated listing price of each home determined in steps706-707. For example, row214shows that the facility determines a listing price of $215,000 for listing 14 using tree600. To arrive at that determination, the facility begins at root node501; traverses to node603because the number of bedrooms 3 is greater than 2; traverses to node604because the value for view is “no;” and adopts the estimated listing price of node604, $215,000. Tree1testing table800further contains an error column812indicating the difference between each home's estimated listing price and actual listing price. 
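Building on the apply_tree sketch above, the error measure and weighting of steps 707-712 might be implemented roughly as follows; the listing_price field name is an illustrative assumption.

    from statistics import median

    def tree_errors(tree, test_set):
        """Relative error of the tree's estimate for each test-set listing (steps 704-709):
        absolute difference between estimate and actual price, divided by the actual price."""
        return [abs(apply_tree(tree, home) - home["listing_price"]) / home["listing_price"]
                for home in test_set]

    def tree_weights(trees, test_set):
        """Weight each tree inversely to its overall (median) error, per steps 710-712:
        accuracy = 1 - median error, then normalize by the sum of the accuracies."""
        accuracies = [1.0 - median(tree_errors(tree, test_set)) for tree in trees]
        total = sum(accuracies)
        return [a / total for a in accuracies]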
For example, row214shows an error of 0.2874, calculated as the absolute difference between estimated listing price $215,000 and actual listing price $167,000, divided by actual listing price $167,000. Associated with the table is a median error field851containing the median of error values in the testing table, or 0.1829. Each tree's median error value is used to determine weightings for the trees that are inversely related to their median error values. FIG.9is a flow diagram showing steps typically performed by the facility in some embodiments in order to apply a forest of trees to estimate a listing price for a distinguished home. In step901, the facility accesses the distinguished home's attribute values. In step902, the facility typically initializes a data structure such as a list or array for collecting listing price estimations from each tree in the forest. In steps903-907, the facility loops through each tree in the forest obtaining an estimated listing price for the distinguished home from each tree. In step904, the facility uses the home's attributes retrieved in step901to traverse the tree to a leaf node corresponding to the home's attributes. (If any attributes of the home are missing, the facility typically imputes a value for the missing attribute based upon the median or mode for that attribute in the recent listings table.) The application of a tree to a home in step904is performed in the same way that a tree is applied to a home in the testing process described above in connection withFIGS.7and8. In step905, the estimated listing price associated with the leaf node is weighted by the rating attributed by the facility to the tree. In some embodiments, the weight attributed to the tree in the testing process is already incorporated into the estimated listing price as part of the testing process. In some embodiments, weighting is applied when the estimated listing prices of the trees in the forest are combined. In step908, the facility determines an overall estimated listing price for the distinguished home by combining the accumulated weighted estimated listing prices obtained by applying each tree in the forest to the home's attribute values. In some embodiments, the weighted estimated listing price from each tree is averaged with the weighted estimated listing prices from the other trees of the forest, and the resultant average is presented as the overall estimated listing price for the home. FIG.10is a table diagram showing sample contents of a recent listings and sales table. The recent listings and sales table1000is made up of rows1001-1015, each representing a home listing and a corresponding sale that occurred in a recent period of time, such as the preceding six months. 
Each row is divided into the following columns: an identifier column1021containing an identifier for the listing and sale; an address column1022containing the address of the listed and sold home; a square foot column1023containing the floor area of the home; a bedrooms column1024containing the number of bedrooms in the home; a bathrooms column1025containing the number of bathrooms in the home; a floors column1026containing the number of floors in the home; a view column1027indicating whether the home has a view; a year column1028showing the year in which the home was constructed; a listing date column1029showing the date on which the home was listed for sale; a listing price column1030containing the listing price at which the home was listed; a sale date column1031showing the date on which the home was sold; and a selling price column1032containing the selling price at which the home was sold. For example, row1011indicates that for listing-and-sale ID number 11, the home at 87 Acme Boulevard, Williamsburg, VA 23185 having a floor area of 1480 square feet, 3 bedrooms, 2 bathrooms, 2 floors, a view, built in 2002, was listed for sale at $140,000 on Apr. 3, 2012, and sold for $133,000 on Jun. 27, 2012. Though the contents of recent listings and sales table1000are included to present a comprehensible example, those skilled in the art will appreciate that the facility can use a recent listings and sales table having columns corresponding to different and/or a larger number of attributes, as well as a larger number of rows. Attributes that may be used include, for example, construction materials, cooling technology, structure type, fireplace type, parking structure, driveway, heating technology, swimming pool type, roofing material, occupancy type, home design type, view type, view quality, lot size and dimensions, number of rooms, number of stories, school district, longitude and latitude, neighborhood or subdivision, tax assessment, attic and other storage, etc. For a variety of reasons, certain values may be omitted from the recent listings and sales table. In some embodiments, the facility imputes missing values using the median value in the same column for continuous variables, or the mode (i.e., most frequent) value for categorical values. FIGS.11A-11Care a flow diagram showing steps typically performed by the facility in some embodiments in order to prepare and weight a forest of valuation-estimating decision trees.FIG.11Ais a flow diagram showing a broad outline of the steps performed in building a forest of trained, weighted decision trees that use home attributes including listing prices to generate home valuations. In step1101, the facility accesses recent listings and sales of homes in a geographic area, comprising home attribute values, listing transactions, and sale transactions. An example of such data is provided in recent listings and sales table1000inFIG.10. In some embodiments, accessing recent listings and sales includes filtering the data to exclude bad data or outlier data. In some embodiments, portions of the data used to train the trees are listings data for homes that have been listed for sale, for which synthetic sale prices have been generated as discussed in greater detail below in connection withFIGS.12and14. 
In step1102, the facility divides the listing and sale transactions into two distinct sets: a first set of home listings and sales data for training a valuation model (a training set) and a second, distinct set of home listings and sales data for testing and weighting the valuation model (a test set). In step1103, the facility trains, using the training set, a forest of decision trees to estimate home valuations from the homes' attribute values and listing prices. Step1103is discussed in greater detail below in connection withFIG.11B. In step1104, the facility tests, using the test set, the accuracy of the decision trees' estimations and assigns weights to the trees of the forest in order to improve the quality of home valuation estimates. Step1104is discussed in greater detail below in connection withFIG.11C. FIG.11Bis a flow diagram showing steps typically performed by the facility in some embodiments in order to create and train a forest of decision trees to estimate home valuations from home attribute values and listing prices. In steps1110-1115, the facility constructs and trains a number n of trees, such as 100. This number is configurable, with larger numbers typically yielding better results but requiring the application of greater computing resources. In step1111, the facility constructs a new tree (i.e., a root node). In step1112, the facility selects a subset of the attributes in the training set home listing and sale data, including listing price, and identifies the sale price, as a basis for training the tree. In step1113, the facility fully constructs (i.e., trains) the tree to classify the training set home data using the subset of attributes including listing price selected in step1112, resulting in a trained tree that can be used to estimate a home valuation from home attributes including a listing price. (The process of creating and training a home valuation-estimating decision tree is analogous to the process of creating and training a home listing-price-estimating decision tree described above in connection withFIG.3.) Once the tree has been fully constructed, each leaf node represents a range of home attribute values including listing prices, such that each home in the training set corresponds to exactly one leaf node. In step1114, the facility stores, in association with the leaf nodes, the sale prices of the training set homes that correspond to the attribute value ranges of each leaf node. The facility after step1115has created a forest of n trained but un-tested and non-weighted decision trees. FIG.11Cis a flow diagram showing steps typically performed by the facility in some embodiments in testing and assigning relative weight to the trees of the forest created and trained as described in connection withFIG.11B. (The process of testing and weighting a forest of home valuation-estimating decision trees is analogous to the process of testing and weighting a forest of home listing-price-estimating decision trees described above in connection withFIG.7.) In step1120, the facility iterates through each tree in the forest, performing steps1121-1127for each tree. In step1121, the facility loops through each home listing and sale entry in the test set, and accesses the home's attribute values including listing price, and its sale price. In step1122, the facility applies the home's attribute values to the tree, traversing the tree to a leaf node corresponding to the home's attribute values and its listing price. 
In step1123, the facility generates an estimated home valuation associated with that leaf node. (Steps1122-1123are the same steps the facility would use to apply a home valuation-estimating tree to the attribute values and listing price of a distinguished home to obtain a valuation for the home, as discussed in further detail below in connection withFIG.12.) In step1124, the facility compares the estimated valuation for the home as generated in step1123with the sale price for the home contained in the test set data, and determines an error measure (e.g., the absolute difference divided by the sale price) for the estimation by that tree for that home. In step1125, the facility performs the same steps for each home listing and sale entry in the test set, recording the error measures for each home for that tree. In step1126, the facility obtains an overall error measure for the tree based on the collected error measures for the test set homes. In step1127, the facility attributes a weight to the tree inversely related to the tree's overall error measure. In step1128, the facility repeats steps1121-1127for each tree, resulting in a forest of trained, weighted decision trees that use a home's attributes and listing price to generate a home valuation. FIG.12is a flow diagram showing steps typically performed by the facility in some embodiments in order to apply a forest of trees to generate a synthetic sale price for a home. In step1201, the facility accesses a home listing transaction including home attribute values and a listing price for a distinguished home. In step1202, the facility initializes a data structure such as a list or array for collecting synthetic sale price estimations from each tree in the forest. In steps1203-1206, the facility iterates through each tree in a forest of decision trees that use home attributes and a listing price to generate a home valuation. In step1204, the facility applies a tree to the home's attribute values and listing price, traversing the edges of the tree graph to reach the leaf node whose range of encompassed attribute values and listing prices corresponds to the home's attribute values and listing price. In step1205, the valuation or selling prices associated with that leaf node are added to the data structure that was initialized in step1202for collecting sale price estimations. After applying each tree in the forest to the distinguished home in step1206, the data structure has collected valuations for the home from each tree. In step1207, the facility generates a synthetic sale price for the distinguished home based on the collected valuations. In some embodiments, the home's overall synthetic sale price is generated by identifying the median element in the list of synthetic sale prices generated by the trees of the valuation-estimating decision tree forest. FIG.13is a table diagram showing sample contents of a recent listings table including synthetic sale prices. The recent listings and sales table1300is made up of rows1301-1315, each representing a home listing that occurred in a recent period of time, such as the preceding six months, and a corresponding synthetic sale price. 
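Reusing the apply_tree sketch above, the generation of a synthetic sale price in FIG. 12 might look like the following; for simplicity each tree is assumed to contribute a single valuation rather than a set of selling prices, which is an illustrative simplification.

    from statistics import median

    def synthetic_sale_price(valuation_trees, listing):
        """Generate a synthetic sale price for a listed but unsold home: apply each
        valuation tree to the home's attributes and listing price, collect the
        resulting valuations, and take the median of the collection."""
        valuations = [apply_tree(tree, listing) for tree in valuation_trees]
        return median(valuations)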
Each row is divided into the following columns: an identifier column1321containing an identifier for the listing and synthetic sale; an address column1322containing the address of the listed home; a square foot column1323containing the floor area of the home; a bedrooms column1324containing the number of bedrooms in the home; a bathrooms column1325containing the number of bathrooms in the home; a floors column1326containing the number of floors in the home; a view column1327indicating whether the home has a view; a year column1328showing the year in which the home was constructed; a listing price column1329containing the listing price at which the home was listed; a date column1330showing the date on which the home was listed for sale; and a synthetic sale price column1331containing the synthetic sale price generated for the home. For example, row1306indicates that for listing number 6, the home at 1135 Eighth Avenue North, Williamsburg, VA 23185 having a floor area of 2300 square feet, 2 bedrooms, 2 bathrooms, 1 floor, no view, built in 1966, was listed for sale at $239,000 on Feb. 22, 2012, and was accorded a synthetic sale price of $232,000. Though the contents of recent listings and synthetic sales table1300are included to present a comprehensible example, those skilled in the art will appreciate that the facility can use a recent listings and synthetic sales table having columns corresponding to different and/or a larger number of attributes, as well as a larger number of rows. For a variety of reasons, certain values may be omitted from the recent listings and sales table. In some embodiments, the facility imputes missing values using the median value in the same column for continuous variables, or the mode (i.e., most frequent) value for categorical values. FIG.14is a data flow diagram showing a typical process used by the facility in some embodiments to train and/or test a home valuation model using data from both actual sale transactions and synthetic sale transactions generated by a listing price adjustment model. Listing transactions1401are provided to a listing price adjustment model1402, which uses the data to generate synthetic sale transactions1403. Both synthetic sale transactions1403and actual sale transactions1404are used to train and/or test a valuation model1405. The valuation model1405is then able to produce valuations for homes based in part on synthetic sale data. FIG.15is a data flow diagram showing a typical process used by the facility in some embodiments to apply a complex valuation model to value a home. A home attributes store1501is shown, from which attributes1502of a home are provided to various valuation models1503that produce valuations1505. Among the valuation models1503in some embodiments is a valuation model trained and/or tested on synthetic sale data. The home attributes1502are also provided from the home attributes store1501to a listing price model1504, which produces a listing price1506. The home attributes1502are also provided from the home attributes store1501to a meta model1507, which uses the home attributes1502in determining how to combine valuation inputs1505from various valuation models1503and listing price1506from listing price model1504. The meta model applies various techniques such as input weighting, bias correction, data smoothing, and confidence interval estimation in producing an overall valuation1508. Further details describing steps typically performed by the facility in connection with a meta model may be found in U.S. 
patent application Ser. No. 13/417,804, entitled “Automatically Determining a Current Value for a Home,” filed Mar. 12, 2012, which is fully incorporated herein by reference. FIG.16is a display diagram showing a way in which information about an individual home including a valuation generated by the facility may be presented. The display1600includes information1601about the home. Despite the fact that the home has not been sold recently, the facility also displays a valuation1602and a confidence interval of valuation estimates1603for the home, enabling prospective buyers and listing agents to gauge their interest in the home, or permitting the home's owner to gauge his or her interest in listing the home for sale. FIG.17is a table diagram showing sample contents of a recent listings history table. The recent listings table1700is made up of rows1701-1705, each representing a home listing that was active during a recent period of time, such as the preceding six months. Each row is divided into the following columns: an identifier column1721containing an identifier (e.g., the multiple listing service (MLS) listing number) for the listing; an address column1722containing the address of the listed home; a listing price column1723containing the listing price at which the home was listed; a listing date column1724showing the date on which the home was listed; a listing end date column1725showing the date on which the home listing ended; a days on market column1726showing the duration of the listing (i.e., the length of time from the listing date to the end date [or to the current date, for an active listing]); an end reason column1727containing a classification of the reason that the listing ended; and a relisting/sale price column1728showing the price at which the listed home was sold or relisted, where applicable. For example, row1701indicates that listing number 1, of the home at 15 W High Drive, Spokane, WA 99203, was for $800,000, started on Apr. 5, 2012, and ended on Aug. 16, 2012, after 133 days on market, when it was relisted at a lower price of $710,000. Row1703indicates that listing number 3, of the same home for $710,000, began on Aug. 16, 2012, and ended after 55 days with a sale for $695,000 on Oct. 10, 2012. Row1705represents a recent, active listing. In some embodiments, the facility excludes active listings listed more recently than some minimum threshold (e.g., within the past two months) from home listing event training or testing data sets. For training or testing data purposes, the facility may treat a pending sale as a sale. In some embodiments, the facility applies survival analysis to non-excluded active listings. Though the contents of recent listings table1700are included to present a comprehensible example, those skilled in the art will appreciate that the facility can use a recent listings history table having columns corresponding to different and/or a larger number of data categories (e.g., a cross-reference to a data table containing home attribute values), as well as a larger number of rows. FIGS.18A-18Care flow diagrams showing steps typically performed by the facility in some embodiments to train and test a forest of time-on-market-estimating decision trees.FIG.18Ais a flow diagram showing a broad outline of the steps performed by the facility to build a model (here, a forest of trained, weighted decision trees) that uses home attributes including listing prices to generate time on market estimations.
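Before turning to the steps of FIG. 18A, the days-on-market computation used for column 1726 might be sketched as follows; the example reproduces the 133-day duration of row 1701.

    from datetime import date

    def days_on_market(listing_date, end_date=None):
        """Duration of a listing: listing date to the end date, or to the current
        date for a listing that is still active."""
        return ((end_date or date.today()) - listing_date).days

    # e.g. days_on_market(date(2012, 4, 5), date(2012, 8, 16)) == 133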
In step1801, the facility accesses recent listings of homes in a geographic area, comprising home attribute values and listing transactions. An example of such listing transaction data is provided in recent listings history table1700inFIG.17. (Home attribute data, e.g., a separate table of attribute values for each listed home, is not shown here. Attributes may include synthetic home attributes, e.g., an ascribed valuation, imputed home information, previous listing price and duration information, or cumulative days on the market over a series of listings.) In some embodiments, accessing recent listings includes filtering the data to exclude bad data or outlier data, or, e.g. (as described above in connection withFIG.17), recent active listing transactions. In step1802, the facility divides the listing transactions into two distinct sets: a first set of home listing data for training a time on market estimation model (a training set) and a second, distinct set of home listing data for testing and weighting the time on market estimation model (a test set). In step1803, the facility trains, using the training set, a forest of decision trees to produce time on market estimates from the homes' attribute values and listing prices. Step1803is discussed in greater detail below in connection withFIG.18B. In step1804, the facility tests, using the test set, the accuracy of the decision trees' estimations and assigns weights to the trees of the forest in order to improve the quality of time on market estimates. Step1804is discussed in greater detail below in connection withFIG.18C. (It will be appreciated by those skilled in the art that the same steps described inFIGS.18A-18Cmay be used to train and test a forest of decision trees to predict, for homes in a geographic area with home attributes including listing price, their likelihood of sale within a given time period.) FIG.18Bis a flow diagram showing steps typically performed by the facility in some embodiments in order to create and train a forest of decision trees to estimate time on market from home attribute values and listing prices. In steps1810-1815, the facility constructs and trains a number n of trees, such as 100. This number is configurable, with larger numbers typically yielding better results but requiring the application of greater computing resources. In step1811, the facility constructs a new tree (i.e., a root node). In step1812, the facility selects a subset of the attributes in the training set home listing data, including listing price, and identifies the length of time each listing was on the market and whether the result was a sale, as a basis for training the tree. In step1813, the facility fully constructs (i.e., trains) the tree to classify the training set home data using the subset of attributes including listing price selected in step1812, resulting in a trained tree that can be used to estimate the time a home might remain unsold on the market from home attributes including a listing price. (The process of creating and training a time-on-market-estimating decision tree is analogous to the process of creating and training a home listing-price-estimating decision tree described above in connection withFIG.3, and to the process of creating and training a home valuation-estimating decision tree described above in connection withFIG.11B.) 
Once the tree has been fully constructed, each leaf node represents a range of home attribute values including listing prices, such that each home in the training set corresponds to exactly one leaf node. In step1814, the facility stores, in association with the leaf nodes, the times on market and the listing end results (e.g., a sale at some price, relisting, or withdrawal from the market) of each of the training set homes that correspond to the attribute value range of each leaf node. The facility after step1815has created a forest of n trained but un-tested and non-weighted decision trees. FIG.18Cis a flow diagram showing steps typically performed by the facility in some embodiments in testing and assigning relative weight to the trees of the forest created and trained as described in connection withFIG.18B. (The process of testing and weighting a forest of time-on-market-estimating decision trees is analogous to the process of testing and weighting a forest of home listing-price-estimating decision trees described above in connection withFIG.7, and to the process of testing and weighting a forest of home valuation-estimating decision trees described above in connection withFIG.11C.) In step1820, the facility iterates through each tree in the forest, performing steps1821-1827for each tree. In step1821, the facility loops through each home listing entry in the test set, and accesses the home's attribute values including listing price, and its time on market and what the end result of the listing was (e.g., a sale at some price, relisting, or withdrawal from the market). In step1822, the facility applies the home's attribute values to the tree, traversing the tree to a leaf node corresponding to the home's attribute values including its listing price. In step1823, the facility generates a time on market estimate associated with that leaf node. (Steps1822-1823are the same steps the facility would use to apply a time-on-market-estimating tree to the attribute values and listing price of a distinguished home to obtain a time on market estimation for the home, as discussed in further detail below in connection withFIG.19.) In step1824, the facility compares the time on market estimate for the home as generated in step1823with the actual time on market for the home contained in the test set data, and determines an error measure (e.g., the absolute difference divided by the actual time on market) for the estimation by that tree for that home. In step1825, the facility performs the same steps for each home listing and sale entry in the test set, recording the error measures for each home for that tree. In step1826, the facility obtains an overall error measure for the tree based on the collected error measures for the test set homes. In step1827, the facility attributes a weight to the tree inversely related to the tree's overall error measure. In step1828, the facility repeats steps1821-1827for each tree, resulting in a forest of trained, weighted decision trees that use a home's attributes and listing price to generate a time on market estimate, i.e., an estimate of the number of days that the home will remain unsold when listed for sale at the listing price in the market on which the model was based. FIG.19is a flow diagram showing steps typically performed by the facility to display a graph of estimated time on market for a home at various listing prices. In step1901, the facility accesses the home's attribute values. 
The facility determines a valuation for the home, e.g., by applying a valuation model as discussed in greater detail above, in step1902. In step1903, the facility generates a set of listing prices. A user may specify one or more listing prices, or the facility may choose prices based on various criteria (e.g., likely listing prices, based on the home's attribute values, the valuation, and the market, or based on a listing price estimation model). The listing prices need not be evenly distributed around the valuation and may not be based on the valuation. In step1904, the facility initializes a data structure such as a list or array for collecting estimated time on market predictions. In steps1905-1908, the facility iterates through each listing price in the set of generated listing prices and determines an estimated time on market for each listing price. In step1906, the facility applies a model applicable to the home's attribute values, geographic area, and market conditions (for example, a model trained and tested as described above in connection withFIGS.18A-18C) to estimate a length of time the home would remain on the market if listed for sale at the selected listing price. In step1907, the model's prediction is added to the data structure that was initialized in step1904. In steps1909-1910, the facility generates and displays a graph plotting the expected length of time the home would remain listed on the market for each selected listing price. FIG.20is a display diagram showing a line graph of probability of sale for different listing prices for a particular home. The graph2000has a horizontal axis (x-axis)2001labeled “Listing price” that displays a scale of listing prices. In this example, the displayed range of listing prices is approximately $170,000-$230,000. The facility provides a user control to modify the scale or range of displayed listing prices (e.g., with a zoom control, a slider, or a gesture on a touch screen) to focus on a portion of the graph or to see a wider range of values. The graph2000also has a vertical axis (y-axis)2002labeled “Probability of sale within one month” that displays a scale of percentages. In this example, the displayed range of percentages is the full range of 0%-100%. Just as with the horizontal axis, the facility provides a user control to display a portion of the range, either together with or independent of the other axis; it also provides a user control to vary the period (e.g., to show probability of sale within one week, or within 90 days). A segmented line2010plots data points (e.g., a point2011illustrating a sale probability of approximately 55% for the particular home at a listing price of $200,000, as indicated by the dashed lines2013) connected by line segments (e.g., segment2012) illustrating probability trends between data points. In some embodiments, the facility determines the probability of sale of the subject home during the specified time for each of the listing prices shown inFIG.20by: (1) for each tree of the forest, (a) selecting the leaf node corresponding to the home's attributes and listing price, and (b) determining the percentage of sale transactions assigned to the selected leaf that sold in no greater than the specified amount of time; and (2) aggregating these percentages across the trees of the forest, where the aggregation is weighted using the efficacy weight assigned to each tree.
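One possible implementation of the two-step procedure above, as a rough Python sketch; the sale_days field assumed on each leaf (the days on market of the sale transactions assigned to that leaf) is an illustrative representation, not a structure described by the facility.

    def find_leaf(node, home):
        """Same traversal as apply_tree above, but returns the leaf node itself."""
        while node.children is not None:
            attr, threshold = node.split
            node = node.children[0] if home[attr] <= threshold else node.children[1]
        return node

    def probability_of_sale(trees, weights, home_with_price, max_days):
        """Weighted probability that the home sells within max_days at a given
        listing price: for each tree, take the share of the selected leaf's sale
        transactions that sold within max_days, then combine the shares using the
        efficacy weights (assumed to sum to 1)."""
        total = 0.0
        for tree, weight in zip(trees, weights):
            leaf = find_leaf(tree, home_with_price)
            share_sold = sum(1 for d in leaf.sale_days if d <= max_days) / len(leaf.sale_days)
            total += weight * share_sold
        return total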
In some embodiments, the facility trains and applies a specialized probability of sale model to obtain the probability shown inFIG.20. FIG.21is a display diagram showing a graph of time on market for different listing prices. The graph2100has a horizontal axis (x-axis)2101labeled “Listing price” that displays a scale of listing prices. In this example, the displayed range of listing prices is approximately $170,000-$230,000. As described above in connection withFIG.20, the facility provides a user control to modify the scale or range of displayed listing prices. The graph2100also has a vertical axis (y-axis)2102labeled “Estimated time on market before sale” that displays a scale of days. In this example, the displayed range of days is 0 days-28 days. Just as with the horizontal axis, the facility provides a user control to zoom in or out to display a smaller or larger portion of the range, either together with or independent of the other axis. A confidence band line2110plots data points (e.g., a point2111illustrating an expected time to sale of approximately 14 days for the particular home at a listing price of $200,000) with vertical confidence bars above and below the data points (e.g., illustrating that the home might be expected to sell between 11 and 17 days from its listing date) and line segments (e.g., segment2112) illustrating confidence bands. FIG.22is a display diagram showing a dual-scaled combined graph of both probability of sale and time on market for different listing prices. The graph2200has a horizontal axis (x-axis)2201labeled “Listing price” that displays a scale of listing prices. In this example, the displayed range of listing prices is approximately $170,000-$230,000. As described above in connection withFIG.20, the facility provides a user control over the displayed scale or range of displayed listing prices. The graph2200also has a left vertical axis (left y-axis)2202labeled “Probability of sale within 60d” that displays a percentage scale. In this example, the displayed percentage range is the full range of 0%-100%. The graph2200also has a right vertical axis (right y-axis)2203labeled “Estimated time on market before sale” that displays a scale of days. In this example, the displayed range of days is from 0 days to approximately 70 days. Just as with the horizontal axis, the facility provides a user control to zoom in to display a portion of either vertical range, together with or independent of one or both of the other axes. A line2210plots a smooth curve illustrating sale probabilities for the particular home at various listing prices, in relation to left vertical axis2202. Another line2220plots a smooth curve illustrating estimated days on market for the particular home at various listing prices, in relation to right vertical axis2203. A vertical line2230indicates, for a listing price equal to a home valuation2231, an estimated 75% likelihood of sale within sixty days 2232 and an estimated 22-day time on market2233. Bars2242and2243illustrate, for a different listing price2241, an estimated 38% likelihood of sale within sixty days 2242 and an estimated 45-day time on market2243. In some embodiments, the facility performs bucketizing and/or other kinds of smoothing to remove artifacts from the graphs before it displays them. In some embodiments, the facility separately analyzes and determines trends in the graph that occur above and below a listing price corresponding to an automatically-determined estimate of the subject home's value. 
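A dual-scaled graph of this kind can be produced with a plotting library such as matplotlib; the sketch below uses placeholder values rather than the data shown in FIG. 22.

    import matplotlib.pyplot as plt

    # Placeholder curves standing in for model output; not the values in FIG. 22.
    prices = [170000, 185000, 200000, 215000, 230000]
    prob_sale_60d = [0.95, 0.85, 0.65, 0.45, 0.30]
    est_days_on_market = [10, 18, 28, 45, 62]

    fig, ax_left = plt.subplots()
    ax_right = ax_left.twinx()                 # second y-axis sharing the same x-axis
    ax_left.plot(prices, prob_sale_60d)        # probability curve against the left axis
    ax_right.plot(prices, est_days_on_market, linestyle="--")  # days curve, right axis
    ax_left.set_xlabel("Listing price")
    ax_left.set_ylabel("Probability of sale within 60d")
    ax_right.set_ylabel("Estimated time on market before sale (days)")
    plt.show()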
In some embodiments, rather than using a monolithic random forest model to predict time on market and/or likelihood of sale based on home attributes and listing price, the facility uses a compound model made up of two constituent models: (1) a random forest that predicts the probability of sale or time on the market at each home's automatically-estimated current value, and (2) an adjustment model that predicts the degree of variation from the results produced by the random forest constituent model based upon the ratio of home listing price to estimated value. In various embodiments, the facility uses an adjustment model of various types, such as a linear regression model or a K-nearest neighbor model. For example, in some embodiments using a K-nearest neighbor adjustment model, the facility (1) collects the sale transactions in a relevant geographic area, such as a county, during a relevant time period, such as the last year; (2) for each sale transaction in the collection, computes the ratio of listing price to the home's estimated value; (3) discards sale transactions from the collection whose computed ratios identify them as outliers, such as the sale transactions having the top and bottom 5% of ratios, sale transactions whose ratios are more than a threshold distance from an aggregate of the ratios such as the mean or median, etc.; (4) among the remaining sale transactions in the collection, selects those whose home attributes are the most similar to those of the subject home, including such attributes as, for example, number of bedrooms, number of bathrooms, latitude and longitude, assessed value, etc.; and (5) determines an adjustment factor on the basis of these nearest neighbors. In some embodiments, the facility uses a number of nearest neighbors between 25 and 100. In some embodiments, rather than selecting nearest neighbors for the subject home, the facility uses all of the undiscarded sale transactions in the geographic area. In some embodiments, the facility uses home estimated value tiers to determine the adjustment factor, such as tiers comprising the top, middle, and bottom third of automatically-estimated values within the geographic area. In some embodiments, the facility combines all of the homes in the geographic area into a single tier. CONCLUSION It will be appreciated by those skilled in the art that the above-described facility may be straightforwardly adapted or extended in various ways. For example, the facility may use a wide variety of modeling techniques, house attributes, and/or data sources. The facility may display or otherwise present its valuations in a variety of ways. While the foregoing description makes reference to particular embodiments, the scope of the invention is defined solely by the claims that follow and the elements recited therein.
70,263
11861748
DETAILED DESCRIPTION The inventors have recognized that the conventional approaches to valuing homes have significant disadvantages. For instance, treating the most recent sale price of a home as its value has the disadvantage that the home's current value can quickly diverge from its sale price. Accordingly, the sale price approach to valuing a home tends to be accurate for only a short period after the sale occurs. For that reason, at any given time, only a small percentage of homes can be accurately valued using the sale price approach. Further, a home may be purchased by a buyer who values the home much more greatly than any other interested buyer. Because such a high-valuing buyer no longer exists in the market after the only one has purchased the home, in some or all of these cases, the sale price immediately overvalues the home. The appraisal approach, in turn, has the disadvantage that its accuracy can be adversely affected by the subjectivity involved. Also, appraisals can be expensive, can take days or weeks to complete, and often require physical access to the home by the appraiser. The statistical modeling approach has the disadvantage that it often fails to accurately account for valuation trends affecting regions of different sizes that contain the home. In view of the shortcomings of conventional approaches to valuing homes discussed above, the inventors have recognized that a new approach to automatically valuing homes that more accurately accounts for valuation trends affecting regions of various sizes that contain the home would have significant utility. A software and/or hardware facility for automatically determining a current value for a home (“the facility”) using geographic regions of varying granularity is described. In some embodiments, the facility constructs, trains, and applies a home valuation model having independent variables that reflect an encoded version of the geographic location of the home. In some embodiments, this encoded version of location is a geohash. (See en.wikipedia.org/wiki/Geohash#Web_site_geohash.org; and geohash.org; and Sahr, Kevin, Denis White, and A. Jon Kimerling. “Geodesic discrete global grid systems.” Cartography and Geographic Information Science 30.2 (2003): 121-134, for more information about geohashes, each of which is hereby incorporated by reference in its entirety.) In some embodiments, the encoded version of location used by the facility is a more traditional version of latitude and longitude. In some embodiments, a range of different granularities of encoded location is used, such as by discarding a least-significant digit (binary, decimal, or hex) or other character of each component of the encoded version to obtain each next-less-granular level of the encoded version. In some embodiments, the model uses independent variables that specify, at each of one or more levels of granularity, the subject home's location and the eight nearest neighbors of that location at the same level of granularity. In some embodiments, the model uses independent variables that specify, at each of one or more levels of granularity, an aggregate of some home attribute across all of the homes in the same location at that granularity.
For example, for a particular home, there might be independent variables whose values are obtained by (1) aggregating square footage across the 11 homes in the same location at the highest level of granularity; (2) aggregating square footage across the 88 homes in the same location at the next-lower level of granularity; (3) aggregating square footage across the 267 homes in the same location at the next-lower level of granularity; etc. In various embodiments, the facility performs these aggregations using different combinations of one or more aggregation functions, including, for example, median, mean, mode, minimum, maximum, sum, count, distinct count, range, variance, etc. In some embodiments, the geographic regions are arranged in a discrete global grid, such as a hierarchical regular or semi-regular grid, as described in en.wikipedia.org/wiki/Discrete_global_grid, which is hereby incorporated by reference in its entirety. In various embodiments, the facility uses any of the following: ISEA Discrete Global Grids; COBE—Quadrilateralized Spherical cube; Quaternary Triangular Mesh; Hierarchical Equal Area isoLatitude Pixelization; Hierarchical Triangular Mesh; S2/S2Region; S2/S2LatLng; S2/S2CellId. In various embodiments, the regions are square; non-square rectangles; regular triangles; non-regular triangles; and circles. In some embodiments, regions of adjacent sizes containing the same location are concentric, while in others, they are non-concentric. In some embodiments, the regions are defined the same for every home, such that, if two homes are in the same region, their regional independent variables have the same values at the region size of that region. In some embodiments, the regions are defined based on the location of one or more homes, such that a first home may be included in the aggregate regional independent variables for a second home, but these two homes have different sets of homes included in their regional aggregate independent variables for this region size, as each of these two homes defines a different region of this region size. In some embodiments, regions of a certain size have the same or similar area, and/or the same or similar dimensions, and may contain numbers of homes that vary significantly. In some embodiments, regions of a certain size contain the same or similar number of homes, and may have areas and/or dimensions that vary significantly. By performing in some or all of the ways described above, the facility generates valuations for homes that are often more accurate than those generated by conventional techniques. FIG.1is a block diagram showing some of the components typically incorporated in at least some of the computer systems and other devices on which the facility operates. In various embodiments, these computer systems and other devices100can include server computer systems, desktop computer systems, laptop computer systems, netbooks, mobile phones, personal digital assistants, televisions, cameras, automobile computers, electronic media players, etc.
In various embodiments, the computer systems and devices include zero or more of each of the following: a central processing unit (“CPU”)101for executing computer programs; a computer memory102for storing programs and data while they are being used, including the facility and associated data, an operating system including a kernel, and device drivers; a persistent storage device103, such as a hard drive or flash drive for persistently storing programs and data; a computer-readable media drive104, such as a floppy, CD-ROM, DVD, or Blu-ray drive, for reading programs and data stored on a computer-readable medium; and a network connection105for connecting the computer system to other computer systems to send and/or receive data, such as via the Internet or another network and its networking hardware, such as switches, routers, repeaters, electrical cables and optical fibers, light emitters and receivers, radio transmitters and receivers, and the like. In various embodiments, the computer systems are owned and operated by the operator of the facility; owned by the operator of the facility but operated by a third party; hardware dedicated to the operator of the facility and owned and operated by a third party; and/or owned and operated by a third party on behalf of the operator of the facility and other unrelated tenants. In various embodiments, the facility executes on these computing systems directly, and/or via one or more layers of virtualization. While computer systems configured as described above are typically used to support the operation of the facility, those skilled in the art will appreciate that the facility may be implemented using devices of various types and configurations, and having various components. FIG.2is a flow diagram showing a process performed by the facility in some embodiments to establish a trained home valuation model. Those skilled in the art will appreciate that the acts shown inFIG.2and in each of the flow diagrams discussed below may be altered in a variety of ways. For example, the order of the acts may be rearranged; some acts may be performed in parallel; shown acts may be omitted, or other acts may be included; a shown act may be divided into subacts, or multiple shown acts may be combined into a single act, etc. In acts201-204, the facility loops through each of a plurality of homes for which a selling price is available. In some embodiments, these homes include those recently sold, such as those sold within the last six months, or a similar period of time. In some embodiments, these homes include homes that are the subject of synthetic sale transactions determined based on a variety of other market data. In act202, the facility establishes independent variables for the home. Details of act202are shown inFIG.3. FIG.3is a flow diagram showing a process performed by the facility in some embodiments to establish independent variables for a home. In act301, a facility accesses home attributes for the home that include a geographic location of the home, such as a latitude/longitude (“lat/long”) pair, or a geohash. In various embodiments, these attributes also include, for example, number of square feet, year constructed, roof type, view type, number of bedrooms, number of bathrooms, heating type, parking type, lot size, etc. In act302, the facility selects at least a portion of the accessed home attributes as independent variables. In acts303-306, the facility loops through each of a range of region sizes. 
Table 1 below shows some of the region sizes used by the facility in some embodiments.

TABLE 1
region size    geohash encoding level    typical number of homes
1              7                         12
2              6                         144
3              5                         2,613
4              4                         25,563
5              3                         92,957

For example, in some embodiments, the facility may in act303use a range of region sizes from 1-4. These are defined using geohash encoding levels between 7 and 4, and produce regions whose typical number of homes ranges from 12 for region size 1 to 25,563 for region size 4. In act304, the facility identifies a region of the present region size that contains the home's geographic location accessed in act301. In some embodiments, the facility does this for region size 1 by calculating a level-7 geohash using the home's latitude/longitude pair. For example, for a home at latitude 41.7611 and longitude 88.3198, the facility calculates tzqv9f2 as the geohash region id of the size-1 region containing the home's geographic location. For subsequent, larger region sizes, the facility begins with the geohash for the next-smaller region size, and removes the least significant character from the right end of this geohash. Compare the geohash region ids for regions415,410, and520shown below in Table 2 for a particular example. In some embodiments, rather than representing the region ids of regions as geohashes, the facility represents them as latitude/longitude pairs. In the above example, the facility would determine the size-1 region as (41.7611, 88.3198), i.e., the rectangle bounded by the points (41.76110, 88.31980) and (41.76119, 88.31989); determine the size-2 region as (41.761, 88.319), i.e., the rectangle bounded by the points (41.7610, 88.3190) and (41.7619, 88.3199); etc. FIG.4is a map diagram showing an example of identifying a region containing the home's geographic location in accordance with step304. In the map400, a home is located at geographic location401. In act304, the facility identifies region415as a region of region size 1 that contains the home. In a subsequent iteration of the loop between acts303and306for the next-larger region size, the facility identifies region410as a region of region size 2 that contains the home. FIG.5is a map diagram showing a region of a larger region size identified by the facility. In map500, the facility identifies region520as a region of region size 3 that contains the geographic location401of the home. Returning toFIG.3, in act305, the facility creates one or more independent variables based on the region identified in act304. FIG.6is a flow diagram showing a process performed by the facility in some embodiments to create one or more independent variables based on an identified region in accordance with act305. In act601, the facility creates one or more independent variables that contain identifiers for the identified region and one or more regions of the same region size that are near the identified region. For example, with reference toFIG.4, where the identified region is region415, in some embodiments, the facility creates an independent variable for each of regions411-419. The geohash encoding of these listed regions, as well as the other regions shown inFIGS.4and5, appears below in Table 2.
TABLE 2
region size    region    geohash region id
1              411       tzqv9dx
1              412       tzqv9f8
1              413       tzqv9f9
1              414       tzqv9dr
1              415       tzqv9f2
1              416       tzqv9f3
1              417       tzqv9dp
1              418       tzqv9f0
1              419       tzqv9f1
2              410       tzqv9f
3              520       tzqv9

FIG.7is a table diagram showing sample contents of a region id independent variable table in which the facility in some embodiments stores independent variables it creates for a particular home containing region identifiers. Here, the home for which the contents of the region id independent variable table are shown is the one at geographic location401. The region id independent variable table700is made up of rows, such as rows701-704, each of which contains independent variables created for this home containing region identifiers for regions of a different region size. Each row is made up of the following columns: a region size column721containing the region size to which the row corresponds; and region id columns722-730, each of which contains a geohash encoding region id for a region of the size to which the row corresponds for which an independent variable is created for the home in question. In particular, column722contains the region id for the region of the size to which the row corresponds that contains the home, while columns723-730contain region ids for the eight surrounding regions of the same size. For example, in row701, corresponding to region size 1, one of the created independent variables, contained by column722, is the region id of region415, which is of region size 1 and contains the geographic location of the home, “tzqv9f2”. In the same row, column723contains the region of size 1 directly to the north of the region containing the home's geographic location, region412shown inFIG.4, “tzqv9f8”. Box790shown inFIG.7identifies the independent variables created by the facility in act601in this example. WhileFIG.7and each of the table diagrams discussed below show a table whose contents and organization are designed to make them more comprehensible by a human reader, those skilled in the art will appreciate that actual data structures used by the facility to store this information may differ from the table shown, in that they, for example, may be organized in a different manner; may contain more or less information than shown; may be compressed and/or encrypted; may contain a much larger number of rows than shown, etc. Returning toFIG.6, in acts602-605, the facility loops through each of one or more home attributes to be aggregated across regions of different sizes that contain the home's geographic location. In one example, in act602, the facility loops through the following home attributes: square feet, tax assessment, and year built. In act603, the facility determines one or more aggregates of the current home attribute across all of the homes within the identified region. In act604, the facility creates an independent variable for each aggregate determined in act603. In act605, if additional home attributes remain to be processed, then the facility continues in act602to process the next home attribute, else this process concludes. FIG.8is a table diagram showing sample contents of a region aggregate independent variable table used by the facility in some embodiments to store region aggregate independent variables created by the facility for a particular home in act604. The region aggregate independent variable table800is made up of rows, such as rows801-810, each of which corresponds to a different combination of region size and home attribute.
Rows801-803correspond to combinations of the region size 1 with each of three sample home attributes: square feet, tax assessment, and year built. Similarly, rows804-806correspond to combinations of the region size 2 with those same home attributes. In columns824-826, each row contains three region aggregates determined for the particular combination of region size and home attribute to which the row corresponds. For example, row801contains in column824the value 1,932, obtained by determining the median of the square feet home attribute values of all of the homes in the region having region id tzqv9f2. Similarly, in column825, row801contains the value 1,910, obtained by determining the arithmetic mean of those same square feet home attribute values for the homes in the region having region id tzqv9f2. Likewise, in column826, row801contains the value 23.98, determined by calculating the variance among all of these same square feet home attribute values. In some embodiments, the facility creates all of the region aggregate independent variables shown in box890across three iterations of act604in the loop between acts602and605. In some embodiments, the facility caches the aggregate values for each region to permit their reuse for other homes contained in these regions. Returning toFIG.3, having created all of the region-based independent variables for a particular region size in act305, in act306, if additional region sizes remain to be processed, then the facility continues in act303to process the next region size, else this process concludes. Returning toFIG.2, after establishing independent variables for the current home in act202, in act203, the facility creates an observation for the current home. The created observation contains the independent variables established in act202and has the current home's selling price as the observation's dependent variable. In act204, if additional homes remain to be processed, then the facility continues in act201to process the next home in the plurality of homes, else the facility continues in act205. In act205, the facility trains a model to predict home value using the observations created in act203. In various embodiments, the facility trains a valuation model of a variety of types, including, for example, random forests of partitioning decision trees; gradient boosting machines; other kinds of tree-based models; support vector machines; linear regressions; neural networks; other general machine learning models; or ensembles combining two or more of the foregoing model types. After act205, this process concludes. FIG.9is a flow diagram showing a process performed by the facility in some embodiments to estimate the value of a home using the model trained by the facility. In act901, the facility establishes independent variables for the home to be valued, such as is shown inFIG.3and discussed above. In act902, the facility applies the model trained in act205to the independent variables established for the home to be valued in act901, in order to obtain a valuation for this home. In act903, the facility persistently stores the valuation for the home to be valued obtained in act902. In act904, the facility causes the obtained valuation to be displayed along with information identifying and/or describing this home. In act905, the facility generates or updates a home value index using home valuations that include the one obtained in act902. After act905, this process concludes.
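The region aggregation of acts602-605, the model training of act205, and the valuation ofFIG.9can be sketched together in Python. The sketch assumes the pandas and scikit-learn libraries, uses made-up attribute values, omits the categorical region id independent variables ofFIG.7(which would need encoding before training), and picks a gradient boosting machine only because it is one of the model types listed above; none of these specifics are prescribed by the description.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Toy observations: one row per sold home, with a size-1 region id and a
# few home attributes (all values are made up for illustration).
homes = pd.DataFrame({
    "region_id_size1": ["tzqv9f2", "tzqv9f2", "tzqv9f8", "tzqv9f8"],
    "square_feet": [1850, 2010, 1620, 1755],
    "tax_assessment": [212000, 239000, 188000, 201000],
    "year_built": [1994, 2001, 1987, 1992],
    "selling_price": [415000, 452000, 368000, 389000],
})

# Acts 602-604: aggregate a home attribute across all homes in each region,
# here with the median, mean, and variance aggregation functions; the same
# pattern applies to tax assessment and year built.
aggregates = homes.groupby("region_id_size1").agg(
    square_feet_median=("square_feet", "median"),
    square_feet_mean=("square_feet", "mean"),
    square_feet_variance=("square_feet", "var"),
)
observations = homes.merge(aggregates, left_on="region_id_size1",
                           right_index=True)

# Act 205: train a model with selling price as the dependent variable;
# FIG.9: apply the trained model to obtain a valuation for a home.
feature_columns = ["square_feet", "tax_assessment", "year_built",
                   "square_feet_median", "square_feet_mean",
                   "square_feet_variance"]
model = GradientBoostingRegressor().fit(observations[feature_columns],
                                        observations["selling_price"])
valuation = model.predict(observations[feature_columns].head(1))
print(float(valuation[0]))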
It will be appreciated by those skilled in the art that the above-described facility may be straightforwardly adapted or extended in various ways. While the foregoing description makes reference to particular embodiments, the scope of the invention is defined solely by the claims that follow and the elements recited therein.
20,278
11861749
MODE OF IMPLEMENTATION OF THE INVENTION Hereinafter, various embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the present document, like reference numerals are used for like elements throughout the drawings, and redundant descriptors of the like elements are omitted. For the various embodiments of the present invention disclosed in the present document, specific structural to functional descriptions are merely illustrative of the present invention. The various embodiments of the present invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Terms such as “a first,” “a second,” “first,” and “second” used in various embodiments may modify various components regardless of the order and/or importance thereof, and do not limited the corresponding components. For example, a first component may be referred to as a second component without departing from the scope of the present invention, and similarly, a second component may also be referred to as a first component. The terms used in this document are only used to describe specific embodiments, and may not be intended to limit the scope of other embodiments. Singular expressions may include plural expressions unless the context clearly indicates otherwise. All the terms used herein, including technical or scientific terms, may have the same meanings as those commonly understood by those skilled in the art of the present invention. Terms that are defined in a dictionary commonly used should be interpreted as having the same or similar meaning to the meaning in the context of the related art, and should not be interpreted as having an ideal or overly formal meaning unless explicitly defined in the present document. In some cases, even the terms defined in this document should not be interpreted as excluding embodiments of the present invention. Hereinafter, in describing an apparatus for calculating an estimate for installation of a water heater according to the present invention, the water heater is described to be a boiler as an example. However, the apparatus for calculating an estimate for installation of a water heater of the present invention is not limited to a boiler, but may be applied to various apparatuses for heating water, such as a water heater and a water heater combined boiler. FIG.1is a diagram showing a user terminal and a server for calculating an estimate for installation of a boiler according to an embodiment of the present invention; A user terminal100may have an application for calculating an estimate for installation of a boiler. Therefore, using the application installed in the user terminal100, a user him/herself may capture an image of a region in which a boiler is installed and transmit through a network the captured image to a server200for calculating an estimate for installation of a boiler. In addition, when the user selects, on the application, a model of a boiler to be installed, information on the selected model of a boiler may be transmitted to the server200through a network. 
Accordingly, the server200analyzes the image received from the user terminal100to calculate required dimensions of each material (e.g., a boiler main body, pipes, and the like) of a region in which the boiler is to be installed, and may calculate a total boiler installation estimate in consideration of the price of the boiler model selected by the user and the installation estimate according to the required dimensions of each material. In addition, the server200may transmit the calculated installation estimate to the application of the user terminal100to allow the user to identify an expected estimate in advance before installing a new boiler. FIG.2is a block diagram showing a configuration of an apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention. Referring toFIG.2, the apparatus200for calculating an estimate for installation of a boiler according to an embodiment of the present invention may include an image analysis unit210, a processing unit220, a database230, and a communication unit240. The image analysis unit210may analyze a boiler region image including at least one of a previously-installed boiler main body, a pipe connected to the previously-installed boiler main body for fluid flow, and a room controller for controlling a boiler. At this time, the boiler may include all types of boilers, such as a floor-type boiler and a wall-mounted boiler, which are classified according to the installation position thereof, a general boiler, a condensing boiler, and a boiler using electricity, gas, or oil. In addition, the pipe may include all types of pipes connected to a boiler main body, such as a flue which sucks in air for boiler combustion and discharges exhaust gas, a heating water pipe, and hot water and cold water pipes. The image analysis unit210may obtain at least one of a first information, a second information, or a third information by analyzing the boiler region image including the previously-installed boiler main body and the pipe. For example, the boiler region image may be a front image of a boiler. At this time, the first information may include information about the model of the previously-installed boiler main body (for example, a model name, a product number, and the like), the second information may include at least one of the dimensions of the boiler main body and the diameter of the pipe in the boiler region image, and the third information may include the dimensions of the pipe in the boiler region image. The image analysis unit210may obtain the first information on the basis of at least one of an appearance (for example, the design of the boiler main body, a model name written on the boiler main body, and the like) of the boiler main body in the boiler region image, a product specification table of the boiler main body in the boiler region image, and the room controller. That is, when a front image of a region in which a boiler is installed is received, the image analysis unit210may obtain information about the model of the boiler through the appearance of the boiler. However, when it is not possible to capture the front image of the boiler due to the limitation of space, the information about the model of the boiler may be obtained through either a product specification table attached to the side of the boiler main body or an image of the room controller installed inside a room.
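The fallback order just described, front image first, then product specification table, then room controller image, can be expressed as a small Python sketch. The function and the example model name are hypothetical stand-ins for the image analysis unit's recognition steps, not elements defined by the description.

def identify_boiler_model(front_image_model=None, spec_table_model=None,
                          room_controller_model=None):
    """Return the first successful model identification, in the priority
    order described above. Each argument is the result of the
    corresponding (hypothetical) recognition step, or None."""
    for candidate in (front_image_model, spec_table_model,
                      room_controller_model):
        if candidate is not None:
            return candidate
    return None

# Example: the front image could not be captured, but the product
# specification table was readable.
print(identify_boiler_model(None, "Model-X100", None))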
In addition, the image analysis unit210may obtain the first information, the second information, and the third information by analyzing the boiler region image in pixel unit. That is, the image analysis unit210may calculate the dimensions of each of the boiler main body and the pipe by analyzing the same in a boiler image received from the user terminal100in pixel unit. The processing unit220may calculate an estimate required for the installation of a boiler to be installed on the basis of at least one of information about the model of a boiler to be installed and required dimensions of a pipe to be installed obtained on the basis of the analysis result of the image analysis unit210. In addition, the processing unit220may calculate the total estimate required for the installation of the boiler to be installed on the basis of the information about the model of the boiler to be installed and the required dimensions of the pipe to be installed obtained using at least one of the first information, the second information, and the third information. Specifically, the processing unit220may calculate the estimate required for the installation of the boiler to be installed by adding a price of a boiler main body to be installed and a price and an installation estimate according to the required dimensions of the pipe. In this case, the processing unit220may calculate an installation estimate not only for a case in which both a boiler main body and a pipe are replaced together but also for a case in which either a boiler main body or a pipe is replaced. In addition, the processing unit220may determine a position at which the main body of the boiler to be installed is to be installed and adjust the required dimensions of the pipe to be installed calculated above according to the determined position. At this time, on the basis of the adjusted dimensions of the pipe, the estimate required for the installation of the boiler to be installed may be calculated. For example, the dimensions and installation position between a boiler previously installed and a boiler to be installed may be different. In this case, required dimensions of a pipe obtained based on the size of the previously-installed boiler main body may not be suitable for the boiler main body to be installed. In this case, the processing unit220may adjust the required dimensions of a pipe obtained based on the previous boiler in consideration of both the size and the installation position of the boiler main body to be newly installed. Specifically, a region of the boiler main body to be newly installed is overlapped on a region of the previously-installed boiler main body, and then, by setting a portion in which the boiler main body to be newly installed and a pipe to be installed are connected as a starting point and setting a point at which a previously-installed pipe ends as an end point, an actual distance between the two points may be calculated. In this case, in consideration of a hole on a window or wall through which the pipe to be installed is connected to the outside and a position of a distributor connected to the pipe, the required dimensions of the pipe to be installed may be further corrected. In this case also, an analysis in pixel unit for each material on the boiler region image may be performed. 
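As a rough illustration of the addition described above, the following Python sketch combines the price of the selected main body with a per-meter material price and a per-meter installation charge for the required pipe length. All numeric values and the function name are made-up assumptions rather than prices taken from the description.

def installation_estimate(main_body_price, pipe_length_m,
                          pipe_price_per_m, pipe_labor_per_m,
                          replace_main_body=True, replace_pipe=True):
    # The main body and the pipe can each be replaced independently, as
    # described above, so each contribution is optional.
    total = 0
    if replace_main_body:
        total += main_body_price
    if replace_pipe:
        total += pipe_length_m * (pipe_price_per_m + pipe_labor_per_m)
    return total

# Example: replacing both the main body and 2.4 m of pipe (all figures in
# arbitrary currency units).
print(installation_estimate(600_000, 2.4, 15_000, 10_000))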
In the case of calculating an installation estimate by obtaining required dimensions of a pipe to be installed, the processing unit220may calculate the installation estimate of the pipe by first calculating the total length of the pipe to be installed, and according to the length of the pipe to be installed, determining whether to use a mid-pipe connection part and a pipe sagging prevention fixing pin. In addition, with respect to the inside and outside of a wall hole or the inside and outside of a window hall through which a new pipe has been installed to discharge exhaust gas, the processing unit220may also calculate a manpower estimate for finishing the connection of the pipe installed on the wall and when a pipe is installed on the window, an estimate according whether or not the window is to be replaced. The processing unit220may obtain, in consideration of the position and length of heating supply water, returning water, cold water, hot water, a gas supply lines and the installation position and dimensions of the pipe, the position and dimensions of a lower connecting pipe according to the position of the boiler to be newly installed. In addition, the processing unit200is allowed to change the material of the pipe according to the selection of a boiler user, and may take a manpower estimate based on the change in the installation estimate of the boiler. The processing unit220obtains required dimensions of the boiler from the database230using the first information, and may obtain required dimensions of the pipe on the basis of a ratio between the required dimensions of the boiler and the dimensions of the boiler main body in the boiler region image. This will be described later in detail with reference toFIG.8B. In addition, the processing unit220obtains an actual diameter of the pipe from the database230using the first information, and may obtain the required dimensions of the pipe on the basis of a ratio between the actual diameter of the pipe and the diameter of the pipe in the boiler region image. Basically, pipes are often standardized according to the model of a boiler. That is, since standards (e.g., diameter) of a pipe (e.g., a flue, heating supply and returning water piping pipes) used for a specific model of a boiler are already determined, when the model name of a boiler is known, the dimensions (diameter) of a pipe used in the corresponding model may be obtained. Therefore, on the basis of the information about the model of a boiler obtained in the image analysis unit210, the processing unit220obtains information about the diameter of a pipe used in the corresponding model, and may calculate required dimensions of other materials using the information. The processing unit220obtains information about the price of a distributor and information about the installation estimate thereof from the database230, and on the basis of the information about the price and the installation estimate of the distributor, may calculate an estimate required for the installation of the distributor. When the installation of a boiler is completed, the processing unit220may make a list of time required for work and materials used to install the boiler for each boiler region image to calculate the estimate required for the actual installation of the boiler and may store the data in the database230. 
In this case, the processing unit220may perform data learning through Machine Learning on the basis of data obtained by matching the time required for work and the materials used to install the boiler listed for each image stored in the database230with the estimate required for the actual installation of the boiler. Meanwhile, when it is not possible for a user to capture a front image of a boiler, the processing unit220may obtain a second information and a third information on the basis of a boiler region image including a side surface of a boiler main body and a pipe. This will be described later in detail with reference toFIG.5AandFIG.5B. For example, there may be objects present near a boiler in a boiler region image. In this case, the objects near the boiler are moved to a different location, and then moved back to the original location after the installation is finished. As described above, when is an obstacle near the boiler, the processing unit220may obtain required dimensions of the corresponding object using at least one of the first information, the second information, and the third information. At this time, on the basis of the installation difficulty (e.g., additional time required due to the interruption to the installation caused by the object or time required to move the object and move back the object) of the boiler to be installed calculated in consideration of the required dimensions of the object, an additional estimate required to install the boiler to be installed is calculated. In addition, when it is impossible to calculate the required dimensions since the orientation or level of the boiler region image is not aligned, the processing unit220may appropriately align the orientation and level of the boiler region image by performing correction of the boiler region image. For example, the processing unit220may perform the correction of the boiler region image using data of the boiler image learned through machine learning. The database230may store information about models of boilers, information about required dimensions of boilers and pipes, information about prices of main bodies of boilers and pipes, and information about installation estimates of main bodies of boilers and pipes. In addition, the database230may store information about the price and installation estimate of a distributor of the boiler. For example, the database230may include information about models of boilers, model names, product numbers, designs, dimensions (specifications) of pipes, and room controllers. In addition, the price information of the main body of the boiler and the pipe may include price information for each manufacturer and model of the boiler and price information per length for each model of the pipe, and the information about the installation estimate of the main body of the boiler and the pipe may include estimate of removing the main body and pipe of a previously-installed boiler. In addition, the database230may store data obtained by the processing unit220by matching the time required for work and the materials used to install the boiler listed for each image with the estimate required for the actual installation of the boiler. Meanwhile, inFIG.2, the database230is shown to be included in the apparatus for calculating an estimate for installation of a boiler according to the present invention, but is not limited thereto. 
The database230may be separately provided in an external a server, and may be configured such that data is received from the external server to the apparatus for calculating an estimate for installation of a boiler200according to the present invention. The communication unit240may receive a boiler region image from the user terminal100and receive information about the model of a boiler to be installed selected by a user. In addition, when the estimate required for the installation of the boiler to be installed is calculated by the processing unit220, the communication unit240may transmit a final estimate to the user terminal100. However, the embodiment of the present invention is not limited thereto. The communication may transmit/receive all the data required between the user terminal100and the apparatus for calculating an estimate for installation of a boiler200. In addition, when the installation of the boiler is completed, data obtained by contrasting images of main parts before and after the installation of the installed boiler, and data of required estimate, work time, and the like may be transmitted to the user terminal. Accordingly, the apparatus for calculating an estimate for installation of a water heater according to the present embodiment is applied to a water heater such as a boiler for providing heating, a water heater for providing hot water (a direct-water-type water heater without a separate hot water tank, or a tank-type water heater with a separate hot water tank), or a water heater combined boiler, and may automatically calculate an installation estimate through an image analysis for the water heater. As described above, according to the apparatus for calculating an estimate for installation of a water heater, when a water heater is newly installed, an installation estimate is automatically calculated through an image analysis for the water heater, thereby enabling a user of the water heater to anticipate installation estimate in advance, and correct information about estimate required is provided even after the installation of the water heater, thereby securing reliability in the installation estimate. FIG.3is a diagram showing elements for calculating an estimate for installation of a boiler having a structure of a typical boiler. Referring toFIG.3, a typical boiler300may include an upper pipe (flue)310, a boiler main body320, a lower pipe330, and a distributor340. The upper pipe310, that is a flue, is a passage connected to an upper portion of the boiler main body320for sucking air for boiler combustion and discharging gas exhausted from the boiler main body320to the outside. The boiler body320is where water is heated and evaporated using heat generated by the combustion of fuel. The lower pipe330is connected to a lower portion of the boiler main body320, and may be, for example, a heating pipe for circulating heating water, and a cold water pipe and a hot water pipe for supplying cold and hot water, respectively. The distributor340evenly distributes heating water heated by the boiler main body320using fuel to each room. Typically, the estimate for replacing the boiler300may include an estimate for each of the upper pipe310, the boiler main body320, the lower pipe330, and the distributor340. Specifically, in the case of the upper pipe310and the lower pipe300, dimensions of the outer diameter of the pipes are generally standardized and the estimate therefor may vary depending on the length of the pipes. 
In addition, in the case of the boiler main body320, the price thereof varies depending on the product model, operation method, and the like thereof. Meanwhile, compared to other components, the distributor340rarely needs to be replaced, but may need to be replaced if problems such as rust or water leaking occur to the distributor340. A replacement process for a typical boiler is as follows. Hereinafter, the description is made with respect to a condensing boiler. However, as described above, the present invention may be applied to various boilers such as gas and oil boilers. Objects around the boiler300previously installed are moved to a different place. Then, a gas intermediate valve is closed and the power cord of the boiler main body320is separated to drain heating water contained inside of the boiler main body320. Next, the connected heating lower pipe330is separated followed by closing an intermediate valve of a cold water line at the bottom of the boiler body320, and then the cold water and hot water lower pipe330is separated from the boiler main body320. The upper pipe310in the upper portion of the boiler300is separated from the boiler main body320, and in order to send exhaust gas to the outside, the upper pipe310is separated from a hole drilled on a wall or a hole drilled on a window through which the upper pipe310is installed. At this time, it is preferable to remove the upper pipe310while maintaining a finished state around a discharge hole of the upper pipe310previously installed. The boiler main body320installed on the wall is removed, and according to the position of the upper pipe310to be connected to an upper portion of the boiler300to be newly installed and the position of the wall hole or the window hall through which exhaust gas is sent to the outside, a position to fix the boiler300is determined. At this time, the position of the boiler main body320and the position of the upper pipe310need to be adjusted such that the upper pipe310is installed upwards by about 5 degrees in the case of a condensing boiler and the upper pipe310is installed downwards in the case of a typical boiler. When the position at which the boiler main body320to be installed is identified, the position of an anchor bolt which may be fixed to the wall is marked and inserted according to the corresponding wall position and fixed to prevent shaking. The lower pipe330is cut such that the lower connection pipe330previously installed and the boiler main body320and the lower pipe330are connected. The boiler body320is lifted to be fixed to the anchor bolt installed on the wall, the upper pipe310is inserted into the wall hole or the window hole through which exhaust gas is to be sent to the outside, and the middle portion of the upper pipe310of the boiler300is adjusted to install the upper pipe310in a supply/exhaust connection portion of an upper portion of the boiler main body320to be newly installed. A finishing process is performed such that there is no leakage in the connection portion of the boiler main body320and the upper pipe310, and if there is a middle connection portion of the upper pipe310, a finishing process is performed such that there is no leakage. In addition, a finishing process is also performed for the inside and outside of a wall hole area such that the wall hole area maintains sealing from the outside. 
Work is performed to connect the new lower pipe330to the heating pipe or the cold water and hot water lower pipes330the lower portions of which are cut, and the new pipe (heating pipe, cold water and hot water pipes)330is cut and adjusted according to the position of the boiler body main body320to be connected to the boiler main body320. The connected lower pipe330is covered with an insulation material to prevent freezing and busting in winter and heat loss. In the case of a condensing boiler, condensate water is generated, so the condensate water is fixed toward a near discharge pipe through a discharge hose. The cold water middle valve and the gas middle valve are open again and power is back on. Thereafter, when water is automatically replenished inside the boiler300, check whether the boiler operates normally, and check whether there is leakage to the inside of the boiler main body100and a connection area of an external gas pipe. In addition, check whether there is leakage to a water pipe area. FIG.4shows a front image of a boiler received from a user for image analysis by an apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention. FIG.5AandFIG.5Bshow side images of a boiler received for image analysis by an apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention; Referring toFIG.4, it is preferable that a region image of a boiler400basically includes not only a boiler main body420but also an upper pipe (flue)410of a boiler, a pipe430, and a distributor440. Therefore, when a front image of a boiler is received by an apparatus for calculating an installation estimate of a boiler according to an embodiment of the present invention, information about the model of the boiler main body420and the distributor440is obtained through the image and required dimensions of the upper pipe410and the pipe430are obtained to calculate a total replacement estimate of the boiler. However, there may be a case in which the space in which a boiler is installed is limited or it is difficult to capture a front image of the boiler doe to the other objects around the boiler. In this case, only a side surface of the boiler or a portion of the boiler may be captured. As described above, when it is difficult to capture the front image of the boiler due to the limitation of the space in which the boiler is installed, as shown inFIG.5AandFIG.5B, a user may capture an image from the side of a boiler main body. At this time, in order for the apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention to calculate installation estimate of the boiler, it is preferable that the user captures images of an upper portion (side surface) of a boiler main body520and an upper pipe510together and captures images of a lower portion (side surface) of the boiler main body520and a pipe cover or a pipe530. In the apparatus for calculating an installation estimate of a boiler according to an embodiment of the present invention, an image (FIG.5A) including the side surface of the boiler main body520and the upper pipe510and an image (FIG.5B) of the side surface of the boiler main body520and the pipe530are compared with information about boiler models, flues, and dimensions of pipes stored in a database to calculate required dimensions of the boiler main body520, the upper pipe510, and the pipe530included in the boiler region image. 
For example, dimensions of the height and thickness of the main body may be obtained through a side surface image of the boiler main body, and using the dimensions, the required dimensions of the boiler main body520, the upper pipe510, and the pipe530included in the boiler region image may be calculated. As described above, since the upper pipe510and the pipe530of the boiler are generally circular, there is little error in image analysis according to a capturing angle. Therefore, only with the side surface image of the boiler, the calculation of the thickness dimension (i.e., diameter) may be performed more accurately. In addition, since the upper pipe510and the pipe530of the boiler are mostly standardized, the length of the upper pipe510and the pipe530may be calculated together. FIG.6Ashows an example of a product specification table attached to a side surface of the main body of a boiler, andFIG.6Bshows an example of a room controller for operating a boiler. As described above, when it is not possible to capture a front image of a boiler due to the limitation of space, as shown inFIG.6A, a product specification table attached to a portion of the boiler body may be captured. In the product specification table, information such as a manufacturer, a model name, a boiler type, a product number, and output is input in detail. Therefore, the image analysis unit of the apparatus for calculating an installation estimate of a boiler according to an embodiment of the present invention may determine the model name of a corresponding boiler only with an image of the product specification table. Meanwhile, the product specification table of a boiler may be damaged to an extent the table is not visually identifiable, depending on the age of the boiler or the installation environment of the boiler. There may be also a case in which it is difficult to capture a product specification table due to the nature of a space in which a boiler is installed. In this case, it may be difficult to accurately determine a model name only by capturing a boiler itself. As described above, when it is difficult to capture even the product specification table of a boiler, a user may be allowed to capture an image of a room controller installed inside a room. Most boilers often have a compatible room controller depending on the model of a boiler. Therefore, the apparatus for calculating an installation estimate of a boiler according to an embodiment of the present invention receives an image of a room controller from a user, compares the image with a room controller compatible for each model of a boiler stored in a database, and then determines the model name of a boiler associated with a room controller included in the captured image. In addition, the room controller ofFIG.6Bmay operate with a boiler in a wireless communication, or operate in a wired manner. At this time, a room controller previously installed may not be compatible with the model of a boiler main body to be installed. As described above, the apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention may calculate a total installation estimate in consideration of estimate for replacing a room controller when the room controller is not compatible with a boiler main body to be replaced. FIG.7is a flow chart showing a method for calculating an estimate for installation of a boiler according to an embodiment of the present invention. 
Referring toFIG.7, the method for calculating an estimate for installation of a boiler according to an embodiment of the present invention may first analyze a boiler region image including at least one of a previously-installed boiler main body, a pipe connected to the previously-installed boiler main body for fluid flow, and a room controller for controlling a boiler S110. At this time, the pipe may include a flue, a heating water pipe, hot water and cold water pipes, and the like. In Step S110, at least one of a first information, a second information, or a third information may be obtained by analyzing the boiler region image including a boiler main body and a pipe. For example, the boiler region image may be a front image of a boiler. Here, the first information may include information about the model of the previously-installed boiler main body, the second information may include at least one of the dimensions of the previously-installed boiler main body and the diameter of the pipe in the boiler region image, and the third information may include the dimensions of the pipe in the boiler region image. For example, the first information, the second information, and the third information may be obtained by an analysis in pixel unit. In addition, in Step S110, when it is not possible to capture a front image of a boiler, the second information and the third information may be obtained on the basis of a boiler region image including a side surface of the previously-installed boiler main body and the pipe. Next, on the basis of at least one of information about the model of a boiler to be installed and required dimensions of a pipe to be installed obtained on the basis of the analysis result of the boiler region image, estimate required for the installation of the boiler to be installed may be calculated (S120). For example, in Step S120, when a pipe of a boiler needs to be replaced, actual dimensions of the previously-installed boiler main body are obtained from a database using the first information, and on the basis of a ratio between the actual dimensions of the previously-installed boiler main body and the dimensions of the boiler main body in the boiler region image, required dimensions of the pipe may be obtained. Alternatively, an actual diameter of the pipe of the boiler may be obtained from the database using the first information, and on the basis of a ratio between the actual diameter of the pipe and the diameter of the pipe in the boiler region image, the required dimensions of the pipe may be obtained. Specifically, in Step S120, estimate required for the installation of the boiler to be installed may be calculated by adding a price of the main body of the boiler to be installed and a price and installation estimate according to the required dimensions of the pipe. In this case, as described above, installation estimate may be calculated not only for a case in which both a boiler main body and a pipe are replaced together but also for a case in which either a boiler main body or a pipe is replaced. The above-described method for calculating an estimate for installation of a boiler according to an embodiment of the present invention may be performed using a database for storing information about models of boiler main bodies, information about specifications of boiler main bodies and pipes, information about prices of boiler main bodies and pipes, and information about installation estimates of boiler main bodies and pipes. 
Although not shown inFIG.7, the method for calculating an estimate for installation of a boiler according to an embodiment of the present invention may include the steps of determining a position at which the main body of a boiler to be installed is to be installed, adjusting the dimensions of a pipe to be installed in a boiler region image according to the determined position, and calculating estimate required for the installation of the boiler to be installed on the basis of the adjusted dimensions of the pipe to be installed in the boiler region image. A specific method of adjusting the dimensions of the pipe is as described inFIG.2. In addition, the method for calculating an estimate for installation of a boiler according to an embodiment of the present invention may include the steps of obtaining, on the basis of a first position at which the main body of the boiler to be installed and the pipe to be installed are connected in the boiler region image and a second position at which a previously-installed pipe ends in the boiler region image, the dimensions of the pipe to be installed in the boiler region image, converting the dimensions of the pipe to be installed in the boiler region image into required dimensions of the pipe to be installed, and calculating, on the basis of the required dimensions of the pipe to be installed, estimate required for the installation of the boiler to be installed. In addition, the method for calculating an estimate for installation of a boiler according to an embodiment of the present invention may further include the steps of storing information about prices and installation estimates of distributors of boilers in a database, obtaining the information about prices and installation estimates of distributors from the database, and calculating, on the basis of the information about prices and installation estimates of distributors, estimate required for the installation of the distributor. In addition, the method for calculating an estimate for installation of a boiler according to an embodiment of the present invention may include the steps of making a list of time required for work and materials used to install the boiler for each boiler region image to calculate the estimate required for the actual installation of the boiler and storing data obtained by matching the time required for work and the materials used to install the boiler listed for each image with the estimate required for the actual installation of the boiler. In this case, a step of performing data learning through Machine Learning on the basis of the data obtained by matching the time required for work and the materials used to install the boiler listed for each image stored in the database with the estimate required for the actual installation of the boiler may be further included. In addition, the method for calculating an estimate for installation of a boiler according to an embodiment of the present invention may include the steps of, where there may be objects present near a boiler in a boiler region image, obtaining required dimensions of the objects using the first information, the second information, and the third information, and calculating, on the basis of the installation difficulty of the boiler to be installed calculated in consideration of the required dimensions of the objects, additional estimate required to install the boiler to be installed. 
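The data learning step mentioned above, in which the work time and materials listed for each image are matched with the estimate actually required, might be sketched as follows. The record layout and the use of scikit-learn's LinearRegression are illustrative assumptions; the description does not name a particular model type.

from sklearn.linear_model import LinearRegression

# Each record: ([work_hours, flue_length_m, lower_pipe_length_m],
#               actual_installation_estimate) -- all values are made up.
records = [
    ([3.0, 2.4, 1.2], 760_000),
    ([4.5, 3.1, 2.0], 910_000),
    ([2.5, 1.8, 0.9], 680_000),
    ([3.5, 2.6, 1.5], 800_000),
]
features = [r[0] for r in records]
actual_estimates = [r[1] for r in records]

model = LinearRegression().fit(features, actual_estimates)
# The fitted model can then be used to refine estimates produced from new
# boiler region images.
print(model.predict([[3.0, 2.2, 1.0]])[0])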
Additionally, the method for calculating an estimate for installation of a boiler according to an embodiment of the present invention may include performing correction of the boiler region image when the orientation or level of the boiler region image is not aligned. FIG.8Ashows that a user captures an image of a boiler with a terminal to transmit the image to an apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention. FIG.8Ashows that a front image710of a region in which the boiler is installed is captured through an application on a user terminal. At this time, the application of the user terminal may provide a guide function when the user captures an image of the boiler. Specifically, when the user captures an image of the boiler, the application displays a separate guide line720on a boiler region image as shown inFIG.8A, and may induce the user to capture an image to include in the guideline all of a boiler main body, an upper pipe (flue), a lower pipe, and the like required for calculating an estimate for installation of the boiler. As described above, the application communicating with the apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention provides a guide function such that all the information required for calculating an estimate for installation of a boiler (e.g., dimensions information of a boiler main body, an upper pipe, and a pipe) is included on an image, thereby accommodating the user when capturing an image of the boiler. FIG.8Bis a diagram showing a boiler selection screen on an application communicating with an apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention. Referring toFIG.8B, a list of boilers to be newly installed may be displayed on the application of a user terminal. For example, the application is linked to the sales site of each boiler manufacturer, so that when a user selects a manufacturer, the user may be provided with a list of boiler models of the corresponding manufacturer. In addition, the boiler selection screen on the application shown inFIG.8Bmay be provided before the boiler image capturing ofFIG.8A, or may be provided after the dimensions for each material in the boiler region ofFIG.8Care obtained. FIG.8Cis a diagram showing calculating required dimensions of an upper pipe (flue) of a boiler in an apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention. Referring toFIG.8C, when a model name of a boiler main body is obtained through an image analysis unit of the apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention, actual specifications of the corresponding model may be obtained from a database. In addition, by comparing the actual specifications (horizontal×vertical=440×695) of the boiler main body model obtained from the database with the specifications of the boiler main body on a boiler region image (e.g., A1and A2inFIG.8C), a ratio d between dimensions on the image and required dimensions may be calculated. Next, on the basis of the calculated ratio and dimensions of the flue on the boiler region image, required dimensions of the flue may be calculated ((B1+B2)×d). 
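The calculation ofFIG.8Ccan be expressed directly in Python. The pixel measurements a1, a2, b1, and b2 below are made-up values, 440 and 695 are the model specifications quoted above, and averaging the horizontal and vertical ratios is one simple way to obtain d rather than necessarily the method used by the apparatus.

def pixel_to_actual_ratio(actual_width, actual_height,
                          image_width_px, image_height_px):
    # Average the horizontal and vertical ratios to smooth out small
    # measurement errors in either direction.
    return ((actual_width / image_width_px)
            + (actual_height / image_height_px)) / 2.0

a1, a2 = 220.0, 348.0      # main body width and height measured in pixels
b1, b2 = 60.0, 310.0       # flue segments measured in pixels
d = pixel_to_actual_ratio(440.0, 695.0, a1, a2)
required_flue_length = (b1 + b2) * d    # (B1 + B2) x d from FIG.8C
print(round(required_flue_length, 1))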
Meanwhile, inFIG.8C, a case in which the dimensions and installation positions of a previously-installed boiler main body and a boiler main body to be installed are the same. However, when the dimensions and installation positions of a previously-installed boiler main body and a boiler main body to be installed are different, as described above, a region of the boiler main body to be newly installed is overlapped on a region of the previously-installed boiler main body, and then, by setting a portion in which the boiler main body to be newly installed and a pipe to be installed are connected as a starting point and setting a point (end portion of a pipe) at which a previously-installed pipe ends as an end point, an actual distance between the two points may be calculated. In addition, inFIG.8C, only the method for calculating the required dimensions of the flue is shown. However, required dimensions of a lower pipe of a boiler may also be calculated by the same method. FIG.8Dshows an installation estimate of a boiler calculated by an apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention displayed on an application of a user terminal. Referring toFIG.8D, the total installation estimate of a boiler may include the replacement estimate of a boiler main body, the replacement estimate of a flue, the replacement estimate of a pipe, the replacement estimate of a distributor, and other estimates (e.g., estimate of removing objects around a boiler, etc.). At this time, the replacement estimate for each material may include both the price and the installation estimate of the material. In addition, as shown inFIG.8D, when a user clicks on the estimate for a specific material on the application, estimate per specifications of the corresponding material may be provided. For example, when the user selects the estimate for a pipe on the application screen, estimate per length of the corresponding pipe may be displayed. Meanwhile, the apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention may request a user to evaluate an error range between an estimated installation estimate of a boiler and an actual installation estimate required through the application of the user terminal. Therefore, the apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention may analyze the evaluation data of a user and reflect analysis result data in an algorithm for calculating an estimated installation estimate of a boiler, thereby improving the accuracy of the estimate calculation. FIG.9shows a hardware configuration of an apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention. Referring toFIG.9, an apparatus900for calculating an estimate for installation of a boiler according to an embodiment of the present invention may include a central processing unit (CPU)910, a memory920, an input/output I/F930, and a communication I/F940. The CPU910may be a processor for executing a boiler installation estimate calculation program stored in the memory920, processing various data of the apparatus for calculating an estimate for installation of a boiler according to an embodiment of the present invention, and performing functions ofFIG.2. The memory920may store a boiler installation estimate calculation program. 
In addition, the memory920may store a boiler region image received from a user application, model information of a selected boiler, data obtained about a boiler main body and the dimensions of a pipe, a calculated boiler installation estimate, and the like. A plurality of memories920may be provided if needed. The memory920may be a volatile memory or a non-volatile memory. As the volatile memory, RAM, DRAM, SRAM, and the like may be used as the memory920. As the non-volatile memory, ROM, PROM, EAROM, EPROM, EEPROM, a flash memory, and the like may be used as the memory920. The examples of the memory920listed above are only exemplary, and the memory920is not limited thereto. The input/output I/F930may provide an interface which connects an input device (not shown), such as a keyboard, a mouse, or a touch panel, and an output device, such as a display (not shown), to the CPU910to transmit and receive data. The communication I/F940is a component capable of transmitting and receiving various data to and from a user terminal, and may be any of various devices capable of supporting wired or wireless communication. For example, the communication I/F940may receive a boiler region image and selected boiler model information from the user terminal, and may transmit a calculated boiler installation estimate to the user terminal. As described above, the user application according to an embodiment of the present invention is stored in the memory920and processed by the CPU910, and may thus be implemented, for example, as a module which performs each functional block illustrated inFIG.2. Accordingly, the apparatus for calculating an estimate for installation of a water heater according to the present embodiment is applied to a water heater such as a boiler for providing heating, a water heater for providing hot water (a direct-water-type water heater without a separate hot water tank, or a tank-type water heater with a separate hot water tank), or a water heater combined boiler, and may automatically calculate an installation estimate through an image analysis for the water heater. As described above, according to the apparatus and the method for calculating an estimate for installation of a water heater, when a water heater is newly installed, an installation estimate is automatically calculated through an image analysis for the water heater, thereby enabling a user of the water heater to anticipate the installation estimate in advance, and correct information about the required estimate is provided even after the installation of the water heater, thereby securing reliability in the installation estimate. In the above, even though all the components constituting the embodiments of the present invention are described as being combined or as operating in combination as one, the present invention is not necessarily limited to these embodiments. That is, within the scope of the present invention, all of the components may be selectively combined and operated as one or more. In addition, the terms “include,” “consist,” or “have” as described above mean that a corresponding component may be present, unless specifically stated otherwise, and should be interpreted as including other components rather than excluding other components. All terms including technical or scientific terms have the same meanings as those commonly understood by those skilled in the art to which the present invention pertains, unless defined otherwise.
Terms such as those defined in commonly used dictionaries should be construed as being consistent with the context of the relevant art, and are not to be construed in an idealized or overly formal sense unless expressly defined in the present invention. The above description is merely illustrative of the technical idea of the present invention, and those skilled in the art to which the present invention pertains may make various modifications and variations without departing from the essential characteristics of the present invention. Therefore, the embodiments disclosed in the present invention are not intended to limit the technical spirit of the present invention, but to explain it, and the scope of the technical spirit of the present invention is not limited by these embodiments. The scope of protection of the present invention should be construed by the following claims, and all technical concepts within the scope of the present invention should be construed as being included within the scope of the rights of the present invention.
DESCRIPTION OF THE REFERENCE NUMERALS OR SYMBOLS
100: User terminal
200: Boiler installation estimate calculation apparatus (server)
210: Image analysis unit
220: Processing unit
230: Database
240: Communication unit
300,400: Boiler
310,410,510: Upper pipe (flue)
320,420,520: Boiler body
330,430,530: Lower pipe
340,440: Distributor
710: Boiler shooting screen
720: Shooting guidelines
49,300
11861750
DETAILED DESCRIPTION Techniques are described for allowing tenants to tour properties without the presence of an owner, real estate agent, or property manager. A system uses tenant reservation data to automatically show rental properties to prospective tenants, including providing access codes and monitoring the rental properties in accordance with tenant reservations and check-ins/check-outs. Once a prospective tenant, or user of the system, has toured the property, he can provide feedback to the system regarding his experience. For example, the user can provide feedback about what aspects of the current property he liked and what aspects of the current property he did not like. This feedback allows the owner, real estate agent, or property manager to maintain a relationship with the prospective tenant, and if the current property he is viewing does not fit his needs, the owner, real estate agent, or property manager can help the prospective tenant find a property that does. The renting process is streamlined by providing a system through which a prospective tenant can reserve showing times, access properties, and indicate interest. Whereas prospective tenants may have previously been limited to times during which leasing agents, property managers, etc. were available, they can now view properties at times that fit their schedules. Allowing prospective tenants to view properties unattended minimizes the amount of time a property is vacant, and reduces travel and labor costs for property management companies with properties that are geographically distant. In some examples, unattended viewing may include allowing tenants to view properties unaccompanied by an owner or property manager. For example, if a property management company focuses on long-term renters and has a portfolio of single-family homes that are each at least twenty miles apart from each other or the property management company site, the property management company can increase the frequency of showings without incurring additional travel costs or losing time between showings by travelling between the different properties. Furthermore, multiple interested parties can attend showings simultaneously while receiving a personalized experience, and showing times may overlap without interrupting the viewing experience of a prospective tenant already inside the property. For example, if a prospective tenant, John, is interested in viewing a property from 1:30 p.m. to 2:30 p.m., and a second prospective tenant, Sally, is interested in viewing the same property from 2:00 p.m. to 2:30 p.m., the system may enable both John and Sally to view the property during their desired viewing times. Additionally, the system may provide a guided tour or personalized information for John and for Sally. FIG.1illustrates a diagram of an example of a property management system100associated with a property101. While the below disclosure is written in the context of showing a rental property, it could also be used for showing a property for sale. In some examples, the system can be used to provide access codes to prospective tenants who wish to tour the property. The system100may include a monitor control unit110, sensors122, appliances124, cameras126, an electronic lock128, a rental property management server130that manages rental reservations, and an authorized user device140.
The server130may maintain data that defines which prospective tenants, owners, or property managers are associated with which properties (or the electronic locks at the properties) and maintain permission data that sets which users are allowed to view data and perform control operations for electronic locks and monitoring and energy consuming devices. AlthoughFIG.1illustrates one property for brevity, the server130may manage electronic locks and energy consuming devices for many more properties and/or structures. For example, the system100may include several monitoring systems each associated with a respective one of multiple, different rental properties and the server130may manage access to all the multiple, different rental properties. The multiple, different properties may be operated by different entities (e.g., owned by different entities) with single entities operating groups of properties. For example, each rental property may be owned by a different person, and a single property management company may be managing all of the rental properties using the system. The authorized user104may be the same user as the individual102near the front door. For example, the individual102near the front door may be authorized to gain access to the property101and may be entering an access code into the electronic lock128. The operations performed by the system100may limit labor and travel expenses by automating the process of granting access to prospective tenants at remotely located rental properties. In some examples, the server130receives data related to reservations for the rental properties managed by the server130. The server130may provide a web interface that enables users (e.g., travel agents, travelers, property management personnel, etc.) to manage reservations for the rental properties (e.g., make reservations, cancel reservations, change reservations, etc.). In these implementations, the server130further receives data related to settings for monitoring systems, devices, and energy management provided by owners and/or property managers of the rental properties managed by the server130. The server130may provide a web interface that enables each owner and/or property manager to define operational settings for their rental properties (e.g., energy management profiles, thermostat profiles, lighting management profiles, rules related to interior door access by renters for rented and unrented states of the property, etc.). In some examples, the owner or property manager may define and update settings for appliances, devices, and systems of the property101. In some examples, a tenant or a prospective tenant may make changes to settings and profiles for appliances, devices, and systems of the property101. In general, the system100can be configured to respond to an electronic lock action by an individual102based on monitoring a detectable region128aof the property101and determining an appropriate action to be performed in response based on one or more actions specified by a lock action repository132. The lock action repository132may include actions available in response to inputs to the electronic lock128. For example, in response to the electronic lock128detecting an input of an incorrect access code, the lock action repository132may transmit a control signal to the monitor control unit110to activate an alarm. The lock action repository132may receive data from the electronic lock128.
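A minimal sketch of how a lock action repository such as the repository132might map electronic-lock events to actions relayed to the monitor control unit is shown below; the event names, action names, and data fields are illustrative assumptions, not the disclosed data structure.

# Illustrative sketch of a lock action repository mapping lock events to actions.
LOCK_ACTIONS = {
    "incorrect_code":   ["activate_alarm", "notify_authorized_user"],
    "correct_code":     ["disarm_alarm", "log_entry"],
    "motion_in_region": ["capture_snapshot"],
}

def actions_for_event(event_type, lock_data):
    # Return the control signals to send for a reported lock event,
    # attaching the reported context (timestamp, detected motion, footage, etc.).
    actions = LOCK_ACTIONS.get(event_type, [])
    return [{"action": a, "context": lock_data} for a in actions]

signals = actions_for_event(
    "incorrect_code",
    {"timestamp": "2024-05-01T13:42:00", "detectable_region": "128a"},
)
print(signals)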
In some examples, the lock action repository132may communicate with various systems, such as the monitor control unit110, the server130, etc. In the example depicted inFIG.1, the electronic lock128initially detects an input to the electronic lock128by the individual102. In response to detecting user input, the electronic lock128monitors the detectable region128a, near the front door of the property101, for motion by the individual102. The electronic lock128then transmits a signal including lock information (e.g., timestamp of input to the electronic lock, detected motion within the detectable region128a, captured footage of the individual102, etc.) to the monitor control unit110. In response, the monitor control unit110gathers additional information for the property101from the sensors122, the appliances124, and the cameras126, and then transmits the gathered data to the rental property management server130. After receiving the gathered data, the rental property management server130accesses the lock action repository132to determine an appropriate action to be performed based on the information included within the gathered data. After determining an appropriate action to be performed, the rental property management server130transmits instructions to perform the action to be performed to the monitor control unit110, which then transmits corresponding signals to one or more of the sensors122, the appliances124, the cameras126, or the electronic lock128. In some instances, the action to be performed may include transmitting an electronic lock alert notification indicating the detected input to the electronic lock and other associated information to the user device140of the authorized user104. More particular descriptions related to the components of the system100are provided below. The server130may maintain a database that stores integrated reservation data and property usage data (e.g., lock/lock usage data, thermostat usage data, and pool heater usage data). In some examples, the server130or the monitor control unit110maintains the property usage data. The server130may analyze the integrated reservation data and property usage data to provide alerts/reports based on both reservation data and property usage data and also to verify that the property101is being properly managed. For example, the server130may monitor sensors on interior doors to which a renter does not have access to issue an alert to a property manager or owner of the property101. The server130also enables owners or property managers of the properties to edit operational settings at the properties. In some examples, the server130manages operational settings at the properties in an automated manner based on the reservation data and the property usage data stored in the server130. The server130may communicate with the monitor control unit110to control operations of devices and systems located on the property101. For example, when allowing users to view the property101unattended, systems such as the HVAC system may be locked. The server130may control the HVAC system and/or deny input from a thermostat located on the property101. For example, the owner or property manager of the property101may set controls of the HVAC system such that the settings cannot be changed during unattended viewing times. In some examples, the visitors or prospective tenants may not interact with the devices and systems of the property101. 
For example, the owner of the property may set operational settings for the devices and systems of the property101such that the devices and systems do not respond to physical inputs, e.g., a visitor or prospective tenant may press buttons or flip switches and no action will be taken. In some examples, prospective tenants may interact with physical controls in the property101. For example, prospective tenants may interact with light switches, open cabinets and doors, turn fans on and off, etc. In some examples, the owner or property manager may set controls of devices or systems such that they do not accept input when occupancy is detected. The system100may enable the owner or property manager to set controls of devices or systems by sending control signals through the monitor control unit110. For example, the system100may send control signals to a security system to enable it through the monitor control unit110. In some examples, the system100may enable the owner or property manager to set controls of devices or systems by directly controlling the devices or systems. For example, the system100may communicate directly with an electronic lock to grant a prospective tenant access to a property. In some examples, the systems and devices may be locked when it is detected that a prospective tenant is in the home, but settings may be adjusted remotely. In some examples, settings may be adjusted remotely at any time, and may not be adjusted manually. For example, if the owner or property management company is managing the property101remotely, they may not allow control of systems or devices located on the property101by anyone. The system100also includes electronic locks located at each of the properties. As shown, the property101includes an electronic lock128located at an exterior door of the property101and a monitor control unit110located within the property101. The electronic lock128may include a user input device that receives user input of a passcode and a mechanical lock that unlocks and locks a physical door of an entrance to the property101. The electronic lock128also may include a communication module that performs two-way communication over a wired or short-range wireless communication protocol and a processor that controls the lock to allow access to the property based on entry of a proper passcode through the user input device. In some examples, the electronic locks described throughout this disclosure may have firmware and processing capabilities that allow the server130to add, delete, and change codes stored at the electronic locks. The electronic lock128engages in two-way communications with the monitor control unit110over the short-range wireless communication protocol. In this example, the monitor control unit110includes communication components that allow the monitor control unit110to perform two-way communication with the lock128over the short-range wireless communication protocol and to perform two-way communication with the server130over a long-range communication protocol (e.g., a long-range wired or wireless communication protocol). The monitoring system110may serve as an intermediary between the server130and the lock128to enable the server130to remotely program and manage the lock128and also to receive reports when events (e.g., entry of a correct passcode, entry of an incorrect passcode, entry of a check-in or checkout code, etc.) occur at the lock128. 
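The intermediary role just described can be sketched as follows; the class, link objects, and message formats are illustrative assumptions rather than the disclosed implementation, and the passcode-storage details of the two arrangements are described next.

# Hedged sketch: the monitor control unit relays passcode-management commands
# from the server to the lock over the short-range link, and relays lock event
# reports back to the server over the long-range link.
class MonitorControlUnit:
    def __init__(self, short_range_link_to_lock, long_range_link_to_server):
        self.lock = short_range_link_to_lock
        self.server = long_range_link_to_server

    def relay_server_command(self, command):
        # e.g., add, delete, or change a passcode stored for the lock
        self.lock.send(command)

    def relay_lock_event(self, event):
        # e.g., entry of a correct or incorrect passcode, check-in/check-out code
        self.server.send(event)

class Link:
    def __init__(self, name):
        self.name = name
    def send(self, message):
        print(f"{self.name} <- {message}")

mcu = MonitorControlUnit(Link("lock128"), Link("server130"))
mcu.relay_server_command({"op": "add_passcode", "code": "4821"})
mcu.relay_lock_event({"event": "correct_passcode", "code": "4821"})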
In some examples, the monitor control unit110performs relatively few processing operations and serves to primarily exchange communications between the lock128and the server130. In these examples, the lock128includes an electronic storage device that stores passcodes that are valid to open the lock128and the processor of the lock128performs the decision making processing to determine whether or not a proper passcode has been entered. When the server130remotely manages passcodes (e.g., adds passcodes, deletes passcodes, changes passcodes, etc.) for the lock128, the monitor control unit110relays commands from the server130to the lock128and the processor of the lock128interprets the commands and performs adjustments to the electronic storage device needed to modify the valid passcodes as instructed. For reporting lock events, the lock128sends reports of events to the monitor control unit110and the monitor control unit110relays the reports to the server130. The server130stores the reports and may perform reporting operations for the entity operating the property101such that the entity (e.g., owner) may be alerted to events at the lock128and may view a history of events at the lock128. The server130also may perform energy management operations for the property101based on reports from the lock128. In other examples, the lock128performs relatively few processing operations and the monitor control unit110performs control processing for the lock128. In these examples, the monitor control unit110includes an electronic storage device that stores passcodes that are valid to open the lock128and also includes a processor that performs the decision making processing to determine whether or not a proper passcode has been entered. For instance, when a user inputs a passcode at the lock128, the lock128merely forwards the entered passcode to the monitor control unit110and the monitor control unit110determines whether the passcode is valid. Based on the determination, the monitor control unit110sends a command back to the lock128to either deny the entered passcode or allow access to the property101. When the server130remotely manages passcodes (e.g., adds passcodes, deletes passcodes, changes passcodes, etc.) for the lock128, the monitor control unit110interprets the commands and performs adjustments to the electronic storage device needed to modify the valid passcodes as instructed. The lock128does not need to receive any communication related to the management of passcodes since the monitor control unit110stores the valid passcodes. For reporting lock events, the monitor control unit110sends reports of events to the server130. The server130stores the reports and may perform reporting operations for the entity operating the property101such that the entity (e.g., owner) may be alerted to events at the lock128and may view a history of events at the lock128. In some examples, the server130also may perform energy management operations or other operations of monitoring or energy consuming devices for the property101based on reports from the monitor control unit110. The monitor control unit110includes a controller and a network module. The controller is configured to control a monitoring system (e.g., a home alarm or security system) that includes the monitor control unit110. In some examples, the controller may include a processor or other control circuitry configured to execute instructions of a program that controls operation of an alarm system. 
In these examples, the controller may be configured to receive input from sensors, detectors, or other devices included in the alarm system and control operations of devices included in the alarm system or other household devices (e.g., a thermostat, an appliance, lights, etc.). For example, the controller may be configured to control operation of the network module included in the monitor control unit110. The network module is a communication device configured to exchange communications over the network105. The network module may be a wireless communication module configured to exchange wireless communications over the network105. For example, the network module may be a wireless communication device configured to exchange communications over a wireless data channel and a wireless voice channel. In this example, the network module may transmit alarm data over a wireless data channel and establish a two-way voice communication session over a wireless voice channel. The wireless communication device may include one or more of a LTE module, a GSM module, a radio modem, cellular transmission module, or any type of module configured to exchange communications in one of the following formats: LTE, GSM or GPRS, CDMA, EDGE or EGPRS, EV-DO or EVDO, UMTS, or IP. The network module may also be a wired communication module configured to exchange communications over the network105using a wired connection. For instance, the network module may be a modem, a network interface card, or another type of network interface device. The network module may be an Ethernet network card configured to enable the monitor control unit110to communicate over a local area network and/or the Internet. The network module also may be a voice-band modem configured to enable the alarm panel to communicate over the telephone lines of Plain Old Telephone Systems (POTS). The network105is configured to enable exchange of electronic communications between devices connected to the network105. For example, the network105may be configured to enable exchange of electronic communications between the monitor control unit110, the sensors122, the appliances124, the cameras126, the electronic lock128and the rental property management server130. The network105may include, for example, one or more of the Internet, Wide Area Networks (WANs), Local Area Networks (LANs), analog or digital wired and wireless telephone networks (e.g., a public switched telephone network (PSTN), Integrated Services Digital Network (ISDN), a cellular network, and Digital Subscriber Line (DSL)), radio, television, cable, satellite, or any other delivery or tunneling mechanism for carrying data. The network105may include multiple networks or subnetworks, each of which may include, for example, a wired or wireless data pathway. The network105may also include a circuit-switched network, a packet-switched data network, or any other network able to carry electronic communications (e.g., data or voice communications). For example, the network105may include networks based on the Internet protocol (IP), asynchronous transfer mode (ATM), the PSTN, packet-switched networks based on IP, X.25, or Frame Relay, or other comparable technologies and may support voice using, for example, VoIP, or other comparable protocols used for voice communications. The network105may include one or more networks that include wireless data channels and wireless voice channels. 
The network105may be a wireless network, a broadband network, or a combination of networks including a wireless network and a broadband network. In some examples, the monitor control unit110may include data capture and recording devices. In these examples, the monitor control unit110may include one or more cameras126, one or more motion sensors, one or more microphones, one or more biometric data collection tools, one or more temperature sensors, one or more humidity sensors, one or more air flow sensors, and/or any other types of sensors that may be useful in capturing monitoring data related to the property101and users in the property. The property101may include various monitoring devices. For example, the property101may include cameras, sensors, and other devices that provide monitoring data associated with devices and areas of the property101. Cameras located on the property101may provide video, still images, or other monitoring data, and may provide data via a live feed, transmit data to be stored in a remote location, store data locally for review at a later time, etc. Sensors located on the property101may include motion sensors, heat sensors, pressure sensors, resistive sensors, etc. Sensors may communicate with the monitor control unit110and transmit monitoring data for processing to the monitor control unit110. In some examples, sensors located on the property101may store collected data locally or transmit monitoring data to be stored in a remote location. In some examples, the monitor control unit110includes computer-readable storage media that store passcodes that are valid to open the lock128. The lock128may transmit input received at a keypad or other user input device to the monitor control unit110. The monitor control unit110may determine whether a proper passcode has been entered or the processor of the lock128performs the decision making processing to determine whether or not a proper passcode has been entered. When the server130remotely manages passcodes (e.g., adds passcodes, deletes passcodes, changes passcodes, etc.) for the lock128, the monitor control unit110relays commands from the server130to the lock128and the processor of the lock128interprets the commands and performs adjustments to the electronic storage device needed to modify the valid passcodes as instructed. For reporting lock events, the lock128sends reports of events to the monitor control unit110and the monitor control unit110relays the reports to the server130. The server130stores the reports and may perform reporting operations for the entity operating the property101such that the entity (e.g., owner) may be alerted to events at the lock128and may view a history of events at the lock128. The server130also may perform energy management operations for the property101based on reports from the lock128. In some examples, the monitor control unit110performs relatively few processing operations and serves to primarily exchange communications between the lock128and the server130. In these examples, the lock128includes an electronic storage device that stores passcodes that are valid to open the lock128and the processor of the lock128performs the decision making processing to determine whether or not a proper passcode has been entered. When the server130remotely manages passcodes (e.g., adds passcodes, deletes passcodes, changes passcodes, etc.) 
for the lock128, the monitor control unit110relays commands from the server130to the lock128and the processor of the lock128interprets the commands and performs adjustments to the electronic storage device needed to modify the valid passcodes as instructed. For reporting lock events, the lock128sends reports of events to the monitor control unit110and the monitor control unit110relays the reports to the server130. The server130stores the reports and may perform reporting operations for the entity operating the property101such that the entity (e.g., owner) may be alerted to events at the lock128and may view a history of events at the lock128. The server130also may perform energy management operations for the property101based on reports from the lock128. The monitor control unit110also may include a communication module that enables the monitor control unit110to communicate with other devices of the system100. The communication module may be a wireless communication module that allows the monitor control unit110to communicate wirelessly. For instance, the communication module may be a Wi-Fi module that enables the monitor control unit110to communicate over a local wireless network at the property101. The communication module further may be a 900 MHz wireless communication module that enables the monitor control unit110to communicate directly with a monitor control unit. Other types of short-range wireless communication protocols, such as Bluetooth, Bluetooth LE, Z-Wave, Zigbee, etc., may be used to allow the monitor control unit110to communicate with other devices in the property101. The monitor control unit110further may include processor and storage capabilities. The monitor control unit110may include any suitable processing devices that enable the monitor control unit110to operate applications and perform the actions described throughout this disclosure. In addition, the monitor control unit110may include solid state electronic storage that enables the monitor control unit110to store applications, configuration data, collected sensor data, and/or any other type of information available to the monitor control unit110. The monitor control unit110may exchange communications with the sensors122, the appliances124, the cameras126, the electronic lock128, and the rental property management server130using multiple communication links. The multiple communication links may be a wired or wireless data pathway configured to transmit signals from the sensors122, the appliances124, the cameras126, the electronic lock128, and the rental property management server130to the controller. The sensors122, the appliances124, the cameras126, the electronic lock128, and the rental property management server130may continuously transmit sensed values to the controller, periodically transmit sensed values to the monitor control unit110, or transmit sensed values to the monitor control unit110in response to a change in a sensed value. The multiple communication links may include a local network. The sensors122, the appliances124, the cameras126, the electronic lock128, and the rental property management server130and the monitor control unit110may exchange data and commands over the local network. The local network may include 802.11 "Wi-Fi" wireless Ethernet (e.g., using low-power Wi-Fi chipsets), Z-Wave, Zigbee, Bluetooth, "Homeplug" or other "Powerline" networks that operate over AC wiring, and a Category 5 (CAT5) or Category 6 (CAT6) wired Ethernet network.
The local network may be a mesh network constructed based on the devices connected to the mesh network. In some implementations, the monitor control unit110may additionally be used to perform routine surveillance operations on a property. For instance, the monitor control unit110may be assigned to one or more particular properties within a geographic location and may routinely collect surveillance footage during specified time periods (e.g., after dark), which may then be transmitted to the rental property management server130for transmission back to each particular property owner. In such implementations, the property owner may receive the surveillance footage over the network105as a part of a service provided by a security provider that operates the rental property management server130. For example, transmissions of the surveillance footage collected by the monitor control unit110may be part of a premium security service package provided by a security provider in addition to the routine drone emergency response service. In some implementations, the monitor control unit110may monitor the operation of the electronic devices of the system100such as the sensors122, the appliances124, the cameras126, the electronic lock128, and the rental property management server130. For instance, the monitor control unit110may enable or disable the devices of the system100based on a set of rules associated with energy consumption, user-specified settings, and/or other information associated with the conditions near or within the property101where the system100is located. In some examples, the monitor control unit110may be used as a replacement for a traditional security panel (or monitor control unit) that is used to monitor and control the operations of the system100. In other examples, the monitor control unit110may coordinate monitoring operations with a separate security panel of the system100. In such examples, the monitor control unit110may monitor particular activities of the devices of the system100that are not monitored by the security panel, or monitor the operation of particular devices that are not monitored by the security panel. In some examples, a monitoring system may not be used. In these examples, the locks may communicate directly with the server130over a long-range communication protocol. In controlling operations of the property101, the server130may consider reservation data for the rental property and/or events detected by an electronic lock, monitoring system devices, sensors, etc. at the rental property. For instance, the server130may define the prospective tenant arrival time used in determining when to begin arrival preparation as the earliest possible check-in time allowed with the reservation or the time when the prospective tenant reaches the rental property and enters a check-in code into the electronic lock at the property. In some implementations, the server130tracks actual check-in times detected by the lock at the property over time (e.g., many rental periods) and determines an estimated arrival time based on the tracked check-in times. In these implementations, the server130may compute an average or median actual check-in time at the rental property based on the tracked check-in times and use the computed average or median actual check-in time as the estimated arrival time. In addition, the server130may use a combination of the reservation data and the electronic lock data in managing energy usage at the rental property.
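A small sketch of the median-based arrival estimate described above follows; the helper name and the sample history are assumptions for illustration only.

# Estimate an arrival time from check-in times tracked over many rental periods.
from datetime import time
from statistics import median

def estimated_arrival(checkin_times):
    # checkin_times: datetime.time values detected by the lock at check-in
    minutes = sorted(t.hour * 60 + t.minute for t in checkin_times)
    m = int(median(minutes))
    return time(hour=m // 60, minute=m % 60)

history = [time(15, 10), time(16, 45), time(15, 55), time(17, 20)]
print(estimated_arrival(history))   # used as the arrival-preparation trigger

This estimated arrival time can then be combined with the reservation data and electronic lock data mentioned above.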
For example, the server130may begin arrival preparation based on the earliest possible check-in time allowed with the reservation and then monitor for an event from the electronic lock that signifies that the prospective tenant has arrived at the property. The server130may use monitoring data, such as camera input (still images or video), motion sensors, etc., to determine whether the prospective tenant has arrived at the property. In this example, the server130may maintain operational settings for an occupied rental property based on the server130detecting the event from the electronic lock that signifies that the prospective tenant has arrived at the property within a threshold period of time after the earliest possible check-in time allowed with the reservation. However, based on the server130determining that the event from the electronic lock that signifies that the prospective tenant has arrived at the property has not been detected within the threshold period of time, the server130may initiate conservation or un-occupancy operations (e.g., at least temporarily stopping heating or cooling of the rental property, turning off lights, etc.) and continue monitoring for an event from the electronic lock or monitoring system devices that signifies that the prospective tenant has arrived at the property. The server130also may take action to attempt to determine when the prospective tenant expects to arrive at the rental property (e.g., sending the prospective tenant an electronic message that asks the prospective tenant to provide an estimated arrival time). In some implementations, the server130also sets a "departure" profile. In these implementations, the server130may use the prospective tenant reservation data (e.g., check-out date/time) and may send a signal to the monitoring system that includes settings for the departure of the prospective tenant. The monitoring system at the property then may send the departure settings to various devices and systems located in the rental property101. The departure operations and timing of when the departure temperature is used may be set by the rental management company and/or the owner of the rental property in a manner similar to that described above for the arrival temperature. The server130may use reservation data and/or electronic lock events to control departure timing similar to how the server130uses reservation data and/or electronic lock events to control arrival timing. For instance, the server130may monitor for a departure code that the prospective tenant is asked to enter at the lock when checking out. In some examples, the departure code may operate the electronic lock128. In some examples, the departure code does not operate the electronic lock128, but is used to report the check-out to the server130, which in turn is able to perform energy management operations defined for when the rental property is not occupied by a prospective tenant. To the extent the server130does not detect the departure code by the latest check-out time allowed with the reservation, the server130may, at that time, initiate operations defined for when the rental property is not occupied by a prospective tenant or may provide a reminder to the prospective tenant that their reserved time has elapsed, e.g., flash the lights in the property, provide an announcement over speakers in the property that the reserved time has elapsed, etc.
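The departure handling described above (the arrival handling is symmetric) might look roughly like the following sketch; the function name, the specific actions, and the timestamps are illustrative assumptions.

# Hedged sketch: watch for the departure code and, if it is not seen by the
# latest allowed check-out time, remind the visitor or apply the unoccupied profile.
from datetime import datetime

def departure_action(departure_code_seen, latest_checkout, now, reminded_already=False):
    if departure_code_seen:
        return "apply_unoccupied_profile"    # check-out reported to the server
    if now < latest_checkout:
        return "keep_occupied_settings"      # still within the reserved time
    if not reminded_already:
        return "remind_visitor"              # e.g., flash lights, speaker announcement
    return "apply_unoccupied_profile"        # reserved time elapsed without a check-out

now = datetime(2024, 5, 2, 18, 5)
print(departure_action(False, datetime(2024, 5, 2, 18, 0), now))   # -> "remind_visitor"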
The server130may initiate operations defined for when the property is not occupied if no activity or motion is detected for a predetermined amount of time, even if no departure is detected. For example, a visitor may depart without fully closing the front door, through the back door, without entering a departure code, etc. In such examples, the server130may use monitoring data to determine that all visitors have departed and initiate operations for an unoccupied property. In some examples, the server130may determine that the property101is unoccupied through data such as a visitor's geographical location, which devices are connected to a network of the property101, etc. For example, if no devices are connected to the network local to the property101and all visitors' geographical locations are determined to be locations other than the property101, the server130may determine that the property101is unoccupied, and enter an unoccupied operation mode. In some examples, the server130stores reservation information (e.g., periods in which properties are rented, arrival date-time, departure date-time, etc.) for each property being managed by the server130. At a particular number of hours prior to the prospective tenant arrival, the server130may send a signal via a network (e.g., the Internet) to the monitor control unit110in the rental property101. The signal may include a prospective tenant arrival temperature setting, a lighting system profile, a personalized greeting, etc., and the monitor control unit110may then send the prospective tenant arrival settings to the appropriate devices (e.g., a thermostat, lights, a speaker system, etc.) located in the rental property101. The particular number of hours prior to the prospective tenant arrival at which arrival preparation begins and the prospective tenant arrival settings may be set by the owner of the property or the rental management company that operates the server130. The rental management company may define limits in which the owner of the property can choose the desired number of hours and the prospective tenant arrival settings to prevent owners from choosing unreasonable settings that are likely to result in prospective tenant dissatisfaction. The prospective tenant also may set the particular number of hours prior to the prospective tenant arrival at which arrival preparation begins and adjust the prospective tenant arrival settings. The owner of the property and/or the rental management company may define limits in which the prospective tenant can choose the desired number of hours and the prospective tenant arrival settings to prevent prospective tenants from choosing unreasonable settings that are likely to result in high energy consumption. A user of a mobile device140may be interested in the property101. The user may make a reservation to see the property101through a mobile application on the mobile device140(e.g., an application for interacting with the server130). The user may make a reservation to see the property101through a web interface (e.g., a property management company's website). In some examples, a mobile application on the mobile device140may allow the user to browse properties on one or more property management company websites. For example, the mobile application may present the user with a list of properties from different property management companies, and redirect the user to the property management company's web site once the user selects a property to view. 
The mobile application may allow a user to filter results by location, price, availability, showing times, management company, etc. In this example, the user may decide that they are interested in viewing the property101. The user may select the property101using the mobile application and indicate their interest using a user interface element provided in the mobile application (e.g., a “Reserve a Showing” button). The mobile application may display a list of times during which the property101is available for viewing. For example, the property101may still have a tenant who wishes to be home when prospective tenants view the property101; the application may offer only times during which they are home to prospective tenants. In some examples, the owner or property manager may override tenant-specified viewing times. In some examples, the owner or property manager may set the viewing times without input from the current tenant. The available viewing times can include the present—for example, a prospective tenant may be in the neighborhood and may wish to view the property at that very moment. The user can select a time or range of times to view the property. For example, viewings may be a fixed amount of time (e.g., thirty minutes, one hour, two hours, etc.). In some examples, the user can select a range of times, (e.g., from 2 p.m. to 3:45 p.m.). The user may be able to select a range of times in various increments (e.g., fifteen minutes, thirty minutes, forty-five minutes, etc.). In some examples, the user is limited to increments or time ranges set by the owner or property manager of the property. For example, the owner may impose a time limit of one and a half hours for each prospective tenant to view the property. In some examples, the property manager may override the owner's limits if the limits would negatively impact a prospective tenant's experience. In other examples, the owner may override the property manager's limits if the owner is not comfortable with them. Once the user has selected a property and a reservation time, the user may be asked to register. For security and authentication purposes, the user may be asked to answer a variety of questions and provide various information. The user may be asked to provide their full name, email address or phone number, and notification preferences. For example, the property management company may need to contact the user to receive feedback, inform them that they have left something behind, or to ask them to return an item they removed from the property. The user may be asked to answer standard background questions, such as whether they have ever been convicted of a felony or filed for bankruptcy. In some examples, the user may be asked to provide credit information or authorize a credit check. In some examples, the user may be asked to provide a valid credit card number and to authorize a small charge to their card. This fee may act as a deterrent to users who are not serious about viewing properties for rental, or who may have malicious intentions (e.g., stealing from the property, damaging the property, etc.). The fee may be used as a deposit that is returned upon inspection of the property after the user leaves. In some examples, the fee is small and is kept as a showing charge. The user may be asked to provide a current photo for verification purposes. 
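The slot selection described earlier in this passage, with owner-defined increments and a per-visit time limit, could be sketched as follows; the parameter values and function name are assumptions for illustration.

# Offer showing slots within the allowed hours, in fixed increments, capped by
# an owner-imposed maximum visit length.
from datetime import datetime, timedelta

def available_slots(open_from, open_until, increment_min=30, max_visit_min=90):
    slots, start = [], open_from
    step, limit = timedelta(minutes=increment_min), timedelta(minutes=max_visit_min)
    while start < open_until:
        end = min(start + limit, open_until)
        slots.append((start, end))
        start += step
    return slots

for s, e in available_slots(datetime(2024, 5, 2, 8, 0), datetime(2024, 5, 2, 10, 0)):
    print(s.strftime("%H:%M"), "-", e.strftime("%H:%M"))

As noted above, the user may also be asked to provide a current photo for verification.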
For example, the owner or property management company may manually verify that the person is who they claim to be by checking the photo against public records, or by checking monitoring data, such as video or still images, from the property101upon the user's arrival. The user may be able to take a photo of themselves when they request to tour a property. For example, the user may be able to take a photo using their mobile device140. The user may then transmit the photo to the server130. The owner or property management company may then verify that the current photo matches public records or a photo sent to the server130from the monitor control unit110. In some examples, the system100may automatically compare photos using facial recognition technology. Photo verification may be provided in various ways, such as over the network105, through postal mail, etc. If the user does not match the photo they provided, they may be denied access to the property101. Once a user has been authenticated and successfully booked a showing time for the property, the server130may generate an access code or personal identification number (PIN) unique to the user. The access code is valid throughout the duration of the user's reservation, and is input to the electronic lock128to grant the user access to the property101. In some examples, the access code is not valid for any other electronic locks or devices located on the property101, or to any electronic locks or devices outside of the property101. In other examples, the access code may be unique to the user, and may be used by the user to access multiple properties. For example, the user may schedule several showings for the same day. The server130may generate one access code for the user and communicate the code to each of the properties the user is scheduled to view. In some examples, the electronic locks at each property may maintain the times during which the codes are active. In other examples, the server130may manage the times during which the access codes are active for each property. A user may then input their unique access code to the electronic lock128to gain access to the property101. The electronic lock128may transmit data indicating that it has received an input. The server130may receive the access code data to record when a particular user has arrived at the property101. In some examples, the server130may perform an action based on the access code data. For example, the server130may determine that a user, John, has entered his access code, and has entered the property101. The server130may operate a speaker system located on the property101to greet John (e.g., saying “Welcome to your new home, John! Feel free to look around!”). In some examples, multiple prospective tenants may be inside the property101, and the server130may provide a personalized greeting for each visitor. In some examples, the server130may notify the owner or property manager of the property101that a particular visitor has arrived. The owner or property manager may then greet the visitor personally (e.g., through a speaker system, through a phone call, through an interactive device within the property101). The server130may perform operations based on the access code data and/or the monitoring data. In some examples, the operations may include a personalized guided tour. For example, an interactive device such as a tablet or computer may be placed in the property101. 
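The per-user access code that is valid only during the reservation window could be handled along the lines of the following sketch; the use of the secrets module, the registry structure, and the six-digit format are illustrative assumptions, as the disclosure does not specify how codes are generated.

# Generate and check a per-user access code tied to a reservation window.
import secrets
from datetime import datetime

def issue_access_code(user_id, start, end, registry):
    code = "".join(secrets.choice("0123456789") for _ in range(6))
    registry[code] = {"user_id": user_id, "start": start, "end": end}
    return code

def code_is_valid(code, at_time, registry):
    entry = registry.get(code)
    return bool(entry) and entry["start"] <= at_time <= entry["end"]

registry = {}
code = issue_access_code("john", datetime(2024, 5, 2, 13, 30),
                         datetime(2024, 5, 2, 14, 30), registry)
print(code_is_valid(code, datetime(2024, 5, 2, 13, 45), registry))   # -> True

Entries of these codes at the lock are then reported to the server130, which can trigger the personalized greeting and guided-tour behavior introduced above.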
Upon detecting that a visitor has entered the property101, the server130may initiate a greeting through the interactive device. In some examples, the server130may operate the interactive device to guide a visitor through key areas of the property101. The interactive device may grant the visitor access to specific areas of the property101. In some examples, the visitor may not access certain areas of the property101without the interactive device. For example, the visitor may be denied access to a second floor or a basement or a door leading to a different area of the property101unless they are holding the interactive device. The interactive device may communicate with doors and sensors through short-range wireless communication (RFID, NFC, etc.). The server130may operate the interactive device to guide a user through a tour by providing voice guidance, presenting augmented reality graphics, operating certain systems, etc. For example, the server130may operate the interactive device to turn on the lights of specific areas while they are described to the visitor by a voice-over. In some examples, the owner or property manager may be providing a live tour by controlling systems through the server130while talking to the visitor through the interactive device. In some examples, the interactive device may collect and transmit data to the server130. For example, the server130may be able to gather visitor interaction data or visitor location data based on the data received from the interactive device. The server130may use the visitor interaction data or visitor location data to determine how long a visitor has been at the property101, predict the visitor's interest in the property101, or detect when a user is doing something unsavory (e.g., eating cookies from the cookie jar, stealing napkin holders, etc.). The server130can collect user feedback through various interfaces, such as the interactive device or the prospective tenant's mobile device. For example, after a tour of the property101, a prospective tenant, or user, can use the interactive device on-site to indicate that he is not interested in the property101because although it has the right number of bedrooms (e.g., four bedrooms), has a basement, and is in the correct neighborhood (e.g., Happyville), it doesn't have enough bathrooms (e.g., only two bathrooms). The server130can solicit feedback from the user at any point during the tour, for example, by providing survey questions. The server130can present a questionnaire to the user with questions relevant to the current property101, and the user's answers are stored and analyzed to determine whether a property that would be a better fit for the user can be found. The server130can automatically determine when to solicit feedback from the user. For example, the server130can determine when a user is not interested in the property101, or shows signs of interest in certain aspects of the property101and not others. For example, the server130can use data from the cameras126to monitor whether the user is about to depart and present a survey to the user on his phone. In some examples, the server130can use data from the cameras to determine that the user lingered in the master bedroom and the backyard, and can tailor survey questions to detected interactions. The server130can automatically detect interactions of the user with the property101through, for example, the use of machine learning models.
The machine learning models may be models which accept sensor data collected by cameras and/or other sensors as inputs. The machine learning models may use any of a variety of models such as decision trees, linear regression models, logistic regression models, neural networks, classifiers, support vector machines, inductive logic programming, ensembles of models (e.g., using techniques such as bagging, boosting, random forests, etc.), genetic algorithms, Bayesian networks, etc., and can be trained using a variety of approaches, such as deep learning, perceptrons, association rules, inductive logic, clustering, maximum entropy classification, learning classification, etc. In some examples, the machine learning models may use supervised learning. In some examples, the machine learning models use unsupervised learning. The server130can provide, for example, video footage from the cameras126and sensor data from various sensors122as input to the machine learning models to determine whether the user has interacted with the property101in a way that indicates that he is either interested or uninterested in a particular aspect of the property101. These determinations can then be used by the server130to generate personalized questionnaires for the user to provide a better property-hunting experience. For example, if a user walks past a room in the property101without stopping inside to look at the room, the server130can determine that the user does not like at least one aspect of the room, and can provide questions to the user about the room. If, however, the server130determines, from the user's answers to the questions, that the user likes the room, the server130can learn from the user's behaviors and adjust the machine learning models to better predict whether the particular user likes or dislikes a property. The server130can determine and store user preferences in a profile created for the user so that future listings can be identified as possibly being of interest to the user, and can alert the user that there is a property matching his criteria. In some examples, the server130can store, in a database, an anonymized set of data based on prospective tenants' behavior regarding particular features of a property that indicate whether the prospective tenant likes or dislikes the particular feature. The server130can use this stored data to better predict, using the machine learning models, whether a future prospective tenant touring a particular property101likes or dislikes certain features of the property101. The database allows the server130to select more relevant questions and provide better suggestions to prospective tenants, and reduces frustration of a prospective tenant that may occur if a realtor does not pick up on the prospective tenant's body language or if the prospective tenant has preferences for particular aspects of a property that he is unaware of. The server130can provide a questionnaire focused on aspects of the property101that the user liked and aspects that the user would like to see changed in the next property the user is shown. If the user agrees to see a different property at the suggestion of the server130, the server130can tailor subsequent questionnaires to determine whether the changed parameters of the different property meet the user's expectations. 
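As a hedged illustration of one of the model families named above, the sketch below trains a logistic regression classifier on invented behavioral features (dwell time, physical interactions, return visits) to label a detected behavior as interested or uninterested; the feature set and the tiny training set are purely illustrative, not the disclosed training pipeline.

# Toy interest classifier using scikit-learn logistic regression.
from sklearn.linear_model import LogisticRegression

# [dwell_seconds, physical_interactions, return_visits]
X = [[200, 3, 2], [15, 0, 0], [120, 1, 1], [10, 0, 0], [300, 4, 2], [25, 0, 0]]
y = [1, 0, 1, 0, 1, 0]          # 1 = interested, 0 = not interested

model = LogisticRegression().fit(X, y)
print(model.predict([[180, 2, 1]]))   # prediction for a newly observed visit

Predictions of this kind can feed the tailored questionnaires described above.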
For example, the server130can ask the user, Tom, whether he would like to see a next property with a larger kitchen or with a lower monthly fee (e.g., rent, home owners association (HOA) fees, etc.), or what additional amenities the user would like to see. In some examples, the server130can solicit or receive feedback telephonically, and the user can provide the feedback directly to a property manager of the property101. For example, the property manager can be alerted that the user has finished his tour of the property101, and that the user has indicated in his exit survey that he does not wish to rent the property101. The property manager can then call the user to ask about whether the user would be interested in providing feedback that is used to find different properties for the user to tour. Feedback collected by the property manager can then be used as input to the server130, and the server130can automatically determine whether there are other properties managed by the server130that satisfy parameters created based on the user's feedback. The server130receives input of parameters associated with the current user and automatically performs a search for properties that match the input parameters. The server130can access, for example, a database of properties and input the parameters to find properties for the user. Continuing the example in which Tom is searching for a rental property, the server130can search for homes with four bedrooms, a basement, in Happyville, and with more than two bathrooms. Criteria specified by the user can be selected, for example, from a database of available criteria, or can be entered by the user. Criteria can include, for example, various characteristics of a property, including property square footage, building square footage, outdoor space (e.g., lawns, pools, gardens, backyards, garages, etc.), room square footage (e.g., bathrooms, kitchens, bedrooms, etc.), building characteristics (e.g., number of windows, size of doors, amount of storage space available, etc.) and other appropriate characteristics. In examples where the server130manages properties for multiple property managers, owners, real-estate agents, etc., the server130can specifically perform searches based on the point-of-contact for the current property101. For example, if the user is being helped by a particular real-estate agent, the server130can perform a search in the real-estate agent's inventory, instead of the entire inventory of the server130, in order to preserve the existing tenant-agent relationship. By showing the prospective tenant properties managed by the same owner, real-estate agent, or property manager, the server130allows a relationship to develop, and improves the prospective tenant's experience by keeping him with a manager who knows what the prospective tenant is looking for. In other examples, the server130may allow the user to select whether he would like to only view properties managed by the same owner, property manager, real-estate agent, etc. If there are homes that match the search criteria, the server130can offer the homes as suggestions for new properties to tour to the prospective tenant. A match does not have to be an exact match in which all search criteria are met. A match can include properties that meet a threshold number of criteria or are within a threshold of similarity to the criteria. 
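A sketch of this thresholded matching follows; the field names, the minimum number of satisfied criteria, and the distance tolerance are illustrative assumptions, and the concrete example given next (a house a few miles outside Happyville) is the kind of listing such a check would still accept.

# Suggest a property if it satisfies at least a threshold number of criteria,
# allowing numeric criteria such as distance a tolerance.
def matches(prop, criteria, min_hits=3, distance_tolerance_miles=5):
    hits = 0
    hits += prop["bedrooms"] >= criteria["bedrooms"]
    hits += prop["bathrooms"] >= criteria["bathrooms"]
    hits += prop["has_basement"] == criteria["has_basement"]
    near = (prop["city"] == criteria["city"]
            or prop["miles_from_city"] <= distance_tolerance_miles)
    hits += near
    return hits >= min_hits

house = {"bedrooms": 4, "bathrooms": 4, "has_basement": True,
         "city": "Oakton", "miles_from_city": 5}
wanted = {"bedrooms": 4, "bathrooms": 3, "has_basement": True, "city": "Happyville"}
print(matches(house, wanted))   # -> True, despite being outside Happyville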
For example, a house with four bedrooms, four bathrooms, and a basement, but is five miles outside of Happyville, can still be provided as a suggestion if a threshold distance is satisfied. In some examples, the prospective tenant can set some or all of the thresholds for the criteria. In other examples, the property manager can set the thresholds, or a default threshold can be used. The server130can provide suggestions to the prospective tenant through various interfaces, including the interactive device or the prospective tenant's mobile device. For example, the server130can text the user a link to a website with selected new properties for the prospective tenant to tour. The server130can, in other examples, show property suggestions through the interactive device, send the prospective tenant an email with each listing, or provide the prospective tenant with a customizable list of data to a specified location, such as a different phone number that the prospective tenant uses. By collecting feedback from the prospective tenant while he is still on the property101, the server130provides real-time service to the tenant and improves property management technology by automating the experience of helping a prospective tenant find a property he is interested in renting. If a prospective tenant decides he wants to see one of the suggested properties, the server130further automates the process by allowing the prospective tenant to schedule a viewing at that time. For example, if a prospective tenant selects one or more of the suggested properties managed by the server130, the prospective tenant can request to schedule a viewing, and the server130can automatically vet the prospective tenant using his previously provided information and generate an access code specifically for the prospective tenant. The server130can schedule a viewing appointment and generate an access code for each property the prospective tenant is interested in seeing. By automatically scheduling and generating access codes for a prospective tenant who is still searching, the server130improves the searching experience for a user by reducing delays associated with, for example, working schedules of property managers. A tenant can schedule a viewing at the time he has indicated he is interested in seeing the property, and an access code can be generated at the same time, such that he does not have to experience any delays between when he has decided to view another property and when he can view it. The server130provides an improved experience by reducing, or even eliminating, delays in a prospective tenant's property search, and thus the tenant is more likely to continue seeing properties for which he is scheduled. The server130may initiate operations based on the access code data and/or monitoring data such as restricting access to certain areas of the property101, arming alarm systems, etc. For example, electronic locks may be installed on doors such as a master bedroom closet. The electronic locks may be enabled when an access code is detected and a visitor enters the property101to prevent the visitor from opening the closet and rifling through the owner or current tenant's personal items. When a current tenant is still occupying the property101, certain operations of the system100may be altered. A tenant could request to change available showing hours to coincide with when they are home, or when they are away. For example, a tenant could request that visitors only be let in between 8 a.m. and 6 p.m. 
on weekdays, when they are not home. In some examples, a tenant could alter settings and operations of the monitor control unit110. For example, if the tenant has opted to receive a photo from a camera each time someone arrives at the front door because he wants to make sure his daughter has come home from school, the tenant may disable the notifications during showing times. In some examples, the server130may determine whether the access code received is for a prospective tenant and refrain from sending a photo to the tenant, while sending other photos taken from people using a physical key. In some examples, if the current tenant has requested certain videos to be saved, or certain actions to trigger notifications or alarms, the settings may be overridden or altered when prospective tenants are detected to have entered the property101. For example, if the current tenant only uses the back door of the property101and has requested a notification every time the front door is used, the notification settings may be automatically adjusted. In some examples, the settings are adjusted by the owner or property manager of the property101. For example, the owner or property manager might enforce more or less restrictions on where a visitor can go or what a visitor can do. The property manager might enforce that personal closet doors are locked to deter theft, even if the current tenant has not specified this setting. In some examples, sensors or other monitoring devices may be installed throughout the property101. The sensors may include pressure sensors, motion sensors, light sensors, etc., and may transmit data to the monitor control unit110or store collected data locally. The sensor data may be used to monitor the activity of visitors to the property101. For example, the server130may determine, from data collected by a sensor on the refrigerator door, that the refrigerator door was opened but never closed. The server130may determine which users are inside the property101from the access code data and send a notification to the users to remember to close the refrigerator door. In some examples, the server130may use a combination of the sensor data and the access code data to determine which particular users are in an area of property101. For example, there may be five prospective tenants simultaneously touring the property101. The server130may determine from the sensor data (e.g., video, still images, etc.) that a person is in the master bedroom of the property101, where they have been asked not to go. For example, video data from the cameras126may be used with facial recognition to determine the identity of a visitor or prospective tenant. In some examples motion sensor data from the sensors122may be used to track visitors' and/or prospective tenants' movements in or around the property101. The server130may compare the sensor data with the access code data and the user data associated with the access code to determine that the user is Ronald. The server130may send Ronald a notification informing him that the owner or property manager has been notified, and that he should exit the master bedroom. The notification may be delivered through various media, such as text message, email, phone call, application notification, etc. In some examples, the server130may simply send Ronald a notification asking him to leave. The server130may set off an alarm immediately upon detection of entry to a forbidden area. 
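As a rough, non-authoritative sketch of the forbidden-area handling just described, the snippet below attributes a triggered zone to a specific visitor by combining sensor-derived motion events with the access-code records, then notifies that visitor. The data shapes, the zone name, and the notification text are assumptions for illustration only.

```python
# Hypothetical attribution of a forbidden-area event to a visitor by combining
# sensor data with access-code data; names and shapes are assumptions.

def identify_and_notify(triggered_zone, active_visits, motion_events, notify):
    """active_visits: {access_code: visitor_name}; motion_events: [(zone, access_code), ...]."""
    for zone, access_code in motion_events:
        if zone == triggered_zone and access_code in active_visits:
            visitor = active_visits[access_code]
            notify(visitor, f"Please exit the {triggered_zone}; the property manager has been notified.")
            return visitor
    return None


identify_and_notify(
    "master_bedroom",
    {"4821": "Ronald", "7319": "Amy"},
    [("kitchen", "7319"), ("master_bedroom", "4821")],
    notify=lambda name, msg: print(f"To {name}: {msg}"),
)
```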
In some examples, the server130may use systems such as an alarm system or a speaker system to interact with visitors. In some examples, the server130may connect a visitor with the owner or property manager of property101upon detecting a condition for which the owner or property manager should be notified. For example, the server130may call the visitor and the owner or property manager and connect the calls. When one or more current tenants are still occupying the property101, the server130may determine whether a person performing an action or accessing an area is a current tenant or a prospective visitor. For example, a current tenant may be permitted to access all areas of the property101and interact with all devices and systems of the property101, while a visitor who performs the same actions or tries to access the same areas may set off an alarm or receive a notification requesting that they cease. The server130may determine, using the monitoring data and the access code data, how long a user has been in the property101. The server130may analyze such data to predict a user's interest in the property, determine whether a user has been at the property for too long, etc. The server130may send the visitor a notification informing them that their reservation period is over. In some examples, the server130may turn off the lights, or initiate the unoccupied profile for the devices and systems on the property101once the last visitor's reservation period is over. For example, the server130may begin to dim the lights on the property101five minutes after warning a visitor that they are the last visitor of the day and that their reservation period is over. In some examples, the server130may notify the owner or property manager if a visitor is detected within the property101after being warned that their time is up. In some examples, the server130may present the owner or the property manager with options for actions to take. For example, the server130may ask the owner or property manager if authorities need to be contacted. In some examples, the server130may detect predefined conditions and automatically notify the authorities. For example, if a visitor is setting fire to the house, the server130may automatically alert the police and fire department. In some examples, a prospective tenant may enter a departure code that indicates a prospective tenant's interest in the property101. For example, a prospective tenant may input a code of “123” to express that they wish to put down a deposit and rent the property101immediately. The prospective tenant may input a code to indicate that they are still looking, and does not wish to make a commitment at the time. In some examples, the prospective tenant may input a departure code to indicate that they are not interested in renting the property101at all. The departure codes may be provided to the prospective tenant as options read from a list (e.g., “Press 1 to indicate you wish to rent this property. Press 2 to indicate . . . ”). In some examples, the departure codes may be specific combinations entered into a user interface element of the electronic lock, such as a keypad. The departure codes may be entered by the user through a web interface, a mobile application, etc. 
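For illustration, the departure-code handling described above could be sketched as follows; the specific codes and the resulting actions are assumptions used only for the example and are not values defined by the system.

```python
# Hypothetical departure-code dispatch at checkout; the codes "123", "2", and
# "3" and the resulting actions are illustrative, not values defined here.

def handle_departure_code(code: str, visitor: str, property_id: str) -> str:
    """Map a departure code entered at the electronic lock to a follow-up action."""
    if code == "123":
        return f"Start deposit and rental application for {visitor} on {property_id}"
    if code == "2":
        return f"Record that {visitor} is still looking; queue new property suggestions"
    if code == "3":
        return f"Record that {visitor} is not interested in {property_id}"
    return "Unrecognized departure code; prompt the visitor to choose from the list again"


print(handle_departure_code("123", "Tom", "property101"))
```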
In some examples, if a visitor has indicated immediate interest in renting the property101and a current tenant is still occupying the property101, the server130may prompt the owner or property manager to contact the current tenant to ask the current tenant whether they would like to end their lease early and be credited an amount for the remainder of their lease. For example, if a current tenant only has two weeks left of their three year lease and already has a lease signed elsewhere, the server130may transmit a communication to the current tenant asking if they would like to terminate their lease early and be credited for the remainder of their lease. The flexibility granted by the system100may reduce vacancy times of properties and improve the rental experience for both the current tenant and the prospective tenant. In some examples, a tenant may move out of the property101before the end of their lease, but they may not inform the owner or property manager. For example, a tenant may forget to inform the owner or property manager that their new job in a different state begins three weeks before the end of their lease. The current tenant may move out days, weeks, etc. before the end of their lease, making the property available for re-rental. The server130may use the monitoring data and/or access data to determine that the current tenant has moved out, and has not returned for a predetermined amount of time. The server130may then contact the owner or property manager to inform them that the property is vacant. In some examples, the server130may automatically flag the property as vacant. In some examples, the server130may access, with permission, the current tenant's schedule data to determine when the current tenant will be moving out. In some examples, the current tenant may take a vacation before moving out. The server130may use the schedule data to determine that, while there has been no activity within the property101for two weeks, the current tenant is returning a week before the end of their lease to move out. The server130would then delay flagging the property101as vacant. The system100also includes one or more sensors or detectors. For example, the monitoring system may include multiple sensors122. The sensors122may include a contact sensor, a motion sensor, a glass break sensor, or any other type of sensor included in an alarm system or security system. The sensors122also may include an environmental sensor, such as a temperature sensor, a water sensor, a rain sensor, a wind sensor, a light sensor, a smoke detector, a carbon monoxide detector, an air quality sensor, etc. The sensors122further may include a health monitoring sensor, such as a prescription bottle sensor that monitors taking of prescriptions, a blood pressure sensor, a blood sugar sensor, a bed mat configured to sense presence of liquid (e.g., bodily fluids) on the bed mat, etc. In some examples, the sensors122may include a radio-frequency identification (RFID) sensor that identifies a particular article that includes a pre-assigned RFID tag. The appliances124may be home automation devices connected to the network105that are configured to exchange electronic communications with other devices of the system100. The appliances124may include, for example, connected kitchen appliances, controllable light sources, safety and security devices, energy management devices, and/or other types of electronic devices capable of exchanging electronic communications over the network105. 
In some instances, the appliances124may periodically transmit information and/or generated data to the monitor control unit110such that the monitor control unit110can automatically control the operation of the appliances124based on the exchanged communications. For example, the monitor control unit110may operate one or more of the appliances124based on a fixed schedule specified by the user. In another example, the monitor control unit110may enable or disable one or more of the appliances124based on received sensor data from the sensors122. The cameras126may be video/photographic cameras or other type of optical sensing devices configured to capture images. For instance, the cameras126may be configured to capture images of an area within a building monitored by the monitor control unit110. The cameras126may be configured to capture single, static images of the area and also video images of the area in which multiple images of the area are captured at a relatively high frequency (e.g., thirty images per second). The cameras126may be controlled based on commands received from the monitor control unit110. The cameras126may be triggered by several different types of techniques. For instance, a Passive Infra Red (PIR) motion sensor may be built into the cameras126and used to trigger the cameras126to capture one or more images when motion is detected. The cameras126also may include a microwave motion sensor built into the camera and used to trigger the cameras126to capture one or more images when motion is detected. The cameras126may have a “normally open” or “normally closed” digital input that can trigger capture of one or more images when external sensors (e.g., the sensors122, PIR, door/window, etc.) detect motion or other events. In some implementations, the cameras126receives a command to capture an image when external devices detect motion or another potential alarm event. The cameras126may receive the command from the controller or directly from one of the sensors122. In some examples, the cameras126trigger integrated or external illuminators (e.g., Infra Red, Z-wave controlled “white” lights, etc.) to improve image quality when the scene is dark. An integrated or separate light sensor may be used to determine if illumination is desired and may result in increased image quality. The cameras126may be programmed with any combination of time/day schedules, system “arming state”, or other variables to determine whether images should be captured or not when triggers occur. The cameras126may enter a low-power mode when not capturing images. In this case, the cameras126may wake periodically to check for inbound messages from the controller. The cameras126may be powered by internal, replaceable batteries if located remotely from the monitor control unit110. The cameras126may employ a small solar cell to recharge the battery when light is available. Alternatively, the cameras126may be powered by the controller's112power supply if the cameras126is co-located with the controller. In some implementations, the cameras126communicates directly with the rental property management server130over the Internet. In these implementations, image data captured by the cameras126does not pass through the monitor control unit110and the cameras126receives commands related to operation from the rental property management server130. 
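A minimal sketch of the camera capture decision described above is shown below: capture only when a trigger (built-in PIR, microwave sensor, or an external sensor input) fires and the time/day schedule and system arming state permit it. The schedule representation, arming-state names, and default hours are assumptions for the example rather than requirements of the system.

```python
# Rough sketch of the camera trigger decision; names and defaults are assumptions.

from datetime import datetime

def should_capture(motion_detected, external_trigger, arming_state, now,
                   allowed_hours=range(8, 18),
                   armed_states=("armed_away", "armed_stay")):
    if not (motion_detected or external_trigger):
        return False                      # no trigger event; remain in low-power mode
    if arming_state not in armed_states:
        return False                      # only capture while the system is armed
    return now.hour in allowed_hours      # honor the configured time/day schedule


print(should_capture(True, False, "armed_away", datetime(2023, 5, 14, 15, 30)))  # True
```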
The electronic lock128may be an electronic computing device that is placed on the exterior of the property101and configured to capture video and image footage of the detectable region128aof the property101. In some implementations, the electronic lock128can be a connected device placed on the front door of the property101that is capable of receiving a button press from an individual near the front door (e.g., the individual102). In such implementations, the electronic lock128may be configured to exchange communications with a separate security camera that captures footage of the front exterior of the property101. Alternatively, in other implementations, the electronic lock128may include one or more integrated camera devices that are capable of collecting footage of the detectable region128a. The integrated cameras may also be capable of detecting motion within the detectable region128asuch that, after initially detecting an action such as an input to the electronic lock128, the electronic lock128can correlate a detection event and subsequent motion detected within the detectable region128ain order to identify possible security risks to the property101. In some implementations, the electronic lock128may be capable of performing one or more response actions to a detected input of an incorrect access code to deter possible intruders. For instance, in some examples, the electronic lock128can include a speaker that plays a pre-recorded message to indicate that someone is presently within the property101even when the property101is unoccupied. In other examples, the electronic lock128may be capable of transmitting signals to devices within the property101(e.g., the sensors122, the appliances124, the cameras126) in response to detecting an action to simulate occupancy within the property101. In other examples, the electronic lock128may also communicate directly with the monitor control unit110, which can then relay the communication with the electronic lock128to devices within the property over another signal path using a different communication protocol (e.g., Bluetooth, Bluetooth LE, ZWave, ZigBee, etc.). The rental property management server130is an electronic device configured to provide monitoring services by exchanging electronic communications with the monitor control unit110and the user device140over a network. For example, the rental property management server130may be configured to monitor events (e.g., alarm events) generated by the monitor control unit110. In this example, the rental property management server130may exchange electronic communications with the network module included in the monitor control unit110to receive information regarding events (e.g., alarm events) detected by the monitor control unit110. The rental property management server130also may receive authorization information (e.g., keypad codes, electronic lock codes, etc.) from the user device140. The user device140may be an electronic device associated with a user interested in the property101that exchanges communications over a network, such as the Internet or the network105. For example, the user device140may be a smartphone, tablet, or other types of network devices. The user device140may access a service made available by the rental property management server130on the network105, such as a mobile application. The data generated by the user device140may be transmitted over the network105, and may be monitored by the monitor control unit110.
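As one possible, hedged sketch of the incorrect-code deterrent behavior described earlier for the electronic lock128, the snippet below plays a pre-recorded message and, after repeated failures, signals in-home devices to simulate occupancy and begin recording. The action names and the retry threshold are assumptions, not values defined by the disclosure.

```python
# Hypothetical response to incorrect access-code entries; the action names and
# the three-attempt threshold are illustrative assumptions, not system values.

def respond_to_incorrect_code(failed_attempts, max_attempts=3):
    actions = ["play_prerecorded_message"]           # suggest someone is home
    if failed_attempts >= max_attempts:
        actions += [
            "turn_on_lights",                        # simulate occupancy via appliances
            "start_camera_recording",                # capture footage of the individual
            "notify_rental_property_management_server",
        ]
    return actions


print(respond_to_incorrect_code(failed_attempts=3))
```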
In some implementations, the rental property management server130may route alarm data received from the network module or the user device140to a central alarm station server that is maintained by a third-party security provider. The alarm data can include captured video footage of the detected individual within the detectable region128a, which is processed by the third-party security provider to request emergency assistance for the property101. For example, the alarm data can be transmitted to law enforcement to indicate a potential security breach within the property101. In some instances, the alarm data can also include metadata identified by the electronic lock128within the captured video footage (e.g., gender of the individual, suspected identity of the individual, key physical attributes, etc.). In these examples, the alarm data can either be transmitted to law enforcement after requesting confirmation from the user, or automatically transmitted without intervention from the user. The rental property management server130may store and analyze sensor and image data received from the monitoring system. Based on the analysis, the rental property management server130may communicate with and control aspects of the monitor control unit110or an interactive device within the property101. The interactive device may be an electronic device associated with a property owner or an occupant that exchanges network communications over a network, such as the Internet or the network105. For example, the interactive device may be a smartphone, tablet, personal computer (PC), network-enabled media player, home entertainment system, cloud storage device, and other types of network devices. The interactive device may access a service made available by the rental property management server130on the network105, such as a mobile application. The data generated by the interactive device may be transmitted over the network105, and may be monitored by the monitor control unit110. The interactive device can include a native surveillance application. The native surveillance application refers to a software/firmware program running on the corresponding mobile device that enables the user interface and features described throughout. The interactive device may load or install the native surveillance application based on data received over a network (e.g., the network105) or data received from local media. The native surveillance application runs on mobile device platforms. The native surveillance application also enables the interactive device to receive and process image and sensor data from the monitoring system. In some implementations, the interactive device communicates with and receives monitoring system data from the monitor control unit110using a communication link. For instance, the interactive device may communicate with the monitor control unit110using various local wireless protocols such as Wi-Fi, Bluetooth, Zwave, Zigbee, HomePlug (Ethernet over powerline), or wired protocols such as Ethernet and USB, to connect the interactive device to local security and automation equipment. The interactive device may connect locally to the monitoring system and sensors122and other devices. The local connection may improve the speed of status and control communications because communicating through a network, such as the Internet or the network105, with a remote server (e.g., the rental property management server130) may be significantly slower.
Although the interactive device are shown as communicating with the rental property management server130, the interactive device may also communicate directly with the sensors122and other devices controlled by the monitor control unit110when the interactive device is near the property101. For example, the interactive device may exchange communications with the devices of the system100over the network105. In some implementations, the interactive device receives monitoring system data captured by the monitor control unit110through the network105. The interactive device may receive the data from the monitor control unit110through the network105or the rental property management server130may relay data received from the monitor control unit110to the interactive device through the network105. In this regard, the rental property management server130may facilitate communication between the interactive device and the monitoring system. In some implementations, the system100intelligently leverages the monitor control unit110to aid in security monitoring, property automation, and property management. For example, the monitor control unit110may aid in investigating alarm events detected at the property101by the monitor control unit110. In this example, the monitor control unit110may detect an alarm event (e.g., a fire alarm, an entry into the property101when the system is armed “Stay,” etc.) and, based on the detected alarm event, control the monitor control unit110to attempt to identify persons in the property101at the time of the alarm event. Specifically, the monitor control unit110may send a control command that causes the sensors122and the cameras126to perform a coordinated and automated search for persons in the property101. Based on the control command received, each of the cameras126captures images of the property101. In some examples, the monitor control unit110may be assigned to different areas of the property101where the monitor control unit110can move in an unobstructed manner. In these examples, the monitor control unit110may be assigned to different levels in a property (e.g., an upstairs robotic device and a downstairs robotic device) and even different rooms or sections that are potentially blocked by doors. The monitor control unit110coordinate tracking movement based on the assigned areas. For instance, the monitor control unit110determines areas in a property where an event has been detected (e.g., where motion is sensed, where a door or window is opened, etc.) and only controls the robotic devices assigned to the determined areas to operate. In this regard, the monitor control unit110may use location of users determined using the sensors122to control operation of the monitor control unit110. In addition, the server130may perform energy management for the rental property by controlling the pool heater at the rental property using techniques similar to those used by the server130in performing energy management using the thermostat (e.g., setting a target temperature for the pool heater and/or turning on and off the pool heater). For instance, the server130may use prospective tenant reservation information (e.g., arrival date-time/departure date-time) and/or electronic lock event data to turn on the heater a particular number of hours in advance of a prospective tenant's arrival and to turn off the heater upon prospective tenant departure. The server130may maintain a “visitor” or “occupied” operational profile until the last scheduled visitor is expected to depart. 
In some examples, the server130only initiates an unoccupied operational profile when all visitors are determined to be off of the property101. The particular number of hours and/or the heater temperature setting may be set by the owner, the rental management company, and/or the prospective tenant using techniques similar to those described above for establishing energy management settings for the thermostat. By controlling energy usage of the pool heater remotely, the server130may reduce the need to send employees to remote rental properties to turn off/on pool heaters, which may reduce labor costs, travel expenses, and unnecessary energy expenses that result from human error in failing to turn pool heaters on/off during periods of the property being unoccupied. Examples of implementations of the system100can use various types of data captured devices within the property101(e.g., the sensors122, the appliances124, the cameras126, and the electronic lock128) to perform differential actions based on the present conditions of the property101. In some instances, the rental property management server130transmits different notifications of a detected input to the electronic lock128based on detecting the identity of the individual102that presses the electronic lock128. For example, the rental property management server130may transmit a low priority notification to the interactive device if the individual102is determined to be a known individual (e.g., family member, neighbor, or commonly detected individual etc.) whereas the rental property management server130may transmit a high priority notification if the individual102is determined to be an unknown individual. In some instances, the priority of the notification can also be based on a classification associated with the detected individual102(e.g., service personnel, mail carriers, etc.). In some instances, the notifications transmitted by the rental property management server130may be based on a security status of the property101assigned a security system of the property101. In such instances, the lock action repository132can specify a subset of users to transmit notifications based on the security status of the property101. For example, the rental property management server130may transmit a notification to all identified users associated with the property101in response to the security status indicating a fire, whereas the rental property management server130may transmit a notification only to administrator users in response to the security status indicating a breach within the property101. In other examples, the rental property management server130may transmit motion-based alerts if the security status of the property101is set to an “alarmed” mode. In some implementations, the rental property management server130can transmit instructions to the monitor control unit110to adjust one or more settings associated with the devices within the property101. For instance, in response to detecting input to the electronic lock128, the monitor control unit110may receive instructions to change the indoor temperature, or operate the appliances124on or off. In such instances, the particular instructions received by the monitor control unit110can be varied based on the identity of the detected individual102. 
In other instances, the particular instructions can also be based on other types of information associated with the detected individual102(e.g., motion detected within the detectable region128a, time difference between a detected input to the electronic lock128and opening the front door of the property101, etc.). In some implementations, where the rental property management server130transmits notifications to the interactive device, the particular notification transmitted can be based on the location of the interactive device. For example, a notification can be prevented from being transmitted if the interactive device is near or within the property101. In other examples, the rental property management server130can transmit notifications to another remote user if the interactive device is located within the property101. In some implementations, the rental property management server130determines the particular action to be performed in response to a detected input to the electronic lock128based on monitoring one or more parameters indicated by the data transmitted from the monitor control unit110. For instance, as described more particularly with respect toFIG.2, the lock action repository132can specify different actions to be performed based on occupancy information gathered by the devices within the property101, information gathered by the electronic lock128, and/or the security status indicated by a security system of the property101. FIG.2illustrates an example of a process200for enabling unattended property showing. Briefly, the process200may include accessing reservation data on a rental property management server (210), detecting a request for an upcoming reservation from a mobile device (220), confirming, in response to detecting the request, the upcoming reservation (230), generating a unique access code (240), transmitting the access code to the mobile device (250), and transmitting the access code to a monitoring system of a property (260). In more detail, the process200may include accessing reservation data on a rental property management server (210). For example, the system100may access data from the rental property management server130indicating reservations made by users. In some examples, the data may include the date and time of the reservation. The data may include the name of the user, the user's contact information, etc. For example, if a prospective tenant, John, has made a reservation and his reservation data has been stored in the rental property management server130, the data may include John's phone number, the time and date he wishes to view a rental property, and which rental property he is interested in. Then, once the system100receives a request for an upcoming reservation, the available times can be compared to a time slot indicated in the request and the system100can determine whether to confirm or deny the request for the upcoming reservation. In some examples, the system100may access reservation data that indicates reservations made by users, and can generate a user interface for a client device that shows available times based on the reservation data. A user of the client device can then select an available time to tour the property. The process200may include detecting a request for an upcoming reservation from a mobile device (220). For example, the mobile device140of a user104may transmit a request for a reservation to the rental property management server130. The request for a reservation may be made for a viewing of a property in the future.
In some examples, the request for a reservation may include information about the user making the request. For example, if Amy wants to see 111 Dryden Road on Saturday, May 14, between 3:30 p.m. and 4:30 p.m., she may be directed to a web application or website to enter information. Amy may be prompted to enter reservation details, such as the date, time, and property. In some examples, Amy may be asked to enter personal details relevant to her search, such as her photo, credit information, a background check, etc. Amy's information may be used to verify her identity or screen her eligibility as a prospective tenant. In some implementations, the server130may generate an interface for the user104to request a reservation where the available time slots for the user104to make a reservation are based on the accessed reservation indicating that those time slots are available for the user104to view the property. The process200includes confirming, in response to detecting the request, the upcoming reservation (230). For instance, once the rental property management server130detects the request, the rental property management server130can confirm the request, generating reservation data associating the mobile device with the property and time slot indicated in the request. Confirming the reservation can include verifying the user's information and updating reservation data in the rental property management server130for the property. If, for example, the owner, property manager, or real-estate agent for the property has a policy limiting the number of prospective tenants who can view the property, the rental property management server130can track the number of users with confirmed reservations for the time slot and reject confirmations based on whether the policy limit has been reached. The process200may include generating a unique access code (240). For instance, rental property management server130may generate a unique access code that grants access to a rental property. The unique access code may be in any form, such as an alphanumeric code, an encrypted code, a gesture, a motion, a sound, etc. In some examples, the unique access code is unique to each unique request to view a property. For example, each time a request is received to view a property, regardless of the user making the request, a unique access code is generated. In some examples, the unique access code is unique to the user requesting the reservation. For example, Tom may request multiple reservations for different properties and one unique access code is generated that is shared for each request made by Tom. In some examples, the unique access code is unique to the property the reservation is made for. The unique access code can be input to a monitoring system of the property in a variety of ways, including over wireless communications, manually entered by a user at an electronic lock or keypad, played by the mobile device, etc. The process200may include transmitting the unique access code to the mobile device (250). For example, the rental property management server130may transmit the unique access code through a wireless communication, a physical communications link, etc. to the mobile device140. The unique access code may be transmitted to the mobile device140through various methods, such as text message, email, voicemail, phone call, etc. In some examples, the user104may download the unique access code to the mobile device140as a data file through the network105, which may be the Internet. 
In some examples, the unique access code may be transferred to the mobile device140through a physical communications link, such as through a docking port, a data transfer cable, etc. In some examples, the unique access code may be transferred to the mobile device140wirelessly, using various communications protocols and technologies such as NFC, WiFi, Bluetooth, Z-Wave, Zigbee, etc. The process200may include transmitting the access code to a monitoring system of a property (260). For example, the rental property management server130may transmit the unique access code to the monitor control unit110on the property101. The unique access code may be transmitted to the monitor control unit110through wireless communication, a physical communications link, etc. For instance, the rental property management server130may communicate directly with the monitor control unit110. In some examples, the server130may communicate with the monitor control unit110through various methods, such as text message, email, voicemail, phone call, etc. In some examples, the user104may download the unique access code to the monitor control unit110as a data file through the network105, which may be the Internet. In some examples, the unique access code may be transferred to the mobile device140through a physical communications link, such as through a docking port, a data transfer cable, etc. In some examples, the unique access code may be transferred to the mobile device140wirelessly, using various communications protocols and technologies such as NFC, WiFi, Bluetooth, Z-Wave, Zigbee, etc. FIG.3illustrates an example of a process300for enabling unattended property showing. Briefly, the process300may include detecting an input of an access code at an electronic lock (310), determining that the input access code matches the access code received from the rental property management server (320), and performing an action (330). The process300may include detecting input of an access code at an electronic lock (310). For example, the monitor control unit110may detect an input of an access code at the electronic lock128. In some examples, the monitor control unit110may receive data associated with the input of the access code, such as data from various sensors122or cameras126. For example, if an individual near the front door102is inputting an access code to the electronic lock128, the monitor control unit110may receive data including sensor data and camera data that provides information about the individual102, such as a recording of their voice, a photo of them, etc. The process300may include determining that the access code received through input to the electronic lock matches the unique access code received from the rental property management server (320). For instance, the monitor control unit110may determine that the access code received through input to the electronic lock128matches the unique access code received from the rental property management server130. In some examples, the monitor control unit110may compare the access codes to determine whether they match; the access codes could be determined to be matching in various ways. There may be a threshold for a match between the codes. In some examples, the codes must be identical to be considered a match. In some examples, the access codes may be related such that one code may be obtained by performing an operation to the other code. The access codes may be considered matching if the one code is a counterpart to the other code. 
For example, if a code or unique PIN of ‘3481’ is transmitted to a user, the electronic lock128could pose a question such as “What is the square root of the unique PIN you received?” The question posed by the electronic lock128could be answered by the other code, ‘59’, and used as input to the electronic lock128. In some examples, the monitor control unit110transmits the access codes to be analyzed. For example, the monitor control unit110may transmit the access codes to the server130. The server130may determine whether the access codes match. In some examples, the server130may transmit the access codes to be analyzed. In some examples, the monitor control unit110may transmit the access codes to the rental property management server130to be analyzed. In some examples, the rental property management server130may automatically analyze and compare the access codes to determine whether they match. The process300may include performing an action in response to determining whether the access code received as input to the electronic lock matches the unique access code received from the rental property management server (330). For instance, the monitor control unit110may perform an action in response to determining that the access codes match and perform a different action in response to determining that the access codes do not match. In some examples, the monitor control unit110may perform a different actions in response to determining that the access codes match to a certain degree, up to a certain threshold, or in a certain way. In some examples, the monitor control unit110may perform various actions, such as activating a security system, deactivating a security system, granting a visitor access to the property101, collecting data from cameras126, collecting data from sensors122, etc. For example, if it has been determined that the access codes match, the monitor control unit110may grant an authorized user104access to the property101. In some examples, if it has been determined that the access codes do not match, the monitor control unit110may activate a security system and collect camera data from cameras126of the individual near the front door102that input the access code that does not match. FIG.4illustrates a diagram of an example interface400for unattended property showing. The interface400includes a title402, photo404, reservation information406, property address408, reservation time410, access code information412, and access code414. In some examples, the interface400is used by a prospective tenant for making a reservation to view a property. In some examples, the interface400is used by the owner or property manager of the property for which the reservation is made. The title402provides information about the interface400and lets a user know what app they are using. For example, a user may be redirected from a website to the interface400. The interface400may be various kinds of interfaces. In some examples, the interface400may be an application. In some examples, the interface400may be a website or a web application. The photo404provides visual information about the user making the reservation. In some examples, the photo404may be uploaded by the user while using the interface400. For example a user may be asked by the interface400to provide visual identification. In some examples, the user may be able to upload a photo through an application on the mobile device140. 
The user may be able to upload the photo404through various methods, such as through a personal computer, through a link to a photo hosted on the Internet, etc. In some examples, the user may be able to edit or delete the photo404by selecting the photo404. For example, the photo404may be a hyperlink to the hosted photo, may redirect the user to a photo editing interface, etc. The reservation information406provides the user with information about their reservation. In some examples, data such as the property address408and the reservation time410are shown. The reservation information406may include more information, such as which entrance to use, which areas are available for viewing, etc. The reservation information406may include less information, such as only providing the property address408. For example, a user may be able to make a reservation for the entire day if they are unsure of their schedule; the reservation information406would then only show the address, or may show the reservation time as a date. In some examples, the property address408and the reservation time410are displayed in a visually different way from the rest of the reservation information406. For example, the property address408and the reservation time410may be displayed in a different font, bolded, italicized, different font size, different color, etc. In some examples the user is able to select the visually different elements of the reservation information406. Selecting elements of the reservation information406may perform an action dependent on the selection. For example, a user may be able to select the property address408to make a change to the reservation or select the reservation time410to be redirected to an interface to make a change to the time of the reservation. All, some, or no elements of the reservation information406may be selectable or changeable. The access code information412provides the user with information about the unique access code414. For instance, the access code information412may provide the user with information about the access code414and how to use it. In some examples, the access code information412may include only the access code414. In some examples, the access code information412may include more information than the access code414. The access code414may be various kinds of codes, such as a PIN, an alphanumeric code, an audio file, a video file, a gesture, a photo, etc. In some examples, the access code414is displayed in a visually different way from the rest of the access code information414. For example, the access code414may be displayed in a different font, bolded, italicized, different font size, different color, etc. In some examples the user is able to select the access code414. Selecting elements of the reservation information406may perform an action dependent on the selection. A user may be able to select the access code414to have the mobile device140use the access code to grant the user access to the property. For example, Ryan may select the access code414to have the mobile device140transmit the access code to the electronic lock128. In some examples, the user may enter the access code414as shown into the electronic lock128. FIG.5illustrates a diagram of an example interface500for unattended property showing. The interface500includes a title402, photo404, prospective tenant name502, reservation information504, prospective tenant temporal data506, events during the prospective tenant's visit508, and flagged events510. 
The interface500may be used by an owner or property manager of the property being shown. For example, the interface may be used by an owner or property manager of the property101. The details shown by the interface500may relate to a prospective tenant, such as the individual near the front door102or the authorized user104. The prospective tenant name502may provide the user with the name of the person shown in the photo404. In some examples, the prospective tenant name502may be selectable to view further details about the prospective tenant, such as a verified credit history, background check results, etc. In some examples, the interface500may include more or different prospective tenant information without requiring the user to navigate to a different interface. The reservation information504may provide the user with information about the reservation made and the property viewed by the prospective tenant. For example, the address of a property, such as property101, may be displayed. The date and time of the reservation may also be displayed. In some examples, all, some, or no elements of the reservation information504may be selectable to view more information about the elements. For example, a user may be able to select the property name or address to view details about the property, such as the square footage, number of rooms, etc. In some examples, the user may be able to select the date and/or time of the reservation to view details about that particular time. For example, the user may be able to select the reservation time period associated with the prospective tenant's visit and see details such as what other prospective tenants or people were at the property at the same time. The prospective tenant temporal data506may provide the user with information about the time during which the prospective tenant was at the property101. In some examples, the data506is determined from the prospective tenant's interactions with the electronic lock128. In some examples, the data506is verified using data from other elements of the system100, such as the sensors122, the appliances124, the cameras126, and the server130. For example, if Gary input the access code to the electronic lock128but left immediately after seeing the size of the kitchen without checking out with the electronic lock128, the departure time may be determined using video data from the cameras126, door sensor data from the sensors122, etc. The events during the prospective tenant's visit508may provide the user with information about the events that occurred while the prospective tenant was in the property. For example, elements of the system100may determine events for the property101. Data from elements of the system100such as the sensors122, the appliances124, the cameras126, and/or the server130may be used to detect events. For example, events such as doors opening or lights being turned on may be detected using the sensors122. Events such as having an item removed from the refrigerator may be detected using the sensors122, the appliances124, such as the refrigerator itself, the cameras126, etc. In some examples, if more than one prospective tenant is visiting the property101at the same time, the elements of the system100may be able to detect which of the prospective tenants is associated with which events. For example, a television on event may be detected by the appliance124itself, but the event may be attributed to Kelly based on video data from the cameras126. 
The prospective tenants with which the events are associated may be determined using various methods, such as GPS location, server130interaction, etc. The flagged events510may highlight events of the events during the prospective tenant's visit508. For example, events determined to be abnormal may be displayed in a visually different way. Events may be determined to be abnormal if they are events the owner or property manager does not wish to happen. For example, an item being removed from the refrigerator may be an abnormal event that is flagged. Events may be automatically determined to be abnormal by the system100. For example, if an event such as a television on event has never occurred within the property101or any of the properties managed by an owner or property manager, the system100may determine the television on event to be abnormal and flag the event. The interface500may further show information such as whether the prospective tenant indicated an interest in renting or buying the property. For example, if Ann decides that she wants to rent property101, 1425 Otter Run, right after she views it, she may interact with the electronic lock128or another element of the system100to indicate her interest. This interest may be displayed within the interface500. In some examples, the prospective tenant may provide no such indication, or may indicate negative interest. In some examples, these interests may be listed as an event in the list of events508. In some examples, these interests may be indicated as separate elements of the interface500. In some examples, the owner or property manager of the property101may be able to select the element indicating the prospective tenant's interests, positive or negative, and perform an action. For example, the property manager may be able to approve the application of the prospective tenant if they have indicated positive interest and they have provided the necessary information. The property manager may be able to contact the prospective tenant through their provided contact information (if they have given permission to be contacted for such reasons) to ask for feedback and answer any questions the prospective tenant may have. In some examples, the interface500may provide an alert to a remote user in real time. Alerts may be provided for flagged or abnormal events510. Alert may indicate information such as occupancy information, electronic lock information, security footage, and response options. In some examples, the alert can be transmitted to the user device of an owner or property manager of the property. For instance, after receiving the data from the electronic lock128, the monitor control unit110may receive data gathered by the sensors122, the appliances124, and the cameras126. The received data can include, for example, sensor data indicating occupancy information inside the property101at the time of the interaction with the electronic lock128(e.g., the number and identity of occupants within the property101). In some implementations, the monitor control unit110aggregates the received data from the sensors122, the appliances124, and the cameras126based on using pattern recognition techniques in order to intelligently determine subsets of the received information to transmit to the rental property management server130. 
In an example, the alert may be transmitted as a text alert that indicates data gathered by the devices within the property101(e.g., the sensors122, the appliances124, and the cameras126) and aggregated by the monitor control unit110. For instance, the electronic lock128may determine that motion detected within the detectable region128ais suspicious movement based on analyzing information associated with the motion detected (e.g., time of detection, time period after initially detecting input to the electronic lock128, number of inputs to the electronic lock128, types of motion detected, etc.). In addition, as described previously, the occupancy information can be used to determine the types of users that are inside the property101(e.g., children, adults, etc.). Other arrangements and distributions of processing are possible and contemplated within the present disclosure. The described systems, methods, and techniques may be implemented in digital electronic circuitry, computer hardware, firmware, software, or in combinations of these elements. Apparatus implementing these techniques may include appropriate input and output devices, a computer processor, and a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor. A process implementing these techniques may be performed by a programmable processor executing a program of instructions to perform desired functions by operating on input data and generating appropriate output. The techniques may be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program may be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language may be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and Compact Disc Read-Only Memory (CD-ROM). Any of the foregoing may be supplemented by, or incorporated in, specially designed application-specific integrated circuits (ASICs). It will be understood that various modifications may be made. For example, other useful implementations could be achieved if steps of the disclosed techniques were performed in a different order and/or if components in the disclosed systems were combined in a different manner and/or replaced or supplemented by other components. Accordingly, other implementations are within the scope of the disclosure.
113,576
11861751
DETAILED DESCRIPTION The present disclosure will now be described in detail by describing various illustrative, non-limiting embodiments thereof with reference to the accompanying drawings and exhibits. The disclosure may, however, be embodied in many different forms and should not be construed as being limited to the illustrative embodiments set forth herein. Rather, the embodiments are provided so that this disclosure will be thorough and will fully convey the concept of the disclosure to those skilled in the art. The claims should be consulted to ascertain the true scope of the disclosure. To solve the words-to-data problem and the prevailing terms problem, the present platform has developed a multi-tier contract object model, which breaks contracts into at least three object types and connects them via a series of rules. Contract Documents are one object type, which contain things agreed by the parties, as words that are possibly supplemented by other forms of notation or expression. The words in these contracts can then be linked to Contract Transactions ("Contract Txns"), a second object type, which represent a set of terms in at least one Contract Document at a point in time, and reflect a state-change to a contract. Contract Txns reflect the time dimension of a contract, namely, there is at least one at the beginning, and there may be one or more others over time that change the original Txn. There is, in the present platform, a finite set of Txn types, each of which has certain "roll up" rules, such that by evaluating all "active" Txns, a platform can ascertain, at any point in time, the prevailing terms of the contract. All Contract Txn objects are children of the third object type, namely the Contract Object. This is where the prevailing terms are rolled up, and this object type allows the platform to present to a user the then-current terms of the contract at any point in time. Next, gaining visibility into the contracting outcomes reflected in a party's contract portfolio enables that party to protect its business from dangerous contracts by providing users with information about how to improve future contract drafting and negotiation processes and outcomes. A platform that combines automated contract analysis and automated contracting processes allows a party to easily create high quality, low risk documents through an intuitive, interview-driven platform. This lowers costs, reduces bottlenecks, and empowers business users. For example, insights gained through contract portfolio analysis can be used to "harvest" clauses from legacy contracts and automatically feed a clause library. Clauses can be classified and ranked for favorability and risk attributes, and made available to users drafting new agreements, supported by playbook guidance. Machine analysis of contractual outcomes can also be used to derive negotiation patterns, and to apply those patterns into templates and rule-sets for automated drafting. An example would be that analysis shows that all contracts of type A include a clause of type B. This inference is then used by the platform to propose a rule that all templates for new contracts of type A should include clause type B as a mandatory requirement. In one implementation, the template-based drafting platform includes a set of drafting rules that are processed by assessing various input facts and using the template rules to generate a draft contract, without expert intervention, which includes clauses and attributes best suited to those facts.
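To make the inference step concrete, a minimal Python sketch is given below: if every analyzed contract of a given type contains a given clause type, a mandatory-clause rule is proposed for templates of that type. The data shapes and names are assumptions for illustration and are not part of the platform's actual schema.

```python
# Simplified sketch of pattern-to-rule inference: clauses present in every
# contract of a type become proposed mandatory clauses for that type's templates.
# The dictionary shapes used here are illustrative assumptions.

from collections import defaultdict

def propose_template_rules(contracts):
    """contracts: iterable of dicts like {"type": "A", "clauses": {"B", ...}}."""
    clause_sets = defaultdict(list)
    for contract in contracts:
        clause_sets[contract["type"]].append(set(contract["clauses"]))
    return {
        contract_type: set.intersection(*sets)
        for contract_type, sets in clause_sets.items()
    }


portfolio = [
    {"type": "A", "clauses": {"B", "limitation_of_liability"}},
    {"type": "A", "clauses": {"B", "governing_law"}},
]
print(propose_template_rules(portfolio))  # {'A': {'B'}}
```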
The analysis and scoring of each contract is performed by utilizing software to review each contract and categorize contract terms into the contract object model. The contract object model provides for the categorization of contract terms into many other object types: contract objects, transaction objects, document object, organization objects, legal entity objects, project objects, product objects, workflow objects, user objects and group objects. The Universal Contract Model (“UCM”) (described herein) represents a contract as data via a single contract object and relates that object to one or more contract documents. There are master, standalone, sub-contract and other relationships that affect the question whether there is one contract or many. Second, there is a time dimension where contracts change over time, via amendments, assignments, renewals and other events. Third, there are very few constraints and rules about the way contracts are expressed, which makes it challenging to translate them into structured relational data. With this model, virtually any type of contract can be represented in a way that supports accurate analysis of simple high-level terms, down to complex granular terms, without distortion or compromise. The UCM expresses contracts as actionable data. It addresses two fundamental issues: what the contract is and what the contract says. Accordingly, there are two fundamental parts to the model. First, the UCM Object Model is a structural representation of the events and artefacts through which contracts are created, changed and brought to an end. The object model starts with contract documents, which are the usual means for expressing new or amended contract terms. It includes a concept of contract transactions, which is an event by which a contract is changed in some way, and covers the execution of a new contract, an amendment, a work or purchase order, a change order, a renewal, an assignment, a novation, a termination or expiry, and a rescission. Collectively, all signed/active transactions roll up to a single contract object, which records a consolidated view of all those transactions. These three core objects are supplemented by additional objects, including legal entity, organizational, project and workflow objects, amongst others. Second, the Contract Data Model is a data/semantic representation of the parties, promises and meaning embodied in the terms of any contract. The data model organizes the data inside a contract into certain high level, universal categories, including the parties, the term and termination provisions, the payment and performance provisions, risk allocation provisions, relationship management provisions and other boilerplate terms. Referring toFIG.1, the contract object model contains several objects which have unique relationships to one another. A Contract Object100is an object for each separate legal contract, derived from its child Transaction Objects (described below). A Contract Object100is the sum of its active child Transaction Objects, presenting a single consolidated view of all key terms. Some but not necessarily all transaction properties roll up to the Contract Object100, e.g. Orders may not roll up every item of data. Every time there is a change at the (active) transaction level, the Contract Object100re-evaluates itself from its active child transactions. 
TABLE 1
Contract Object
• Contract Display Name*
• Contract ID*
• Contract Status*
  - Draft | Active | Inactive
  - These may break down into sub-statuses
• Master*
  - true | false [false by default]
• Contract Creator
• Contract Owner
• Contract Date Effective
• Contract Date Expiry
• Multi-Contract
  - true | false [false by default]
• Contract Transactions* [example]
  - Transaction ID 1 (Create), Active
  - Transaction ID 2 (Order), Active
  - Transaction ID 3 (Amend), Inactive
  - Transaction ID 4 (Renew), Active
  - Transaction ID 5 (Amend), Draft
• Related Contracts
  - Created From Master: Contract ID
  - Subcontract Of: Contract ID
• Contract Data Details . . . [parties, everything . . . ]
* = required field in embodiments

A Transaction Object102records an event by which a contract changes—a change of state. It starts with a Create Transaction (a new contract), ends with a Terminate Transaction, and in between those events may include Amend, Order, Renew, Assign and Novate Transactions. When a new contract is first created, this is a create transaction. A create transaction targets a new contract object100and indicates whether or not it is a “master”. In embodiments, at the time it first becomes active, the contract object will inherit all of its terms from the create transaction. An order transaction may only target a master contract. It is designed to cover things like purchase orders, work orders, statements of work, trade confirmations, etc., which are documented and agreed over time. Orders inherit terms from, and add detail to the terms of, a master contract, but should not change most of the master terms (which is what an amendment transaction is for). However, the payment and performance obligations of orders do typically roll up, cumulatively, to the master. The total contract value of a master, for example, is calculated as the sum of all orders that are (or were) active. Amend, Renew, Assign and Novate transactions are all examples where something about the existing contract terms will be changed. Renew, Assign and Novate are very specific types of change: Renew changes the expiry date; Assign and Novate change the parties (with different timing). Amend covers every other type of change to the terms of the contract. When an Amend transaction becomes effective and active, it will trigger changes to the object it targets. An Amendment might target a Contract generally, or it might target a specific transaction like an Order, and in some cases, it might be a complete restatement of all the terms of the Contract or Order (e.g. an “Amended and Restated” contract). It is also possible for Amendments to have expiry dates (e.g. a short-term endorsement to change terms in an insurance contract). When an amendment expires, the contract object needs to recalculate itself from the transactions that remain in effect. Finally, terminate transactions bring contracts, or specific transactions, to an end. A variant of terminate is a rescind transaction, which brings its target (the contract as a whole or a specific transaction) to an end as if it had never happened.
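As a brief illustration of the order roll-up rule just described (the total contract value of a master being the sum of all orders that are, or were, active, excluding rescinded orders), a hedged Python sketch follows; the record fields and status labels are assumptions made for the sketch:

# Hypothetical order transactions under a master contract.
orders = [
    {"txn_id": "T2", "type": "Order", "status": "Active",    "value": 100_000},
    {"txn_id": "T3", "type": "Order", "status": "Expired",   "value": 50_000},
    {"txn_id": "T4", "type": "Order", "status": "Rescinded", "value": 75_000},
    {"txn_id": "T5", "type": "Order", "status": "Draft",     "value": 20_000},
]

def master_total_contract_value(order_txns):
    """Sum orders that are (or were) active; drafts and rescinded orders are excluded."""
    counted = {"Active", "Expired", "Terminated"}
    return sum(o["value"] for o in order_txns
               if o["type"] == "Order" and o["status"] in counted)

print(master_total_contract_value(orders))   # 150000

Table 2 (below) then lists the attributes of the Transaction Object itself.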
TABLE 2
Transaction Object
• Transaction Display Name*
• Transaction ID*
• Target Objects*
  - Contract ID | Transaction ID
  - The object directly targeted by this Transaction
• Transaction Status*
  - Draft | Active | Inactive
  - These may break down into sub-statuses
• Transaction Type*
  - Create | Amend | Order | Renew | Assign | Novate | Terminate | Rescind
• Master
  - true | false
  - Typically applies to Create Transactions only
• Restatement
  - true | false
  - Typically applies to Amend, Order, Renew, Assign & Novate Transactions only
• Transaction Creator
• Transaction Owner
• Transaction Date Effective
• Transaction Date Expiry
* = required field, in embodiments

A Document Object104embodies one or more Contract Transactions. In a typical case, a single Contract Document will record a single Transaction, for example, where one document creates one new, simple, two-party (bilateral) contract. But it is not uncommon for a contract document to record more than one Transaction, for example, where a new contract is created (a Create Transaction) but includes a clause whereby the parties agree to terminate a previous contract (a Terminate Transaction). A Document Object104needs an attribute indicating whether it is the executed version (not a draft or copy). Contract Documents will typically embody a single Transaction (e.g. a single doc creates an NDA, or a single doc amends a master agreement). Contract Documents may embody multiple Transactions related to the same Contract Object100(e.g. a single document which creates a new master contract and also contains the first work order). A Document Object104may embody multiple Transactions related to different Contract Objects100(e.g. the agent-on-behalf-of-multiple-principals scenario, or a document that creates a new Contract Object100and also contains a clause terminating some other contract).

TABLE 3
Document Object
• Document Title*
• Document ID*
• Target Objects*
  - Transaction ID 1
  - Transaction ID 2
  - . . .
  - Transaction ID n
  - The object/s directly targeted by this Document
• Document Status*
  - Draft | Active | Inactive
  - These may break down into sub-statuses
• Executed*
  - true | false [false by default]
• Document Creator
• Document Owner
• [mime-type, etc . . . ]
• [version info . . . ]
* = required field, in embodiments

An Organization Object106defines internal and external organizational relationships, such as internal “departmental” structures, and external “corporate group” structures. A Legal Entity Object108abstracts and links to separate legal entity objects and has certain benefits including tracking affiliate relationships, current, former and alternate names, reference numbers and unique IDs. A Project Object110can be used to associate a series of contracts together where they relate to a common project but may or may not have explicit contractual or organizational links. In simple buying and selling transactions, the Project Object110may be unnecessary and need not be used. But in major projects with many (seemingly) independent contracts needed to achieve the overall goal, it is useful to link these contracts via one or more Project Objects (e.g., major real estate development, engineering works, etc.). It may also be useful to link Organization Objects106and Legal Entity Objects108to a Project Object110, though linking Contract Objects100is most critical. A Product Object112can be used to track relationships between “products” (broadly defined) and contracts, and to provide a reference source for “product” information (including price).
The “product” object is intended to cover any offering or asset which may be traded or encumbered by a contract, including tangible products/assets (e.g. laptops, trucks, commodities), intangible products/assets (e.g. software, inventions, trade marks), service offerings, and real estate products/assets. A Workflow Object114defines and tracks workflow tasks and approvals, associated with a particular contract, transaction or document (and possibly other objects). A User Object116tracks a unique user, including various properties, ideally in-sync with a directory service. A Group Object118tracks a group of users, or other Groups, ideally in-sync with a directory service. An example of a Contract Object Model is shown inFIG.2for a Project Object110involving multiple Contract Objects100, Transaction Objects102, Document Objects104, Organization Objects106, and Legal Entity Objects associated with a single Project Object110. In the example, the focus is on Contract222, which is a master contract with a full suite of transaction examples. Most of those transactions link to the same master. Also illustrated is a “purchase order” document, which is characterized as a Create transaction and gives rise to a new Contract333. This is what would happen if an Order under a Master Contract declared itself to create a separate contract (albeit one that incorporates the terms of the master). Even though Contract333is a separate contract, a relationship to Contract222is recorded, showing that it was created from that Master. A link between Contract222and Contract111is described as a prime contract. This illustrates that Contract222may describe itself as a subcontract under Contract111. The subcontract relationship is understood, but they are nonetheless two separate contracts. FIG.2also illustrates the concept of Legal Entity Objects108, Organization Objects106and Project Objects110and examples of the linkages between these objects and Contract Objects100. At the Transactional Object102level, most of the transactions target the Contract Object100, although there is an example where an amend transaction targets an order transaction (such as a “change order”). Even though there are very few strict rules about the terms a contract can contain, there is some universal data that apply across all contract types, and there are some reliable patterns to the data found in most contracts. The only truly essential data across all contracts is the data that identifies the parties, an effective date, and at least some obligation or promise. In addition to identifying the parties bound to the contract, it will generally have a title, some language around the term, and whether either party has any rights to terminate the contract early. Most contracts also contain certain terms designed to allocate risk between the parties (e.g. limitation on liability, indemnities, representations and warranties), and terms designed to manage the relationship between the parties, including what happens when the relationship is strained and there may be disputes to resolve. For transactional contracts, where something is being bought and sold, there will be set of performance and payment obligations. Referring toFIG.3, the Universal Contract Model organizes contractual data into a set of major categories: Contract details300, parties data302, term and termination data304, risk allocation data306, payment and performance data308, relationship management data310and boilerplate data312. 
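Each of these categories is described in the paragraphs that follow. Purely for illustration, a single contract's normalized data organized into these categories might be sketched as follows; every key and value in this sketch is an assumption, not a prescribed schema:

contract_data = {
    "contract_details": {"title": "Master Services Agreement",
                         "type": "Vendor/Purchasing", "whose_paper": "Counterparty paper"},
    "parties": [
        {"name": "Acme Pty Ltd", "role": "Customer", "country": "AU"},
        {"name": "Supplier Inc.", "role": "Supplier", "country": "US"},
    ],
    "term_and_termination": {"effective_date": "2018-01-01", "initial_term_months": 12,
                             "auto_renewal": True, "termination_for_convenience": "Customer"},
    "risk_allocation": {"liability_cap": "12 months of fees", "mutual_indemnity": False,
                        "insurance_required": True},
    "payment_and_performance": {"contract_value": 250000, "currency": "USD",
                                "payment_terms_days": 30},
    "relationship_management": {"governing_law": "New York", "arbitration": True,
                                "assignability": "With consent"},
    "boilerplate": {"entire_agreement": True, "amendment_in_writing": True},
}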
The contract details300outline particulars including actual words used in the contract title, whose paper (including, for example, “ours” standard, “ours” negotiated, counterparty paper, industry form standard and industry form negotiated), the contract type (including, for example, confidentiality/nondisclosure agreement, customer/sales, vendor/purchasing, reseller/channel partner, human resources/employment, corporate governance, and finance/trading) and a contract sub-type (including, for example, subtypes of a finance agreement such as custody agreement, fee agreement, International Swaps and Derivatives Association master agreement/credit support annex, loan/credit/facility agreement, investment management agreement, offering memorandum, master repurchase agreement, prime brokerage agreement, guarantee, side letter and the like). The parties data302describes the legal entities party to a contract including name, contact information, role, country and other details about the party. For example, the “our” details may be described (including details of “our” legal entities that are party to the contract, option to repeat “our” details where we have multiple entities on our side, full legal name, unique ID, country and address, and role), counterparty details, and additional party details (including, for example, details of “additional party” entities which may have a special and distinct role, or may be true “third parties” such as agents and guarantors, option to repeat “additional party” details where there are multiple entities, full legal name, unique ID, country and address, and role). Term and termination data304describes the terms designed for starting and ending a contract, including effective date, term, expiry date, renewal process, and termination with or without cause. The data contains the term of the contract (including, for example, effective date, expiry date, initial term, renewal option, auto-renewal process, and timing constraints on renewal), termination rights (including, for example, termination for convenience, termination for change of control, termination for cause, who has these rights, cure and notice periods, and carve-outs), and more granular details for certain industries (including for example, for the financial services industry, ratings downgrade events, net asset value triggers, and cross-default details). The risk allocation data306describes the terms designed to allocate risk, including exclusions, limitations, representations & warranties, indemnities, guarantees, and insurance, collateral & credit support obligations. The data contains high level risk terms (including, for example, a flag where there are certain terms designed to allocate risk, on “us”, counterparty or mutual, including liability caps, obligations to indemnify, credit support obligations, insurance obligations, and force majeure), optional term-by-term detail (including, for example, representations and warranties, credit support detail including guarantees, security, and collateral obligations, insurance detail, exclusions and disclaimers, and waivers), extra details by industry (including, for example, for the financial services industry deep detail into collateral obligations on us and them, initial and variation margin, margin call thresholds and minimum transfer amounts, eligible collateral assets and valuation haircuts on each, segregation, and substitution and re-hypothecation constraints). 
The payment and performance data308describes the terms designed to detail obligations to perform (provide a good, perform a service, transfer rights, etc.) and make payments. The data contains payment and pricing terms (including, for example, contract value, currencies, discounts, payment terms, price adjustments, set-off rights, most favored nation clauses, etc.) and performance obligations (including, for example, service obligations, delivery obligations, transfer obligations, payment obligations, and restrictive obligations, etc.). The relationship management data310describes the terms designed to define and manage the relationship between the parties, including reporting, disclosure, and dispute resolution processes. The data contains relationship and dispute data (including, for example, assignability, exclusivity, standard of care, governing law, mediation/arbitration, limits on remedies, and jury trial waiver) and notice, reporting & disclosure data (including, for example, periodic reporting obligations, event-triggered notification and disclosure obligations, and timing details for event-triggered obligations). The boilerplate data312describes the terms designed to define and manage common terms including, for example, title, type, status and other meta-info, and other standard “mechanical” terms, including entire agreement, execution, amendment, and definitions.

In one version of the data model, every sentence or phrase of a contract can be treated as a data point, and each such sentence or phrase can be associated with one or more of a series of core attributes, for example:
• legal classification, for example:
  - an obligation (positive or negative),
  - a right (positive or negative),
  - a representation (positive or negative),
  - an act or deed, or
  - a definition,
• business or subject classification (one or more classifiers as to the subject matter of the clause),
• a party direction (from which party/ies to which other party/ies),
• timing contingency (when does it occur),
• conditionality (or other contingency) (are there any conditions or qualifications on its occurrence), or
• contextual dependency (for example, which contract, contract transaction and contract document did it come from; which other contractual wording, clause, sentence, heading or content does it refer to or depend upon).

FIGS.4A-4Cillustrate, in an embodiment, an example of contract sentence classification using this multi-attribute classification model. Table 4 (below) describes an implementation of legal classifications that can be applied to contractual sentences:

TABLE 4
Legal Classifier | Explanation of legal classifiers
CoverPage | Some but not all contracts have this, with basic high-level contract details like title and parties, possibly also followed by a Table of Contents, all of which is “window dressing” and not part of the substance of the agreement.
AgreementWording | These are the critical anchor words by which the formation of a contractual agreement is declared, which in the classic US style contract will be a long sentence including party names and effective date. In other styles, the agreement wording may look more like a series of headings with party names, then a shorter statement of “The parties agree as follows . . . ”
Parties | Sometimes the declaration of who the parties are will be in a discrete chunk, rather than embedded in an “agreement wording” sentence.
Recitals | Recitals are typically declarations or statements of background facts leading up to the agreement. They may be numbered (for some reason A, B, C is popular) or they may begin with the keyword “Whereas . . . ” which can repeat itself at the start of each recital.
ExecutionWording | This is typically a declaratory sentence following the main terms and conditions, by which the parties assert that they are executing the agreement and by so doing intend to be bound by it.
ExecutionBlock | A small block of text or table for each party to put their signature, fill in certain other identifying details, and sometimes for companies to affix/stamp a seal.
BlankForm | Hanging off the back of a contract may be a series of attachments, schedules, appendices, and exhibits, many of which are incorporated terms, but some of which are simply a sample/template/form showing how a notice, an order or the like should be formatted, and this should not be confused with an actual Order or Notice, since it is just a blank form showing how that should look.
SurvivingTerms | There will often be a clause or sentence (or several) indicating that certain terms are intended to survive and continue even after an agreement terminates or expires.
InitialTerm | A statement indicating how long the contract is intended to last, which could be a limited time or could be indefinite/ongoing; it may also make reference to the effective date, being the start of the contract.
FormationMechanism | Some contracts may be finished and signed but not intended to “start” until some pre-condition is satisfied, a concept lawyers call a “condition precedent to formation”, and in this scenario, you may find wording indicating the mechanism by which the “official” formation of the contract will be triggered.
ExecutionMechanism | A statement indicating how the parties may execute or sign the contract.
NoticeMechanism | A description of the agreed mechanisms for sending notices under the contract.
OrderMechanism | A description of the agreed mechanisms for creating Orders under a master agreement.
PaymentMechanism | A description of the agreed mechanisms for making payments under an agreement.
AmendmentMechanism | A description of the agreed mechanisms for amending an agreement or some part of it.
AssignmentMechanism | A description of the agreed mechanism for assigning/transferring the contract (or part of it) to some other party.
RenewalMechanism | A description of the agreed mechanism for renewing a contract (or an Order), which can take the form of an option to renew, an automatic renewal mechanism, or nothing at all.
TerminationMechanism | A description of the different ways the agreement may be terminated.
DisputeMechanism | A description of the agreed mechanism for solving disputes, should they arise, which includes the classic jurisdiction clause (which courts you've agreed to use), and also mediation/arbitration clauses.
Interpretation | A statement about how the agreement should be interpreted, including which laws will apply (governing law), which documents/clauses take priority in the case of a conflict or clash, and whether headings, genders, plurals, etc. should be ignored.
Obligation | Perhaps the primary “meat” of the contract, where one or more of the parties agrees to do (or not do) something. This will break down into many subcategories.
Lease | A present grant of a lease over some property or asset, which is a unique type of right. Not to be confused with an obligation to grant a lease, which is a promise that a party will grant a lease.
License | A present grant of a license over some property or asset (most commonly intellectual/intangible property), which is a unique type of right. Not to be confused with an obligation to grant a license, which is a promise that a party will grant a license.
Transfer | A present transfer of rights or ownership from one party to another. Not to be confused with an obligation to make a transfer, which is something that will happen rather than is presently deemed to happen.
Right | Language indicating that one or more parties has the right to do something, typically indicated by words like “may”, “has the right”, “will be entitled to”, etc.
Waiver | Language indicating that one or more parties has waived or relinquished or given up a right, or agrees not to exercise a right.
RepresentationOrWarranty | Assertions of facts or circumstances, typically imposed on one party, forcing it to go on the record declaring that various “good” things are true, or various “bad” things are not true, or that a product or service is up to a certain standard.
Acknowledgment | Acknowledgements of facts or circumstances.
Disclaimer | Disclaimers of representations or warranties.
LimitationOrExclusion | Limitations or exclusions of liability, which may take the form of a hard limit on total liability, or an exclusion of certain types of loss.
DefinedTerm | Definition of a term that has a specific meaning, typically using capitalization and quotes/styling to mark the defined term, followed (or sometimes preceded) by its meaning.
ReferencedTerms | Words indicating that some external set of terms is incorporated into the contract by reference.

In another version of the data model, words of the transaction are represented as a serializable tree-based/hierarchical data format (such as XML or JSON) and each sentence/phrase element is associated (marked up) with attributes reflecting the data model above. This has the benefit of unifying the original source wording of the contract with data abstraction, obviating the need to synchronize two objects. An additional benefit is that XML, JSON and other tree-based representations of Contract Documents provide a consistent format for persistence of words and data that supports both human and machine consumption and analysis. An example XML representation of the contract model is:

<Contract>
  <Transaction type="CreateContract">
    <ContractDocumentSource name="Services Agreement.pdf" docid="AA1234567890">
      <Clause legal="Heading">Supply Agreement</Clause>
      <Clause legal="Obligation" business="Delivery" direction="On Supplier" timing="OnExecution" contingency="None">On Execution of this agreement, Supplier shall deliver the Product to Customer.</Clause>
      <!-- repeat the <Clause> structure here for every clause of the contract, with other elements as necessary for tabular structures, images, and the like -->
    </ContractDocumentSource>
  </Transaction>
</Contract>

In another version of the data model, words, objects and data abstractions are represented in a graph database, where each phrase is linked one-to-many times with applicable data attributes, and linked to applicable transaction and contract objects, and to legal entity objects, and to clause objects, where the clause is itself a compilation of several phrases. This could, for example, be implemented in a triple store using RDF, the Resource Description Framework, a W3C data standard.
The graph database approach supports highly accurate queries across large volumes of data with fine grained visibility, without the need for up-front modeling of relational database tables. Table 5 (below) is an example of an implementation of a triple/quad store representation of the contract model.

TABLE 5
Subject | Predicate | Object | NamedGraph
Txn001 | hasTargetObject | Con001 | Con001
Txn001 | hasTransactionType | Create | Con001
Txn001 | hasDocumentSource | Doc001 | Con001
Txn001 | hasTransactionStatus | Active | Con001
Txn001 | hasContractType | Sell-Side | Con001
Txn001 | hasParty | LE001 | Con001
LE001 | hasTransactionRole | Vendor | Con001
LE002 | isPartyTo | Txn001 | Con001
LE002 | hasTransactionRole | Customer | Con001
Txn001 | hasDateEffective | Nov. 11, 2011 | Con001
Txn001 | hasDateExpiry | Nov. 11, 2012 | Con001
Txn001 | containsClause | Cls001 | Con001
Cls001 | containsPerformanceObligationOn | LE001 | Con001
Cls001 | hasRiskClassification | High | Con001
Cls001 | hasObligationClassification | LicenseExclusive | Con001
Cls001 | hasSubjectMatter | Software | Con001
Cls001 | hasContent | P1 grants an exclusive software license to P2 | Con001
Txn002 | hasTransactionType | Amend | Con001
Txn002 | containsClause | Cls008 | Con001
Cls008 | amendsClause | Cls001 | Con001
Cls008 | hasObligationClassification | LicenseNonExclusive | Con001

In another version of the data model, an improved method of combining normalized data with custom extensions is utilized. An optional feature of this model is one that includes extensible business classifications and extensible conditionality classifiers, for example, using tags/hashtags. This enables the platform to be deployed using a normalized data model quickly, but with a low-effort customization layer that personalizes the data to specific customer scenarios and business needs. In another version of the data model, a normalized contractual risk model is utilized. Due to the words-to-data problem and the idiosyncrasy problem, an objective measurement of contractual risk has never been possible. Without a normalized contract data model there is no reliable benchmark against which contractual risk scoring can be applied. Assessments of contract risk are thus left to human experts and are typically based on an assessment of a single contract. However, when one introduces a normalized contractual data model, it is possible to apply contractual risk scoring attributes to specific data outcomes, and thus automate overall risk scoring at the contract level and across a portfolio of contracts.

Universal Contract Model Roll-Up

Referring back toFIG.1, Universal Contract Model Roll-Up describes the procedures for roll-up of contract terms to determine the prevailing terms of the contract at the present time or at another predetermined point in time. In order to roll up the contract terms, the historical terms of related contract documents and events are evaluated and rolled up to the Contract Object100so that a trusted summary of the current terms can be viewed in one place. This is accomplished by using the Contract Object100to present a roll-up of all terms implemented in its child transactions. The Contract Object100evaluates all active child Transaction Objects102chronologically to build the single set of terms. The Contract Object100does not evaluate draft Transaction Objects102or terminated/rescinded Transaction Objects102. The Contract Object100sums or accumulates certain order values, namely contract value and performance obligations, and in the case of orders, includes expired/terminated orders (but not rescinded orders).
Every time there is a state change at the transaction level under a Contract Object100, the Contract Object100should trigger a re-evaluation of its current terms. Referring now toFIG.5, a Contract Object100is re-evaluated each time a Transaction Object102, such as a create transaction, amend transaction or order transaction, is added or changed. Transactions may be added or changed based upon re-evaluation of a Contract Document104or the addition of new Contract Documents104, such as by addition of a new schedule, amendment, purchase order, change order, novation, or termination, etc. Every time there is a change at the transaction level, the platform re-evaluates the currently active transactions to derive a single set of data for the Contract Object100that reflects those transactions for the desired point in time. For example, the Contract Object might be rolled up based upon the active transactions for the present point in time, or the Contract Object100might be rolled up using the active transactions for a previous or future point in time by only incorporating the transactions active at that point in time. As shown inFIG.6, for a sample Contract Object100, a Document Object1041that is an executed master agreement is created, set to active and given an effective date. Next, a Create Contract Transaction Object1021is created reflecting the terms and details of the master agreement. Later, a Document Object1042that is a first amendment is executed, set to active and given an effective date. An amend contract Transaction Object1022is then created reflecting the terms and details of the amendment. For each subsequent contract Document Object104n that is added, one or more Transaction Objects102n are added. Every time a Transaction Object102n or a Document Object104n is added, data from all Transaction Objects102is rolled up to the Contract Object100. The roll up is performed, in a step600, by assessing all terms and data for the Create Contract transaction and incorporating the data into the Contract Object100. Next, in a step602, all subsequent transactions are assessed in order of effective date. If the transaction is currently active, the data from the order transactions is used to add to the data of the Contract Object100(step604) or, for other transaction types, used to add, remove, or modify the data of the Contract Object (step606). In a step608, after all transactions have been processed, the net result of data from evaluating the transactions is populated to the Contract Object100. Finally, in steps610and612, a rules engine can validate the data in the Contract Object100against a model, remove redundant or stale data, and inflate the raw data by supplementing it with derived data. For example, a UCM Data Model may specify that certain data attributes are only included based upon other data attributes (nested or dependent data). A Contract with a Term attribute set to “12 months” may include a derived Expiry Date set to “Dec. 31, 2018” (via an “Inflate” calculation). But if that Term attribute is changed to “ongoing”, the Expiry Date attribute becomes “stale” and should be purged from the UCM data set. Several steps are undertaken to recognize and conform contractual terms to a data model.
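Before turning to those steps, the roll-up sequence of steps600through612might be sketched in Python as follows; the transaction fields and the simple dictionary merge are assumptions made for illustration, and a real rules engine would apply the full UCM validation and inflation models:

def roll_up(transactions, as_of=None):
    """Re-evaluate a Contract Object from its child transactions (steps 600-612, simplified)."""
    # Step 600: start from the Create transaction's terms.
    create = next(t for t in transactions if t["type"] == "Create")
    contract = dict(create["terms"])
    contract["total_value"] = 0

    # Step 602: assess subsequent transactions in order of effective date.
    later = sorted((t for t in transactions if t is not create),
                   key=lambda t: t["effective_date"])
    for txn in later:
        if as_of is not None and txn["effective_date"] > as_of:
            continue                      # roll up only to the requested point in time
        if txn["status"] != "Active":
            continue
        if txn["type"] == "Order":        # Step 604: orders add (accumulate) value
            contract["total_value"] += txn["terms"].get("value", 0)
        else:                             # Step 606: other types add/remove/modify terms
            contract.update(txn["terms"])

    # Step 608 yields the net result; steps 610-612 validate, purge stale data, inflate derived data.
    if contract.get("term") == "ongoing":
        contract.pop("expiry_date", None)  # example of purging a stale, dependent attribute
    return contract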
The first step is to import existing contract documents into the present platform and to turn the words on each page of those documents into actionable data, allowing a party to instantly report on, visualize, and analyze individual contracts and the trends across an entire contract portfolio. This helps keep a party ahead of its obligations and to avoid breaches and compliance failures. In one approach, the platform presents users with a dynamic questionnaire or wizard to capture and create a data abstraction of the imported documents and Contract Txns. This wizard approach (or machine-augmented approach) ensures that users are guided through essential data features of the contracts and helps to improve the consistency and quality of human capture. In another approach, the capture and creation of the data abstraction is performed by machines using a supervised learning technique. The machine (typically a software algorithm) is trained with examples of contract documents, contract clauses, contract sentences and/or contract phrases (the “contract corpus”), from which a set of rules is developed and refined using one or more heuristic or machine learning methods. Those methods may include one or more statistical natural language processing algorithms, neural network “deep learning” algorithms, and other machine learning algorithms. The quality of a contract corpus for machine learning training purposes is enhanced by the size and diversity of examples in the corpus. To build a large corpus, it is first seeded with examples sourced from public, open repositories such as those made available by government agencies. Non-public samples may also be sourced by agreement with private contracting parties, but these must be maintained and curated using strict privacy methods, ensuring that no human is able to discover private contract data via direct or indirect interaction with the corpus. In order to address the privacy need of the private-sourced corpus, two methods may be used. First, individual clauses/sentences are human-processed independently of their document context, ensuring the full meaning of those sentences/clauses is not disclosed. Second, each sentence clause is pre-processed via an anonymization gateway to obfuscate identifying information. In one method of anonymization, party name/alias information is substituted with a randomized pool of party names/aliases before presentation to a human reviewer. In a hybrid approach, the system rules are optimized based on both human and machine capture, with human corrections to machine extraction feeding back into the machine learning algorithm and/or contract corpus. An important benefit of the machine learning approach is that it is not constrained by the learning limitations of human memory. Unlike a human expert, the machine learning software can process a contract corpus many orders of magnitude larger than a human can read and retain. When combined with a universal contract model, this supports machine learning and capture outcomes that exceed the accuracy of a human expert. To that end and according to an embodiment of the present invention, a platform uses artificial intelligence to capture contracts and apply captured data to the universal contract model data object and data model. Referring toFIG.7, legacy contracts, third party contracts and existing contractual metadata (if any) are analyzed by scanning a paper document and performing optical character recognition to convert a scanned image to computer readable text. 
Optionally, the computer readable text is manually reviewed and corrected by a human to create OCR corrected data. From the computer readable text, a machine learning/artificial intelligence (“ML/AI”) module is used to obtain data from the contract text for application to the universal contract model. This task is performed by utilizing a contract AI developer module502and a contract AI platform504. The contract AI developer module502comprises a rule development user interface506allowing a user to create rules for contract AI rule development using a human rule builder module508. The contract AI developer module502further comprises a ML/AI rule builder module510. The rule builder modules508and510process documents and clauses from contracts to create rule sets that are categorized as universal contract model rule sets514, industry specific rule sets516, and customer specific rule sets518, as described below. The rules are deployed to the contract AI platform504(described below). The contract AI platform504further provides a feedback training corpus520that includes a contract document training corpus526and a contract clause training corpus524. Referring toFIG.8, contract documents from public and private sources are processed using optical character recognition (“OCR”) with a text and imaging processing module528. The contract documents are then processed using ML/AI and pre-classified into contract document types according to the universal contract model. Confidence scores from the ML/AI process are provided with each object type. Next, human experts review the pre-classified contract documents to determine whether the ML/AI process correctly classified the contract documents. Correctly classified documents are passed to the document training corpus526. Incorrect classifications are correctly identified by human experts, and the classification information is passed back to the ML/AI algorithms to correctly classify future contract documents. Referring toFIG.9, the clause training corpus524is populated from the document training corpus526through the ML/AI process to transform the documents into a collection of single sentences. The sentences are then given legal classifiers using the ML/AI models with corresponding confidence scores. Sentences that have high ambiguity are joined with dependent content or a dependent sentence to create sentence pairs and re-analyzed for legal classification as a sentence pair. Sentences with lower ambiguity are optionally anonymized and presented to human classifiers to verify the legal classification. When human corrections are made, the corrections are provided to the feedback training corpus520and the clauses are provided to the clause training corpus524. Referring toFIG.10, in a third step, the clause training corpus524is amplified by legally re-classifying clauses from the clause training corpus524by passing them through ML/AI models to refine the clause classification. Clauses or sentences that remain unclassified from the document training corpus526are also classified with legal classifiers using ML/AI models and given confidence scores. Sentences that have high ambiguity due to external dependencies are joined to the dependent content/sentence to create sentence pairs (a type of clause) and re-sent to the ML/AI to be classified with legal classifiers and given confidence scores. From the classification step, classified sentences/clauses with high confidence scores are passed directly to the clause training corpus524.
Clauses with lower confidence scores are passed to an optional anonymizer and then presented to human experts to verify the classification. Classifications that are corrected are sent to the feedback training corpus520and clauses that are not corrected are sent to the clause training corpus524. Referring toFIG.11, in a fourth step, the clause training corpus524is refined by first taking legally classified clauses from the clause training corpus524and passing them through ML/AI models to refine clause classification models. Clauses with a specific legal classification are selected for additional annotation/training, e.g. “obligations”. The clauses then, optionally, are anonymized by obfuscating private data and presented to human experts for further sub-classification, e.g. “indemnification,” “payment,” etc. Next human classifications are fed into ML/AI models for training and development of new business classifier models. New clauses of the same legal classification are given “business” or “subject” classifications by the ML/AI models and presented for human review. Verified “business” classified clauses are passed to the clause training corpus524for use in training. Corrected “business” classified clauses are passed to the feedback training corpus520for use in training. Referring toFIG.12, in a fifth step, the clause training corpus524is further refined with direction classifiers by, first, taking legally and business classified clauses from the clause training corpus524and passing them through ML/AI models to refine clause classification models. Public source clauses are then selected for named legal entity recognition. Clauses are presented to human experts for “named entity” annotation and normalized “party role” annotation. Human expert annotations are fed into ML/AI models for training and development of named entity classifiers and party role tagging models. Clauses are then presented to human experts for “direction” annotation using normalized “party role” alias substitution. Relevant clauses are annotated with “direction” tags, including From [role], To [role], Mutual, etc. For example, an Obligation may be classified as “Mutual” or “From Supplier”, and a Right to Renew may be classified as “To Customer”. Directionally classified clauses are then passed to the clause training corpus524for use in training. Referring toFIG.13, in a sixth step, the clause training corpus524is further refined with timing classifiers by, first, taking clauses from the clause training corpus524and passing them through ML/AI models to refine clause classification models. Next, clauses with timing contingencies (Obligations, Representations, etc.) are selected for additional annotation. The clauses then, optionally, are anonymized by obfuscating private data and presented to human experts for further “timing” classification, e.g. “event triggered”, “periodic”, “date specific”, etc. Human classifications are fed into ML/AI models for training and development of new timing classifier models. New clauses are given “timing” classifications by the ML/AI models and presented for human review. Verified “timing” classified clauses are passed to the clause training corpus for use in training. Corrected “timing” classified clauses are passed to the feedback training corpus520for use in training. 
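Before continuing to the seventh step, the anonymization and confidence-based human review pattern used throughout the preceding steps might be sketched as follows; the alias pool, the threshold value and the record fields are all assumptions made for this sketch:

import random
import re

ALIAS_POOL = ["Alpha Corp", "Beta LLC", "Gamma Ltd", "Delta GmbH"]
CONFIDENCE_THRESHOLD = 0.90   # assumed cut-off for accepting machine labels without review

def anonymize(sentence, party_names):
    """Substitute real party names with randomly drawn aliases before human review."""
    aliases = random.sample(ALIAS_POOL, k=len(party_names))
    for real, alias in zip(party_names, aliases):
        sentence = re.sub(re.escape(real), alias, sentence)
    return sentence

def route(classified, clause_corpus, feedback_corpus, human_review):
    """Confident classifications go to the clause corpus; the rest go to experts,
    and corrections flow back to the feedback corpus for retraining."""
    for item in classified:               # each item: {"text", "parties", "label", "confidence"}
        if item["confidence"] >= CONFIDENCE_THRESHOLD:
            clause_corpus.append(item)
            continue
        item["text"] = anonymize(item["text"], item.get("parties", []))
        corrected = human_review(item)    # expert verifies or corrects the label
        if corrected["label"] != item["label"]:
            feedback_corpus.append(corrected)
        else:
            clause_corpus.append(corrected)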
Referring toFIG.14, in a seventh step, the clause training corpus524is further refined with conditional classifiers by, first, taking clauses from the clause training corpus524and passing them through ML/AI models to refine clause classification models. Next, clauses with conditionality/qualification (Obligations, etc.) (“conditionality” clauses) are selected for additional annotation. The clauses then, optionally, are anonymized by obfuscating private data and presented to human experts for further “conditionality” classification, e.g. “Subject to”, “Except for”, etc. Human classifications are fed into ML/AI models for training and development of new conditionality classifier models. New clauses are given “conditionality” classifications by the ML/AI models and presented for human review. Verified “conditional” clauses are passed to the clause training corpus524for use in training. Corrected “conditionality” classified clauses are passed to the feedback training corpus520for use in training. Referring toFIG.15, in an eighth step, ML/AI training is applied to the clause training corpus524using multiple learning models and techniques until performance/accuracy crosses a desired threshold (>X %). Next, once the threshold is met, high performing universal contract model rules/models514are packaged for deployment in a rule deployment package528. Industry-specific rules516are optionally packaged to supplement universal rules, and customer-specific rules518are optionally packaged to supplement universal rules. Referring toFIG.16, in a ninth step, universal contract data model rules514, industry specific rules516and customer rules518are deployed from the contract AI developer502to an AI engine530associated with the contract AI platform504. Finally, referring toFIG.17, in a tenth step, a contract management application passes new contract documents to the contract AI platform504. The contract AI platform504images and OCR processes the contract document to ensure that a high-quality document text layer is available. Natural Language Processing (NLP) and Verification Services are optionally applied to document text, for example, to provide supplementary annotation of the text like named entity recognition, sentence boundary detection, parts of speech tagging, or address verification. Next, document and contract transaction ML/AI rules are applied to the contract document. Next, multiple clause classification ML/AI rules are applied to the contract document. In a subsequent step, a normalized dataset according to the universal contract model is passed with the contract document, and the contract management application ingests the dataset into the UCM transaction/object and data model and performs a contract model rollup. Finally, if the contract is reviewed by a user, the contract management application captures the user's corrections to the data, which are passed back into the ML/AI feedback loop of the contract AI platform504.

Assessing Contractual Risk

The present platform develops an approach where contractual risk can be measured by assessing the extent to which a contract contains clauses that increase risk transfer to one party or create barriers to risk transfer to another party. The approach supports weighted, customer-specific adjustments to the risk score components while also maintaining a universal, standard weighting. This supports the development of risk sub scores and overall contract risk scores that are tailored to the needs of specific enterprises.
A party can gain visibility into its contractual risk by scoring risk using an algorithm that analyzes risk factors to objectively measure the risk of each of a party's contracts, allowing the party to identify and manage issues before they can become problems. The present platform may include an implementation of a contractual risk model which measures the extent to which risk transfer is achieved or constrained (from a party-specific perspective) by the terms of any contract. In one implementation, one or more contractual terms/data points are declared to serve risk allocation purposes, and a maximum potential risk score is assigned to each such term, where a high score indicates an undesirable risk outcome (a likely risk increase) for the party whose contract portfolio is being assessed. One or more instance scores may then be declared for one or more specific values assigned to the contractual terms/data points, within the range between zero and the maximum. A normalized total contract risk score may now be evaluated for any one contract, for example, using an algorithm that scores contract risk as a percentage based on the instance score compared to the highest possible risk score. An optional approach supports a bifurcation of the universal risk score (based on the views of a pool of experts) and a customer specific version of the risk score, under which the customer applies a secondary weighting to elements of the universal risk score based on the views of its own risk experts. Another implementation of the risk scoring algorithm uses machine learning against a corpus of contractual documents and real-world risk outcome data to assign risk benchmark scores to the universal risk scoring model. For example, litigation outcome data (including judicial and settlement outcomes) could be used to assess the scoring of particular contract terms in achieving their intended risk allocation outcome. In that regard, and referring toFIG.18, a contractual risk score is calculated by implementing a set of universal contractual data fields (CDF) that is available for risk score allocation. Each field may be a key:value pair, an RDF triple, an array, or another data representation. Also, a risk score scale is defined to indicate low risk transfer through to high risk transfer. For example, a low risk is associated with zero, and a high risk is associated with a number, such as ten or more, with intermediate degrees of risk associated with the numbers in between. A user interface allows a contract expert to select a CDF and assign a risk score to it. A subset of CDFs would not have risk scores assigned, since not all CDFs reflect risk allocation. The user interface further allows the contract expert to apply a conditional rule to a CDF-RiskScore pair. Risk scores for any CDF value may be bifurcated to reflect different risk outcomes in different circumstances. For example: Limitation of Liability for Supplier = “0” Risk Score IF Supplier Role is Our Company. Further, a risk score data store records all CDF-Risk Score pairs. The risk score data store is deployed to a production contract management system to display contract risk scores to a user associated with the contract. The contract risk scores may be a sum of contract risk values, or the contract risk score may be the value of the highest contract risk score for a CDF. Inputs to the score include external data, including data obtained from public and private sources and updated in real time.
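A hedged sketch of this CDF-based scoring, including a conditional CDF-RiskScore rule and a normalized percentage score, follows; the field names, score values and rules are illustrative assumptions only:

# Maximum potential risk score per contract data field (CDF); not every CDF carries risk.
MAX_RISK = {"liability_cap": 10, "indemnity": 8, "governing_law": 4}

def field_risk(cdf, value, facts):
    """Instance risk scores, including a conditional CDF-RiskScore rule."""
    if cdf == "liability_cap":
        if value == "uncapped":
            # Conditional rule: uncapped liability is no risk to us if we are the supplier.
            return 0 if facts.get("supplier_role") == "our_company" else 10
        return 3
    if cdf == "indemnity":
        return 8 if value == "one_way_against_us" else 2
    if cdf == "governing_law":
        return 4 if value not in ("New York", "England") else 1
    return 0

def contract_risk_score(cdf_values, facts):
    """Normalized contract risk as a percentage of the highest possible score."""
    scored = {k: v for k, v in cdf_values.items() if k in MAX_RISK}
    total = sum(field_risk(k, v, facts) for k, v in scored.items())
    maximum = sum(MAX_RISK[k] for k in scored)
    return 100.0 * total / maximum if maximum else 0.0

score = contract_risk_score(
    {"liability_cap": "uncapped", "indemnity": "one_way_against_us", "governing_law": "New York"},
    {"supplier_role": "counterparty"},
)
print(round(score, 1), "percent of maximum risk")   # 86.4; alert if this exceeds a guideline threshold

External data inputs of the kind discussed next would adjust these instance scores in real time.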
For example, for the data model variable CounterpartyCountry (which describes the country of origin of a contract counterparty), the credit risk score associated with various countries can be modified in real time based upon platform-monitored news and government databases (for example, countries under embargo, countries identified as rogue states, or countries along a spectrum of business friendliness). As another example, a data model variable KeyPerson could assign a credit risk score for news information about the person—for example, if a key person was the chief executive officer or chief technology officer and there is a press release that he/she stepped down or was diagnosed with some disease or is taking a leave of absence, etc., then the score for that particular person is adjusted to reflect that risk. Further, the contract management system can alert a user when a contract risk score exceeds a threshold value. For example, and referring toFIG.19, a new document/version in a contract management system is submitted to the contract AI platform504and translated into a set of CDFs. The CDF data is then assessed and assigned a contract risk score according to the risk values associated with its various CDFs. A contract risk score reference data store is available to the contract management system. Individual and aggregated contract risk scores are presented within the contract management user interface to provide a normalized, measurable assessment of contract risk to contract management system users. Contract risk scores are processed by a rules engine to trigger alerts for unacceptable deviation from risk guidelines. A responsibility reference table is looked up to ascertain who receives risk alerts for any CDF or set of CDFs. Risk alerts are communicated to responsible persons via email, mobile and other methods. Using normalized contractual risk, a method of comparing contracts against a universal benchmark is implemented. By building on normalized data and risk models, any one organization can benchmark its contractual outcomes against a set of peers. In that regard and referring toFIGS.20A-20D, an exemplary model for scoring contractual risk is provided. In the example, a data representation of variables and values from the Contract Data Model is provided. At the top of each column is the Contract Data Model variable, and a maximum risk score associated with that variable is provided at the top of an adjacent column. Below the Contract Data Model variable are provided various possible values of the variable, each paired with the risk value associated with that variable value. In determining contract risk, lookups are performed for the value of each contract variable, and the risk values associated with each Contract Data Model variable value are summed and compared to the maximum contract risk score for those variables, thus providing a relative view of contract risk in comparison to the maximum calculated contract risk. In another version of the data model, a normalized clause model is implemented. One layer of a contractual data model is the assessment of clause types included or excluded from a contract. To facilitate this analysis, a contract or contract transaction must be broken into clauses, and those clauses can then be compared against known clause types for classification. Particularly when machine learning or AI is applied to the problem, a concept of clauses and clause types is an important feature of a training corpus.
One feature of clauses that undermines clause analysis and comparison is variability in presentation, formatting, and layout, where those variations have no bearing on the substance or underlying words/language. By applying a normalized clause model to this problem, clauses can be represented, stored and analyzed in a format that abstracts clause content from clause presentation and formatting. Clauses that are substantively the same but with different formatting are now modeled in a way that removes the false positives regarding whether two clauses are different. A further benefit of abstracting clauses to a normalized, format-free model is that clauses used with specific formatting in one context can be re-used in different contexts via the systematic application of formatting rules. An implementation of this normalized clause model represents the clause content as XML, with at least a single content block, an optional heading element, and optional nested subclauses, which are recursive. Another implementation of this normalized clause model represents the clause as the smallest unit of contract wording that is meaningful “standalone”, that is, without a material dependency on content external to itself. In this implementation, a single contractual sentence is represented as a clause where it can be classified as to meaning (and the normalized contract data attributes described above) without reference to other content. Other sentences, for which meaningful classification is dependent on external content, are, in an implementation, combined with that external wording, and the resulting combination (e.g. of two sentences) is modeled as a clause. The combined clause, assuming it is now meaningful “standalone”, is then classified as to meaning (using the normalized classification models described previously). A benefit of this sentence-level (or sentence-pair-level) model of a clause is that it facilitates more granular normalization of contract data than approaches which model clauses at the heading or block level. A further benefit is that machine learning methods can be trained to higher levels of performance/accuracy when clauses are modeled at granularity lower than heading or block levels. Training at block and heading level can result in clause content with high levels of variability and inconsistency, which increases the confusion element of machine learning methods. This is particularly problematic when the content in a single paragraph block (or the content associated with a particular heading) contains a large number of sentences. Another implementation of this normalized clause model enables an improved method of machine-processing of contract data that combines normalization of structured data, normalization of sentence classification, normalization of clauses and normalization of contractual objects. Another implementation of the present platform allows for improvements in relationship visibility across a single enterprise, and optionally extended to one or more external or related enterprises, by linking unique party data from one or more contracts to trusted sources of unique legal entity identifiers. This allows a chain of contractual relationships to be created and presented automatically, based upon active contract terms, and supports improved compliance with regulatory requirements, for example, know-your-customer regulations, anti-corruption regulations and anti-money-laundering regulations.
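Purely as an illustration of this kind of contractual relationship chaining, a hedged sketch follows; the entity identifiers and the breadth-first traversal are assumptions made for the sketch, not part of the disclosure:

from collections import deque

# Edges derived from active contracts: each contract links two (or more) legal entity IDs.
contract_parties = {
    "Con001": ["LEI-AAA", "LEI-BBB"],
    "Con002": ["LEI-BBB", "LEI-CCC"],
    "Con003": ["LEI-CCC", "LEI-DDD"],
}

def degrees_of_separation(start, target, contracts):
    """Breadth-first search over contract-party links; returns the number of hops, or None."""
    neighbours = {}
    for parties in contracts.values():
        for a in parties:
            for b in parties:
                if a != b:
                    neighbours.setdefault(a, set()).add(b)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        entity, depth = queue.popleft()
        if entity == target:
            return depth
        for nxt in neighbours.get(entity, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None

print(degrees_of_separation("LEI-AAA", "LEI-DDD", contract_parties))   # 3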
In one approach, contractually-evidenced relationships between parties can be presented with zero to many degrees of separation, extending relationship analysis to address indirect as well as direct relationships. Another implementation of the present platform allows for automated document assembly that accepts input for desired contractual terms and utilizes functional language to implement those terms according to desired risk scores measured against a normalized contract model. A recommendation capability can further be implemented which compares an assembled document to the normalized contract model to determine additional terms and clauses that would be desirable to improve the contract's risk assessment versus the normalized contract model. Another implementation of the present platform allows for mobile alerts for contracts and contractual terms which violate certain rules, score high or low in comparison to the normalized contractual model (as described above), or have terms which conflict with an alert or rule set against the normalized clause model. In embodiments, the present platform may provide data and alerts regarding contract models, scoring, automated contract preparation and/or trends over a network to a remote client device, by providing a user interface dashboard to a user for installation on the remote client device; receiving a request from the remote client device to present the data and/or alert; generating an alert from the data or alert that contains a uniform resource locator (URL), which specifies the location of the data and/or alert and content related thereto; and transmitting the data and/or alert over a communication channel to the remote client device associated with the user based upon a destination address and transmission schedule that is associated with the remote client device, wherein the data and/or alert activates the user interface dashboard to cause the data or alert to display on the remote client device and to enable connection with the user interface dashboard when the remote client device is activated (a simplified sketch of such an alert is given after this paragraph). In embodiments of the present platform, blockchain technology is incorporated to create a ledger of contract timeline and performance activities. A blockchain is a distributed database that maintains a continuously growing list of ordered records called blocks. Each block contains a timestamp and a link to a previous block. By design, blockchains are inherently resistant to modification of the data: once recorded, the data in a block cannot be altered retroactively. A blockchain is an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically. Blockchains are secure by design and are an example of a distributed computing system with high Byzantine fault tolerance. Decentralized consensus can therefore be achieved with a blockchain. This makes blockchains suitable for the recording of events, such as contractual performance. In an embodiment and referring toFIG.21, the present platform is deployed in a configuration to support contracting events in combination with blockchain technology.
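As a rough, assumed sketch of the mobile-alert flow described above (not the platform's actual API), the following fragment shows a rules engine flagging a contract whose normalized risk score exceeds a guideline, looking up a responsible person, and building an alert whose URL points at the underlying data for the dashboard to open. The addresses, URL scheme, threshold and table contents are hypothetical.

from dataclasses import dataclass

RISK_THRESHOLD = 0.7
RESPONSIBILITY_TABLE = {"LiabilityCap": "risk-team@example.com"}  # hypothetical responsibility lookup

@dataclass
class Alert:
    recipient: str
    message: str
    url: str  # location of the data/alert content for the dashboard to retrieve

def evaluate_contract(contract_id, relative_risk, worst_cdf):
    """Return an Alert if the normalized risk score deviates beyond the guideline, else None."""
    if relative_risk <= RISK_THRESHOLD:
        return None
    recipient = RESPONSIBILITY_TABLE.get(worst_cdf, "contract-admins@example.com")
    return Alert(
        recipient=recipient,
        message=f"Contract {contract_id} risk {relative_risk:.2f} exceeds guideline {RISK_THRESHOLD}",
        url=f"https://contracts.example.com/{contract_id}/risk",  # hypothetical URL
    )

alert = evaluate_contract("C-1042", 0.82, "LiabilityCap")
if alert is not None:
    print(alert)  # in practice, pushed to the remote client device per its destination address and schedule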
By combining the platform (with its universal contract model for describing contracts as objective, normalized, point in time structured data) with blockchain technology (to verify important data in the contracting process), a trusted automated approach to forming and performing contractual events (such as contractual transfer of ownership/title for certain assets) is achieved, lowering transaction costs compared to systems with heavy reliance on human processing and verification. Such automated contracting systems can be referred to as "Smart Contract" systems. Smart Contract systems that are implemented without reference to trusted, normalized, machine-processable contracting terms will be limited in their application and scope due to ambiguity and contractual uncertainty. The present platform overcomes these limitations through the innovation of linking the Smart Contract system with a UCM-enabled contract platform which does offer universal, normalized, point in time, machine-processable contract data and terms. FIG.21describes, in an embodiment, an example of the present platform with Smart Contract capabilities. First, the platform allows two or more contracting parties to form a contract in step2002, expressed and verified using the UCM model. In an embodiment, the UCM model is recorded in (and can be looked up from) a blockchain register of the objects, terms and data supported by the UCM. Each version of the model is recorded in this UCM Contract Model blockchain with version information, allowing the model to evolve over time and allowing contracting parties to link their contracts to a specific version of the UCM. Second, in a step2004, the platform includes an optional step of verifying the parties to the contract against a trusted register of legal entities, to ensure that those parties are accurately and uniquely identified and in good standing (or, in the case of natural persons, living and legally competent). Third, in a step2006, a final, executed and valid contract transaction is recorded in a Verified Contract register, itself also optionally implemented as a blockchain ledger. A Verified Contract is a contract that has been recorded and can be verified against the blockchain ledger. This register stores contract transactions, and rolled-up current-state contract objects, together with machine-processable terms based on the UCM data model. The Verified Contract register can be queried to ascertain the terms of any contracts it stores, subject to access control and permissions. Fourth, in an embodiment and in a step2008, the platform tracks events relevant to the performance of a contract. In the case of contracts with obligations to transfer ownership, the platform may optionally look up a Trusted Event register to ascertain whether a triggering event for execution of a transfer has occurred. Upon verification of a triggering event, in a step2010, the platform optionally performs a real-time verification that the contract is still valid (e.g. by looking up the Verified Contract register) and that the parties are still in good standing and legally competent. Fifth, in an embodiment and in a step2012, having verified that a transfer obligation should be executed, the platform executes a transfer of ownership event, and records that transfer in a Transfer Event register, itself optionally implemented as a blockchain ledger. The transfer event is registered in terms reflecting the terms of the contract.
A trusted event is an event that can be verified against the Trusted Event register. Sixth, in an embodiment and in a step2014, having successfully recorded the transfer event in the transfer event register, the platform updates an Asset register to record the change of ownership of the asset. The Verified Asset register may, in an embodiment, itself be implemented as a blockchain ledger. The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. The processor may be part of a server, cloud server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platform. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more thread. The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache and the like. A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the process may be a dual core processor, quad core processors, other chip-level multiprocessor and the like that combine two or more independent cores (called a die). The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server, cloud server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The software program may be associated with a server that may include a file server, print server, domain server, internet server, intranet server and other variants such as secondary server, host server, distributed server and the like. The server may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like. 
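Returning to the Smart Contract flow ofFIG.21(steps2002through2014) described above, the following toy Python sketch illustrates the general idea behind the hash-linked registers: an append-only ledger in which each block references the hash of the previous block, used here to record a verified contract and, after the chain is re-verified, a transfer event. It is a single-node conceptual illustration under assumed data fields, not the platform's actual Verified Contract, Trusted Event or Transfer Event registers.

import hashlib, json, time

class Ledger:
    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "data": "genesis", "ts": 0.0}]

    def append(self, data):
        block = {"index": len(self.blocks), "prev": self._hash(self.blocks[-1]),
                 "data": data, "ts": time.time()}
        self.blocks.append(block)
        return block

    def verify(self):
        # every block must still link to the hash of its predecessor
        return all(self.blocks[i]["prev"] == self._hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

    @staticmethod
    def _hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

verified_contracts = Ledger()
verified_contracts.append({"contract": "C-1042", "parties": ["Acme", "Globex"],
                           "term": "title transfers upon delivery confirmation"})

transfer_events = Ledger()
if verified_contracts.verify():  # real-time check that the recorded contract is intact (cf. step 2010)
    transfer_events.append({"contract": "C-1042", "asset": "Lot 7", "new_owner": "Globex"})  # cf. step 2012
print(transfer_events.blocks[-1])

A production deployment would rely on a distributed consensus network rather than a single in-memory list; the sketch only shows why a broken link makes a recorded contract or event untrustworthy.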
The methods, programs or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server. The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more location without deviating from the scope of the disclosure. In addition, any of the devices attached to the server through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The software program may be associated with a client that may include a file client, print client, domain client, internet client, intranet client and other variants such as secondary client, host client, distributed client and the like. The client may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, programs or codes as described herein and elsewhere may be executed by the client. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client. The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more location without deviating from the scope of the disclosure. In addition, any of the devices attached to the client through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM and the like. 
The processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements. The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may either be frequency division multiple access (FDMA) network or code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The cell network may be a GSM, GPRS, 3G, EVDO, mesh, or other networks types. The methods, programs codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic books readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute program codes. The mobile devices may communicate on a peer to peer network, mesh network, or other communications network. The program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage device may store program codes and instructions executed by the computing devices associated with the base station. The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g. USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. The methods and systems described herein may transform physical and/or or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another, such as from usage data to a normalized usage dataset. The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. 
However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it will be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general-purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable device, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It will further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium. 
The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions. Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.
11861752
DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the presently disclosed subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the presently disclosed subject matter. Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing”, “computing”, “providing”, “obtaining”, “determining”, “manipulating”, “calculating”, “comparing”, “evaluating”, “selecting”, “modifying”, “extracting”, “generating”, “identifying” or the like, refer to the action(s) and/or process(es) of a computer that manipulate and/or transform data into other data, said data represented as physical, such as electronic, quantities and/or said data representing the physical objects. The term “computer” should be expansively construed to cover any kind of hardware-based electronic device with data processing capabilities including, by way of non-limiting example, the processor and memory circuitry (PMC)120disclosed in the present application. The terms “non-transitory memory” and “non-transitory storage medium” used herein should be expansively construed to cover any volatile or non-volatile computer memory suitable to the presently disclosed subject matter. The operations in accordance with the teachings herein may be performed by a computer specially constructed for the desired purposes or by a general-purpose computer specially configured for the desired purpose by a computer program stored in a non-transitory computer-readable storage medium. In known methods of providing feedback to users on a motion, the user performs the motion, his performance of the motion is tracked by sensors, processed, and then feedback of the performed motion is provided to the user. In some known solutions, the system provides feedback to the user by reporting a set of measurements related to the motion performed by the user. Consider an example of evaluating a certain type of a punch in a sport field, and evaluating some measurements of the motion, such as speed and force. Known systems may provide feedback on the speed of the motion by presenting the speed that was measured by the sensors. The system may also compare the motion performed by the user to a “perfect template” and provide the user with feedback based on the comparison. However, these known systems generate feedback by comparing the measurements to the measurements produced by a template motion. This approach often fails to account for multiple variations of the motion that are considered correct. Moreover, obtaining a perfect template motion is also often practically challenging. It is therefore advantageous to consider multiple variations of the motion in terms of measurements of that motion and to accurately reflect acceptable variation of the motion by collecting and processing representative set of example motions performed by skilled subjects instead of relying on a single template motion or rule-based heuristics. For example, when considering the speed of a performed motion of a user, it is advantageous to consider several speed measurements, which may be acceptable as correct performance of that type of punch. 
Comparing to a set of measurements rather than to a “perfect template” of a motion avoids the need to create a perfect template. It is also advantageous to identify specific aspects or characteristics of a given motion execution that require improvement, e.g. speed or force of a motion, to provide a focused feedback, e.g. specific instructions or guidance of how to achieve improvement of one or more aspects of the motion, and prioritize the feedback to provide the one feedback with highest potential impact on the performance first. Bearing this in mind, attention is drawn toFIG.1illustrating a high level illustration of a feedback environment including a user110, one or more sensors160that are operatively connected to the user110, a camera150, and a feedback system100for providing a total score on a motion of a user in accordance with certain embodiments of the presently disclosed subject matter. Feedback system100is operatively connected to sensors160and camera150. Assume for example that user110wishes to perform a specific motion, for example, a punch. The user can also be interested in a specific punch type (referred to occasionally as a designated target motion) such as a jab, cross, hook or uppercut. One or more sensors160are operatively connected to the user110and are configured to sense motion data indicative of a motion of the user110. For example, each of sensors160-aand160-bcan be IMUs mounted on the wrist of the user110. If the user110wishes to perform a target motion of a landed hook, IMUs sensors160sense motion data performed by the user110. In some examples, sensors160include other type of sensors, such as a pressure mat (not shown) or camera150. The other types of sensors are configured to capture motion data. For example, camera150is configured to capture a video of user110while performing the motion. Feedback system100, that is operatively connected to sensors160, includes processing and memory circuitry (PMC)120, communication interface140and a feedback device130configured to provide feedback to the user110, e.g. a cell phone including a microphone and a display. Once the user110has performed a motion, sensors160sense motion data. Optionally, camera150is configured to sense motion data. PMC120is then configured to obtain the sensed motion data from sensors160and camera150, e.g. by receiving such data from communication interface140communicating with the sensors160and camera150. PMC120is configured to process the obtained motion data and to provide a score to the motion performed by the user110with respect to a designated target motion. For example, the designated target motion can be selected by the user before or after performing the motion. If the target motion is a landed hook, PMC120is configured to provide a score on the motion performed by the user110with respect to the landed hook. In some examples, PMC120is configured to provide feedback to the user110on how to improve his motion to perform the landed hook in a more correct and accurate manner. Such feedback can be provided to the user by feedback device130, e.g. by displaying feedback on the display, or by providing audio feedback to the user110. Attention is now drawn toFIG.2illustrating a non-limiting block diagram of a feedback system100including PMC120, in accordance with certain embodiments of the presently disclosed subject matter. The numeral references of elements of feedback environment as appearing inFIG.1are also applicable toFIG.2. 
Feedback system100includes PMC120, communication interface140and feedback device130, as illustrated inFIG.1. As further detailed below, the processor of PMC120is configured to execute several functional modules in accordance with computer-readable instructions implemented on a non-transitory computer-readable storage medium. Such functional modules are referred to hereinafter as comprised in the processor. The processor can comprise an obtaining module210, a classifying unit220, a calculating and scoring module230, a feedback module240, a storage unit250, a detection unit260and a regression unit270. In some examples, user110wishes to perform a target motion of a landed hook. User110can select a designated target motion in feedback system100, e.g. by selecting one option of several displayed options on feedback device130. The displayed options can be pre-defined by feedback system and retrieved from storage unit250. Alternatively, the designated target motion could be preconfigured and announced to the user before motion execution. The selected motion can be communicated to PMC120, thus PMC120is configured to obtain data on the designated motion. As described inFIG.1, once the user110performs an action, sensors160are configured to sense motion data from the user110. PMC120is configured to obtain the motion data indicative of the motion of the user110, e.g. by obtaining module210through communication interface140. In examples where the feedback environment (illustrated inFIG.1) includes additional sensors, obtaining module210is configured to obtain motion data by receiving additional motion data from one or more additional sensors, e.g. through communication interface140. For example, camera150is configured to sense the additional motion data by capturing a video of user110, or a pressure mat (not shown) is configured to sense pressure of the user110. A person versed in the art would realize that other sensors configured to provide motion data are applicable to the disclosed subject matter, such as radars, LIDARS, microphones, infrared or depth cameras, force sensors, tension sensors and the motion data can be extracted using sophisticated yet well-known algorithms (e.g. for extracting motion data from a video captured by camera150, a pre-trained pose recognition model or image segmentation algorithms must be run). Once the motion data is obtained, the motion data is processed by PMC120for providing a total quality score with respect to the motion. The processing includes processing of the motion data in one or more trained models. The models can include detection model for detecting the fact of motion happening/performed by a user, as processed by detection unit260, classification model to predict class of a specific motion, e.g. punch type, as processed by classifying unit220and regression model to estimate numeric characteristics of the motion, such as punch speed or force of impact, as processed by regression unit270. Each of the models outputs one or more scores. These models are further explained in relation toFIG.3below. One or more scores of the models constitute a set of subscores in relation to the designated target motion. Based on the determined set of subscores, calculating and scoring module230is configured to provide a total quality score with respect to the designated target motion of the user, e.g., by applying calculation on the subscores for obtaining a consolidated total quality score. For example, a weighted average of some or all subscores can be calculated. 
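By way of a speculative sketch only (the module names mirror the description ofFIG.2, but the composition is an assumption, not the patented implementation), the flow from motion data through the detection, classification and regression models to a consolidated total quality score could be organized as follows, here with stubbed models standing in for the trained ones.

from dataclasses import dataclass

@dataclass
class FeedbackPipeline:
    detector: object    # motion data -> {"motion performed": p, "no motion": 1 - p}
    classifier: object  # motion data -> per-class confidences (e.g. jab/cross/hook/uppercut)
    regressor: object   # motion data -> characteristic scores already normalized to 0..1
    scorer: object      # subscores dict -> total quality score

    def run(self, motion_data, target_motion):
        subscores = {}
        subscores["motion performed"] = self.detector(motion_data)["motion performed"]
        subscores["punch type"] = self.classifier(motion_data)[target_motion]
        subscores.update(self.regressor(motion_data))
        return {"subscores": subscores, "total": self.scorer(subscores)}

pipeline = FeedbackPipeline(
    detector=lambda d: {"motion performed": 0.9, "no motion": 0.1},
    classifier=lambda d: {"jab": 0.1, "cross": 0.3, "hook": 0.5, "uppercut": 0.1},
    regressor=lambda d: {"speed": 0.5, "force": 0.2},
    scorer=lambda s: sum(s.values()) / len(s),  # equal-weight average of the subscores
)
print(pipeline.run(motion_data=None, target_motion="hook"))

A full implementation would presumably plug the trained models into these slots and include further subscores (e.g. the impact-type confidence) as well as configurable weights.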
The calculation and utilization of the set of subscores for obtaining a total quality score is further described below with respect toFIG.3. The total quality score is indicative of an overall correctness of the motion performed by the user in relation to the designated target motion. Calculating and scoring module230is configured to provide the total quality score to feedback module240comprised in PMC120. In turn, feedback module240is configured to provide the total quality score to the user, e.g. by displaying it on the feedback device, e.g. using feedback device130. In some examples, feedback module240is configured to provide, alternatively or in addition to the total quality score, focused feedback on the motion of the user. The focused feedback is feedback that focuses on one or more specific aspects or characteristics of the motion, rather than providing general feedback or score on the motion and includes feedback in relation to one or more subscores from among the set of subscores. For example, the focused feedback can relate to the speed characteristics score. The focused feedback can include guidance on the desired improved speed of the motion performed. Further details of possible feedbacks are described below in relation toFIG.4. Feedback module240is configured to provide the focused feedback e.g. by communicating the feedback to feedback device130. Feedback device130can then provide feedback to the user110, e.g. by displaying the feedback. Any of the above scores, calculations, measurements etc. can be stored, e.g. in storage unit250and can be used as a basis for analytics, such as providing the user with progressive feedback on his progress, performing statistics on users, etc. It is noted that the teachings of the presently disclosed subject matter are not bound by the feedback environment and system described with reference toFIGS.1and2. Equivalent and/or modified functionality can be consolidated or divided in another manner and can be implemented in any appropriate combination of software with firmware and/or hardware and executed on a suitable device. For example, feedback device130can comprise either or all of PMC120, communication interface140and camera150and perform their functionality. Also, those skilled in the art will also readily appreciate that the data repositories such as storage unit250, and as illustrated by an example inFIG.5below, can be consolidated or divided in other manners; databases can be shared with other systems or be provided by other systems, including remote third-party equipment. Referring toFIG.3, there is illustrated a flow chart of operations carried out by PMC120, in accordance with certain embodiments of the presently disclosed subject matter. As described, PMC120provides a total quality score with respect to a motion of a user, for example, when the target motion is a landed hook. In some examples, PMC120obtains data indicative of the designated target motion (block310). In some examples, the user110selects a motion that he wishes to perform. After the selection, PMC120obtains data indicative of the designated target motion by receiving the selection. Alternatively, the designated target motion is preconfigured and announced to the user110before motion execution, thus PMC120obtains data indicative of the designated target motion by receiving the preconfigured target motion. For example, PMC120obtains information on the selection of the user110, Before or after selecting the target motion of a landed hook, user110performs a motion. 
One or more sensors160that are operatively connected to user110sense motion data. For example, sensors160can be IMUs on the wrists of the user110. The sensed data is obtained by PMC120(block320), e.g. by receiving it from sensors160through communication interface140. In examples where the feedback environment includes additional sensors that sense additional motion data of the motion performed by the user110, PMC120obtains additional motion data by receiving motion data from one or more additional sensors. For example, camera150senses the additional motion data by capturing a video of user110, or a pressure mat (not shown) senses pressure of the user110. The obtained motion data is indicative of a motion of the user110. For example, the speed sensed by an IMU sensor is indicative of the speed of the motion performed by the user110. A moving video of a skeleton of user110, extracted from the video captured by camera150using known methods, is indicative of a movement, and hence a motion, performed by the user110. A person versed in the art would appreciate that other examples of sensors providing motion data are also applicable to the disclosed subject matter, for example, skeleton data to provide motion data on a freeze motion. PMC120then processes the obtained motion data for providing a total quality score with respect to the designated target motion. As explained above, the processing includes processing of the motion data in one or more trained models to obtain data indicative of correctness of one or more aspects of the motion of the user in relation to the target motion. The models can include a detection model for detecting the fact of a motion happening/being performed by a user, a classification model to predict the class of a specific motion, e.g. punch type, and regression models to estimate numeric characteristics of the motion, such as punch speed or force of impact. Each of the models outputs one or more scores. In some examples, storage unit250stores the trained models and PMC120obtains the trained models from storage unit250. The above models will now be explained in relation to the fighting motion example, in which the user wishes to perform a designated target motion of a landed hook punch. However, the specific examples of models should not be considered as limiting, and a person versed in the art would appreciate that other models, and classes of the models, can be applied with respect to other types of motions. In some examples, processing the obtained motion data in the models can be done e.g. by detection unit260, classifying unit220and regression unit270, comprised in PMC120, which assist calculating and scoring module230. At block330, PMC120, e.g. using detection unit260and classifying unit220comprised in PMC120, processes the motion data in the classification model for obtaining at least one motion class, and determines a confidence score for each of the at least one motion class, wherein at least one motion class is associated with the designated target motion. In order to do so, detection unit260classifies the motion data into a motion performed class and a non-motion performed class, and classifying unit220classifies the motion data into several classes, each relating to a certain type of punch, which may include e.g. a hook class, a jab class, a cross class or an uppercut class. Additional impact type classes can also be included, e.g. to classify whether the user missed or hit a target. Such classes include a landed motion class and a missed motion class.
Classifying the motion data into one or more classes can be done e.g. using known ML mechanisms, such as neural networks, decision-tree-based algorithms, support vector machines and other known classifiers. The classification model is pre-trained using data for motion examples obtained from expert users (also to be referred to throughout the description as "professionals in the field"). Using a pre-trained classification model for classifying the motion data into one or more classes is advantageous compared to using a single motion template of a target motion, for several reasons. First, the approach relying on a single motion template of a target motion, e.g. when one professional in the field performs the target motion, fails to account for the various variations of the target motion that are considered acceptable in the field. On the other hand, by using a pre-trained classification model, the model explicitly represents various acceptable variations of the target motion by collecting and processing in advance a representative set of example motions, and measurements of the motion, performed by professionals in the field, instead of relying on a single template motion. Comparing to a set of measurements, as explained below, rather than to a "perfect template" of a motion, avoids the need, present in known systems, to create a perfect template. In addition, in known systems, since the motion of the user is compared to a perfect template motion performed by a single professional in the field, the feedback to the user may include feedback on aspects of the target motion which may in fact be acceptable in the field. On the other hand, when providing feedback based on a pre-trained model, as described below, the focused feedback that is provided to the user already considers the acceptable variations of the target motion as performed by several professionals in the field. Classifying the motion data using pre-trained models results in an output value for each class. In some examples, it is possible, using known classification algorithms, to output confidences between 0 and 1 for each possible class, where the sum of all the confidence scores for a given motion is 1. In such cases, the output value of each class is the confidence score for that class. If an algorithm returns unbounded positive output values for each class, the output values can be converted by dividing each value by the sum of all values for a given motion, or by using a softmax function, resulting in a confidence score for each class, where the sum of all confidence scores is 1. Alternatively or additionally, well-known normalization techniques can be applied to convert output values to proper probabilities, where each probability constitutes the confidence score of the class. One or more confidence scores constitute subscores in a set of subscores in relation to the designated target motion, each subscore corresponding to a given aspect of the motion. Consider the example of user110performing a designated target motion of a landed hook punch.
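The normalization step just described (dividing by the sum of the outputs, or applying a softmax) can be sketched as follows; the raw classifier outputs are illustrative and chosen so that the sum-normalized confidences match the landed hook example that follows.

import math

def normalize_by_sum(outputs):
    total = sum(outputs.values())
    return {cls: value / total for cls, value in outputs.items()}

def softmax(outputs):
    exps = {cls: math.exp(value) for cls, value in outputs.items()}
    total = sum(exps.values())
    return {cls: e / total for cls, e in exps.items()}

raw = {"jab": 1.2, "cross": 3.6, "hook": 6.0, "uppercut": 1.2}  # unbounded positive outputs
print(normalize_by_sum(raw))  # {'jab': 0.1, 'cross': 0.3, 'hook': 0.5, 'uppercut': 0.1}
print(softmax(raw))           # also sums to 1, but weights larger outputs more heavily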
Processing the motion data in the detection and classification models can result in the following confidence scores for each class:
Motion detection classes:
motion performed class: 0.9
non-motion performed class: 0.1
Punch type classes, as processed in the classification model:
hook class: 0.5
jab class: 0.1
cross class: 0.3
uppercut class: 0.1
Impact type classes, as processed in the classification model:
landed motion class: 0.3
missed motion class: 0.7
As noted, at least one of the motion classes is indicative of the designated target motion of the user. In the above example, three classes are indicative of the target motion of a landed hook, namely: "motion performed class", "hook punch class" and "landed motion class". Each confidence score of these three classes will be a subscore in the set of subscores and corresponds to a given aspect of the motion. The confidence score of 0.9 for the motion performed class is indicative of a high likelihood that a motion was indeed performed. The confidence score of 0.3 for the landed motion punch class, as illustrated above, is indicative of low correctness of the "landed" aspect of the landed hook motion, and the confidence score of 0.5 for the hook motion punch class is indicative of average correctness of the "hook" aspect of the landed hook motion. As mentioned above, the models are illustrated by way of a non-limiting example in the field of fighting motions. A person versed in the art would appreciate that other models, and classes of the models, can be applied with respect to other types of motions. Examples of classes for a dance motion include a step class, turn class, hand swing class, clap class etc. Other examples of motions and classes of the motions include bat/racket/hockey stick swing motions, kick motions and ball kick motions. Simultaneously or sequentially to the above process, regression unit270processes the obtained motion data using a regression model, for extracting at least one characteristic measurement, such measurements including a punch speed or force of impact of the punch (block340). In the example of a punch motion, characteristics of a punch motion can include speed, force, fist acceleration, reaction time etc. In some examples, sensors160-aand160-bare IMU sensors on the wrists of the user110. Once the user110performs a motion, e.g. a punch, IMU sensors160-aand160-bsense one or more characteristic measurements, such as acceleration and angular velocity of the wrist. Then, using a pre-trained regression model, speed and force of impact (or potential impact) can be estimated. For example, based on the motion data from sensor160-a, the regression models produce the following characteristic measurements for the right hand:
Speed: 5 meters per second
Force: 1500 Newton
Before or after extracting the characteristic measurements from the obtained motion data, regression unit270obtains reference characteristic measurements (block350). In some examples, in order to determine a characteristic score for each characteristic for a specific target motion, PMC120obtains reference characteristic measurements, i.e. measurements of one or more characteristics of the target motion that were measured by sensors160when the target motion was performed by professionals in the field. At least one of the reference characteristic measurements corresponds to at least one of the extracted characteristic measurements, respectively, to make sure that if speed was measured or estimated based on sensed data, corresponding reference speed measurements will be obtained.
The reference characteristic measurements of the target motion can be stored e.g. at storage unit250and can be obtained by PMC120e.g. by retrieving one or more measurements from the storage unit250. Retrieving one or more measurements is further described below with reference toFIG.5. PMC120then evaluates the characteristic measurements extracted from the obtained motion data against the reference characteristic measurements retrieved from storage unit250and calculates at least one characteristic score, each based on one of the extracted characteristic measurements and the corresponding obtained reference characteristic measurements (block360). For example, the characteristic score is a value between 0 and 1 calculated using domain- and feature-specific heuristics on the extracted characteristic measurements and the reference characteristic measurements. One example of such heuristics is using a percentile of the extracted characteristic within the reference characteristic measurements of the professionals. Reference is now made toFIG.5, illustrating a non-limiting example of PMC120including a database storing reference characteristic measurements of professionals, and how to calculate at least one characteristic score based on the extracted characteristic measurement and the corresponding obtained reference characteristic measurement. As illustrated inFIG.5, storage unit250, which is comprised in PMC120, comprises a target motion memory500. Memory500comprises one or more groups of motion types520.FIG.5illustrates two motion type groups: punch motion520-a, e.g. in boxing, and swing motion520-b, e.g. in golf. Each of the groups can include one or more target motions510. Swing motion type520-bcan include target motions of back-swing, downswing, or upswing (not shown). Punch motion type520-aincludes two target motions: a landed hook510-aand a missed jab510-b. Each target motion510includes one or more reference characteristic measurements identified by a characteristic name. Each reference characteristic measurement is associated with a list comprising one or more measurements. In some examples the reference characteristic measurements of the target motions are different from one target motion to another. As such, the specific target motion of a landed hook has certain characteristics which may not be relevant for other target motions, such as the upswing target motion in swing motion520-bor a kick target motion (not shown). The measurements for each characteristic comprise a list of measurements, where each measurement in the list of measurements indicates a measurement of the characteristic taken when a professional performed the target motion. In some examples, one or more professionals perform a target motion, and sensors associated with the professionals sense motion data. The motion data include data relating to measurement of one or more characteristics. The sensed measurements, with the associated characteristics, are then stored in memory500. For example, the target motion landed hook510-aincludes two characteristics: characteristic name "speed" and its associated measurements, and characteristic name "force" and its associated measurements. The measurements of speed include a list of measurements measured in meters per second (m/s): 4 m/s, 5 m/s, and 6 m/s. Each measurement indicates a measurement of speed of a professional performing a landed hook. As such, 4 m/s was taken by professional 1, 5 m/s was taken by professional 2 and 6 m/s was taken by professional 3.
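A minimal sketch of such a reference store follows, using a nested mapping keyed by motion type, target motion and characteristic name. The landed hook values mirror the example above, while the missed jab and upswing entries are invented placeholders, and the layout itself is only an assumed illustration of memory500, not its actual structure.

REFERENCE_STORE = {
    "punch motion": {
        "landed hook": {"speed_m_s": [4, 5, 6], "force_N": [1300, 1500, 1700]},
        "missed jab":  {"speed_m_s": [6, 7, 8], "force_N": [900, 1000, 1100]},   # placeholder values
    },
    "swing motion": {
        "upswing": {"club_head_speed_m_s": [40, 44, 48]},  # different characteristics per target motion
    },
}

def reference_measurements(motion_type, target_motion, characteristic):
    """Retrieve the professionals' measurements for one characteristic of a target motion."""
    return REFERENCE_STORE[motion_type][target_motion][characteristic]

print(reference_measurements("punch motion", "landed hook", "speed_m_s"))  # [4, 5, 6]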
Alternatively, one or more of the measurements were taken by the same professional on different occasions of performing a landed hook. Memory500includes another target motion of a missed jab510-b. The missed jab also includes the following characteristics: characteristic name "speed" and its associated measurements, and characteristic name "force" and its associated measurements. It should be noted that at least some characteristics of each target motion are different from one target motion to another, irrespective of whether the target motions are of the same type of motion. As such, the characteristics of missed jab510-bmay be different from those of landed hook510-a, and the measurements of speed of the target motion of missed jab510-bare different from the measurements of speed of the target motion of landed hook510-a, as the "optimal" speed, e.g. the speed achieved by a professional, of each motion can be different. Also, some characteristics of a certain target motion may not be relevant for another target motion. A person versed in the art would appreciate that memory500and its structure is a specific example, and the data may be stored in a different manner. For example, the memory500can store records of data indicative of professionals and their respective characteristics for each target motion, instead of a list of measurements for each characteristic as illustrated inFIG.5. Other forms of obtaining the measurements of characteristics of professionals can be used for the purpose of the described subject matter. Referring back toFIG.3, once reference characteristic measurements are obtained from memory500, PMC120calculates at least one characteristic score. Each characteristic score is based on one of the characteristic measurements extracted from the obtained motion data, and the corresponding reference characteristic measurements obtained from memory500. Calculating the characteristic score is done using domain- and feature-specific heuristics. For the case when higher values of a measurement are preferable (for example, the higher the punch force the better), one example of such heuristics is using a percentile of the extracted characteristic measurement within the corresponding obtained reference characteristic measurements. The characteristic score can be a value between 0 and 1. Below is a non-limiting example of calculating characteristic scores for characteristics of a landed hook designated target motion. Considering the landed hook, the characteristics can be:
1. Speed
2. Force
The measurements of the above characteristics, as extracted from the motion data sensed by sensors160associated with the user110and obtained by PMC120, are:
Speed: 4.8 m/s
Force: 1600 Newton
For each characteristic measurement, or a group of several characteristic measurements, calculating and scoring module230calculates a characteristic score. Hence, calculating and scoring module230obtains from memory500at least one reference characteristic measurement relating to landed hook510-a, e.g. the list of measurements associated with speed and the list of measurements associated with force:
Speed (m/s): 4, 5, 6
Force (Newton): 1300, 1500, 1700
Using, e.g.,
a percentile of the extracted characteristic measurements within the characteristic measurements of professionals obtained from memory500, calculating and scoring module230calculates the characteristic score based on the following stages and data:
User's speed in motion (extracted from motion data): 4.8 m/s
Professionals' speed in motion (obtained from memory500): 4, 5, 6 m/s
Calculating the percentile of 4.8 m/s within 4, 5, 6 m/s results in 0.33. This value is then used as the speed score: 0.33. A similar calculation can be done with respect to the force characteristic, based on the force obtained in the user's motion and the professionals' force measurements from memory500, to obtain a force score of 0.67. These characteristic scores are each based on one of the extracted characteristic measurements and the corresponding obtained reference characteristic measurements. The characteristic scores are calculated with respect to the designated target motion of a landed hook performed by the user. It is to be noted that certain measured characteristics can result in a high score when considering a target motion of a first type, and a low score when considering a target motion of a second type. In some examples, one or more of the confidence scores from the detection model and the classification model, and one or more of the characteristic scores from the regression model, may each constitute a subscore in a set of subscores. This means that processing the motion data in the models is done so as to obtain a set of subscores that is composed of at least the determined at least one confidence score and the determined at least one characteristic score (block370). In some examples, the confidence scores and characteristic scores relating to the target motion constitute subscores in the set of subscores. Consider, for example, that the models output the following confidence scores and characteristic scores with respect to the landed hook target motion, where some of the confidence scores and characteristic scores constitute subscores in the set of subscores:
motion performed class: 0.9 (this confidence score constitutes a subscore in the set of subscores)
non-motion performed class: 0.1
hook class: 0.5 (this confidence score constitutes a subscore in the set of subscores as a punch type score, since the target motion was a hook)
jab class: 0.1
cross class: 0.3
uppercut class: 0.1
landed class: 0.7 (this confidence score constitutes a subscore in the set of subscores as a punch impact type score, since the target motion was a landed punch)
missed class: 0.3
speed characteristic score: 0.5 (this characteristic score constitutes a subscore in the set of subscores)
force characteristic score: 0.2 (this characteristic score constitutes a subscore in the set of subscores)
Based on the set of subscores, calculating and scoring module230provides a total quality score for the motion of the user with respect to the designated target motion, wherein the total quality score is indicative of correctness of the motion of the user with respect to the designated target motion (block380). Providing a total quality score can be done e.g. by performing a calculation on the subscores to obtain a consolidated score. For example, the calculation may include conducting a weighted average of some or all subscores. Another example could be calculating the product of some or all of the subscores.
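The arithmetic walked through above can be condensed into a short sketch: a percentile-style characteristic score (the fraction of the professionals' reference measurements that the user's measurement exceeds) and an equal-weight average consolidating the subscores into the total quality score. The numbers reproduce the examples in the text; the actual heuristics and weights are implementation choices.

def percentile_score(user_value, reference):
    """Fraction of reference measurements strictly below the user's measurement."""
    return sum(1 for r in reference if r < user_value) / len(reference)

speed_score = percentile_score(4.8, [4, 5, 6])            # 0.33, as in the speed example
force_score = percentile_score(1600, [1300, 1500, 1700])  # 0.67, as in the force example

# Subscores from the block 370 example (which uses 0.5 / 0.2 for the speed and force subscores):
subscores = {"motion performed": 0.9, "punch type (hook)": 0.5,
             "punch impact (landed)": 0.7, "speed": 0.5, "force": 0.2}
total_quality = sum(subscores.values()) / len(subscores)  # equal weights -> 0.56
print(round(speed_score, 2), round(force_score, 2), round(total_quality, 2))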
In the above example, exemplary equal weights given to each subscore in the set of subscores result in 0.56 as the total quality score for the motion performed by the user. Calculating and scoring module230is configured to provide the total quality score to feedback module240comprised in PMC120. In turn, feedback module240provides the total quality score to the user (block390), e.g. by displaying it on the feedback device, with or without the set of subscores. Attention is now drawn toFIG.4, illustrating a flow chart of operations carried out by feedback module240, in accordance with certain embodiments of the presently disclosed subject matter. In some examples, feedback module240is configured to provide, alternatively or in addition to the total quality score, focused feedback on the motion of the user. The focused feedback can be in relation to one or more subscores from among the set of subscores. For example, the feedback can relate to the speed characteristic subscore. In some examples, one or more predefined feedbacks are assigned to a low value of each subscore. The feedback itself can include guidance on whether and how to improve the speed of the motion performed. Feedback module240provides the feedback e.g. by communicating the feedback to feedback device130. Feedback device130then provides the feedback to the user110, e.g. by displaying the feedback. In some examples, in order to provide a user with feedback on the motion performed by him, such that the feedback assists the user to better perform the designated target motion, it is advantageous to provide the user with focused feedback. Focused feedback is feedback that focuses on one or more specific aspects or characteristics of the motion, rather than providing general feedback or a score on the motion, and includes feedback in relation to one or more subscores from among the set of subscores. Focused feedback on a specific aspect may assist the user110to focus on improving that specific aspect of the motion, thus resulting in a better possibility of performing the motion well from then onwards. For example, the focused feedback can be feedback on the speed characteristic score of the landed hook performed by the user110. Upon receipt of the feedback on the speed, the user110can focus on improving his speed in the next motion, thus resulting in a better possibility of performing the motion the next time, when the user110focuses on the speed. Assume, for example, that no consideration is given to which aspect of the motion to give feedback on; the user110can then receive feedback on an aspect which, compared to others, is performed with high accuracy, yet is not perfect, instead of receiving feedback on another aspect which is performed with low accuracy. Providing feedback without any consideration of which aspect would bring the highest improvement in performance of the motion may, in most cases, result in slow progress in performing the motion, as the user may be focusing on an aspect which is already performed with relatively high accuracy and for which the room for improvement is low. In order to provide focused feedback, it is advantageous to identify which aspect of the motion feedback will be provided on. In addition, it is advantageous to provide guiding feedback on the specific aspect that was identified and selected for such feedback.
In some examples, based on the set of subscores, with respect to each of the at least two subscores of the set of subscores, feedback module240manipulates a subscore, giving rise to a manipulated subscore (block410). For example, manipulating the subscore includes modifying the subscore to a maximal possible score. Considering the example of a landed hook motion with one subscore of the set of subscores being the speed subscore—0.33, manipulating the subscore includes modifying the speed subscore to a value of 1. After manipulating one of the values of the subscores, a new set of manipulated subscores is provided (similar to block370inFIG.3), wherein one of the subscores is manipulated. Similar to the calculation done based on the original and non-manipulated subscore (block380inFIG.3), feedback module240calculates a resulting manipulated total quality score with respect to the target motion of the user, based on the manipulated subscore instead of the non-manipulated subscore (block420). Calculating a manipulated total quality score can be done, e.g. using calculating and scoring module230. Feedback module240then selects a resulting manipulated total quality score that meets an improvement criterion compared to the resulting manipulated total quality scores that are obtained in response to manipulating the other, non-selected subscores from among the set of subscores (block430). For example, the improvement criterion is the resulting manipulated total quality score being the highest score. In some examples, one or more feedbacks are created and stored, e.g. in feedbacks530in storage unit250illustrated inFIG.2, where each feedback includes one or more statements with respect to a low score of one of the subscores. Once a resulting manipulated total quality score is selected, feedback module240selects feedback that pertains to the subscore that is associated with the selected resulting manipulated total quality score and provides the feedback to the user110. For example, if the resulting manipulated total quality score is based on the manipulated set of subscores where the speed score was manipulated, then feedback that includes one or more statements on the speed of the motion is selected and provided to the user, with respect to the target motion that was performed by him. Below is an example of providing feedback to the user. Returning to the example above of the set of subscores comprising the following subscores:

Class/characteristic name            Subscore
motion performed                     0.9
Punch type score                     0.5
Punch impact score (landed class)    0.7
Speed score                          0.5
Force score                          0.2
Total quality score                  0.56

Given equal weights for each subscore in the set of subscores, the total quality score was calculated as above. In order to select a subscore on which to provide respective feedback, each subscore is manipulated (referenced as "Man." in the table below), and a new manipulated total quality score is provided. The table below illustrates each manipulation of one subscore.

Class/characteristic        Man. punch   Man. punch type   Man. punch impact   Man. speed   Man. force
name                        subscore     subscore          subscore            subscore     subscore
motion performed            1            0.9               0.9                 0.9          0.9
Punch type score            0.5          1                 0.5                 0.5          0.5
Punch impact score          0.7          0.7               1                   0.7          0.7
Speed score                 0.5          0.5               0.5                 1            0.5
Force score                 0.2          0.2               0.2                 0.2          1
Total Man. quality score    0.58         0.66              0.62                0.66         0.72

Upon reviewing the manipulated total quality scores resulting from a manipulation of one of the subscores, it can be seen that the highest manipulated total quality score resulted from manipulation of the force subscore.
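The manipulation-and-selection procedure of blocks410-430, worked through in the table above, can be sketched as follows; the function names and the equal-weight consolidation are assumptions consistent with the example.

```python
# Sketch of the subscore-manipulation feedback selection described above
# (blocks 410-430). Function names are illustrative assumptions.

def select_feedback_subscore(subscores, max_value=1.0):
    """Pick the subscore whose manipulation to the maximal possible value
    yields the highest resulting manipulated total quality score."""
    def total(scores):
        return sum(scores.values()) / len(scores)       # equal-weight consolidation

    best_name, best_manipulated_total = None, float("-inf")
    for name in subscores:
        manipulated = dict(subscores)
        manipulated[name] = max_value                   # manipulate one subscore
        manipulated_total = total(manipulated)          # recompute the total quality score
        if manipulated_total > best_manipulated_total:  # improvement criterion: highest score
            best_name, best_manipulated_total = name, manipulated_total
    return best_name, round(best_manipulated_total, 2)


subscores = {
    "motion_performed": 0.9,
    "punch_type": 0.5,
    "punch_impact": 0.7,
    "speed": 0.5,
    "force": 0.2,
}

print(select_feedback_subscore(subscores))  # -> ('force', 0.72), matching the table
```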
In such a case, PMC120identifies and selects this characteristic on which to provide feedback. Assuming the force subscore is selected, feedback corresponding to a low value of force is retrieved from feedbacks530in PMC120and is provided to the user. For example, the feedback can include the following statement: "Rotate your hips and shoulders in order to add force to your punch". If the manipulated scores are equal for several aspects of the motion, one can pick the feedback randomly out of the corresponding feedback subset. Alternatively, a predefined default order could be used as a fallback in this case. In some examples, for subscores based on the classification models, the confidences for the classes other than that of the target motion can be used for feedback generation as well. For example, assume the target motion is a landed hook and the punch type classification model returns the following class confidences for a given motion:
Jab: 0.4
Cross: 0.05
Hook: 0.5
Uppercut: 0.05
In this case, feedback highlighting the difference between the jab and hook motions could be given, as the confidence for the jab is the highest among the classes other than the target punch type. An example of such feedback can be "Hook is a power punch, the fist should travel in an arc, not straight". A person versed in the art would appreciate that the above example is based on manipulation of one subscore, and that manipulation can be done on more than one subscore, or on a combination of subscores. Alternatively or additionally, the manipulated total quality scores can be sorted, and one or more corresponding feedbacks can be displayed to the user110in a corresponding sorted manner, optionally in decreasing ranking of importance. It is noted that the teachings of the presently disclosed subject matter are not bound by the flow charts illustrated inFIGS.3and4, and that the illustrated operations can occur out of the illustrated order. For example, operations340and350, or operations310and320, shown in succession can be executed substantially concurrently or in the reverse order. It is also noted that whilst the flow chart is described with reference to elements of feedback system100and memory500, this is by no means binding, and the operations can be performed by elements other than those described herein or with a different structure of memory500. It is to be understood that the invention is not limited in its application to the details set forth in the description contained herein or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Hence, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting. As such, those skilled in the art will appreciate that the conception upon which this disclosure is based may readily be utilized as a basis for designing other structures, methods, and systems for carrying out the several purposes of the presently disclosed subject matter. It will also be understood that the system according to the invention may be, at least partly, implemented on a suitably programmed computer. Likewise, the invention contemplates a computer program being readable by a computer for executing the method of the invention. The invention further contemplates a non-transitory computer-readable memory tangibly embodying a program of instructions executable by the computer for executing the method of the invention.
Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention as hereinbefore described without departing from its scope, defined in and by the appended claims.
In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. Where the reference label is used in the specification, the description is applicable to any one of the similar components having the same reference label. DETAILED DESCRIPTION Illustrative configurations are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed configurations. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims. Now referring toFIG.1illustrating a layout100of an air quality monitoring system110, which oversees air quality data from different sources. The air quality monitoring system110may include an air quality data processing module108, a plurality of air quality monitors102a,102b, and102c, reference monitors104, and environmental monitors106. The plurality of air quality monitors102a,102b, and102ccan include at least one chemical sensor configured to detect and measure chemicals such as ozone, nitrogen oxide, carbon dioxide, sulfur dioxide, volatile organic compounds, methane, or other hydrocarbons and other chemicals in gaseous state (herein described as “gaseous chemicals”). The plurality of air quality monitors102a,102b, and102cmay also include at least one particle sensor configured to detect and measure the presence of suspended particles in the air such as dust, smoke, pollen, or soot (herein collectively described as “particulate matter” or “PM”). The plurality of air quality monitors102a,102b, and102cmay include an enhanced gaseous chemical sensor having a multi-pass cell for light rays as will be described in more detail below. The plurality of air quality monitors102a,102b, and102cmay be located at multiple different locations. For example, multiple monitors may be distributed across a sizable area such as a county, a city, or a neighborhood. Several instruments may also be located within a building or a dwelling. In one configuration, the reference monitors104may include precision gaseous chemical sensors and are configured to provide measurements for use in calibrating the gaseous chemical sensors in the plurality of air quality monitors102a,102b, and102c. Further, the environmental monitors106are configured to measure environmental conditions such as humidity, temperature, atmospheric pressure, air density, ambient light, geographic location, wind speed and direction, and the like. With continued reference toFIG.1, the air quality data processing module108may be configured to communicate with the plurality of air quality monitors102a,102b, and102c, the reference monitors104, and the environmental monitors106. For example, the air quality data processing module108may receive data from these monitors such as measurements. 
Further, the air quality data processing module108may also transmit data to these monitors such as calibration data. The air quality data processing module108can correct measurements from the plurality of air quality monitors102a,102b, and102cusing cross-calibration factors as will be explained below. The air quality data processing module108is also configured to process the data from monitors and perform analyses to calculate or infer additional air quality data such as the amount of various gaseous chemicals in various locations, sources of those gaseous chemicals, and recommendations based on elicited requirements or preferences of end users. The air quality data processing module108is configured to communicate with mobile devices110b, computing devices110a, and server devices110cto receive data and provide received, calculated, and inferred air quality data. For example, the air quality data processing module108may receive user-input data and use that data to derive additional air quality data relevant to the area of analysis. The air quality data processing module108is also configured to communicate with other sources of data such as reporting system112and weather stations114. The air quality data processing module108may be implemented in any appropriate physical or virtual computing platform such as a networked server and may operate and act through any suitable interface such as a cloud computing platform. In one configuration, with continued reference toFIG.1, the air quality monitoring system110may also be configured to process incoming data to provide a variety of outputs. For example, air quality monitoring system110may analyze measurements from the plurality of air quality monitors102a,102b, and102cto determine the sources of the gaseous chemicals being detected. The air quality monitoring system110may provide actionable steps to affect the chemical sources such as ways to reduce the release of those chemicals or ways to minimize exposure to those chemicals. It may do so by making use of stated preferences or user requirements and/or ancillary (e.g., topological, geological, meteorological, or demographic) datasets relevant to the area of investigation. With reference toFIG.2illustrating a layout200of an illustrative configuration of an air quality monitor202(such as air quality monitors102a,102band102cinFIG.1) and some example components that may be included therein. The air quality monitor202may include a processing module204, a memory206, a communication module208, and at least one gaseous chemical sensor such as chemical sensor210aor chemical sensor210b(hereinafter collectively referred to as “chemical sensors210”), and environmental sensor212. The processing module204processes computing tasks and controls other components. The computing tasks may include calibration. Memory206stores data such as measurement data from chemical sensors210and calibration data such as cross-calibration factors. Chemical sensors210are configured to measure gaseous chemicals and particulates in analyte gas such as gas under-sampling by the air quality monitor202. The environmental sensor212measures environmental conditions such as temperature, pressure, humidity, location, wind speed, and the like. Further, the communication module208handles communication with other devices. 
For example, the communication module208may oversee communication between the air quality monitor202and air quality data processing module108ofFIG.1, user-devices such as mobile devices110band computing devices110aand110cand the like. Communication module208may communicate through any of a variety of wired and wireless mechanisms such as Wi-Fi, Bluetooth, mobile networks, long-range radio, satellite and the like. The air quality monitor202may also be configured to measure time, position, and other relevant information for computing devices. The components, functionality, and configuration of the sensor can be selected based on desired monitoring capabilities. The at least one air quality monitor102a,102band102cmay also measure various onsite atmospheric parameters such as the measured substance concentration of a target substance or a set of individual atmospheric readings. The set of individual atmospheric readings may include at least one of the following: barometric pressure, air temperature or humidity level. Now, referring toFIG.3illustrating a schematic300of a particular configuration of a air quality monitor capable of measuring a target compound and at least one environmental parameter (e.g., a weather condition) in a collocated and contemporaneous manner. The compound measurement function of the air quality monitor ofFIG.3is performed by the compound sensor302. These sensor(s) are point sensors, which means that their function is to measure a particular physical-chemical property of the target compounds to distinguish them from background atmospheric composition. Targeted compounds may include but are not limited to gases and aerosols emitted by industrial, anthropogenic, or natural activities. In particular, one configuration focuses on hydrocarbons and other greenhouse gases that absorb energy from radiation in the mid-IR region of the electromagnetic (EM) spectrum with wavelength between 1 um and 5 um. In one configuration, the compound sensor302is an absorption spectrophotometer that can measure mid-infrared absorption in the 3 um to 5 um range of the EM spectrum. The compound sensor302may be configured with other sensor technologies that may be similarly used for the measurement of target compounds. To capture a sample for analysis, a sampling cane316may be used to pump an air sample at a specific height and to avoid sampling water in the case of precipitation or other foreign agents of large size. The sample may be pumped and conditioned by a sample-pumping and conditioning system320. The system depicted may include a pump for sampling the air for the compound sensor302, a filter for the removal of particulate matter and a coalescent filter for the removal of water. The system may further include desiccant filters, temperature and pressure adjustment systems, valves, and additional drain pumps to facilitate moisture removal, temperature conditioning of the sample, flushing or other filter-regeneration tasks. The purpose of this is to provide a properly conditioned sample based on the air quality monitor requirements while limiting the necessary maintenance of the pumping and conditioning system in the sampling cane316. In some configuration, the compound sensor302may use an open path in order to avoid the necessity of pumping or conditioning samples. The sample may then be naturally transported into the sensing area by weather patterns without the use of the sampling cane316or sample-pumping and conditioning system320. 
In one illustrative configuration, with continued reference toFIG.3, the air quality monitor further includes a weather sensor system318collocated with the sampling point of the compound sensor302around the sampling cane316. The weather sensor system should at least include sensing elements to measure wind speed and direction. Further sensing about temperature, pressure, hygrometry, insolation and precipitation may also be used to refine the subsequent modeling effort. The wind speed and direction may be measured by a combination of a wind vane and an anemometer or by an ultrasonic anemometer alone. The wind direction measurement may be made in two or three dimensions. Temperature may be measured using MEMS sensors, thermistors, or other suitable sensing technology. Pressure may be measured using a barometer sensor and hygrometry may be measured using a moisture sensor. The sensors for temperature, pressure, and moisture may be connected to improve each of the measures as they are interdependent. Insolation may be measured using a photodiode or any other appropriate light-sensitive sensor. Precipitation may be measured using a precipitation sensor with auto-draining capability. While collocating the weather measurement with the sampling point is important for the purpose of accurately characterizing emissions, it is not absolutely necessary for performing the method as long as weather measurements are collected in close proximity to the sensor system (i.e., within 100 m). This conformation, i.e., being collocated, minimizes measurement error and is the illustrative configuration of the present disclosure. With continued reference toFIG.3, the data collected by the compound sensor302and weather sensor system318may be collected and processed by a local computing unit312. The local computing unit may also control the execution of the main sampling and measurement program and the actuation and controlling of any subsystem of the sensor system. The local computing unit312runs the main firmware, which schedules and collects data from compound sensor302and weather sensor system318, conditions the sensor signals into a rational format, performs data preprocessing, locally stores data, formats and prepares messages, and generates diagnostic and metadata pertaining to the identification, time stamping and operational diagnostics of the sensor system and supporting circuitry. The messages may be encrypted and transferred to a communication unit308, and messages may be received from remote assets. The communication unit308includes a modem or other interface that conditions the message to the right protocol for communication or receives external messages to be communicated to the local computing unit312. The communication protocol may be wired as in a SCADA system or wireless using Bluetooth®, Wi-Fi, LoRa, cellular, satellite, other radiofrequency, optical line of sight or other wireless data-transmission protocol. If a wireless protocol is employed, the data may be relayed using a communication antenna314if appropriate. In general, a communication system, which may consist of a communication antenna314and communication unit308, has a role that includes the communication of the measurement to a remote or centralized node and the receipt of communications related to settings and operations changes or firmware updates. 
The communication system may be used to relay messages to and from other sensor systems such as in a daisy chain, star or mesh configuration in order to reduce the communication cost when relying on external communication infrastructure such as cellular or satellite communication networks. In case of communication error or other cases that warrant it, the messages may be stored by the local computing unit312to communicate at a later, more opportune time. For example, when communication services may be interrupted, multiple channels of communication (such as multiple wireless data-transmission protocols) may be used to attempt to alert the local computing unit312to changes of operating conditions and to receive instructions. With continued reference toFIG.3, deployment of sensors in the field may require the exposure of the equipment to harsh outdoor conditions with no external support such as power access and communication infrastructure. The sensing system is housed in an enclosure310to protect the system from the environment and from tampering. Hazards may include but are not limited to precipitation, moisture, surface water and flooding, high temperature and insolation, low temperature, high wind, storms, hurricanes, typhoons, tornadoes, lightning, external impact and vibration, robbery, defacement, damage, earthquakes, light or electromagnetic interference, foreign agents or fauna and flora disturbance or intrusion. The enclosure310may also be highly visible by day and reflective at night to avoid accidental damage. The enclosure310may be directly on the ground, mounted on a foundation, or pole mounted. In one illustrative configuration illustrated inFIG.3, the sensor system may produce and manage its own power. In one configuration, the sensor system may include a solar power system304and a power conversion and storage system306. The solar power system304and power conversion and storage system306are designed to provide sufficient power to the various other subsystems and to provide sufficient reserves and capacity to ensure the proper functioning of the sensor system in most environmental conditions present in the field. Solar power system304may be replaced by wind- or gas-based power generation or any other form of compact power generation system if the conditions warrant it. For instance, at high latitudes, wind-based power generation may be preferable to solar on account of low insolation. The power conversion and storage system306may include a battery storage bank and a charge controller. The power conversion and storage system306may further include power converters for providing appropriate power to the various systems, relays, fuses, breakers, and switches appropriate for the power protection, function, and physical interfacing required by a particular configuration of the sensor system. The battery storage bank may include lithium-ion (such as LiFePO4 cells), lead acid (such as a deep-cycle sealed battery), or any other appropriate battery technology that can operate nominally in conditions that may include high and low temperatures and irregular charging profiles. The charge controller may use Pulse-Width Modulation (PWM), Maximum Power Point Tracking (MPPT), or other technology appropriate to convert the raw energy from the solar power system304to the battery storage bank charging requirements. All subsystems ofFIG.4may be modular in nature to facilitate the replacement of subsystems with minimal tools in the case of maintenance. 
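A hedged sketch of the collect, store and transmit behaviour described above for the local computing unit312and communication unit308, including the store-and-forward fallback used when a transmission attempt fails, is given below; all names, fields and the retry policy are assumptions.

```python
# Hedged sketch of a collect/store/transmit loop for a sensing unit; all names,
# fields and the retry policy are illustrative assumptions.

import json
import time
from collections import deque

pending = deque()  # locally stored messages awaiting transmission

def build_message(unit_id, compound, weather):
    """Package a time-stamped, identified measurement message."""
    return json.dumps({
        "unit_id": unit_id,
        "timestamp": time.time(),
        "compound": compound,   # e.g. {"methane_ppm": 2.1}
        "weather": weather,     # e.g. {"wind_speed_mps": 3.4, "wind_dir_deg": 210.0}
    })

def try_send(message, channels):
    """Attempt each available channel (e.g. cellular, then satellite) in turn."""
    for send in channels:
        if send(message):       # each channel is a callable returning True on success
            return True
    return False

def report(unit_id, compound, weather, channels):
    """Queue the new message and flush any stored backlog once a channel works."""
    pending.append(build_message(unit_id, compound, weather))
    while pending and try_send(pending[0], channels):
        pending.popleft()
```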
Referring toFIG.4illustrating a communication architecture400of the sensor system ofFIG.3, the communication of data and commands is represented as illustrated. A sensing unit402a, which may or may not be the same as that described inFIG.3, can incorporate components such as a power system420, weather sensors426, compound sensors428, a computing unit424and a communication unit422. The sensing unit402acan relay messages as described above to centralized computing unit432using a network layer. The network layer may rely on existing communication infrastructure such as cellular or satellite, or it might use dedicated infrastructure such as custom wired or wireless systems including but not limited to Wi-Fi, Bluetooth®, LoRa, and other telemetry and data-transmission systems. The data transmission may rely on other network infrastructures such as the Internet or on dedicated networks such as intranet or LAN. The sensing unit402amay also directly transmit messages to non-networked systems or to local systems as may be the case for a local interface used by the sensor system user. The message from the sensing unit402amay be relayed through other sensor units as in daisy-chained or starred sensor system networks or through a resolute unit for the local storage, scheduling and packaging of messages from the sensing unit402a, and additional sensing units402b,402cdeployed in the vicinity of each other. This may be done to amortize the cost of expensive transmission technology such as satellite links. All the metadata related to the sensing unit402a, and additional sensing units402b,402cmay be relayed to the centralized computing unit432by the sensing metadata units404. Once data reaches centralized computing unit432, message processing is performed to transform raw data into actionable data. This may involve simple tasks such as data formatting or more complex tasks such as creating a maintenance-tracking system for the operator. In one configuration, the data processing is the conversion of weather and compound measurements into the detection, localization, quantification and qualification of target compound emissions. In addition to the detection, localization, quantification and qualification of the emissions, the centralized computing unit432may also be configured to minimize the number of air quality monitors. This is illustrated in detail in the successive configuration. To transform the raw compound measurements into speciation and concentrations, an external database416such as the HiTRAN database, may be queried for reference spectra, or internal databases of calibration measurements taken with the specific sensing unit402aduring calibration runs may be queried. With continued reference toFIG.4, the illustrated configuration of a supervisory control and data acquisition system (sometimes referred to herein as a SCADA system418) may be provided at the site. The SCADA system418may be deployed in a control room or in an on-site field office. Further, the SCADA system418may be connected to at least one on-site device including but not limited to pressure sensors, pressure vessels, separators, drills and the like. The supervisory control and data acquisition system (SCADA) may be configured to monitor and supervise at least one device. The monitoring would preferably include the physical condition and operational condition of that device. 
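As a rough illustration of the monitoring just described, the kind of record the SCADA system418might acquire from an on-site device and forward onward could resemble the following; the field names, units and structure are assumptions, not a format defined by this disclosure.

```python
# Hypothetical SCADA reading record; field names, units and values are assumptions.

from dataclasses import dataclass, asdict
import json
import time

@dataclass
class ScadaReading:
    device_id: str             # e.g. a separator, compressor or wellhead sensor
    operational_factor: float  # e.g. a pressure, flow rate or setpoint value
    physical_factor: str       # e.g. "nominal" or "crack detected"
    timestamp: float

reading = ScadaReading("separator-unit", operational_factor=512.0,
                       physical_factor="nominal", timestamp=time.time())
payload = json.dumps(asdict(reading))  # serialized for transmission to a server
```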
Supervisory control and data acquisition systems (SCADA) may rely on a control system architecture comprising computers, networked data communications and graphical user interfaces for high-level supervision of machines and processes. It may also refer to sensors and other devices, such as programmable logic controllers, which interface with process plant or machinery. An operator may monitor and issue process commands (e.g., controller set point changes, operation of interfaced devices, etc.). The subordinated operations, e.g., the real-time control logic or controller calculations, may be performed by networked modules connected to the field sensors and actuators. SCADA systems are generally a means of remote access to a variety of local control modules, which may be supplied by different manufacturers but according to standard automation protocols. In one illustrative configuration, physical conditions monitored and supervised by the SCADA system418may include physical conditions in the at least one device, such as a failure, crack, teardown of the device, etc. The physical conditions may be acquired as a physical factor by the SCADA system418, which indicates parametric values of the physical changes on the at least one device. In another illustrative configuration, the operational conditions monitored and supervised by the SCADA system418may be acquired as an operational factor. The operational factor may refer to a parametric value of the operational conditions. The operational conditions may include operating setpoints, controlling flow rates, pressure levels, flow rates, or even boundary conditions associated with at least one device beyond which the operation of the at least one device may cease. In another illustrative configuration, the SCADA system418may be communicably coupled to a first server (not shown in the figure). The SCADA system418may be configured to transmit the acquired operational factor and the physical factor to the first server. In the configuration illustrated inFIG.4, the centralized computing unit432may use information from the additional sensing unit402bfor enhanced localization, quantification, and qualification of the emissions. The additional sensing unit402bmay include multiple sensing units and may be of the same type as the sensing unit402aor any other sensing unit present on the sites. For example, the additional sensing unit402bmay be a flare lighting sensor used as an indicator to help attribute an emission detected by the sensing unit402ato a flare misfiring. Actuator commands may be used as a sensor feed as well. For example, the actuation of pneumatic equipment at oil sites may result in a predictable emission; therefore, command signals from actuators may be used to help predict expected emissions from an oil site. An example in the landfill industry may be variation in the pressure head of wells, which may be correlated with a local emission hotspot. This concept can be extended to all existing command signals and process sensors already present in equipment associated with potential emissions sources. Once the detection, quantification, qualification and localization of sources are obtained by the centralized computing unit432, actionable data may be generated. Actionable data may be data necessary to take a corrective action including but not limited to generating emission reports, creating maintenance lists or updating maintenance tracking and emissions-reduction tracking tools. 
The actionable data may further be used in commands or scripts for automation systems406. For example, actuators on a site may be automatically put in a safe position if an explosive concentration of a flammable compound is detected. Another example would be the operation of equipment such as sirens or visual cues that alert operators to perform emergency evacuation if a toxic compound is detected. At times, robotic or automated inspection and repair systems or equipment maintenance systems may be deployed in response to a command. For example, a drone may be deployed to perform a precise, automated inspection of a certain area identified by sensing unit402aor to perform fine-scale equipment-leakage detection. Another example would be automated excavation equipment deployed to place additional ground cover on a detected emission hotspot at a landfill. Yet another example would be the triggering of an automated self-diagnostic system in a continuous production environment that requires a lot of computation to identify process problems. Actionable data may be used to generate automated reports in document generation task414. For example, the sensor data may be used to generate, with or without operator intervention, regulation-mandated emission inventory reports and to edit auto-completed reports to be physically or digitally sent to the concerned agency. With continued reference toFIG.4, actionable data, emission data, and raw data may be transmitted to other servers412that may be internal or external. The purpose of this may be to relay raw data for archiving or post-processing or to send data to servers behind a firewall in specific user instances in which proprietary data is collected and requires different levels of encryption. In that case, raw encrypted data may not be decrypted in the centralized computing unit432for data safety reasons and may only be safely decrypted behind a client's firewall. Actionable data such as triage information, reports, maintenance and abatement data may be communicated through emails, text messages, dashboards or dynamic notebooks to static I/Os410and mobile I/Os408. Static I/Os410can include PCs and other fixed computing units such as those found in the office of the field manager. Mobile I/Os408can include pagers, PDAs, phones, tablets or laptop computing units and equivalents such as the phone of a field operator (such as a pumper) or a field supervisor in the case of oil and gas applications. Now referring toFIG.5illustrating a symbolic map500of a prospective field deployment. InFIG.5, a sensor system502, as depicted by a rounded-corner square, is deployed in the field508to detect emissions plumes512,524of target compounds depicted by gradients. These emissions plumes512,524may be emitted by point sources506,526depicted by circles or by area source522depicted by a polygon. The emissions plumes512,524are transported by advection by an airflow, as denoted by streamline arrows510, and by buoyancy and diffusion of the compound in air. Typically, the air flow is of a complex, three-dimensional geometry and depends on many parameters including but not limited to terrain, surface roughness and obstacles, temperature and pressure differentials, insolation and inversion layer position, turbulence, and atmospheric boundary conditions or other atmospheric conditions forced by large-scale weather patterns. The streamline arrows510are a simplified view of the average transport (with turbulence represented by an average) of air parcels during the sampling time.
Note that the streamline arrows510are influenced by the effect of terrain514as noted by isoclines and by the presence of obstacles520(e.g., trees) represented by the small black dots. In this specific snapshot, the point source506is emitting the target gas, thereby producing the emissions plume524which is transported by the air flow to the sensor system502. Note that the cross section of the emissions plume524increases when further from the point source506due to diffusion and turbulent mixing. The emissions plume524can also appear to have a tortuosity due to the dynamic change in wind speed and direction during the transport. In this example, point source526is not emitting, and the area source522is emitting but the emissions plume512does not intersect the position of the sensor system502in this particular snapshot. Note that plumes are typically three dimensional and may vary in vertical cross sections, though this is not displayed in this figure. It may therefore be necessary to have precise wind measurement collocated at the sensor system with a modeling of the emission transport that considers terrain, obstacles, rugosity and other field parameters that can affect transport. For instance, in the specific snapshot presented inFIG.5, local wind pattern518at a long distance comes approximately from the east before entering the field of interest. The wind measurement collocated at the sensor system502indicated an approximately northeast direction as denoted with streamline arrows510intersecting the sensor system502. From the perspective of the sensor system502, the area source522is located in the northeast sector, the point source506is located in the east-northeast sector and the point source526is in the east sector. Only the emissions plume524from point source506is measured by the sensor system502in this particular snapshot. With continued reference toFIG.5, if a model only accounted for a wind direction and/or speed from a local weather pattern such as the distant wind measurement of local wind pattern518, errors could be made. For example, the perceived source of the emissions plume524detected by sensor system502would be the east sector, and this would lead to the incorrect guess that the point source526is the source emitting the emissions plume524. However, if the collocated measurement of wind direction at the sensor system502is considered, the emissions plume524appears to be coming from the area source522, which is also incorrect. Note that a simple linear, local back-tracing of the wind parcel from the perspective of the wind sensor in the sensor system502would have led to the same bad conclusion that the area source522is the source since the terrain is the main source of the non-linear wind flux geometry. What this example shows is that identifying sources from wind speed and direction measurements alone is difficult without a large number of wind measurements. Multiple sensor systems as described inFIGS.3,5, and6may be deployed in a field for the acquisition of weather measurements and compound measurements. The sensor system takes these measurements and relays messages related to these measurements with timestamps, identifiers, and other metadata regarding sensor operations to a centralized computing unit432. Now referring toFIG.6illustrating a perspective view600of the configuration of the air quality monitor202. The system includes the enclosure310, the communication antenna314, and the solar power system304. 
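The misattribution risk discussed above can be made concrete with a small sketch of the naive upwind guess; the bearings are assumptions chosen only to match the sectors named in the example.

```python
# Illustrative sketch of the naive upwind attribution the example above warns
# against: pick the candidate source whose bearing from the sensor is closest to
# the direction the wind is coming from. Bearings (degrees clockwise from north)
# are assumptions chosen to match the sectors named in the text.

SOURCE_BEARINGS = {
    "area source 522": 45.0,    # northeast sector
    "point source 506": 67.5,   # east-northeast sector
    "point source 526": 90.0,   # east sector
}

def naive_upwind_source(wind_from_deg, sources=SOURCE_BEARINGS):
    """Return the source whose bearing best matches the upwind direction."""
    def angular_diff(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return min(sources, key=lambda name: angular_diff(sources[name], wind_from_deg))

print(naive_upwind_source(90.0))  # distant wind from the east  -> 'point source 526'
print(naive_upwind_source(45.0))  # collocated wind from the NE -> 'area source 522'
# Both attributions are wrong in the scenario above; the actual emitter is point
# source 506, which is why transport modeling beyond a single wind reading is needed.
```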
The configuration of the system as inFIGS.3and6or any other sensor system configuration capable of measuring target gas and weather measurements in a collocated manner may be deployed in a field where prospective emission sources are present. With reference toFIG.7illustrating a schematic700of the at least one device deployed at the site. The at least one device may include pressure sensors702, flow sensors704, temperature sensors706, level sensors708and other discrete sensors710. The at least one device may be installed at the site as explained in detail in conjunction withFIG.9(described later herein). Referring toFIG.7, the illustrative pressure sensors702may include pressure switches, piezoelectric sensors, manometers and the like. The pressure sensors may be deployed at a compressor or a wellhead pressure head and may be configured to generate a signal in response to the pressure of a fluid maintained in the air quality monitors. Similarly, flow sensor704, temperature sensor706, fluid level sensor708and other discrete sensor710may be configured to generate electric signals representative of the flow rate, temperature, level and other parameters of the fluid, respectively. In one illustrative configuration, the electric signals from the sensors ofFIG.7may be acquired as operational factors by the SCADA system418. With reference toFIG.8illustrating an exemplary architecture800of the SCADA system418connected to the air quality monitor102and the at least one sensor deployed on the site. In the same configuration, the at least one device may be connected to the at least one sensor804, at least one control valve806, at least one solenoid808, at least one alarm810and at least one discrete sensor812. The at least one sensor may be configured to sense and determine one or more operational factors of the one or more devices and may further transmit the operational factors to the input/output module430. The SCADA system418may be connected to an input/output module430. The input/output module430may be further connected to the air quality monitor102via a master communication unit802. The master communication unit802may be further connected to the slave communication modules814a,814b. The air quality monitor102may be configured to create historical data related to emissions in view of the topology of the site, and it further may be configured to transmit the historical data to the input/output module430. The SCADA system418may be configured to acquire both the historical data from the input/output module430as a set of SCADA data and the operational factors, and it may transmit the set of SCADA data to the first server. Further, the slave communication modules814a,814bmay be connected to a master communication module of other air quality monitors (not shown in figure) positioned on site. Similarly, the SCADA system418may be configured to acquire SCADA data from the other air quality monitors102a,102b, and102cvia the slave communication module814a,814b. Further, it must be noted that the compound sensor may be the chemical sensor210aexplained in conjunction withFIG.2. Now referring toFIG.9illustrating an exemplary layout900of the site. The site may include at least one pumpjack902a,902b. The at least one pumpjack902a,902bmay be fluidically coupled to a chemical tank904, a production tank906, a separator unit908and/or a compressor910. As illustrated in the figure, at least one of the devices installed on the site may be connected to at least one sensor illustrated byFIGS.7and8. 
For example, the chemical tank904may be connected to a fluid level sensor708. The production tank906may be connected to at least one sensor such as the pressure sensor702or the flow sensor704. The separator unit908and the at least one pumpjack902a,902bmay be connected to discrete sensor710. The compressor910may be connected to the pressure sensor702. The sensors connected to the devices may be configured to sense the operational factors of the devices at the site such as pressure, flow rates and the like. With continued reference toFIG.9, in another illustrative configuration, the site may include at least one air quality monitor202. For example, the at least one air quality monitor202may include a first air quality monitor202positioned at a first location, which may be a north boundary of the site. The at least one air quality monitor202may include a second air quality monitor positioned at a second location, which may be a south boundary of the site distant from the first location. The at least one air quality monitor202may include at least one chemical sensor210(refer toFIG.2). The chemical sensors210may be configured to sense at least one set of attached parameters related to at least one location at which the at least one air quality monitor may be installed. The set of attached parameters may include a concentration of a target chemical gas emitted by the at least one device on site. The at least one air quality monitor202may be further configured to transmit the at least one set of attached parameters to the first server. With continued reference toFIG.9, in another illustrative configuration, the SCADA system418may be installed at the site and connected to the at least one sensor in a configuration similarly illustrated byFIG.8. The SCADA systems may be configured to acquire the SCADA data as illustrated inFIG.8and may include historical data regarding emissions from the at least one air quality monitor202and the operational factors from the at least one sensor connected to an at least one device deployed at the site. In another configuration, the set of SCADA data may further include at least one physical factor associated with the at least one device on site. The set of SCADA data may be further transmitted to the first server. Now referring toFIG.10, illustrating a schematic1000of another exemplary configuration of the supervision and monitoring of the operation of a pumpjack902from the at least one pumpjack902a,902bofFIG.9. The pumpjack902may include a motor1008, a gearbox1004, a walking beam1006, a horse head1010, a bridle1016, a pump1012and a piston1014positioned in the pump1012. The pumpjack902may be connected to a control unit1002. In the same configuration, the SCADA system418may be hard-wired or wirelessly connected to the control unit1002. The control unit1002may include a user interface and a display. A site operator may access the user interface to manually adjust the operation of the pumpjack902. The operation of the pumpjack902may include output from the motor1008, which may drive the gearbox1004. Driving the gearbox1004may further actuate the walking beam1006. As illustrated byFIG.10, the horse head1010may be connected to the walking beam1006. Further, the bridle1016may be extended from the horse head1010, and bridle1016may be connected to the pump1012. Actuation of the walking beam1006may further oscillate the horse head1010in a vertical direction, thereby operating the pump1012such that the piston1014in the pump1012may oscillate in tandem with the horse head. 
The actuation of the piston1014may lift or excavate the emulsion from the oil well. The control unit1002may be configured to control the operation of the pumpjack902via, for example, the output of the motor1008, the gear ratio and output speed from the gearbox1004or the actuation speed of the walking beam1006, which may further impact the oscillation speed of the horse head1010, the bridle1016and the pump1012. Operational factors such as speed (in RPM or meters/second) and frequency (cycles per second) may be acquired as operational factors by the SCADA system418. Further, a peak load on the walking beam1006, a depth at which the pump1012may be drilled into the oil well and a diameter of the piston1014may account for the physical factors acquired by the SCADA system418. Now referring toFIG.11illustrating a front view1100of the site that includes an emission. The site may include multiple potential emission sources E1, E2, etc. Further, the site may include a sensor S1. In the scenario depicted inFIGS.11-13, a target compound C1may be emitted from source E1and may form a plume P1covering a region R1. Further, an obstruction O is present which may obstruct the plume P1. As such, the obstruction may result in a region R2within the region R1over which the target compound C1is not present or is minimally present. The sensor S1, which may be lying within the region R1but outside the region R2, may detect the target compound C1. FIG.12illustrating a top view1200of the site that includes an emission source. FIG.13illustrating a top view1300of another example site scenario in which the mixing of multiple target compounds takes place. With continued reference toFIG.13, the site may include multiple potential emission sources E1, E2, etc. Further, the site may include the sensors S1and S2. Target compound C1is emitted from the source E1and forms the plume P1. Further, a target compound C2is emitted from the source E2and forms a plume P2. The plumes P1and P2merge in a region R3. As such, the region R3includes both the target compound C1and the target compound C2. The sensor S1, which may be lying outside the region R3, may detect only the target compound C2. The sensor S2lying in the region R3detects both the target compounds C1and C2and therefore generates a confounding signal. In one configuration, the confounding signal may be used to generate at least one signal from information regarding the identification of the concentration of the target gas. The air quality data processing module108(which may be on board S1and S2) may be configured to analyze the confounding signals and to identify one or more gases irrespective of their concentration. The sensor S1may use spectrophotometry to identify the characteristics of the one or more gases and to thereby classify the gases represented by the confounding signal. After classification, the sensor S1and S2may be configured to separate the signals and may further transmit the signals to the first server. The fundamental aspects of plume detection are depicted inFIGS.14A-14E. Now referring toFIG.14Aillustrating a symbolic top view1400A of the transport of an emission plume1408from a source1406to a sensor system1402via transport denoted by streamline1404. In reality, the emission plume1408may not be contiguous and may have a complex three-dimensional shape.FIG.14Apresents the transport in the case of a steady, medium-speed wind pointing directly at the sensor system1402. 
Further referring toFIG.14Billustrating a similar symbolic top view1400B but with a faster wind speed. Further referring toFIG.14Cillustrating another symbolic top view1400C showing the effect of a change in wind direction by angle “a.” Further referring toFIG.14Dillustrating yet another symbolic top view1400D showing the effect of a tortuous streamline1414. Further referring toFIG.14Eillustrating a symbolic representation1400E of a plume cross-section constructed by using the wind direction to “scan” across the plume. ComparingFIGS.14A and14B, it can be observed that an increase in speed may result in a narrower plume since the plume spread is determined by the balance between diffusion, turbulent mixing, and advection. At higher wind speeds, horizontal advection becomes the dominant force, and this changes an observed concentration at the sensor system1402. In particular, the maximum concentration observed across the plume may be higher in the case of higher wind speeds. However, higher wind speed can also result in more turbulent mixing in some conditions, which may influence this result. In particular, this can result in a large variance in the measurement of maximum concentrations. The differences between the low-speed and high-speed cases clearly highlights the importance of wind speed in transport and the consequent need to measure wind speed in conjunction with the concentrations of the emitted compounds. InFIG.14C, the average wind transport is shifted angularly relative to the direct source-to-sensor line seen in14A and14B. Angle1410is denoted “a.” In idealized conditions, an increase in “a” may result in a reduction of the observed plume concentration. The concentration in an idealized plume is maximum at the center. In practice, due to turbulence, the plume may be branched, and its cross-section profile may not follow a regular pattern like the one shown inFIG.14E.FIG.14Epresents an idealized profile of the cross section of the plume as measured by the sensor system1402. The sensor system1402may sample the plume at different angles and register an associated concentration point1416. When sufficient numbers are obtained, a fit of a point cloud1418can be obtained. If the measurements occur in idealized conditions when the wind speed, temperature and other parameters beside wind direction are stable, the plume flux may be calculated using a simple mass conservation equation by multiplying the area concentration of the plume cross section by its normal speed and by estimating the plume concentration in the height direction. This approach may be taken using plume theory for the estimation of the plume geometry and using a mobile sensor across the plume cross section to estimate the average plume concentration. One illustrative configuration instead uses shifts in wind direction to estimate the plume average concentration as depicted inFIG.14E. Another more precise configuration is given in the description of the inverse model used to estimate emission source and flux. The wind may change dynamically during transport from the source to the sensor system1402as shown inFIG.14D.FIG.14Dshows a case in which the transport from source to sensor is on average directed as denoted in an average flow direction, but it may have a dynamically tortuous path. Moreover, a wind direction as sensed by the sensor system1402is shown as vector1412. 
This shows that in a case with dynamic wind or a case in which the topology influences the actual path taken by air flow, the source position may not be given directly by the wind direction measurement at the sensor system or source. This highlights the need to model the air flow in the vicinity of the sensor in order to better understand the transport of the emission from a source to a sensor system when dynamic effects, obstructions, topology, or other factors may influence the transport. One of the major problems is that air quality monitoring systems are expensive and typically require expertise to operate properly. Real-time air quality monitoring at a finer scale may be cost prohibitive because air quality monitoring instruments can be expensive. Therefore, such problems motivate the minimization of the air quality monitoring setup on site. This is done by removing the air quality monitors which may be redundant and non-contributing. Air quality monitors that may be redundant are identified using a simulation model or a digital twin. The simulation model may be created using output from an air-quality-monitor-minimizing machine learning model. The air-quality-monitor-minimizing machine learning model may be trained using the at least one set of attached parameters sensed by the at least one air quality monitor202and the set of SCADA data. As previously described, the at least one air quality monitor202may include a first air quality monitor positioned at a first location and configured to generate a first set of attached parameters. Similarly, the second set of attached parameters may be generated by the second air quality monitor positioned at the second location. The centralized computing unit432may be connected to the first server and may be configured to obtain the at least one set of attached parameters sensed by the at least one air quality monitor202and the set of SCADA data to train an air quality monitor-minimization machine-learning model (hereinafter referred as “AQM-minimization machine-learning model”). The AQM-minimization machine-learning model may be configured to generate a trained AQM-minimization parameter. The centralized computing unit432may be configured to obtain the trained AQM-minimization parameter and to generate an emission-simulation model of the target substance. Using the emission-simulation model, the at least one set of attached parameters sensed by the at least one air quality monitor202and the set of SCADA data may be monitored iteratively over a pre-defined time period. Based on the monitoring of the at least one set of attached parameters sensed by the at least one air quality monitor202and the set of SCADA data, the emission-simulation model may be refined to a refined emission-simulation model. The refined emission-simulation model may be analyzed to determine the redundant or non-contributing air quality monitor from the at least one air quality monitor202. The redundant or non-contributing air quality monitor from the at least one air quality monitor202may be removed accordingly. Each of the methods described herein may be performed by hardware, software and/or firmware in accordance with the machine learning process, which may contain computer-executable instructions executed by the centralized computing unit432or an independent processor externally connected to the first server to perform functions relating to the methods described herein or, optionally, in conjunction with other processes. 
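One way to picture the redundancy check described above is a leave-one-out comparison against the refined emission-simulation model; the criterion, tolerance and names below are assumptions rather than the patented procedure, and the SCADA data would enter through the simulation model itself.

```python
# Hedged, leave-one-out sketch of the redundancy check described above: a monitor
# is a removal candidate if dropping its readings barely changes how well the
# refined emission-simulation model matches observations.

def prediction_error(simulate, observations):
    """Mean absolute error between simulated and observed concentrations.

    `simulate` is the refined emission-simulation model, taken here as a callable
    mapping a monitor ID to a predicted concentration; `observations` maps each
    monitor ID to its measured concentration (an attached parameter).
    """
    errors = [abs(simulate(monitor_id) - value) for monitor_id, value in observations.items()]
    return sum(errors) / len(errors)

def redundant_monitors(simulate, observations, tolerance=0.01):
    """Return monitor IDs whose removal changes the model error by less than `tolerance`."""
    baseline = prediction_error(simulate, observations)
    flagged = []
    for monitor_id in observations:
        reduced = {m: v for m, v in observations.items() if m != monitor_id}
        if abs(prediction_error(simulate, reduced) - baseline) < tolerance:
            flagged.append(monitor_id)  # candidate redundant / non-contributing monitor
    return flagged
```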
The AQM-minimization machine-learning model may be trained in accordance with pattern recognition of the air quality monitors202(using methods adapted from those found in Shi-qi Bao et al., 2016). With reference toFIG.15illustrating a layout1500of the pattern recognition model, the historical data from the air quality monitors202may be acquired from the air quality monitor data acquisition1502, and the set of attached parameters, as well as various other atmospheric characteristics, may be acquired from a database1504. The block1506illustrates a data source layer which may be configured to source the historical data from air quality monitor data acquisition1502, the set of attached parameters and a variety of other atmospheric data. Further, the sourced data may be pre-processed in the data preparation layer illustrated by block1508. The data preparation layer may include the following steps:
Understanding the problem: This step identifies the various associated problems for which the machine learning model (in this case the AQM-minimization machine-learning model) may be trained. This step collates all the issues pertaining to redundant and non-contributing air quality monitors.
Data Collection: The data may be collected from the data source layer. This step may ensure that the data sourced from the data source layer relates to diverse regions in the site and is not solely obtained from a single air quality monitor.
Profiling and Data Exploration: The sourced data may be analyzed to identify trends, outliers, exceptions, and incorrect, inconsistent, missing, or skewed information, etc. Although the source data will provide all model findings, it may contain unseen biases. Data exploration helps to identify problems such as collinearity and to draw attention to situations in which data set standardization or other data transformations might be necessary.
Data Cleaning and Validation: This step may help identify and resolve issues related to inconsistencies, outliers, anomalies, incomplete data, etc. Data cleaned in this step may be analyzed to find valuable patterns and information since it is free from irrelevant data. It is essential for building high-quality models, and missing or incomplete data are the main obstacles.
Data Formatting: This step ensures a proper and consistent data format. Incorrectly formatted data may increase the number of errors generated by the AQM-minimization machine-learning model.
Data Quality Inspection: This step involves an analysis of data quality to identify any redundant data, error or outlier still present in the data.
Feature engineering and selection: Feature engineering is the selection, manipulation, and transformation of raw data so as to reveal valuable features or identify the most relevant variables for supervised machine learning. Feature engineering may yield an enhanced predictive model with more accurate predictions. It may involve imputation, the filling in of missing data in the datasets. Feature engineering may further involve encoding, which may convert non-numeric values into numeric form.
Data Splitting: After feature engineering and selection, the data may be split into a training data set and an evaluation data set. Training data may be stored in the database and may be used to train and update the model over a pre-defined period of time. The evaluation data may be further processed using principal component analysis.
With continued reference toFIG.15, the block1510may illustrate a principal component analysis layer.
Principal Component Analysis (PCA) is an unsupervised learning algorithm that may be used for exploratory data analysis and predictive modeling. The evaluation data from the data preparation layer may be analyzed via PCA to identify high-variance patterns in the dataset. The patterns in the dataset may be further analyzed in a Non-Linear High-dimensional mapping layer illustrated by block1512. This may be done to record distances between data points in the sourced data so as to retain them during processing. The data may be further analyzed by an optimal plane selection layer illustrated by block1514. This layer may be configured to optimize the data by resampling and randomly splitting the data points in the dataset so as to prevent bias. Further, the optimized data may be used to evaluate the performance of classifiers in the classifier evaluation layer. The classifier evaluation layer illustrated by block1516may include the four metrics accuracy, confusion matrix, AUC-ROC (Area Under the Curve-Receiver Operating Characteristic) and Cross-Entropy Loss. The classifier evaluation layer may be configured to assess accuracy, to validate the dataset, to reduce noise and to evaluate the data. The block1518may illustrate a presentation layer which acquires data from the classifier evaluation layer and uses data from the air quality monitors202to determine which of the at least one air quality monitors may not be contributing to the data. With continued reference toFIG.15, the assessed data may be the trained AQM-minimization parameter. The centralized computing unit432may be configured to acquire the trained AQM-minimization parameter to generate an emissions-simulation model, which may in turn be a digital twin of real-time emissions and may be configured to predict the emissions occurring on site. To improve the accuracy of the emission-simulation model, the set of SCADA data and the at least one set of attached parameters may be monitored and updated in the database1504over a predefined period of time. Based on the monitoring, the emission-simulation model may be refined iteratively to yield a refined emission-simulation model. Further, the refined emissions-simulation model may be configured to generate an emission output prediction. The predicted emissions may be analyzed with the set of SCADA data and the at least one set of attached parameters, and the data may be further analyzed to identify the air quality monitors located in close proximity to the predicted emissions that may be contributing to the data. The air quality monitors which may not be present in the vicinity of the predicted emissions may be flagged as redundant or non-contributing air quality monitors. A decision tree may be deployed with the emission-simulation model to identify the redundant and non-contributing air quality monitors and to remove such air quality monitors from consideration from among the at least one air quality monitor202. The present disclosure contemplates systems and methods which may be implemented or controlled by at least one controller so as to perform the actions herein described. For example, in some configurations, the controller, whether part of a sensor, computing device, etc., may be configured to process data from sensors, users or operators or to model, calculate or perform at least one simulation using any of the data sets, tables or maps described.
It may also be configured to perform any or all described algorithms and any others similarly suitable or to control the operation of any disclosed parts or components in a manner necessary or appropriate for the proper function, operation and/or performance of any disclosed systems or methods. Examples of such systems and methods are illustrated by U.S. patent application Ser. No. 17/813,585 filed by the same Applicant, which is incorporated herein by reference. The utilization of the at least one set of attached parameters and the set of SCADA data may not be restricted to air quality monitor minimization methods only. The at least one set of attached parameters and the set of SCADA data may be further utilized to determine the emission location of the target gaseous chemical and to quantify the emission of the target gaseous chemical. The emission location and the quantification of the emission of the target gaseous chemical may be illustrated in detail with reference toFIGS.16-24. In an illustrative configuration, the centralized computing unit432may be configured to acquire at least one set of attached parameters from the at least one air quality monitor202, and with the set of SCADA data may be further configured to train an emissions-location machine learning model. The emissions-location machine-learning model may generate a trained emissions-model parameter. The centralized computing unit432may be further configured to generate an emissions-simulation model of a plume of the target gaseous chemical using the trained emissions-model parameter. As in the case of the emissions-simulation model, the at least one set of attached parameters and the set of SCADA data may be monitored over a predefined period of time. Using data obtained from the monitoring, the emissions-simulation model may be refined iteratively over a predefined time period to create a refined emissions-simulation model. The refined emissions-simulation model may be analyzed with the set of SCADA data and the at least one set of parameters to locate the sources of emissions at the site. In an exemplary configuration, the simulation model of an emission plume created by the emissions-location simulation model may be a Gaussian Plume Model. An aspect of the system may use a reduced-order model rather than a full dispersion advection transport model for the simulation of the transport of the trace gas of interest. In particular, Gaussian Plume modeling may be used. The Gaussian plume model uses a Gaussian approximation of the plume geometry to approximate dispersion. The model assumes a flat terrain and a well-mixed dispersion process. The Gaussian Plume is a reduction of a steady state solution to the flow equations for the case of a simple terrain geometry. Therefore, a small number of parameters suffices to describe the model. Such parameters might include the source-to-sensor distance and direction, the wind direction, the height of the source and the height of the sensor. Internal parameters include the dispersion widths in the horizontal and vertical directions, expressed via the standard deviation of the Gaussian shape. A simple reduction involves the assumption of identical standard deviations for both vertical and horizontal terms. Some approximation of the dispersion width can be obtained using Pasquill curves that may depend on the atmospheric stability class at the time of transport and the distance between source and sensor.
One configuration of the present disclosure directly estimates the stability class and/or the dispersion standard deviation using the measured standard deviation of the wind at the sensor location on a time scale corresponding to the time of transport from the source to the sensor. This standard deviation is calculated over many samples using the wind direction change during a period of interest. For example, one might use one sample per second taken over a period of one minute to calculate the wind standard deviation. It is then possible to use the horizontal wind standard deviation to calculate the stability class and to then use this value to calculate the dispersion standard deviation. Alternatively, the standard deviation of horizontal wind can be used to directly approximate the plume dispersion width. When the internal dispersion terms are obtained, the other inputs such as the trained emission-location simulation model parameter, the set of SCADA data and the average direction of wind during the observation period can be used to solve the Gaussian plume equation. Note that the direct Gaussian plume equation relates the flux at the source to a concentration at a selected point. An inverse Gaussian plume equation makes it possible to relate the concentration at a point to the flux at the evaluated emission source. Because the position of the source and the measurements at the site setup can be determined, and because wind speed, wind direction and concentration may have been measured continuously, the flux of an emission source may be estimated using the inverse Gaussian equation. With continued reference toFIG.15, showing an illustrative configuration, the centralized computing unit432may be configured to acquire at least one set of attached parameters from the at least one air quality monitor202, and with the set of SCADA data it may further be configured to train an emission-quantification machine learning model. The emission-quantification machine-learning model may generate a trained emission-quantification model parameter. The centralized computing unit432may be further configured to use the trained emission-quantification-model parameter to generate an emission-quantification simulation model of a plume of the target gaseous chemical. As in the case of the emissions-simulation and emissions-location models, the at least one set of attached parameters and the set of SCADA data may be monitored over a predefined period of time. Using data from the monitoring, the emission-quantification simulation model may be refined iteratively over a predefined time period to create a refined emission-quantification simulation model. The refined emission-quantification simulation model may be analyzed with the set of SCADA data and the at least one set of parameters so as to locate emission sources and quantify the emissions. In an illustrative configuration, the emission-quantification machine learning model may be executed by a Quantification Algorithm. The quantification algorithm may be used to quantify and detect leaks through the use of continuously monitored concentration and wind data. There are four major steps in the algorithm: localization, event detection, background calculation, and an analysis of atmospheric stability. The localization step uses the location of the sources and detectors to calculate the probability of a detector seeing an event or leak from each sensor.
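Before walking through those four steps, the wind-variance-based dispersion estimate and the inverse-plume flux relation described above can be sketched in a hedged form. The sigma ≈ x·σθ approximation, the equal horizontal/vertical sigma reduction, and every numeric value below are assumptions for illustration, not disclosed calibrations.

# Illustrative sketch: approximate the plume dispersion width from the
# circular standard deviation of wind direction, then invert the Gaussian
# plume relation for the source flux Q.
import numpy as np

def wind_direction_std(deg_samples):
    """Circular standard deviation (radians) of wind-direction samples in degrees."""
    rad = np.deg2rad(np.asarray(deg_samples, dtype=float))
    R = np.hypot(np.mean(np.sin(rad)), np.mean(np.cos(rad)))
    return np.sqrt(-2.0 * np.log(max(R, 1e-12)))

def gaussian_plume_concentration(Q, U, y, z, H, sigma):
    """Concentration (g/m^3) for flux Q (g/s), wind speed U (m/s), equal sigmas (m)."""
    lateral = np.exp(-y**2 / (2 * sigma**2))
    vertical = np.exp(-(z - H)**2 / (2 * sigma**2)) + np.exp(-(z + H)**2 / (2 * sigma**2))
    return Q / (2 * np.pi * U * sigma**2) * lateral * vertical

def inverse_plume_flux(C_meas, U, x, y, z, H, wind_dir_samples_deg):
    """Estimate flux from one measured (background-subtracted) concentration."""
    sigma = max(x * wind_direction_std(wind_dir_samples_deg), 1e-6)
    unit_conc = gaussian_plume_concentration(1.0, U, y, z, H, sigma)
    return C_meas / unit_conc

# Hypothetical example: 0.002 g/m^3 above background, 100 m downwind.
# print(inverse_plume_flux(0.002, 3.0, 100.0, 5.0, 2.0, 3.0, [170, 175, 182, 178, 169]))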
Emission plumes, for example methane plumes of equivalent size, are compared with the peak events at each sensor. The most probable source will be identified, and the source will collapse if there is no event identified. The probabilities associated with each detector then provide a weighted average of the flux rate at each source. During event detection, the methane plumes "seen" by the at least one air quality monitors202are isolated so that each individual event can be identified. The background calculation involves estimating the background concentration associated with each detector when no event is detected. The background concentration is used as a baseline to determine the significance of an event when there is a spike in methane readings. In the last step, the atmospheric stability is predicted from wind speed and direction to account for the spreading of the plume. Localization and Atmospheric Stability: The Gaussian plume model is the basis of the quantification algorithm and is the reason for some of the major modeling choices such as the use of a multivariate normal distribution of concentration and a radial basis coordinate system. The effects of wind speed and direction, mixing, and atmospheric stability are accounted for in the Gaussian plume model. Now referring toFIG.16illustrating a representation1600of a Gaussian plume model (adapted from J. M. Stockie, 2011).FIG.16depicts a plume of a target gas such as methane modeled as radially extending with horizontal and vertical spread. For an emission rate Q g/s and wind velocity u m/s, the concentration distribution profile known as the Gaussian plume solution associated with data from the at least one air quality monitor202at a height of z meters and a source at a height of H meters is provided by the equations
C(r, y, z) = \frac{Q}{4 \pi u r} \exp\left(-\frac{y^{2}}{4r}\right) \left(\exp\left(-\frac{(z-H)^{2}}{4r}\right) + \exp\left(-\frac{(z+H)^{2}}{4r}\right)\right) \quad (2.1)
r = \frac{1}{2}\,\sigma^{2}(x) \quad (2.2)
\sigma^{2}(x) = a x^{b} \quad (2.3)
x = R\cos(\theta - \theta_{0}), \qquad y = R\sin(\theta - \theta_{0}). \quad (2.4)
In equation (2.1), the first term Q/(4πur) represents the initial condition or initial flux. The second term exp(−y²/4r) represents the spread of the plume away from the y-axis. The third and fourth terms, exp(−(z−H)²/4r) and exp(−(z+H)²/4r), represent the change in the plume as a function of height. The parameter σ is the standard deviation of the concentration distribution, and r represents its variability. Variables y and z are the Cartesian coordinates while a and b are the diffusion parameters related to the atmospheric stability class. A relationship between the time of day, Pasquill-Gifford stability class and the diffusion parameters can be determined. In equation (2.1), the concentration distribution profile is projected to radial basis coordinates. A function T dependent on wind direction may be defined using the equations
T_{1} = \frac{1}{2 \pi u \left(a R^{b}\right)^{2}} \quad (2.5)
T_{2} = \exp\left(-\frac{R^{2}\sin^{2}\!\left(\frac{\pi(\theta - \theta_{0})}{180}\right)}{2\left(a R^{b}\right)^{2}}\right) \quad (2.6)
T_{3} = \exp\left(-\frac{(z-H)^{2}}{2\left(a R^{b}\right)^{2}}\right) \quad (2.7)
T_{4} = \exp\left(-\frac{(z+H)^{2}}{2\left(a R^{b}\right)^{2}}\right). \quad (2.8)
During localization, there is a probability p(n, m) that a detector n = 1, 2, …, N can "see" a source m = 1, 2, …, M at a given time. This probability is a function of wind speed and direction. The angle θ_0 and radial distance R between the source and detector are measured, and then the flux from source m is computed using concentration data from detector n. Now referring toFIG.17illustrating a graphical representation1700showing radial distance R and angle θ_0 between source S_1 and detector D_1. The conditional probability is then expressed as
P(S_{m} \mid D_{n}, t_{k}) = p(n, m) \quad \text{for } n = 1, 2, \ldots, N,\; m = 1, 2, \ldots, M,\; k = 1, 2, \ldots, J. \quad (2.9)
The probability P(S_m | D_n, t_k) in (2.9) is the probability that source m caused a reading at time t_k at detector n. The probability curves are given for all possible paths of the Gaussian plume in radial coordinates. The input parameter θ_0^{n,m} represents the angle between the specific source m and detector n. The function T is dependent on wind direction so that
T(\theta_{j}^{n,m}) = T_{1} \times T_{2}(\theta_{j}^{n,m}) \times (T_{3} + T_{4})\,\rho_{gas}, \quad j = 1, 2, \ldots, J \quad (2.10)
\theta^{n,m} = \left(-89 + \theta_{0}^{n,m},\; 89 + \theta_{0}^{n,m}\right), \quad m = 1, 2, \ldots, M,\; n = 1, 2, \ldots, N. \quad (2.11)
In addition, the condition is set that if θ_j^{n,m} > 360° or θ_j^{n,m} < 0°, then θ_j^{n,m} is replaced by θ_j^{n,m} modulo 360°. The next step is to normalize T, as in (2.14), at time t_k, k = 1, 2, …, J, given some wind direction θ_k^{n,m} and wind speed u_k. We obtain the expressions
P(S_{m} \mid D_{n}, t_{k}) = \left(\tilde{T}(\theta_{1}^{n,m}), \ldots, \tilde{T}(\theta_{J}^{n,m})\right) \text{ at time } t_{k}, \quad k = 1, 2, \ldots, J,\; m = 1, 2, \ldots, M,\; n = 1, 2, \ldots, N \quad (2.12)
P(B \mid D_{n}, t_{k}) = 1 - \sum_{m=1}^{M} P(S_{m} \mid D_{n}, t_{k}) \quad (2.13)
\tilde{T}(\theta_{i}^{n,m}) = \frac{T(\theta_{i}^{n,m})}{\sum_{j=1}^{J} T(\theta_{j}^{n,m})}, \quad i = 1, 2, \ldots, J. \quad (2.14)
With reference toFIG.18illustrating a schematic representation1800of an example of localization for a site1802(e.g., Colorado State University's METEC Lab experimental site) with probability curves1804given as a function of wind direction and a probability table1806, the functions associated with localization and atmospheric stability may compute radial Gaussians, fluxes, BNL dispersion coefficients, quantities related to geometry or probabilities related to sites. The next phase of the quantification algorithm uses each set of concentration data to identify events corresponding to the respective detectors. A preliminary analysis considered 1-minute data at 3-minute intervals to determine whether the concentration peaked. A peak in concentration is analyzed by using the difference formula to approximate the gradient or slope of the concentration curve. If the gradient exceeds a threshold of 0.75, the time period is classified as an "event" with a nonzero flux rate; otherwise, it is classified as "no event" with a negligible flux rate. The start and end times of the event must also be specified. An event is said to start if the change in concentration is greater than some δt, and it is said to end when the change is less than −δt. In this way, the event is represented as a symmetric curve with the same slope at the beginning and end of the event. The baseline concentration must first be represented by a continuous line. To determine the line, the background concentration is calculated from data corresponding to a wind direction between 25 degrees and −25 degrees from θ_0. Data obtained within 15 minutes before or after an event is removed, and then a continuous, 5-minute rolling average is taken. If concentration data is missing, backward fill is applied to populate missing values from the next available observation, and forward fill is applied to propagate the last observation forward. The wind speed is filtered so that it cannot drop below 0.5 m/s or exceed 10 m/s. Now referring toFIGS.19and20illustrating graphical representations1900and2000of results from (a) event detection (five sample events) and (b) an example background concentration, respectively. The associated functions for event detection and background calculation may be used to detect and quantify leaks. In some configurations, total hourly flow rate may be determined using either (i) a maximum-probability-based method or (ii) the total weighted average method.
For method (i) in (4.1), the total hourly flow rate is calculated as the average of hourly sensor-based flow rates from the most probable source. This average is restricted to sensors with conditional probabilities higher than 75%, or to the sensor with the highest probability reading if no other sensor has a probability reading higher than 75%. This method works best if only one source is active, and the rest are inactive with negligible or no emission reading. The maximum and minimum flow rates at each sensor are provided if the sensor has a conditional probability over 75%. For method (ii) in (4.2), the flow rate of each source is given by the weighted average of all partial flow rates. The weights are given by the hourly conditional probabilities associated with each sensor that has probabilities higher than 100/M (i.e., 100 percent divided by the number of sources M). The flow rate over all sources is then summed to form a total flow rate for sources that have a total probability of leak over 100/M. This method is more efficient at accounting for multiple sources but less so for a single emitting source.
\text{Method (i):} \quad \tilde{Q} = Q_{m}\left(P_{>0.75}(S_{m} \mid t_{60})\right), \quad m = 1, 2, \ldots, M \quad (4.1)
\text{Method (ii):} \quad \tilde{Q} = \sum_{m=1}^{M} P_{>100/M}(S_{m} \mid t_{60})\, Q_{m} \quad (4.2)
P(S_{m} \mid t_{60}) = \frac{\sum_{n=1}^{N} C(D_{n}, T)\, P(S_{m} \mid D_{n}, T)}{\sum_{n=1}^{N} C(D_{n}, T)} \quad (4.3)
METEC Round 2 Testing and Validation Findings and Results of MVP1 Quantification Model: Following a field-testing campaign in a real-world environment at a site (such as the Methane Emissions Technology Evaluation Center (METEC) at Colorado State University), illustrative results from the development, testing and implementation of methods for quantifying methane emissions from oil and gas facilities using sensor nodes and an analytics platform are presented. The analytics platform integrates detector data, meteorological conditions, and cloud analytics to detect and quantify methane emissions for remote locations. This first minimum viable product for quantification (MVP1) has been or will be updated with the results from subsequent tests. An installation illustrative of the present disclosure was used to perform three days of around-the-clock, live methane emissions tests to investigate the diurnal effect on quantification methods. The design of the experiment included a total of forty-four test conditions (experiments) wherein programmed methane releases were introduced from actual natural gas site structures including gas processing units, well heads and storage tank batteries. A total of eight sensor nodes forming a larger sensor network were deployed at the fence line of a 202 ft×280 ft site with a detector-to-source distance ranging from 69 to 212 ft. The duration of each test was 60 minutes, and each test was followed by 15 minutes without methane release to re-establish a baseline for the next test. Each test was repeated three times to examine various quantification models and to ensure reproducible, consistent results. Methane release rates ranged from a low of 0.05 to a high of 0.84 g/s, which is a wide range representing average well pad emissions. Wind speed and direction may be measured using ultrasonic wind sensors installed in some of the sensor nodes.
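Returning to the two hourly roll-ups defined in (4.1) and (4.2), a minimal sketch follows. Treating the flux estimates and conditional probabilities as simple per-candidate dictionaries, and the exact placement of the 0.75 and 100/M thresholds, are simplifying assumptions made for illustration.

# Hedged sketch of the two hourly roll-ups in (4.1) and (4.2).
def total_flow_max_probability(flux, prob, threshold=0.75):
    """Method (i): average the flux estimates whose conditional probability
    exceeds 0.75, falling back to the single most probable candidate."""
    eligible = [k for k in flux if prob[k] > threshold]
    if not eligible:
        eligible = [max(prob, key=prob.get)]
    return sum(flux[k] for k in eligible) / len(eligible)

def total_flow_weighted_average(flux, prob):
    """Method (ii): probability-weighted sum over sources clearing the 100/M floor."""
    floor = 1.0 / len(flux)          # 100/M expressed as a fraction
    return sum(prob[m] * flux[m] for m in flux if prob[m] > floor)

# Hypothetical example with three candidate sources (flux in g/s):
# flux = {"wellhead": 0.30, "separator": 0.05, "tank": 0.10}
# prob = {"wellhead": 0.80, "separator": 0.15, "tank": 0.05}
# total_flow_max_probability(flux, prob)   -> 0.30
# total_flow_weighted_average(flux, prob)  -> 0.80 * 0.30 = 0.24 (only the wellhead clears the 1/3 floor)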
With reference toFIG.21illustrating a plot2100comparing predicted emission quantifications and the actual quantification at the METEC site over 3 days, the plot2100illustrates a plurality of points2102for actual quantified emissions flux, a plurality of points2104for a predicted model N generated by the quantification algorithm and a plurality of points2106for a predicted model S generated by the quantification algorithm. As illustrated, the plurality of points2106depicting predicted model S may have a higher deviation with an error of 16% compared to the plurality of points2104depicting predicted model N and its error of 3%. Therefore, for the prediction of emissions, the model N may have a higher significance than the model S. Now referring toFIG.22illustrating a workflow diagram2200that demonstrates a framework for quantification. As shown in the quantification workflow diagram ofFIG.22, as field testing progresses, time series data from individual detectors are streamed to Amazon Web Services (AWS) in real time. The data may include signals from the sensing element as it responds to local methane concentrations at the location of the detector in addition to wind speed in m/s and wind direction measurements (0 to 360 degrees). Detector data are pushed to AWS for pre-processing before being passed to the developed model for emission rate and source location predictions. When the data is downloaded to local servers, it is passed on to an extraction, transformation, and loading (ETL) computation pipeline in preparation for the prediction algorithm. Before being fed into the model, the concentration data (ppm) is augmented with the GPS coordinates of the individual sensors and assembled into a single data file corresponding to the timespan (typically one hour) of a given test. Initial detector placement is decided prior to the testing campaign by studying multiple wind rose diagrams created from historical weather station data to identify the most likely dominant wind directions around the test site. The visualization of time series and hourly aggregated statistics about concentration, wind speed and wind direction from all detectors and weather sensors enables the user to assess node engagement, to adjust the experimental setup and, if necessary, to align sensors with the dominant methane dispersion directions as determined by the prevailing wind. Now referring toFIG.23illustrating an example plot2300of methane dispersions, i.e., the influence of the wind on methane emissions as recorded on Jul. 21, 2023. As seen in the plot, the quantity of methane detected may be highest between midnight and dawn, especially when the wind speed is at its lowest (between 0 and 10 m/s). Lower wind speeds may account for a lower degree of methane emission dispersion. Further, referring to the plot2300, it may be seen that higher-speed winds between afternoon and evening may disperse the methane emissions and result in low detection and quantification of the methane gas. Plume Dispersion Model for Quantification of Methane Emissions: In a real environment, an industrial plume may propagate and diffuse from the moment an emission is released from a point source as shown inFIG.12. This transport process is the combination of the diffusion (due to turbulent eddy motion) and advection (due to the wind) that define the term "dispersion" (Stockie, 2011). The released contaminant will be transported through the air in an axisymmetric pattern in the idealized case.
A method used to model this phenomenon may be derived from the advection-diffusion equation, and it yields a Gaussian distribution profile that decays with distance. A dispersion model is essentially a computational procedure for predicting concentrations downwind of a pollutant source. It is constructed using knowledge of the emissions characteristics (stack exit velocity, plume temperature, stack diameter, etc.), terrain data (surface roughness, local topography, nearby buildings, etc.) and the state of the atmosphere (wind speed, stability, mixing height, etc.) (MacDonald, 2003). The complexity of the plume source inversion arises from the need to recover information about the source emission rate(s) and the locations using concentration signatures from a few detectors. These quantities are related through a highly nonlinear and high-dimensional turbulent dynamic that pervades the near-surface atmosphere. A number of analytical and approximate solutions for atmospheric dispersion may be derived under a wide range of assumptions, boundary conditions and parameter dependencies. One of these solutions is the Gaussian plume solution, which is an approximate solution for single point-source emissions and is given by
C(x, y, z) = \frac{Q}{2 \pi U \sigma_{y} \sigma_{z}} \exp\left(-\frac{y^{2}}{2\sigma_{y}^{2}}\right) \left[\exp\left(-\frac{(z-H)^{2}}{2\sigma_{z}^{2}}\right) + \exp\left(-\frac{(z+H)^{2}}{2\sigma_{z}^{2}}\right)\right]
where
\sigma_{y} = standard deviation of the horizontal distribution of plume concentration = a x^{b} (m)
\sigma_{z} = standard deviation of the vertical distribution of plume concentration = c x^{d} (m)
C = concentration at the detector (kg/m³)
H = effective height of the emission source (m)
U = wind speed along the x-axis, assumed invariant with height (m/s)
Z = detector height above ground (m).
Data Post-Processing: The plume model outputs may include predicted release rates (or instantaneous fluxes) at each detector. The predicted release rates from each detector may be grouped to form a big sample of flux data called the "population." After obtaining a full time series flux for each detector, bootstrap resampling may be performed to quantify the random errors and provide a confidence range for the statistics reported. The mean flux for each detector may be calculated and added to the population. Further, summary statistics and an estimation of the precision of the reported statistics may be provided by using bootstrap resampling as described immediately below. As already explained in conjunction withFIGS.14A-14E, the flux of an emissions plume may be determined upon receipt of a predetermined number of plume samples at a plurality of angles from the plurality of air quality monitors (i.e., sensor systems1402) installed at the site. Further, an associated concentration point may be registered using data from the plurality of angles. A fit of a point cloud may also be obtained. When the measurements occur under idealized conditions, the plume flux may be calculated using a mass conservation equation by multiplying an area concentration of the plume cross section by its normal speed and by estimating the plume concentration in the height direction. The site parameters may include wind speed, wind direction, temperature, and others. The quantification framework may be applicable to more than one air quality monitor from among the at least one air quality monitor202. The quantification of the emissions from each of the air quality monitors202may be analyzed with the set of attached parameters associated with each of the respective air quality monitors and the SCADA data in accordance with a rule.
The rule may be as elementary as crossing a threshold, or it might be more complicated and derived over time using machine-learning models. The comparison may be done using the centralized computing unit432or at a remote location (e.g., the internet/web hosting server). One exemplary aspect is the assessment of a signal-to-noise ratio (SNR), which may be explained in conjunction withFIG.24. With reference toFIG.24illustrating a graphical representation2400of example SNRs associated with different sensors (air quality monitors202) at the site, an SNR for the first air quality monitor may be depicted by a curve2302, an SNR for the second air quality monitor may be depicted by a curve2304, an SNR for the third air quality monitor may be depicted by a curve2306and an SNR for another air quality monitor may be depicted by a curve2308. Furthermore, a threshold SNR of 0.7 may be selected. With this threshold SNR, the air quality monitors with SNR<0.7 may be considered redundant or non-contributing and hence may be removed from the site. In one illustrative configuration, a regional atmospheric parameter for the site may be procured from a second server. In some configurations, the regional atmospheric parameter for the site may be the height of a pressure boundary layer (hPRBL). As such, the hPRBL may be procured from the second server, which may be the High Resolution Rapid Refresh (HRRR) model maintained by the National Oceanic and Atmospheric Administration (NOAA). As will be appreciated by those skilled in the art, a pressure boundary layer (PRBL)—also known as the atmospheric boundary layer (ABL) or peplosphere—is the lowest part of the atmosphere. The NOAA HRRR model is an improved observation model for a land surface that is updated using a combination of satellite, radar, commercial airplane, and weather balloon data. Atmospheric parameters can be interpolated using numerical weather prediction (NWP) models and actual data procured from a variety of observing systems and instruments (e.g., radar, lidar, sonar, remote stations, flight data or satellite images/data). Examples of procured/predicted atmospheric parameters include: time of day, date and physical measurement or indirect calculation of cloud, dew point temperature, wind speed/max-speed/time-average/at-height, height of pressure boundary layer (hPRBL), surface visibility, precipitation types (snow/ice-pellets/freezing-rain/rain), vertical velocity/mean-velocity, surface pressure, best 4-layers lifted index, snow depth, water equivalent accumulated snow depth, temperature, component of wind, component of wind shear low-level/deep-level, surface lifted index, radar reflectivity maximum/composite/echo-top, radar vertically-integrated liquid water, cloud fraction high-level/mid-level/low-level cloud, lightning, storm relative helicity, maximum of updraft helicity over layer 2 to 5 km AGL, maximum updraft/downdraft velocity and total column integrated graupel. In one illustrative example illustrated inFIG.24, the numerical weather-prediction model may be the High Resolution Rapid Refresh (HRRR) model processed/supplied by the National Oceanic and Atmospheric Administration (NOAA). The HRRR is an interval-updated (hourly updated) assimilation and model of atmospheric parameters and weather-related reporting. The HRRR and other such systems are used for various applications including those related to aviation (and transportation in general), severe weather and energy.
Details of the HRRR and other atmospheric modeling are available for download from the International DOI Foundation at https://doi.org/10.1175/MWR-D-15-0242.1. Depending on deployment location or other factors, other numerical weather prediction models may be utilized alone or in combination. While numerical weather prediction models report/calculate/estimate/provide different atmospheric parameters, one particularly useful atmospheric parameter is the height of the pressure boundary layer (hPRBL). Through ongoing research, other variables may be incorporated to improve agreement between predicted and actual results. Data from a zero-hour analysis data set is procured for each numerical weather prediction (NWP) model run. (This is done hourly for the HRRR model.) The procured data set may consist of three-dimensional data at, in one example, a three-kilometer per-node resolution. The HRRR is updated every hour and gives detailed weather forecasts and conditions at a 3-kilometer spatial resolution. The processing power required to create the HRRR is substantial and is met using supercomputers whose main output is available via both web-lookup and presentation. For instance, 95 million data points representing the United States are processed and reported every hour. In addition, various data points can be obtained from the National Weather Service (NWS). These data points may include: total cloud, dew point temperature, wind speed at 10 meters (m) above ground level, percent of frozen precipitation, total precipitation, precipitable water, height, height of cloud top, lifted condensation level, pressure boundary layer height, model terrain height, surface visibility, categorical precipitation types (snow, ice pellets, freezing rain and rain), wind gust speed, vertical velocity, mean vertical velocity, pressure mean sea level, surface pressure, pressure of level from which parcel was lifted, best 4-layers lifted index, snow depth, water equivalent accumulated snow depth, temperature, component of wind, component of wind shear (low level), component of wind shear (deep layer), surface lifted index, radar reflectivity, maximum radar reflectivity, composite radar reflectivity, echo top, radar vertically-integrated liquid water, high-level cloud fraction, mid-level cloud fraction, low-level cloud fraction, lightning, storm relative helicity, maximum of updraft helicity over layer 2 to 5 km above ground level, maximum updraft velocity, maximum downdraft velocity and total column integrated graupel. As will be further appreciated, the behavior of the pressure boundary layer (PRBL) is directly influenced by its contact with a pressure surface. For example, the PRBL usually responds to changes in surface radiative forcing in an hour or less. In this layer, physical quantities such as flow velocity, temperature and moisture display rapid fluctuations (turbulence), and vertical mixing is strong. It should be noted that above the PRBL is the "free atmosphere" in which the wind is approximately geostrophic (parallel to the isobars) whereas within the PRBL the wind is affected by surface drag and turns across the isobars. The hPRBL therefore signifies the height above sea level to which the pressure boundary layer (PRBL) exists. The hPRBL has proven useful for monitoring operating emissions at a site. For example, when the hPRBL is at a relatively low elevation, emissions accumulate at the site.
In some instances, when incredibly low hPRBL and stagnation conditions exist, the concentration level of a compound increases at a constant rate. In other words, for such cases, the time:concentration ratio is constant. Because global average methane levels are about 1.876 parts per million, the nominal leakage from operating devices (e.g. pneumatics operating on well-provided gases that include methane (CH4)) can be utilized to establish and/or confirm operating emissions. The height of the pressure boundary layer (hPRBL) is further explained in detail in conjunction withFIGS.26A-29. As mentioned above, the first measured substance concentration and the first set of individual atmospheric readings (also referred to as on-site atmospheric parameters) may be transmitted to the first server. The on-site atmospheric parameters may include physical measurement or indirect calculation of: wind-speed, wind-direction, air-pressure, air-temperature, humidity, etc. The first measured substance concentration may be in parts per million of the substance such as methane, nitrogen, nitrogen oxides, oxygen, ozone, carbon oxides, argon, sulfur oxides, water vapor, etc. In some configurations, the first measured substance concentration and the first set of individual atmospheric readings may be transmitted by the air quality monitor to the first server at an interval (e.g., 1 second). Further, in some configurations, the first measured substance concentration and the first set of individual atmospheric readings may be averaged prior to transmission to the first server. The averaging may be performed over an averaging-time such as a 1-minute interval. It should be noted that the averaging may be performed to create at least one time-averaged, measured-on-site atmospheric parameter. Some examples of time-averaged, measured-on-site atmospheric parameters include: air temperature, relative humidity, barometric pressure, wind-direction, wind stability class, circular standard deviation of past (e.g., 10 minutes) of wind-direction, current wind-speed, time-average wind-speed (5-minute/10-minute/30-minute), the hPRBL, etc. This averaging may in some situations occur on site at an air quality monitor before transmission. Alternatively, the raw data may be directly transmitted. In both cases, the averaging of the measured-on-site atmospheric parameters may be useful for efficiently utilizing resources such as available energy, transmission capacity/bandwidth, etc. The time-averaged, measured-on-site atmospheric parameters may be transmitted over a cellular network to the first server (e.g., a cloud-attached server such as an Amazon Web Services server) for storage, transformation and/or processing. In one configuration, the raw data associated with the measured-on-site atmospheric parameters may be sent to and stored on a Postgres database (a free and open-source relational database management system emphasizing extensibility and SQL compliance). Referring now toFIG.25illustrating a first example graphical representation2502of the pressure boundary layer (PRBL) for a site is illustrated in accordance with some configurations. The first example graphical representation2502shows a plot with time-of-day along X-axis and the height of the pressure boundary layer (hPRBL) along y-axis. As can be seen inFIG.25, the hPRBL varies as the day progresses. For instance, the hPRBL is low between midnight until morning and is relatively higher during the day, i.e., from sunrise until sunset. 
As will be appreciated, the variation in hPRBL over a 24-hour period is due to the variation in the speed of winds at the site. Since wind speed is relatively higher during the period between the sunrise until sunset, the hPRBL during this period is observed to be higher. As a result of variation in hPRBL, the concentration of a substance mixed in the atmospheric air (due to emission/leakage from an emission source present at the site, for example) may also vary. As will be further appreciated, the concentration of the substance may be lower during a period when the average wind speed is high, and the concentration may be higher during a period when the average wind speed period is relatively lower.FIG.25shows a second example graphical representation2504depicting a graphical plot with substance concentration in the air at the site along the y-axis and time along the x-axis. For example, the substance may be methane (CH4) gas. As shown in the second example graphical representation2504, during the period between midnight and sunrise, i.e., when the wind speed is low and the hPRBL is also observed to be low, there is a gradual (almost linear) increase in the concentration of the methane (CH4) gas in the atmospheric air surrounding the site. This condition may be referred to as the “trapping condition” and may occur due to wind stagnation. During the emission of the substance from an emission source under the trapping condition, there is a gradual accumulation of the substance in an atmosphere surrounding the site, and there will be a higher first-measured substance concentration of the target substance measured with the first air quality monitor. Therefore, for accurate predictions, it is important to take the trapping condition into consideration. Further, the hPRBL data may indicate a specific time period during a 24-hour period during which such trapping conditions may be observed. As will be appreciated, the measured concentration of the emission under the trapping condition may not provide an accurate representation of the total average emissions since the measured concentration might be higher than the usual measurements owing to the emission trapping or trapping condition. This is further explained in detail in conjunction withFIGS.26A-27. Now refer toFIGS.26A-26B, andFIG.27illustrating scenarios of consistent leaking emissions source. Two example scenarios (also referred to as trapping conditions) with an emissions source2606consistently leaking at two different times of the day are illustrated.FIG.26Ashows a schematic front view2602A of the emissions source2606along with its immediate surroundings and a schematic top view2602B of the emissions source2606along with its immediate surroundings at a first time (8 AM) of the day. Further,FIG.26Billustrates a schematic front view2604A of the emissions source2606along with its immediate surroundings and a schematic top view2604B of the emissions source2606along with its immediate surroundings at a second time (7 AM) of the day.FIG.27further shows a schematic front view2704A of the emissions source2606along with its immediate surroundings at a first time (7 AM) and a schematic front view2704B of the emissions source2606along with its immediate surroundings at a second time (8 AM) of the day. It should be noted that the hPRBL at the first time (7 AM) of the day is higher than the hPRBL at the second time (8 AM) of the day. 
This may be due to higher wind speeds at the first time (7 AM) of the day as compared to the second time (8 AM) of the day. As such, the concentration of the substance in the immediate surroundings of the emissions source2606is lower during the first time (7 AM) as compared to the second time (8 AM).FIG.27illustrates the schematic front view2704B of the emissions source2606along with its immediate surroundings at the second time (8 AM) and the schematic front view2704A of the emissions source2606along with its immediate surroundings at the first time (7 AM) of the day. As can be seen, due to higher wind speeds at the first time (7 AM), the hPRBL is raised. Consequently, the concentration of the substance is reduced as compared to the second time (8 AM) of the day. The height of the pressure boundary layer (hPRBL) has proven useful for the monitoring of operating emissions at a site. For example, when the hPRBL is at a relatively low elevation, emissions accumulate at the site. In some instances, when an extremely low hPRBL and stagnation conditions exist, the concentration level of a compound increases at a constant rate. In other words, the time:concentration ratio is constant. Because global average methane levels are about 1.876 parts per million, the nominal leakage from operating devices (e.g., pneumatics operating on well-provided gases that include methane) can be utilized to establish and/or confirm operating emissions. Using the hPRBL effects, the AQM-minimization, emissions-location, and emissions-quantification machine-learning models associated with the at least one air quality monitor202may be trained and bound accordingly. In other words, trained machine-learning models may conduct the computations associated with reducing the number of air quality monitors, determining the location of an emission source, and quantifying the emissions. The machine-learning models may be trained specifically for each of the at least one air quality monitors provided at the site or may be trained to be specific to the site based on the on-site atmospheric parameters and the procured atmospheric parameters (as either raw data or transformed/processed data). An illustrative machine-learning model may be based on a gradient tree-boosting algorithm. In particular, the machine-learning models may utilize a FastTreeTweedie algorithm in the ML.NET framework. Alternative machine-learning models such as simpler regression models could be used, but gradient tree-boosting (decision tree) ensembles may provide better performance and may therefore be preferred. Further, other alternative machine-learning models may include common regression models, linear regression models (e.g., ordinary least squares, gradient descent, regularization), decision trees and tree ensembles (e.g., random forest, bagging, boosting), generalized additive models, support vector machines, artificial neural networks, etc. The machine-learning models may be used to identify the emission sources and also to isolate the correlation between elevated concentrations and atmospheric variables. For example, a machine-learning model configured as a tree-based model with a gradient tree-boosting algorithm may be trained with ten leaves and three-hundred trees. The machine-learning model may be trained daily for each air quality monitor on up to 90 days' data.
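The disclosure names the FastTreeTweedie trainer in the ML.NET framework; as a stand-in in a different ecosystem, the sketch below uses scikit-learn's gradient tree boosting with the ten-leaf, three-hundred-tree configuration and the daily 90-day window mentioned above. The feature and column names are illustrative assumptions.

# Hedged sketch of the per-monitor daily training step (not the ML.NET code).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["wind_direction", "wind_speed", "barometric_pressure",
            "air_temperature", "humidity", "hprbl", "hour_of_day"]

def train_daily_model(df: pd.DataFrame) -> GradientBoostingRegressor:
    """df: up to 90 days of one-minute rows with FEATURES plus 'methane_ppm'."""
    recent = df.sort_values("timestamp").tail(90 * 24 * 60)   # ~90 days of minutes
    model = GradientBoostingRegressor(n_estimators=300, max_leaf_nodes=10,
                                      random_state=0)
    model.fit(recent[FEATURES], recent["methane_ppm"])
    return model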
The trained machine-learning models may be used to generate from device measurements a trained AQM-minimization parameter, emissions-location model parameter and the emissions-quantification model parameter for each minute. Using the trained AQM-minimization parameter, emissions-location model parameter and the emissions-quantification model parameter, the corresponding simulation models may be generated and refined iteratively to create refined simulation models as explained in earlier configurations. Once trained, the machine learning models may be used to obtain AQM-minimization parameters, emissions-location model parameters and the emissions-quantification model parameters. The trained AQM-minimization parameters, emissions-location model parameters and the emissions-quantification model parameters may also include a trained set of atmospheric parameters. The trained atmospheric parameters may include a machine-learning-based, measured-substance concentration and a machine-learning-based set of individual atmospheric readings. For example, as explained earlier, an emission-simulation model of a plume may be generated using emissions-location model parameters. The emission-simulation model may generate predicted substance concentrations for a plume in real time or according to an interval (for example, each minute) using the on-site atmospheric measurements at the air quality monitor (AQM) and other procured atmospheric parameters (for example, the variables obtained from hourly-supplied variables by the numerical weather prediction models). Now referring toFIG.28illustrating a graphical plot2800of the first substance concentrations obtained from the emission-simulation model for an exemplary configuration over a period of 18 hours on a particular day of a year (e.g., Feb. 28, 2022). The graphical plot may be generated using the predicted substance concentrations obtained at one-minute intervals from the emission-simulation model. It should be noted that the trained emissions-location model parameters and the emissions-quantification model parameters may be used to generate at least one function. Such functions may give the location of the source of an emission of a target substance, quantifying emissions of the target substance, etc., as will be discussed in detail hereinafter. As will be understood, the total emissions at a site (e.g., an oil well) may be a combination of operating emissions and fugitive emissions. The operating emissions and fugitive emissions at the site may include emissions from wellheads, tanks, separators, processing equipment, flowback tanks, etc. Now referring toFIG.29illustrating another graphical representation2900of the predicted substance concentrations obtained from the prediction model over a period of 72 hours (e.g., from Mar. 7, 2022, to Mar. 9, 2022). The graphical representation2900may be generated using the predicted substance concentrations obtained at one-minute intervals from the prediction model. Further, the graphical representation2900shows the contribution to the overall predicted substance concentrations from each of the different types of emission source. The different types of emission source may include flowback tanks2902, processing equipment2904, separators2906, tanks2908and wellheads2910. Further, as plotted along the y-axis of the graphical representation2900, the contribution of each of the different types of emission source is represented in kilograms/hour (kg/hr.). 
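A chart such asFIG.29can be assembled from per-minute model outputs with a small roll-up along the following lines; the column names and the use of pandas resampling are assumptions made for illustration, not part of the disclosure.

# Sketch: aggregate per-minute predicted contributions by emission-source type
# into hourly kg/hr values, one column per source type.
import pandas as pd

def hourly_contribution_by_source(df: pd.DataFrame) -> pd.DataFrame:
    """df columns: timestamp, source_type, predicted_kg_per_hr (one row per minute)."""
    return (df.set_index("timestamp")
              .groupby("source_type")
              .resample("1h")["predicted_kg_per_hr"]
              .mean()                     # mean of per-minute rates ~ hourly rate
              .unstack("source_type"))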
The on-site emissions are often difficult to ascertain because there are off-site sources such as global atmospheric levels (e.g., 1.9 parts per million of methane), nearby tanks, nearby wells, passing locomotives, nearby painting facilities, etc. As will be further understood, in most situations, an emissions source generates a plume (largely based on wind direction and wind speed), and the distribution of the plume is complicated. In order to determine the location of the emission source at the site, at least one air quality monitor may be placed at each of a variety of locations. The at least one air quality monitor measures various on-site atmospheric parameters including the measured-substance concentration of the target substance and a set of individual atmospheric parameters. The set of individual atmospheric readings may include at least one of the following: barometric pressure, air temperature or humidity level. Now referring toFIG.30illustrating a graphical representation3000of contributions (labeled along the y-axis) from five features with respect to time of the day (x-axis). Note that at least one of the atmospheric readings may make some contribution to the plurality of the predicted substance concentrations of the prediction model. For example, inFIG.30, the graphical representation3000shows the influence of these features on the model's substance concentration prediction. Further, the graphical representation3000shows contributions from five features at each time step. As can be seen inFIG.30, some features may make a higher contribution at a given time of day. The five features may include a feature-1 of wind direction (shown by line3002), a feature-2 of wind speed (shown by line3004), a feature-3 of barometric pressure (shown by line3006), a feature-4 of air temperature (shown by line3008) and a feature-5 of humidity level (shown by line3010). As will be further appreciated, isolating wind-direction's effect on the pollutant concentration predictions requires the use of statistical methods when training the regression model. Isolating the effect of wind direction allows one to remove the effects of ambient atmospheric concentrations of the targeted pollutant, the height of the pressure boundary layer (hPRBL), wind-speed, temperature, humidity, etc. This can be done without understanding or modeling the phenomena behind atmospheric concentration and/or the effects of hPRBL, wind-speed, temperature, humidity, etc. Instead, the model uses large amounts of data to train the simulation models to accurately predict the measured pollutant concentration from the values of other known parameters. The model then uses the simulation models to determine what portion of the predicted concentrations can be attributed to the wind-direction alone. For machine-learning regression models configured as tree-based models, the contribution of a feature may be determined by exploring the opposite sub-tree for each decision node containing the given feature, i.e., by comparing the results when making the "wrong" decision at each node containing the feature with the results when making the "right" decision. Some alternative configurations involve fixing the values of all but one feature and incrementing the wind-direction feature around the full circle, i.e., 360 degrees, thereby generating a new prediction for each wind-direction. The value predicted at the actual measured wind-direction is then compared to the predicted values at all other directions to determine the wind-direction contribution.
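A hedged sketch of the wind-direction sweep just described: every other feature is held at its measured value, the wind direction is stepped around the full circle, and the prediction at the measured direction is compared with the sweep. Comparing against the sweep mean, the 5-degree step, and the feature names are assumptions for illustration.

# Sketch: per-row wind-direction contribution via a 0-360 degree sweep.
import numpy as np
import pandas as pd

def wind_direction_contribution(model, row: pd.Series, feature_order, step_deg=5):
    """Returns the predicted-ppm difference attributable to the measured wind direction."""
    sweep = pd.DataFrame([row[feature_order]] * (360 // step_deg))
    sweep["wind_direction"] = np.arange(0, 360, step_deg)     # rotate this one feature
    sweep_preds = model.predict(sweep[feature_order])
    measured_pred = model.predict(row[feature_order].to_frame().T)[0]
    return measured_pred - sweep_preds.mean()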
With reference toFIG.31, a topological view3100A of a site is illustrated in accordance with some configurations, together with polar charts3100B that may indicate weighted means of predicted substance concentrations grouped into a predetermined number of wind-direction buckets. The polar charts indicate a concentration associated with each wind-direction bucket from 0 to 360 degrees. As shown inFIG.31, the site includes three air quality monitors, i.e., the air quality monitor3102A (W), the air quality monitor3102B (ESE) and the air quality monitor3102C (NNW). Corresponding to each air quality monitor is a line chart showing elevated concentration (ppm) as a function of measurement angle and the associated polar plot. In particular, for the air quality monitor3102A (W), a line chart3104A and a polar chart3106A are shown. For the air quality monitor3102B (ESE), a line chart3104B and a polar chart3106B are shown. For the air quality monitor3102C (NNW), a line chart3104C and a polar chart3106C are shown. The line charts3104A,3104B,3104C and the polar charts3106A,3106B,3106C help visualize the circular distribution of the wind-direction effect. The representative emissions-quantification machine learning model may be used to predict the methane concentration for a time period such as the last 10 days. The time period may be selected so as to ensure that data representing the wind blowing in every direction is obtained. Further, the wind-direction contribution value may be calculated for all of the predictions. The wind-direction contribution value may be the amount in parts per million (ppm) by which the individual wind-direction affected the predicted ppm. All the predictions may then be grouped into 72 wind buckets (one bucket for every 5 degrees of the full circle's 360 degrees) based on the wind direction from the individual measurements. Further, a weighted methane mean may be calculated for each 5-degree bucket. A value function may be defined as: FeatureContribution[WindDirection]+ActualCh4−PredictedCh4. It should be noted that the function may be weighted with a recency bias. If no wind data is available for a specific wind bucket, the missing data may be filled in by interpolating it from the surrounding buckets for which data is available. In this way, for each of the three air quality monitors, a weightCh4Mean value associated with each 5-degree wind bucket is obtained. These values are represented in the line charts3104A,3104B,3104C and the polar charts3106A,3106B,3106C. In some configurations, the plurality of representative circular normal distributions associated with emission sources may be derived from representative Von Mises distributions using the corresponding (Gaussian) plume models. The Von Mises distributions represent linear relationships between the leak flux (the terms leak and emission may have been used interchangeably in this disclosure) at a given emission source and the expected measured substance concentration at the air quality monitor. The Von Mises distributions consider the distance between the leak source and air quality monitor, the angular distance between wind-direction and source-to-device bearing, and the average wind-speed and atmospheric stability class for each wind-direction bin. Now referring toFIG.32, a process overview diagram of a process3200for quantifying emissions of a target substance at a site is illustrated in accordance with some configurations of the present subject matter.
As already explained above, a first set of on-site parameters may be measured with the first air quality monitor over a period of time to obtain a plurality of individual measurements of each target substance. The plurality of individual measurements of the first set of onsite atmospheric parameters may include a first measured substance concentration of the target substance measured with the first air quality monitor and a first set of individual atmospheric readings. The first measured substance concentration and the first set of individual atmospheric readings may be transmitted to the first server. Further, a regional atmospheric parameter for the site such as a height of the pressure boundary layer (hPRBL) may be procured from a second server. With continued reference toFIG.32at a first step3202, at least one machine learning model associated with the first air quality monitor may be trained to conduct computations for the quantification method. The illustrative machine learning model may be based on a gradient tree-boosting algorithm. A machine learning model may utilize a FastTreeTweedie algorithm in the ML.NET framework. The machine learning models may be used to generate a simulation model which may in turn be used for identifying the emission sources, quantifying the emissions, and isolating correlation between elevated concentrations and atmospheric variables. For example, a machine learning model configured as a tree-based model and a gradient tree-boosting algorithm may be trained with ten leaves and three-hundred trees. Further, the machine learning model may be a hierarchy-based model. For example, as shown inFIG.32, a first hierarchy level may include the hPRBL parameter3204, a second hierarchy level may include the parameters wind direction3206A and wind speed3206B, and a third hierarchy level may include the parameters temperature3208A, time of the day3208B, wind stability class3208C and humidity3208D. Using the simulation model, a plurality of predicted substance concentrations of the target substance corresponding to the first air quality monitor may be obtained using the atmospheric measurements from the air quality monitor (AQM) and other procured atmospheric parameters. A plot of weighted means of the plurality of predicted substance concentrations grouped in a predetermined number of feature groups may be generated. The predetermined number of feature groups together may be representative of feature values over a predetermined range. In some configurations, each feature group may be associated with a wind-direction bucket. As such, a predetermined number of wind-direction buckets together may be representative of wind-directions over a full circle, i.e., wind-directions over 360 degrees. As such, in some configurations, a plot may be generated depicting the weighted means of the plurality of predicted substance concentrations grouped in a predetermined number of wind-direction buckets together with representative wind-directions that together cover a full circle. The plot may depict various different feature groups including the wind-direction buckets. As will be understood, each feature group may make some contribution to the plurality of the model's predicted substance concentrations. Therefore, at step3210, the contribution of each of a plurality of parameters to the model's predicted substance concentrations may be calculated and a graphical representation3212thereby plotted. 
For example, the plurality of parameters (as represented on the y-axis of the graphical representation3212) may include wind speed, wind direction, temperature, pressure, month (i.e., time of the year), humidity, hPRBL and hour (i.e., time of the day). Thus, the graphical representation3212of the predicted contributions of the parameters (features) to substance concentration with respect to time of the year (x-axis) may be plotted. As can be seen, some of the parameters (features) may make higher contributions at a particular time of the year/month. By isolating the contribution of each parameter to the predicted pollutant concentration and leveraging statistical methods used in the training of the regression model, the effects of ambient atmospheric concentrations of the targeted pollutant may be removed. The prediction model relies on the statistical analysis of large amounts of data to train the machine learning model to accurately predict the measured pollutant concentration from the values of other known parameters. The prediction model then uses the machine learning model to determine what portion of the predicted concentration can be attributed to only the wind-direction. With reference now toFIG.32, the relative contribution of each parameter may be obtained from the machine learning model by the analysis of an opposite sub-tree for each decision node associated with the parameter. For example, assessing the wind-direction contribution may involve varying the value associated with a wind-direction while fixing the values associated with the other parameters. Further, assessing the wind-direction contribution may involve obtaining from the simulation model a predicted target substance concentration for each value of wind-direction. Furthermore, assessing the wind-direction contribution may require comparing predicted substance concentrations with measured target substance concentrations and determining the wind-direction contribution from those comparisons. The contribution of each parameter may be adjusted corresponding to the plurality of predicted substance concentrations by using at least one adjustment factor. The plurality of adjusted contribution values may be grouped into a predetermined number of feature groups. For each of the predetermined feature groups, a weighted mean of the plurality of associated predicted substance concentrations may be determined. Further, a mapping may be generated of the weighted mean of the plurality of predicted substance concentrations grouped in each feature group. Further, for each emission source of the plurality of emission sources in a location map of the site, a simulated plume model may be generated based on the wind-direction. The simulated plume model may depend on the various atmospheric conditions prevailing at the site. Further, for each emission source of the plurality of emission sources, a plurality of representative circular normal distributions may be calculated for each air quality monitor. The plurality of representative circular normal distributions may be calculated using the simulated plume model by setting a plurality of presumed flux values to those obtained from the simulated plume model. For example, the plurality of representative circular normal distributions may be derived from representative Von Mises distributions for all of the plurality of emission sources at the site using the corresponding (Gaussian) plume models.
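The wind-direction contribution assessment described above (vary the wind-direction value while fixing the other parameters, then compare predictions) can be approximated as follows. This is a perturbation-style sketch rather than the opposite-sub-tree analysis the disclosure describes, and the model and feature names follow the earlier illustrative sketch.

```python
# Perturbation-style sketch of the wind-direction contribution assessment
# described above: hold the other parameters fixed, sweep wind-direction, and
# attribute the difference between the actual prediction and the sweep-average
# to the observed wind-direction. This is an assumption-laden stand-in, not
# the disclosure's opposite-sub-tree analysis.
import numpy as np

def wind_direction_contribution(model, row, features,
                                sweep_dirs=np.arange(0.0, 360.0, 5.0)):
    """Return the ppm attributed to the observed wind-direction for one
    measurement `row` (a pandas Series holding the feature values)."""
    X = row[features].to_frame().T.astype(float)
    predicted = float(model.predict(X)[0])

    # Baseline: average prediction over all wind-directions with the other
    # parameter values fixed at their observed values.
    X_sweep = X.loc[X.index.repeat(len(sweep_dirs))].copy()
    X_sweep["wind_direction"] = sweep_dirs
    baseline = float(model.predict(X_sweep).mean())

    return predicted - baseline  # contribution of the observed wind-direction
```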
With continued reference toFIG.32, at step3214, an analysis may be performed of the plurality of representative circular normal distributions vis-a-vis the mapping to identify a relevant representative circular normal distribution from the at least one representative circular normal distribution. For example, as already explained above, at step3214, the analysis may be performed of the line charts corresponding to the circular distribution of the features (e.g., wind-direction) and the line graphs depicting the Von Mises distributions. In order to perform the analysis, the line charts and the line graphs are mapped onto each other to identify the most fitting Von Mises representation. This Von Mises representation indicates the target emission source. The distance to the emission source may also be determined using the above analysis and the location map. Further, the total emission of the target substance at the site may be quantified by aggregating the emissions from the plurality of emission sources. FIG.33illustrates a graphical representation of the combination3302of line charts (corresponding to the line charts3104A,3104B,3104C ofFIG.31) and the combination3304of line graphs associated with the graphical representations of the Von Mises distributions (corresponding to the analysis ofFIG.32).FIG.33further shows a graphical representation of the combination3304of line graphs associated with the polar plots3306,3308(corresponding to the polar charts3106A,3106B,3106C ofFIG.31) and the polar graphical representations of the Von Mises distributions (corresponding to the analysis ofFIG.31). In order to perform the analysis, the combination3302of the line charts and the combination3304of line graphs may be mapped onto each other to identify the most fitting Von Mises representations with respect to the line charts. The most fitting or relevant Von Mises representations may be identified from among all of the Von Mises representations. In other words, the best fit is identified between the Von Mises representation associated with each emission source and the respective mapping of the weighted means of the plurality of the predicted substance concentrations. This Von Mises representation indicates the target emission source. Referring toFIG.33, the air quality monitor signal in the NNW direction shows the best fit to its Von Mises representation, which may signify the most accurate identification of the target substance emission source. This analysis may be performed using a plurality (e.g., thousands) of combinations of plumes and adjusting their weights (fluxes) to find the best fit. As such, the simulated plume fluxes may be adjusted across all emission sources to match the elevated concentrations associated with each wind-direction. Alternatively, the heights of the Von Mises representations may be adjusted to fit the line charts. Referring now toFIGS.34A-34B, an air quality monitor minimization method3400for removing at least one air quality monitor from a site is illustrated. The air quality monitor minimization method3400may be performed using the measurements of target substance concentration from the at least one air quality monitor provided at the site and a set of individual atmospheric readings.
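Before turning to the minimization method ofFIGS.34A-34B, the fitting step described above, adjusting plume fluxes (weights) so that the summed Von Mises responses match the bucketed elevated concentrations, may be sketched as a non-negative least-squares problem. The bearings, concentration parameters (kappa) and function names are illustrative assumptions.

```python
# Sketch of the fitting step described above: each candidate emission source
# contributes a Von Mises-shaped response centered on its source-to-monitor
# bearing, and non-negative source weights (fluxes) are adjusted so that the
# weighted sum best matches the bucketed elevated concentrations. Bearings,
# kappas and all names are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls
from scipy.special import i0

def fit_source_fluxes(bucket_angles_deg, elevated_ppm,
                      source_bearings_deg, kappas):
    """Solve for non-negative per-source fluxes whose summed Von Mises
    responses best fit the per-bucket elevated concentrations (least squares)."""
    theta = np.radians(np.asarray(bucket_angles_deg))     # (n_buckets,)
    mus = np.radians(np.asarray(source_bearings_deg))     # (n_sources,)
    kappas = np.asarray(kappas, dtype=float)

    # Design matrix: one Von Mises density column per candidate emission source.
    A = np.stack([np.exp(k * np.cos(theta - mu)) / (2.0 * np.pi * i0(k))
                  for mu, k in zip(mus, kappas)], axis=1)

    fluxes, residual = nnls(A, np.asarray(elevated_ppm, dtype=float))
    best_source = int(np.argmax(fluxes))  # source whose plume fits best
    return fluxes, residual, best_source
```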
The simulation of a plume of the target gas may be generated using a trained machine learning model and may be used to reduce the number of air quality monitors at the site and for the quantification of the total emissions at the site that result from emission/leakage from at least one emissions source present at the site. At step3402, a first air quality monitor may be provided. The first air quality monitor may include a first sensor responsive to the target substance and a first location at which the first air quality monitor is located on the site. At step3404, a first set of attached parameters may be measured with the first air quality monitor over a period of time to obtain a plurality of individual measurements of each parameter of the first set of attached parameters. The plurality of individual measurements of the first set of attached parameters may include a first measured substance concentration of the target substance measured with the first air quality monitor. The plurality of individual measurements of the first set of attached parameters may further include a first set of individual atmospheric readings. The first set of individual atmospheric readings comprises at least one atmospheric reading at the first location selected from the following: a barometric pressure, an air temperature, a humidity level, a wind-direction, or a wind-speed. For example, the wind-direction and the wind speed may be obtained from an anemometer provided on the site. At step3406, the first set of attached parameters may be transmitted to a first server. At step3408, a second air quality monitor may be provided. The second air quality monitor may include a first sensor responsive to the target substance and a second location at which the second air quality monitor may be located. At step3410, a second set of attached parameters may be measured with the second air quality monitor over a period of time to obtain a plurality of individual measurements of each parameter of the second set of attached parameters. At step3412, the second set of attached parameters may be transmitted to the first server. At step3414, a SCADA system418may be provided at the site. The SCADA system418may be connected to at least one device on the site which may be but is not limited to being a first device that may include pressure sensors and a second device that may include pressure vessels, separators, drills and the like. The SCADA system may be configured to monitor and supervise at the least one device and, preferably, physical factors and operational factors of the at least one device. At step3416, a set of SCADA data may be acquired from the at least one device. The set of SCADA data may correspond to historical data on operations of the at least one air quality monitor202and the physical factors and operational factors of the at least one device. At step3418, the set of SCADA data may be transmitted to the first server by the SCADA system418. At step3420, an AQM-minimization machine-learning model may be trained. The AQM-minimization machine-learning model may be a pattern recognition machine learning model, and that model may be configured to identify patterns of emissions occurring on the site. The AQM-minimization machine-learning model may generate a trained AQM-minimization model parameter. The centralized computing unit432may be connected to the first server and may be configured to acquire the trained AQM-minimization model parameter. 
At step3422, using the trained AQM-minimization model parameter, the centralized computing unit432may generate an emission-simulation model. The emission-simulation model may be a digital twin of the real-time emissions occurring on the site and may be configured to predict the emissions at the site. At step3424, the first set of attached parameters, the second set of attached parameters and the set of SCADA data may be monitored over a predefined period of time. The centralized computing unit432may be configured to monitor the first set of attached parameters, the second set of attached parameters and the set of SCADA data to record any change of emissions occurring at the site. The first set of attached parameters, the second set of attached parameters and the set of SCADA data may be updated based on the monitoring. At step3426, based on the updated first set of attached parameters, the second set of attached parameters, and the set of SCADA data, the emission-simulation model may be refined iteratively. At step3428, the refined emission-simulation model may be analyzed to determine a redundant or non-contributing air quality monitor. The refined emission-simulation model may be configured to generate an emission output to predict emissions at the site. The predicted emissions may be analyzed with the set of SCADA data and the at least one set of attached parameters and may be further tracked to identify the air quality monitors located in close proximity to the predicted emissions. The air quality monitors that may not be present in the vicinity of the predicted emissions may be flagged as redundant or non-contributing air quality monitors. At step3430, any of the first air quality monitor and the second air quality monitor flagged as redundant or non-contributing may be removed. Referring now toFIG.35, a flowchart3500for an emission-location method is illustrated. The emission-location method may be performed using the target substance concentration measurements from the at least one air quality monitor provided at the site and a set of individual atmospheric readings. The simulation of a plume of the target gas may be generated using a trained machine learning model and may be used to locate emissions at the site. At step3502, a first air quality monitor may be provided. The first air quality monitor may include a first sensor responsive to the target substance and a first location at which the first air quality monitor is located on the site. At step3504, a first set of attached parameters may be measured with the first air quality monitor over a period of time to obtain a plurality of individual measurements of each parameter of the first set of attached parameters. The plurality of individual measurements of the first set of attached parameters may include a first measured substance concentration of the target substance measured with the first air quality monitor and may further include a first set of individual atmospheric readings. The first set of individual atmospheric readings comprises at least one atmospheric reading at the first location selected from among: a barometric pressure, an air temperature, a humidity level, a wind-direction, or a wind-speed. For example, the wind-direction and the wind speed may be obtained from an anemometer provided on the site. At step3506, the first set of attached parameters may be transmitted to a first server. At step3508, a SCADA system418may be provided at the site.
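Before continuing with the emission-location method ofFIG.35, the redundancy check of steps3428and3430may be sketched as a simple proximity test: monitors that are not within a chosen vicinity of any predicted emission location are flagged as candidates for removal. The distance threshold and the data shapes are assumptions.

```python
# Sketch of the redundancy check described above. The vicinity threshold and
# the coordinate layout (x, y in meters) are illustrative assumptions.
import numpy as np

def flag_redundant_monitors(monitor_xy, predicted_emission_xy, vicinity_m=50.0):
    """Return a boolean array, True where a monitor is redundant/non-contributing."""
    monitor_xy = np.asarray(monitor_xy, dtype=float)            # (n_monitors, 2)
    emissions = np.asarray(predicted_emission_xy, dtype=float)  # (n_emissions, 2)
    if emissions.size == 0:
        # No predicted emissions near any monitor: all are non-contributing.
        return np.ones(len(monitor_xy), dtype=bool)
    # Distance from every monitor to every predicted emission location.
    d = np.linalg.norm(monitor_xy[:, None, :] - emissions[None, :, :], axis=-1)
    return d.min(axis=1) > vicinity_m

# Usage sketch: keep = ~flag_redundant_monitors(aqm_coords, emission_coords)
```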
The SCADA system418may be connected to at least one device on the site, such as, but not limited to, a first device that may include pressure sensors and a second device that may include pressure vessels, separators, drills, and the like. The SCADA system may be configured to monitor and supervise the at least one device and, preferably, physical factors and operational factors of the at least one device. At step3510, a set of SCADA data may be acquired from the at least one device. The set of SCADA data may correspond to historical data on operations of the at least one air quality monitor202and the physical factors and operational factors of the at least one device. At step3512, the set of SCADA data may be transmitted to the first server by the SCADA system418. At step3514, an emissions-location machine learning model may be trained. The centralized computing unit432connected to the first server may be configured to acquire the first set of attached parameters and the set of SCADA data to train the emissions-location machine learning model. The machine-learning models may be based on a gradient tree-boosting algorithm. In particular, the machine-learning models may utilize a FastTreeTweedie algorithm in the ML.NET framework. Alternative machine learning models such as a simple-stress regression model could be used, but the gradient tree-boosting (decision tree) ensembles may provide better performance and may therefore be preferred. Further, other alternative machine learning models may include common regression models, linear regression models (e.g., ordinary least squares, gradient descent, regularization), decision trees and tree ensembles (e.g., random forest, bagging, boosting), generalized additive models, support vector machines, artificial neural networks, etc. The output generated from the trained emissions-location machine learning model may be a first trained emissions-model parameter. At step3516, using the first trained emissions-model parameter, an emissions-simulation model may be generated. The centralized computing unit432may be connected to the first server and may be configured to acquire the first trained emissions-model parameter. The emissions-simulation model may be a digital twin of the real-time emissions occurring on the site and may be configured to predict the emissions at the site. At step3518, the first set of attached parameters, the second set of attached parameters and the set of SCADA data may be monitored over a predefined period of time. The centralized computing unit432may be configured to monitor the first set of attached parameters, the second set of attached parameters and the set of SCADA data and to record any change of real-time emissions occurring on the site. Based on the monitoring, the first set of attached parameters, the second set of attached parameters and the set of SCADA data may be updated. At step3520, based on the updated first set of attached parameters, the second set of attached parameters and the set of SCADA data, the emissions-simulation model may be refined iteratively. At step3522, the refined emissions-simulation model may be analyzed with the set of SCADA data and the first set of attached parameters to determine the location of the emissions. FIG.36illustrates a flowchart3600of an operating emissions quantification method for quantifying emissions of a target substance at a site. At step3602, a first air quality monitor may be provided.
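As an aside on the model choices noted above (before continuing with the quantification method ofFIG.36), a cross-validated comparison is one hedged way to confirm on site data that a gradient tree-boosting ensemble outperforms a simpler regression baseline. The data columns follow the earlier illustrative sketches and are assumptions.

```python
# Hedged comparison of a linear baseline against a tweedie gradient-boosted
# ensemble, reflecting the preference stated above. Feature/target columns
# are assumptions carried over from the earlier sketches.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
import lightgbm as lgb

def compare_models(X, y, cv=5):
    """Return mean cross-validated R^2 for a linear baseline and a
    tweedie gradient-boosted regressor on the same site data."""
    linear = LinearRegression()
    boosted = lgb.LGBMRegressor(objective="tweedie",
                                n_estimators=300, num_leaves=10)
    return {
        "linear_regression": cross_val_score(linear, X, y, cv=cv).mean(),
        "gradient_boosting": cross_val_score(boosted, X, y, cv=cv).mean(),
    }
```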
The first air quality monitor may include a first sensor responsive to the target substance and a first location at which the first air quality monitor is located on the site. At step3604, a first set of attached parameters may be measured with the first air quality monitor over a period of time to obtain a plurality of individual measurements of each parameter of the first set of attached parameters. The plurality of individual measurements of the first set of attached parameters may include a first measured substance concentration of the target substance measured with the first air quality monitor and may further include a first set of individual atmospheric readings. The first set of individual atmospheric readings comprises at least one atmospheric reading at the first location selected from among a barometric pressure, an air temperature, a humidity level, a wind-direction and a wind-speed. For example, the wind-direction and the wind speed may be obtained from an anemometer provided on the site. At step3606, the first set of attached parameters may be transmitted to a first server. At step3608, a SCADA system418may be provided at the site. The SCADA system418may be connected to at least one device on the site, such as, but not limited to, a first device that may include pressure sensors and a second device that may include pressure vessels, separators, drills and the like. The SCADA system may be configured to monitor and supervise the at least one device and, preferably, physical factors and operational factors of the at least one device. With continued reference toFIG.36, at step3610, a set of SCADA data may be acquired from the at least one device. The set of SCADA data may correspond to historical data on operations of the at least one air quality monitor202and the physical factors and operational factors of the at least one device. At step3612, the set of SCADA data may be transmitted to the first server by the SCADA system418. With continued reference toFIG.36, at step3614, an emissions-quantification machine learning model may be trained. The centralized computing unit432connected to the first server may be configured to acquire the first set of attached parameters and the set of SCADA data to train the emissions-quantification machine-learning model. The machine-learning models may be based on a gradient tree-boosting algorithm. In particular, the machine-learning models may utilize a FastTreeTweedie algorithm in the ML.NET framework. Alternative machine learning models such as a simple-stress regression model could be used, but the gradient tree-boosting (decision tree) ensembles may provide better performance and may therefore be preferred. Further, other alternative machine learning models may include common regression models, linear regression models (e.g., ordinary least squares, gradient descent, regularization), decision trees and tree ensembles (e.g., random forest, bagging, boosting), generalized additive models, support vector machines, artificial neural networks, etc. The output generated from the trained emissions-quantification machine learning model may be a first trained emissions-model parameter. At step3616, using the first trained emissions-model parameter, an emissions quantification-simulation model may be generated.
The centralized computing unit432may be connected to the first server and may be configured to acquire the trained emissions-quantification model parameter so as to train the emission quantification simulation model. At step3618, the first set of attached parameters, the second set of attached parameters and the set of SCADA data may be monitored over a predefined period of time. The centralized computing unit432may be configured to monitor the first set of attached parameters, the second set of attached parameters and the set of SCADA data to record any change of real-time emissions occurring in the site. Based on the monitoring, the first set of attached parameters, the second set of attached parameters and the set of SCADA data may be updated. At step3620, based on the updated first set of attached parameters, the second set of attached parameters and the set of SCADA data, the emissions-quantification simulation model may be refined iteratively. At step3622, the refined emissions-quantification simulation model may be analyzed to determine the location of the emissions. Further, at step3624, the emissions from the located emissions sources may be quantified based on the refined emission-quantification simulation model, the set of SCADA data and the trained emission-quantification model parameter. In an alternative configuration, the at least one set of attached parameters, the set of SCADA data, the atmospheric readings and the output from the aforementioned simulation models may be encrypted using blockchain technology. Blockchain technology is an advanced database mechanism that allows transparent information sharing in a network through the use of distributed ledgers. In a process control system for an oil site, a distributed ledger may be maintained by nodes. The nodes may receive transactions, i.e., the sharing of data between field devices such as one or more sensors connected to the compressors, separator units, pumpjacks, controllers, operator workstations or other devices operating within the oil site. In some scenarios, additionally, the transactions may involve process parameter values such as operational factors and physical factors. The transactions may be broadcast to the distributed ledgers. The recorded process parameter values and product parameter values may then be retrieved to verify the emissions occurring at the site. Additionally, regulatory data may be recorded in the distributed ledger. For example, in response to a triggering event such as an alarm, an error, a leak, a repair event, a process milestone, a corrective action, etc., process control elements such as field devices or controllers may generate transactions that include data from the triggering event such as the time at which the event occurred, the duration of the event, process parameter values for process plant entities involved in the event, product parameter values for products involved in the event, etc. The regulatory data would then be recorded in the distributed ledger so that regulatory agencies can review the data. The distributed ledgers may be utilized to execute smart contracts. Process control systems can deploy smart contracts to the distributed ledger to exchange value as might be done upon the receipt of quantified emission data. Smart contracts may also be deployed to the distributed ledger to allow machines such as field devices to transact by themselves, i.e., exchange data therebetween without human intervention. 
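Before the smart-contract example that follows, the append-only, tamper-evident record keeping described above may be sketched with a minimal hash-chained ledger. This is an illustrative stand-in only, not a full distributed-ledger or smart-contract implementation, and the transaction fields are assumptions.

```python
# Minimal sketch of tamper-evident record keeping: each block hashes the
# previous block, so altering a recorded emission transaction invalidates the
# chain. Field names and the genesis block are illustrative assumptions.
import hashlib
import json
import time

class EmissionLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64,
                       "data": "genesis", "timestamp": 0.0}]

    @staticmethod
    def _hash(block):
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()

    def record(self, transaction):
        """Append a transaction (e.g., a triggering event with its time,
        duration, and process parameter values) as a new block."""
        block = {"index": len(self.chain),
                 "prev_hash": self._hash(self.chain[-1]),
                 "data": transaction,
                 "timestamp": time.time()}
        self.chain.append(block)

    def verify(self):
        """True if no recorded block has been altered since it was appended."""
        return all(self.chain[i]["prev_hash"] == self._hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

# Usage sketch:
# ledger = EmissionLedger()
# ledger.record({"event": "leak", "site": "pad-7", "quantified_kg_h": 1.8})
# assert ledger.verify()
```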
For example, according to the terms of a smart contract, a computing device on a first oil site may automatically provide a predetermined token amount to a computing device on a second oil site upon receiving indications from one or more field devices on the first oil site that an emission has been recorded. By utilizing distributed ledgers on an oil site and, in some scenarios, smart contracts, each process plant or a network of process plants may provide a trusted, secure, and immutable record of transactions within the oil site. The secure, immutable, and trustless nature of distributed ledgers is particularly important for process control systems since cyber intrusions may lead to damage, destruction and/or not only the loss of the at least one device at the site but also the loss of human life. Due to the difficulty of changing the recorded data in the distributed ledgers, the at least one set of attached parameters, the set of SCADA data, the set of atmospheric readings and the output from the aforementioned simulation model may be encrypted and subsequently decrypted using a unique key which may be available to the systems engaged in the transaction. In another alternative configuration, at least one maximum power point tracking (MPPT) controller may be included. For example, an MPPT controller may be included in an air quality monitor via a roll-over procedure. In case of a failure of one of the MPPT controllers, another of the MPPT controllers may send a ticket to the operator to replace the failed MPPT controller. Further, in some configurations, the battery voltage may be connected to an interface (input/output (I/O)) pin of a microcontroller to enable transmission of the battery health data to the operator. Moreover, a change in the energy level of the battery may be signaled. For example, if the air quality monitor system is deployed with a signal-booster that has a power status level, the controller could sense the low power status of the battery and therefore decrease or altogether turn off the signal boosting. In some example configurations, a cellular booster may be used. Such boosters may be provided by any of a variety of vendors such as Wilson Electronics, LLC of Cottonwood Heights, Utah, USA. At least one of the components at the oil site, or in a monitored section of the oil site, may have an underlying physical and/or operational issue, such as, for example, a technical error or worn equipment, that may eventually end in failure of the component. Therefore, a forthcoming emission resulting from the underlying issues may not be detected due to the absence of air quality monitors at the monitored site. In an alternative configuration, reference is now made toFIG.37, which illustrates a schematic layout3700of an oil site (interchangeably referred to as the "monitored site") after removal of the air quality monitor. Further, the oil site may include at least one component, for example, a pumpjack3702, a chemical tank3704, a production tank3706, a separator unit3708, and a compressor unit3710. As explained earlier, any deformity or underlying technical issue in these components may result in events such as an emission.
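Returning briefly to the power-management behaviour described above (battery voltage read from an I/O pin, signal boosting reduced or turned off at low charge, and a ticket raised when an MPPT controller fails), one pass of that loop might look as follows. The voltage thresholds, the pin-reading callable and the ticket hook are assumptions, not part of the disclosure.

```python
# Sketch of one pass of the monitor's power-management loop. Thresholds,
# the ADC/pin reader, booster control and ticketing hook are assumptions.
def manage_power(read_battery_volts, mppt_ok, set_booster_level, send_ticket,
                 low_v=11.8, critical_v=11.2):
    """read_battery_volts: callable returning volts from the I/O pin;
    mppt_ok: {controller_name: bool}; set_booster_level: callable taking 0..1;
    send_ticket: callable taking a message string."""
    for name, ok in mppt_ok.items():
        if not ok:
            # A surviving controller (or the monitor itself) reports the failure.
            send_ticket(f"Replace failed MPPT controller: {name}")

    volts = read_battery_volts()      # battery health read via the I/O pin
    if volts < critical_v:
        set_booster_level(0.0)        # turn signal boosting off entirely
    elif volts < low_v:
        set_booster_level(0.5)        # reduce boosting to conserve power
    else:
        set_booster_level(1.0)
    return volts
```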
In an illustrative configuration, the oil site may include at least one event detection device, or an event monitor, embedded on a plurality of aerial monitoring devices3714a,3714b(hereinafter commonly referred to as aerial monitoring devices3714), a plurality of sensor posts3716a,3716b(hereinafter commonly referred to as sensor posts3716), and a sound detecting device (not shown in the figure). The event detection device may be independently established or may be assembled and connected to the air quality monitor102. When assembled within the optional air quality monitor102, the event detection device may be hard-wired to the communications circuit installed therein. Further, when the event detection device is independently installed at the oil site, the event detection device may be wirelessly connected to the communications circuit of the optional air quality monitor102. With continued reference toFIG.37, in one configuration, the aerial monitoring devices3714may include an unmanned aerial vehicle such as a remote-controlled drone, which may hover above and monitor the oil site. As described earlier, the aerial monitoring devices3714may be equipped with event detection devices that may include an image-capturing device such as a camera and/or a sound sensor such as a microphone for sensing alarm systems installed separately or on the event detection device. In another configuration, the sensor posts3716and the optional air quality monitor102installed at the monitored site may be equipped with the image-capturing device and the sound sensor. In one configuration, the event detection device may be configured to detect at least one event occurring at the monitored site. The at least one event may include any human activity, for example, a maintenance activity occurring at the monitored site, or an emission that is occurring or may have already occurred. Such events may be dependent or related; for example, human activity may result from the occurrence of the emissions. In response to the occurrence of these events, the event detection device may initiate sensing the activities and may further generate a set of event parameters in accordance with the sensed events. For example, with continued reference toFIG.37, a maintenance activity performed by site operators present at the site is illustrated, highlighted by the box3720. This maintenance activity may be a result of emissions from any one of the components, such as the opening of or damage to the maintenance hatch3718. Therefore, the event detection device may be configured to sense the maintenance activity and generate the set of event parameters, such as video footage of the emissions from the maintenance hatch3718and of the site operators, using at least one event detection device equipped on the aerial monitoring devices3714and the sensor posts3716. Further, the illustrative sound sensor embedded on the aerial monitoring devices3714and the sensor posts3716may also sense sound emitted due to failure in the components or due to the maintenance activity. Along with the event parameters, the event detection device may also optionally include the optional air quality monitor102to sense the amount of particulate matter in the atmosphere due to the emissions. The sensed set of event parameters may be stored in a central repository, or a database, connected to the first server. In another configuration, the first server may also receive the physical and operational factors from the SCADA system installed at the oil site.
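A container such as the following could hold the set of event parameters generated by the event detection devices described above (drone cameras, sensor-post microphones, the optional air quality monitor) before storage in the central repository connected to the first server. The field names are illustrative assumptions, not the disclosure's schema.

```python
# Illustrative record for one sensed event; all field names are assumptions.
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class EventParameters:
    site_id: str
    source_device: str                    # e.g., "aerial-3714a" or "sensor-post-3716b"
    event_type: str                       # e.g., "maintenance_activity" or "emission"
    video_uri: Optional[str] = None       # footage of, e.g., an open maintenance hatch
    sound_level_db: Optional[float] = None
    particulate_ppm: Optional[float] = None  # from the optional air quality monitor
    timestamp: float = field(default_factory=time.time)
```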
In this configuration, the physical and operational factors as sensed by the SCADA system may be associated with the components at the monitored site. The physical factors may correspond to any physical anomaly of the components, such as repeated opening of a maintenance hatch, a change in orientation of an access portal to the component, physical damage to the components, and the like. The operational parameters, as explained earlier, may include pressure, volume, density, temperature, and flow rate of the fluid which may be processed in the components, as well as operational boundaries of the components. In one illustrative configuration, reference is now made toFIG.38, which illustrates an exemplary layout3800of an oil site connected to refineries and a locality. In this configuration, the oil site3802may be connected to a natural gas refinery3804and a crude oil refinery3806. The oil site3802, the natural gas refinery3804, and the crude oil refinery3806may be further connected to a locality3808through a rail network3810, a road network3812or a pipeline network3814. Now, the installation of the oil site3802may impact weather characteristics in and around the areas surrounding the oil site3802, such as frequent flooding of the plains, soil contamination, or depletion of air quality. Further, such weather characteristics may be reported by various atmospheric sensors, or even by satellites monitoring the area. These weather characteristics may be stored in the second server. In one configuration, the centralized computing unit432as illustrated earlier may be connected to the first server and the second server. The centralized computing unit432may obtain the set of event parameters, the set of SCADA data (interchangeably referred to as the set of SCADA parameters), as well as the weather characteristics from the first server and the second server, respectively. In another configuration, the centralized computing unit432may be embedded with a machine learning platform, which may be configured to perform on-device prediction, training, example collection, and/or other machine-learning tasks or functionality. The machine learning functions may include an emission-prediction-machine-learning model. As explained earlier, the machine learning models may be based on a gradient tree-boosting algorithm, a FastTreeTweedie algorithm in the ML.NET framework, or regression models. Further, other alternative machine learning models may include common regression models, linear regression models (e.g., ordinary least squares, gradient descent, regularization), decision trees and tree ensembles (e.g., random forest, bagging, boosting), generalized additive models, support vector machines, and artificial neural networks, among others. The machine learning platform may use the set of event parameters, the set of SCADA data, and the atmospheric readings to train a predictive model to predict emissions occurring from any component installed at the site, thereby producing a trained emission-prediction-machine-learning model. Particularly, the predictive model may include an emission-prediction-machine-learning model. The emission-prediction-machine-learning model, when implemented, may function as an emission prediction system configured to perform an emission prediction method for predicting different or new emissions occurring from at least one component at the site.
Further, the emission-prediction-machine-learning model may also be trained using an ontology of the oil processing, i.e., with data related to the types and ratings of the components and their corresponding overhauling or maintenance codes. Referring now toFIG.39, a process layout3900for training the emission-prediction-machine-learning model is illustrated. The process may be initiated by the machine learning platform at block3902. Further, the machine learning platform may include a data ingesting block3904, which may be configured to receive historical data, i.e., the set of event parameters, the set of SCADA data, and the atmospheric readings related to an event that occurred in the past. For example, the chemical tank3704may experience a volumetric change, or a variation in pressure of the fluid stored therein, in the event of a leak resulting in an emission. Therefore, the variation in volume or pressure may be sensed by the SCADA system as operational factors, along with physical characteristics, such as a deformity in the chemical tank3704, as physical factors. Further, the particulate matter in the surrounding atmosphere may be sensed by the event detection device. Further, video footage of any activity at the site, such as repair of or deformity in the chemical tank3704, may be received by the data ingesting block3904. The emission-prediction-machine-learning model may be trained with such sensed parameters, before and after the occurrence of the event. The machine learning platform may include an anomaly model3906. The anomaly model3906may flag the event of the leak as an anomaly event with all corresponding parameters or inputs, before and after the leak has occurred. Further, the anomaly event along with the associated parameters may be stored in the auxiliary database3908. In one configuration, after a predefined time period, the anomaly model3906may be configured to regularly receive the set of event parameters, the set of SCADA data, and the atmospheric readings and, based on the analysis, may further store these parameters in the auxiliary database3908as a refined set of event parameters for every component, such as a refined first set of event parameters for a first component and a refined second set of event parameters for a second component, together with their refined sets of SCADA parameters. As may be appreciated, the anomaly model3906may also receive the set of event parameters, physical parameters, and operational factors from multiple oil sites distributed in the same country, or globally, via long-range communications such as satellite communications, and may further store them in the auxiliary database3908. Therefore, the auxiliary database3908may be formed as a robust database with multiple anomalies that may have been detected and resolved at various oil sites distributed worldwide. Based on the events that may be flagged as an anomaly, along with the set of event parameters, the set of SCADA data, and the set of atmospheric readings from various oil sites globally from the auxiliary database3908, the predictive model or the emission-prediction-machine-learning model may be trained by the ML Predictive trainer3910. The ML Predictive trainer3910may be configured to receive data from the auxiliary database3908to provide a robust trained model to predict emissions, or an emissions event fugitively associated with the components.
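The anomaly-flagging step performed by the anomaly model3906may be sketched, under simple assumptions, as a trailing-window deviation test whose flagged events are appended to the auxiliary database3908together with their associated parameters. The z-score test, the window length and the storage format are assumptions, not the disclosure's implementation.

```python
# Hedged sketch of anomaly flagging and storage for training data; the
# trailing-window z-score test and record layout are assumptions.
import numpy as np

def flag_anomalies(readings, window=50, z_threshold=3.0):
    """Return indices of readings whose deviation from the trailing-window
    mean exceeds z_threshold standard deviations (candidate anomaly events)."""
    readings = np.asarray(readings, dtype=float)
    flagged = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        std = hist.std()
        if std > 0 and abs(readings[i] - hist.mean()) / std > z_threshold:
            flagged.append(i)
    return flagged

def store_anomaly(auxiliary_db, event_params, scada_params, atmos_readings):
    """Append one flagged anomaly with its associated parameters
    (observed before and after the event) to the auxiliary database."""
    auxiliary_db.append({"event": event_params,
                         "scada": scada_params,
                         "atmospheric": atmos_readings})
```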
As such, the emission-prediction-machine-learning model may be refined regularly by the ML Predictive trainer3910in response to receiving the set of event parameters, the set of SCADA data, and the set of atmospheric readings iteratively over a predefined time period to generate a refined emission-prediction-machine-learning model. The emission-prediction-machine-learning model, or the refined emission-prediction-machine-learning model, may be configured to generate a predicted parameter, or a predicted emissions parameter, associated with the components at the site. Further, when the emission-prediction-machine-learning model is iteratively refined, the refined emission-prediction-machine-learning model may also generate a refined predicted emissions parameter. In one configuration, with continued reference toFIG.39, during present or ongoing processing at the monitored site, the centralized computing unit432monitoring the SCADA data may notice a variation or change in the operational factors of a component at the site due to an underlying technical issue or deformity therein. Therefore, in response, the centralized computing unit432may be configured to analyze the refined predicted emissions parameter together with the change in the operational parameters, i.e., identify the cause of the variation or change in the operational factors of the component that may build up to an emission, and predict emissions according to the cause to validate a forthcoming emission fugitively associated with the component. Therefore, the forthcoming emission, or the predicted emission, may be displayed on an ML dashboard3912installed at the oil site. The refined predicted emissions parameter may include various build-up events to emissions, a number of potential emission sources, and the emission flux, or source flux, associated with the at least one potential emission source, along with the locations of the potential emission sources. Therefore, in response to determining the predicted emissions, the refined predicted emissions parameter may be compared to a set of rules. The rules may be as elementary as a threshold check against the predicted emissions parameter or may be more complicated and derived over time. The rules may be set primarily to limit the predicted emissions and may be established by the centralized computing unit432. After the comparison, a forthcoming breach of the rules may be determined, for example, that the predicted emissions from the predicted emissions parameter may cross a threshold, and the like. To prevent the forthcoming breach of the rules, the centralized computing unit432may determine an appropriate action to abort the forthcoming breach. The action may be determined by the machine learning platform embedded in the centralized computing unit432using prescriptive analysis. The prescriptive analysis may determine or suggest options for aborting the predicted, or forthcoming, events such as emissions. The prescriptive analysis may be performed by the machine learning platform embedded in the centralized computing unit432using a prescriptive model, thereby forming an event aborting system. The prescriptive model may be configured to select the best course of action based on an input of the predicted emissions parameters. Referring toFIG.40, which illustrates a process layout4000, the prescriptive model may perform an event aborting method using prescriptive analysis. In one configuration, the input4002may be configured to receive the refined predicted emissions parameter.
The action agent4006, in response to the received input, may be configured to analyze the extent of the anomaly associated with an event. For example, the action agent4006may be configured to analyze a deviation of the refined predicted emissions parameter from the set of rules and may assign an action associated with the deviation. Accordingly, a deviation threshold may be set against the deviation. If the deviation of the refined predicted emissions parameter associated with the component extends beyond the deviation threshold, the action agent4006may be configured to assign an action for a complete shutdown of the component. In one configuration, the assignment, or instruction, for a complete shutdown action may be transmitted to the SCADA system. Upon receipt of the instruction, the SCADA system may be configured to shut down the components or cease the operation of the components. In addition, when the deviation of the refined predicted emissions parameter associated with the component does not extend beyond the deviation threshold, the action agent4006may be configured to assign a maintenance activity to repair the component, of which a site operator may be notified. The actions assigned by the action agent4006may be simulated in an environment4004to predict any implications of implementing the action. For example, in a simulation created using the refined predicted emissions parameters, the assigned action may be evaluated to determine a simulated parameter indicative of a result of implementing the assigned action. Therefore, the actions, followed by a minimal maintenance activity, may be recommended to a site operator via the output4008. While many different repair actions may take place, one action may require a vehicle to be activated to bring a repair technician to the rural oil facility to repair the leak. It is noted that any of the activities herein that are manual may have an underlying machine instruction associated with the action leading up to the manual activity; this machine instruction may come in the form of a ticket, job order, text message, email, report, etc. indicating that the action is to be implemented. With continued reference toFIG.40, actions implemented against the predicted emissions may not be restricted to implementing a basic maintenance activity or removing the component from the site. The machine learning platform, using prescriptive analysis, may generate a component modification system that implements a component modification method to suggest a design review of the oil site, which may include replacement of, or additional installations of, components at the site. In one configuration, the centralized computing unit432may be configured to generate a digital simulation model of the site using the set of event parameters and the SCADA data. Further, the digital simulation model may be configured to generate a digital simulation model parameter, which may further include a Computer-Aided-Drafting (CAD) based set of frames such as 3-dimensional image frames, isometric frames or 2-dimensional images of the oil site. These image frames may include a simulated set of components and the simulated operational factors associated therewith, and may be digitally created using the set of event parameters, the SCADA data, and the atmospheric parameters.
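The action-agent logic described at the start of this passage (compare the deviation of the refined predicted emissions parameter against a deviation threshold, then assign either a complete shutdown transmitted to the SCADA system or a maintenance activity notified to the operator) may be sketched as follows. The thresholds and the SCADA/notification hooks are assumptions.

```python
# Hedged sketch of the action-agent decision; thresholds and hooks are assumptions.
def assign_action(predicted_emission_ppm, rule_threshold_ppm,
                  deviation_threshold_ppm, scada_shutdown, notify_operator,
                  component_id):
    """Return the action assigned for one component's predicted emissions."""
    deviation = predicted_emission_ppm - rule_threshold_ppm
    if deviation <= 0:
        return "no_action"                 # no forthcoming breach of the rules
    if deviation > deviation_threshold_ppm:
        scada_shutdown(component_id)       # SCADA ceases operation of the component
        return "complete_shutdown"
    notify_operator(component_id,
                    "schedule maintenance to repair the component")
    return "maintenance"
```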
As may be appreciated, upon receiving the refined set of event parameters, the refined set of SCADA data, and the atmospheric parameters, the digital simulation model may be refined into a refined digital simulation model to generate a refined digital simulation model parameter. Referring toFIG.41, which illustrates a schematic layout4100of a digital twin4102of the oil site, the digital twin4102of the oil site may be created using the digital simulation model parameter. After creation of the digital twin, the centralized computing unit432may be configured to virtually implement or simulate the refined predicted emissions parameter thereon. Based on the simulation, the centralized computing unit432may identify the design, or any anomaly within the design, that may be a cause of forthcoming events such as an emission. For example, referring toFIG.41, a fluid supply from a pipeline4104to a processing station4106may be illustrated by the digital twin. However, when the refined predicted emissions parameter is simulated with the digital twin, a drop in pressure at the processing station4106may be detected based on simulating the digital twin4102with the build-up events from the refined predicted emissions parameter. Therefore, the position of, pipeline connection to, or supply to the processing station4106may be flagged as a design anomaly, and a forthcoming event associated with the processing station4106may be determined accordingly. With reference toFIG.42, which illustrates a schematic layout4200of the digital twin4102of the oil site, as explained earlier, the digital twin4102may be modified into a modified digital twin4202in response to the determination of the forthcoming leak. It may be seen that the pipeline4104may be redesigned to switch the fluid supply to another processing station4204. This modified digital twin4202may be implemented in a real-world oil site, particularly by imitating its design in forthcoming oil sites, and any deviation occurring between the digital twin and the forthcoming oil site may be sensed. Therefore, post implementation, a roll-back procedure may be executed in which the digital twin may be modified iteratively with the sensed deviation to reduce maintenance or operational expenditure in addition to reducing downtime of the site accordingly. Furthermore, the digital twin4102may be generated with the weather characteristics received from the second server to simulate weather characteristics in the areas surrounding the oil site, such as changes in soil quality, oil leaks resulting in contamination of soil, or excessive flooding of terrain (indicative of a low-lying region). Therefore, any changes in weather patterns and their influence on the forthcoming oil site may be determined. As a result, expenditure involved in maintenance after installation of the oil site may be reduced by changing the installation to a location better suited to the forthcoming oil site. Therefore, in addition to minimizing forthcoming leaks, any modification of the digital twin4102may be shared globally with oil sites and with the institutions that design the oil sites. As such, modifications to the digital twin, when collated with data received from real-time implementation of the oil site, may create an efficient design along with the requisite components therein, thereby preventing any faulty installation or over-installation of components at the oil site. In this manner, the design reviews of the site may also reduce capital expenditure on the installation of the oil site.
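The digital-twin check described above, simulating the build-up events from the refined predicted emissions parameter against the twin and flagging components such as the processing station4106whose simulated pressure falls out of range, may be sketched as follows. The twin's data structure and the pressure limit are illustrative assumptions.

```python
# Hedged sketch of flagging design anomalies on a simplified digital twin;
# the twin layout and the minimum pressure limit are assumptions.
def flag_design_anomalies(twin, build_up_events, min_pressure_kpa=200.0):
    """twin: {component_id: {"pressure_kpa": float, ...}};
    build_up_events: [{"component": id, "pressure_drop_kpa": float}, ...]."""
    simulated = {cid: dict(state) for cid, state in twin.items()}
    for event in build_up_events:
        cid = event["component"]
        if cid in simulated:
            simulated[cid]["pressure_kpa"] -= event.get("pressure_drop_kpa", 0.0)
    # Components whose simulated pressure falls below the design limit are
    # flagged as design anomalies (e.g., the processing station supplied by
    # the pipeline in the example above).
    return [cid for cid, state in simulated.items()
            if state["pressure_kpa"] < min_pressure_kpa]

# Usage sketch:
# flag_design_anomalies({"station-4106": {"pressure_kpa": 350.0}},
#                       [{"component": "station-4106", "pressure_drop_kpa": 180.0}])
```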
With continued reference toFIG.37, specific details are given in the above description to provide a thorough understanding of the configurations. However, it is understood that the configurations may be practiced without these specific details. For example, circuits may be shown in block diagrams in order not to obscure the configurations in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures and techniques may be shown without unnecessary detail in order to avoid obscuring the configurations. Also, it is noted that the configurations may be described as a process which is depicted as a flowchart, a flow diagram, a swim diagram, a data flow diagram, a structure diagram or a block diagram. Although a depiction may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process is terminated when its operations are completed but could have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function. For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine-readable medium tangibly embodying instructions may be used in the implementation of the methodologies described herein. For example, software codes may be stored in a memory. Memory may be implemented within the processor or external to the processor. As used herein the term “memory” refers to any type of long term, short term, volatile, nonvolatile or other storage medium and is not to be limited to any particular type of memory, number of memories or type of media upon which memory is stored. The controllers, computing devices, server devices and other components of systems can include machine-readable media and at least one processor, programmable logic controllers or logic control unit, distributed control systems, secure processors, memory and the like. Secure storage may also be implemented as a secure flash memory, secure serial EEPROM, secure field programmable gate array or secure application-specific integrated circuit. Processors can be standard central processing units or secure processors. Secure processors can be special-purpose processors that can withstand sophisticated attacks that attempt to extract data or programming logic. A secure processor may not have debugging pins that enable an external debugger to monitor the secure processor's execution or registers. In other configurations, the system may employ a secure field programmable gate array, a smartcard or other secure devices. Other types of computing devices can also be used. Memory can include standard memory, secure memory, or a combination of both memory types. By employing a secure processor and/or secure memory, the system can ensure that both data and instructions are highly secure. Memory can be incorporated into the other components of the controller system and can store computer-executable or processor-executable instructions including routines executed by a programmable computing device. In some configurations, the memory can store programs for preset configurations. 
Stored programs (e.g., simulation programs, calibration programs, graphic mapping programs, etc.) can be modified by a subject, operator or remote manager to provide flexibility. The present disclosure contemplates methods, systems, and program products on any machine-readable media for accomplishing various operations. The configurations of the present disclosure may be implemented using existing computer processors or by a special-purpose computer processor for an appropriate system incorporated for this or another purpose or by a hardwired system. Configurations within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a machine, the machine properly views the connection as a machine-readable medium. Thus, any such connection is properly termed a machine-readable medium. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general-purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. The machine-readable media can be part of sensors, computing devices or other components disclosed herein. Unless the word “or” is expressly limited to mean only a single item exclusive from the other items in reference to a list of two or more items, then the use of “or” in such a list is to be interpreted as including (a) any single item in the list, (b) all of the items in the list or (c) any combination of the items in the list. The term “comprising” is used throughout to mean including at least the recited feature(s) such that any greater number of the same feature and/or additional types of other features are not precluded. It will also be appreciated that specific configurations have been described herein for purposes of illustration but that various modifications may be made without deviating from the technology. Further, while advantages associated with certain configuration of the technology have been described in the context of those configuration, other configurations may also exhibit such advantages, and not all configurations necessarily need to exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other configurations not expressly shown or described herein. 
In general, in the following claims, the terms used should not be construed to limit the claims to the specific configuration disclosed in the specification and the claims but should be construed to include all possible configuration along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure. As used herein, the term ‘intermittent’ (and related variants such as, for example, ‘intermittently’) refers to something that happens at irregular or occasional intervals, not continuous or constant. The term can be used in various contexts, such as in telemetry to describe the transmission of data that alternates periods of transmission with radio-silence. Intermittent is often used to describe a characteristic of a system or process that is not consistent or steady, but rather occurs in stops and starts. Implementation of the techniques, blocks, steps and means described above may be done in various ways. For example, these techniques, blocks, steps and means may be implemented in hardware, software or a combination thereof. For a digital hardware implementation, the processing units may be implemented within at least one application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described above and/or a combination thereof. For analog circuits, they can be implemented with discreet components or using monolithic microwave integrated circuit (MMIC), radio frequency integrated circuit (RFIC) and/or micro electro-mechanical systems (MEMS) technologies. Furthermore, configurations may be implemented by hardware, software, scripting languages, firmware, middleware, microcode, hardware description languages and/or any combination thereof. When implemented in software, firmware, middleware, scripting language and/or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium such as a storage medium. A code segment or machine-executable instruction may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a script, a class or any combination of instructions, data structures and/or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters and/or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. The methods, systems, devices, graphs and/or tables discussed herein are examples. Various configurations may omit, substitute or add various procedures or components as appropriate. For instance, in alternative configurations, the methods may be performed in an order different from that described, and/or various stages may be added, omitted and/or combined. Also, features described with respect to certain configurations may be combined in various other configurations. Different aspects and elements of the configurations may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples and do not limit the scope of the disclosure or claims. 
Additionally, the techniques discussed herein may provide differing results with different types of context awareness classifiers. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly or conventionally understood. As used herein, the articles “a” and “an” refer to one or to more than one (i.e., to at least one) of the grammatical object of the article. By way of example, “an element” means one element or more than one element. “About” and/or “approximately” as used herein when referring to a measurable value such as an amount, a temporal duration and the like encompasses variations of ±20%, ±10%, ±5% or ±0.1% from the specified value as such variations are appropriate in the context of the systems, devices, circuits, methods and other implementations described herein. “Substantially” as used herein when referring to a measurable value such as an amount, a temporal duration, a physical attribute (such as frequency) and the like also encompasses variations of ±20%, ±10%, ±5% or ±0.1% from the specified value as such variations are appropriate in the context of the systems, devices, circuits, methods and other implementations described herein. As used herein, including in the claims, “and” as used in a list of items prefaced by “at least one of” or “one or more of” indicates that any combination of the listed items may be used. For example, a list of “at least one of A, B, and C” includes any of the combinations A, B, C, AB, AC, BC, and/or ABC (i.e., A, B, and C). Furthermore, to the extent that more than one occurrence or use of the items A, B, or C is possible, multiple uses of A, B, and/or C may form part of the contemplated combinations. For example, a list of “at least one of A, B, and C” may also include AA, AAB, AAA, BB, etc. While illustrative and presently preferred configurations of the disclosed systems, methods and/or machine-readable media have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed and that the appended claims are intended to be construed to include such variations except as limited by the prior art. While the principles of the disclosure have been described above in connection with specific apparatuses and methods, it is to be clearly understood that this description is made only by way of example and not as limitation on the scope of the disclosure.
DETAILED DESCRIPTION The following description illustrates only a principle of the present invention. Therefore, those skilled in the art may implement the principle of the present invention and devise various apparatuses included in the spirit and scope of the present invention although not clearly described or shown in the present specification. In addition, it is to be understood that all conditional terms and exemplary embodiments mentioned in the present specification are obviously intended only to allow those skilled in the art to understand a concept of the present invention in principle, and the present invention is not limited to exemplary embodiments and states particularly mentioned as such. Further, it is to be understood that all detailed descriptions mentioning specific exemplary embodiments of the present invention as well as principles, aspects, and exemplary embodiments of the present invention are intended to include structural and functional equivalences thereof. Further, it is to be understood that these equivalences include an equivalence that will be developed in the future as well as an equivalence that is currently well-known, that is, all devices devised to perform the same function regardless of a structure. Therefore, it is to be understood that, for example, a block diagram of the present specification shows a conceptual aspect of an illustrative circuit for embodying a principle of the present invention. Similarly, it is to be understood that all flowcharts, state transition views, pseudo-codes, and the like show various processes that may be tangibly embodied in a computer-readable medium and that are executed by computers or processors regardless of whether the computers or the processors are clearly illustrated. Functions of various devices including processors or functional blocks represented as concepts similar to the processors and illustrated in the accompanying drawings may be provided by hardware having a capability to execute appropriate software as well as dedicated hardware. When the functions are provided by the processors, the above-mentioned functions may be provided by a single dedicated processor, a single shared processor, or a plurality of individual processors, some of which may be shared. In addition, terms mentioned as a processor, a control, or a concept similar to the processor or the control should not be interpreted to exclusively cite hardware having a capability to execute software, but should be interpreted to implicitly include digital signal processor (DSP) hardware and a read only memory (ROM), a random access memory (RAM), and a non-volatile memory for storing software without being limited thereto. The above-mentioned terms may also include other well-known hardware. In the claims of the present specification, components represented as means for performing functions mentioned in a detailed description are intended to include all methods for performing functions including all types of software including, for example, a combination of circuit devices performing these functions, firmware/micro codes, or the like, and are coupled to appropriate circuits for executing the software. It is to be understood that since functions provided by variously mentioned means are combined with each other and are combined with a scheme demanded by the claims in the inventions, any means capable of providing these functions are equivalent to means recognized from the present specification. 
The above-mentioned objects, features, and advantages will become obvious from the following detailed description provided in relation to the accompanying drawings. Therefore, those skilled in the art to which the present invention pertains may easily practice a technical idea of the present invention. Further, in describing the present invention, if it is judged that a detailed description of a well-known technology associated with the present invention may unnecessarily obscure the gist of the present invention, it will be omitted. Hereinafter, various exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. FIG.1is a block diagram showing a service providing system according to an exemplary embodiment of the present invention.FIG.2is a timing diagram illustrating an operation of a driving related guidance system according to an exemplary embodiment of the present invention. Referring toFIGS.1to2, a service providing system1000according to an exemplary embodiment of the present invention includes all or some of a vehicle terminal device100, a user terminal device200, and a service server300. A vehicle terminal device100may be provided in a vehicle to collect various data necessary for providing a driving related guidance service according to the present invention, perform pre-processing, analysis, etc., on the collected data, transmit the collected data or processed data to a service server300, and provide various driving related guidance by interworking with the service server300. Specifically, the vehicle terminal device100acquires a driving image captured during driving of the vehicle (i.e., while the vehicle is moving) (S11), receives advanced driver assistance system (ADAS) data from an ADAS module that assists driving of the vehicle, receives driving data from an electronic control unit (ECU) on-board diagnostics (OBD) module of the vehicle, and receives location data acquired through a location data acquisition unit150(S12). In addition, when a specific driving situation is detected during driving of the vehicle (S13:Y), the vehicle terminal device100may transmit ADAS data, location data, driving data, and a driving image related to the detected specific driving situation to the service server300(S14). The vehicle terminal device100may be implemented with various devices such as a navigation device, a car dash cam, or a car video recorder, which is a vehicle imaging device. However, the vehicle terminal device100is not limited thereto and may be implemented as a communication dongle that relays ADAS data, driving data, location data, and a driving image acquired from various vehicle devices to the service server300. The aforementioned expression of “the vehicle is moving” refers to a state in which the vehicle is being driven by an autonomous driving system or by a person and may have a concept including various types such as a stopped state of the vehicle, a driving state of the vehicle, and a parking state of the vehicle. In addition, the specific driving situation of the vehicle described above may include a first driving situation in which an accident did not occur during driving of the vehicle but which involved an accident likelihood and a second driving situation in which an accident has occurred during driving of the vehicle. 
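As a rough illustration of the S11 to S14 flow described above, the following Python sketch shows how a terminal-side loop might gather the driving image, ADAS data, driving data, and location data and upload them when a specific driving situation is detected. The data structure and the helper callables (acquire_snapshot, detect_situation, send_to_server) are hypothetical names introduced only for illustration and are not part of the disclosed embodiment.

```python
import time
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class DrivingSnapshot:
    """Data gathered by the vehicle terminal device (hypothetical structure)."""
    frames: List[bytes]        # driving image frames captured while the vehicle is moving (S11)
    adas_events: List[dict]    # ADAS data received from the ADAS module (S12)
    driving_data: dict         # ECU/OBD driving data such as speed and steering angle (S12)
    location: Optional[Tuple[float, float]]  # (latitude, longitude) from the location data acquisition unit (S12)


def terminal_loop(acquire_snapshot, detect_situation, send_to_server, period_s=0.1):
    """Simplified S11-S14 loop: collect data, detect a specific driving situation,
    and transmit the related data to the service server when one is detected."""
    while True:
        snapshot = acquire_snapshot()              # S11/S12: image, ADAS, driving, location data
        situation = detect_situation(snapshot)     # S13: 'first' (near accident) or 'second' (accident)
        if situation is not None:                  # S13: Y
            send_to_server(situation, snapshot)    # S14: upload to the service server
        time.sleep(period_s)
```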
In addition, the vehicle is a concept including a transporter capable of moving living organisms using power and may be a concept including all transport machinery such as a railroad vehicle running on a track and a vehicle, a motorcycle, a bicycle, etc., driving on the road. Meanwhile, the service server300may receive ADAS data, location data, driving data, and a driving image related to the specific driving situation of the vehicle from the vehicle terminal device100, analyze the received data and the driving image, and generate guidance information related to the specific driving situation of the vehicle (S15). For example, the service server300may analyze the data and the driving image received from the vehicle terminal device100and generate accident situation prediction information on an accident situation that may have occurred in the vehicle in the first driving situation. Alternatively, the service server300may analyze the data and the driving image received from the vehicle terminal device100and generate accident legal evaluation information based on details of an accident of the vehicle. Also, the service server300may provide a driving related guidance service for the vehicle using the generated guidance information. Specifically, the service server300may provide the driving related guidance service for the vehicle to the vehicle terminal device100by transmitting the generated guidance information to the vehicle terminal device100. In addition, the service server300may provide the driving related guidance service for a vehicle to the user terminal device200by transmitting the generated guidance information to the user terminal device200. In this case, the vehicle terminal device100and the user terminal device200may provide driving related guidance through a sound or a screen using the guidance information received from the service server300. Here, the user terminal device200is a device of a person who needs guidance related to driving of a vehicle, such as a driver or a passenger of a vehicle, and the user terminal device200may be implemented as various devices such as a smartphone, a tablet computer, a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), smart glasses, project glasses, a navigation device, or the like. The service providing system1000according to the present invention may provide a service related to driving of the vehicle using at least one of ADAS data, location data, driving data, and a driving image received from the vehicle terminal device100. Hereinafter, each of the constituent modules constituting the service providing system1000according to an exemplary embodiment of the present invention will be described in more detail with reference to the accompanying drawings. FIG.3is a block diagram of a vehicle terminal device according to an exemplary embodiment of the present invention. Referring toFIG.3, the vehicle terminal device100according to an exemplary embodiment of the present invention includes all or some of a communication unit110, an image acquisition unit120, an ADAS data acquisition unit130, an ECU data acquisition unit140, a location data acquisition unit150, an image processing unit160, an output unit170, a storage unit180, an impact detection unit190, and a controller195. The communication unit110may perform communication between the vehicle terminal device100and other devices. 
Specifically, the communication unit110may perform a function of transmitting and receiving data by communicating with all or some of the service server300, the user terminal device200, and the vehicle terminal device provided in another vehicle. In particular, the communication unit110may transmit a driving image, ADAS data, driving data, and location data acquired during driving of the vehicle to the service server300, and the service server300may generate guidance information related to driving of the vehicle based on the data received from the vehicle terminal device100. Further, the communication unit110may receive driving related guidance information of the vehicle generated in the service server300. Here, the communication unit110may be implemented using various communication methods such as a type that is connected in a wireless or wired manner through a local area network (LAN) and the Internet, a type connected through a universal serial bus (USB) port, a type that is connected through a mobile communication network such as 3G and 4G, a type that is connected through a short-range wireless communication method such as near field communication (NFC), radio frequency identification (RFID), and Wi-Fi. The image acquisition unit120may acquire an image captured during driving of the vehicle. As an example, if the vehicle terminal device100is a device having an image capturing function, the image acquisition unit120may be a camera that performs image capturing. As another example, when the vehicle terminal device100is a communication dongle, the image acquisition unit120may be implemented as a module that receives an image captured by an external camera. In this way, the vehicle driving image acquired by the image acquisition unit120may include numerous objects located in a real world environment in which the vehicle is moving, for example, vehicles, people, animals, bridges, buildings, roadways, sidewalks, roadway guidance signs, crosswalks, intersections, traffic lights, center dividers, bus stops, trees, etc. For example, the captured image of a roadway may include a plurality of lanes separated according to lane markings, a roadway including a plurality of lanes, and a plurality of vehicles running on the roadway. In addition, the driving image may include a roadway guidance sign depending on the roadway in which the vehicle is moving. Here, the lane marking may refer to each of both lines forming a lane in which the vehicle is located. Further, the lane is formed by the lane markings such as a primary lane, a secondary lane, or an N lane, and may refer to a roadway in which a vehicle travels. The vehicle driving image may include a front image captured by a front camera and a rear image captured by a rear camera. In addition, the vehicle driving image may further include a left image captured by a left camera and a right image captured by a right camera. The vehicle driving image acquired by the image acquisition unit120may be transmitted to the image processing unit160for image processing. The ADAS data acquisition unit130may receive ADAS data of a vehicle from the ADAS module. 
Here, ADAS may include a forward vehicle start alert (FVSA) that guides or warns of departure of a preceding vehicle located in front of the vehicle, a forward collision warning system (FCWS) that informs or warns of a possibility of a collision with the preceding vehicle located in front of the vehicle, a lane departure warning system (LDWS) that informs or warns that the vehicle is deviating from a lane marking, a curve speed warning system (CSWS) that informs or warns of a sharp curve in front of the vehicle, a sudden stop notification system that informs or warns that the vehicle has suddenly stopped, a sudden turn notification system that informs or warns that the vehicle has turned sharply, a blind spot detection (BSD) system that informs or warns of another vehicle present in a driver's blind spot. In addition, ADAS may include an over-speed spot crackdown guidance, autonomous emergency braking (AEB) system, a lane keep assist system (LKAS) that maintains a lane by adjusting a driving direction in case of lane departure, advanced smart cruise control (ASCC) that maintains a distance to a preceding vehicle, while moving at a set speed, an around view monitor (AVM) system which visually shows a situation around the vehicle, a driver attention warning (DAW) provided when a driving pattern is determined to be careless by analyzing vehicle signals such as a steering angle or a steering torque of the vehicle and a driving pattern of a vehicle driver such as a location of the vehicle in a lane, and the like. However, the example of ADAS is not limited to the aforementioned example, and the ADAS according to the present invention is a concept including all driver assistance functions to assist the driver's safe driving in numerous driving environments that the vehicle driver may encounter while driving. Meanwhile, when at least one of the plurality of ADAS functions described above is executed during driving of the vehicle, the ADAS data acquisition unit130may acquire ADAS identification data executed from the ADAS module and data detected in the process of executing the corresponding ADAS. As an example, when the FCWS function among ADAS functions is executed during driving of the vehicle, the ADAS data acquisition unit130may acquire ADAS identification data indicating that the executed function is FCWS and data (e.g., distance to the preceding vehicle) detected in the process of executing the FCWS. The ECU data acquisition unit140may receive driving data of the vehicle from an electronic control unit (ECU) module of the vehicle. Here, the ECU refers to an electronic control device that controls a state of an engine, an automatic transmission, an ABS, etc., of the vehicle by a computer. Specifically, the ECU data acquisition unit140may be connected to an OBD terminal coupled to the ECU of the vehicle, periodically perform polling using an OBD communication protocol in the connection between the ECU data acquisition unit140and the ECU through the OBD interface, and acquire driving data of the vehicle from the ECU. Here, the driving data of the vehicle may include data of a change in a start ON/OFF state of the vehicle, speed data of the vehicle, a steering angle of a vehicle steering device, steering torque data, fuel data of the vehicle, and the like. The location data acquisition unit150is a device that acquires location data through a global navigation satellite system (GNSS). 
The GNSS refers to a navigation system capable of calculating a location of a receiving terminal using radio signals received from satellites. Specific examples of the GNSS include a global positioning system (GPS), Galileo, a global orbiting navigational satellite system (GLONASS), COMPASS, an Indian regional navigational satellite system (IRNSS), a quasi-zenith satellite system (QZSS), etc. The location data acquisition unit150according to an exemplary embodiment of the present invention may acquire location data by receiving a GNSS signal provided in an area where the vehicle terminal device100is used. Alternatively, the location data acquisition unit150may acquire location data through communication with a base station or an access point (AP) in addition to the GNSS. The image processing unit160may process the driving image of the vehicle acquired by the image acquisition unit120. Specifically, the image processing unit160may perform compression processing of image data. As an example, the driving image of the vehicle acquired by the image acquisition unit120is a continuously captured image composed of a plurality of frames along a time axis, and a capacity of such an image is very large when not compressed, and it is very inefficient to store the image as it is in a memory, and thus, the digitally converted image should be compressed. Accordingly, the image processing unit160may perform compression processing based on a method using a correlation between frames, a spatial correlation, and a visual characteristic sensitive to a low frequency component. In addition, the image processing unit160may combine a front driving image and a rear driving image acquired through the image acquisition unit120, and generate a top-view image in which a host vehicle object (or own vehicle object) is disposed at a location of a host vehicle in the combined image. The top-view image may be transmitted to the service server300, and the top-view image transmitted to the service server300may be analyzed together with at least one of the ADAS data, the driving data, and the location data so as to be used to generate an accident situation prediction information, legal evaluation information, and the like. Here, the top-view image may have, for example, a graphics interchange format (GIF) formed by extracting a plurality of frames. That is, even if the driving image is compressed through the image acquisition unit120, a data size may be large in the case of transmitting video to a server, and thus, according to the present invention, dynamic information is included for the accident situation prediction information, the legal evaluation information, and the like but, in order to achieve the purpose with a minimum data size, the image processing unit160may generate the driving image in a GIF format and transmit the image to the service server300. According to the aforementioned example, the operation of generating the top-view image is performed in the image processing unit160of the vehicle terminal device100but the present invention is not limited thereto. According to another exemplary embodiment of the present invention, the vehicle terminal device100may be implemented to transmit the captured driving image to the service server300, and the service server300may be implemented to recombine the driving image received from the vehicle terminal device100to generate the top-view image. Meanwhile, the output unit170is a unit that outputs the data of the vehicle terminal device100to a user as an image and/or sound. 
Here, the output unit170may include all or some of a display unit (not shown) and an audio output unit (not shown). The display unit is a unit that outputs data that may be visually recognized by the user. The display unit may be implemented as a display unit provided on a front of a housing of the vehicle terminal device100. In addition, the display unit may be integrally formed with the vehicle terminal device100to output visual recognition data or may be installed separately from the system100such as a head up display (HUD) to output visual recognition data. The audio output unit is a unit that outputs data that may be audibly recognized by the vehicle terminal device100. The audio output unit may be implemented as a speaker that expresses data that the user of the vehicle terminal is informed of as sound. The storage unit180serves to store various data and applications required for the operation of the vehicle terminal device100. In particular, the storage unit180may store data, for example, an OS, a route search application, map data, and the like necessary for the operation of the vehicle terminal device100. In addition, the storage unit180may sort and store a driving image, ADAS data, driving data, and location data generated by the operation of the vehicle terminal device100by time or location. The storage unit180may be implemented not only as an internal storage element such as a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a register, a hard disk, a removable disk, a memory card, or a universal subscriber identity module (USIM) but also as a removable storage such as a USB memory. Meanwhile, the impact detection unit190may detect an impact during driving of the vehicle and output an impact level value corresponding to the detected impact to the controller195. Here, the impact detection unit190may be implemented by, for example, an acceleration sensor. The controller195may control the overall operation of the vehicle terminal device100. Specifically, the controller195may control all or some of the communication unit110, the image acquisition unit120, the ADAS data acquisition unit130, the ECU data acquisition unit140, the location data acquisition unit150, the image processing unit160, the output unit170, the storage unit180, and the impact detection unit190. In particular, the controller195may provide a driving related guidance service of a vehicle in connection with the service server300. Here, the driving related guidance service of the vehicle includes at least one of an accident situation prediction guidance service, an accident legal guidance service, a vehicle driver's destination prediction guidance service, a vehicle driver's past driving history guidance service, and another driver's past driving history guidance service. Specifically, the controller195may detect a specific driving situation during driving of the vehicle and control the communication unit110to transmit ADAS data, location data, driving data, and a driving image related to the detected specific driving situation to the service server. Here, the specific driving situation may include a first driving situation in which an accident did not occur while the vehicle was moving but which involved an accident likelihood, and a second driving situation in which an accident occurred during driving of the vehicle. 
Specifically, the controller195may receive an impact level value from the impact detection unit190during driving of the vehicle, and when an impact which exceeds a preset first impact level and which is less than a second impact level is detected and when at least one of a lane change notification, a forward collision notification, a sharp curve notification, a sudden stop notification, a sudden turn notification, and a blind spot notification is detected based on the ADAS data before the impact is detected, the controller may determine the first driving situation which involves an accident likelihood. Here, the second impact level is a threshold value of an impact level at which the vehicle may be determined as an accident, and may be a value calculated by digitizing various experimental data. That is, if the driver does not detect another vehicle moving in a next lane while driving and finds the other vehicle while changing lanes and stops suddenly, it may be a situation which involved an accident likelihood even if the vehicle driven by the driver did not have an actual accident. In this case, since no actual accident has occurred, the impact detection unit190may detect the impact level exceeding the first impact level and less than the preset second impact level and ADAS data acquired by the ADAS data acquisition unit130includes lane change notification data, blind spot notification data, and sudden stop notification data. Alternatively, when the driver does not recognize a preceding vehicle in front of the vehicle during driving and finds the preceding vehicle in front of the vehicle and makes a sudden stop, it may be a situation which involved an accident likelihood even if the vehicle driven by the driver did not have an actual accident. In this case, since no actual accident has occurred, the impact detection unit190may detect the impact level exceeding the first impact level and less than the preset second impact level and ADAS data acquired by the ADAS data acquisition unit130includes the forward collision notification data and sudden stop notification data. Alternatively, if the driver does not recognize a sharp curve and travels at a high speed and finds the sharp curve and makes a sudden deceleration, it may be a situation which involved an accident likelihood even if the vehicle driven by the driver did not have an actual accident. In this case, since no actual accident has occurred, the impact detection unit190may detect the impact level exceeding the first impact level and less than the preset second impact level and ADAS data acquired by the ADAS data acquisition unit130includes sharp curve notification data. In this way, when an impact level value within a predetermined level range is detected by the impact detection unit190during driving of the vehicle and at least one of a lane change notification, a forward collision notification, a sharp curve notification, a sudden stop notification, a sudden turn notification, and a blind spot notification is detected based on the ADAS data, the controller195may determine the first driving situation which involved an accident likelihood. However, when an impact exceeding the preset second impact level is detected by the impact detection unit190, the controller195may determine the second driving situation in which the accident has occurred. 
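The two-threshold rule described above can be summarized in a short sketch. This is only an illustration of the stated logic, not the claimed implementation: the threshold values are placeholders (in practice they would be calibrated values derived from the acceleration sensor of the impact detection unit190), and the notification names are hypothetical labels for the ADAS notifications listed above.

```python
# Notifications which, when detected shortly before a moderate impact,
# indicate the first driving situation (accident likelihood).
ACCIDENT_RELATED_NOTIFICATIONS = {
    "lane_change", "forward_collision", "sharp_curve",
    "sudden_stop", "sudden_turn", "blind_spot",
}


def classify_driving_situation(impact_level, recent_adas_notifications,
                               first_impact_level=1.5, second_impact_level=4.0):
    """Return 'second' for an accident, 'first' for an accident-likelihood
    situation, or None when no specific driving situation is detected."""
    if impact_level > second_impact_level:
        return "second"  # impact strong enough to treat as an accident
    if first_impact_level < impact_level < second_impact_level:
        # an ADAS notification before the impact suggests a near-accident
        if ACCIDENT_RELATED_NOTIFICATIONS & set(recent_adas_notifications):
            return "first"
    return None


# Example: a moderate impact preceded by blind spot and sudden stop notifications
print(classify_driving_situation(2.7, ["blind_spot", "sudden_stop"]))  # -> 'first'
```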
Meanwhile, when the first driving situation in which an accident did not occur while the vehicle was moving but which involved an accident likelihood is detected, the controller195may control the communication unit110to transmit ADAS data, location data, driving data, and a driving image before a point in time of the detected first driving situation to the service server300. In this case, the service server300may generate accident situation prediction information on an accident situation that might have occurred in the vehicle in the first driving situation by analyzing the data and the driving image received from the vehicle terminal device100in the first driving situation of the vehicle. In addition, the service server300may collect driving images, driving data, ADAS data, location data, and accident data for each driving situation from each vehicle and perform learning through machine learning or deep learning based on the collected data to construct an accident likelihood prediction model. In addition, the service server300may predict an accident likelihood for each driving situation and each road section through the constructed accident likelihood prediction model, generate accident situation prediction information including an accident likelihood, and provide the generated accident situation prediction information to vehicles that satisfy conditions (vehicle speed, driver's driving habit, driving information of neighbor vehicles, road condition information of a road on which a vehicle is moving or a road located in a driving route of the vehicle, etc.) of a predicted result. Accordingly, the service server300according to an exemplary embodiment of the present invention may receive various driving related data such as driving images, driving data, ADAS data, location data, accident data, road condition information, etc. from vehicles moving on the entire road through communication such as vehicle to vehicle (V2V), vehicle to infrastructure (V2I), vehicle to everything (V2X), etc., predict an accident likelihood through learning such as machine learning, deep learning, etc., and provide information on a high accident likelihood to a vehicle expected to satisfy a specific condition. Here, the accident situation prediction information generated by the service server300may be used to provide an accident situation prediction guidance service. Specifically, the service server300may transmit the generated accident situation prediction information to the vehicle terminal device100or the user terminal device200, and the vehicle terminal device100or the user terminal device200may perform guidance based on the accident situation prediction information. For example, if the driver does not detect another vehicle moving in a next lane while driving and finds the other vehicle while changing lanes and stops suddenly, the service server300may generate accident situation prediction information such as “Left rear collision could have occurred after about 3 seconds” through an analysis and the vehicle terminal device100or the user terminal device200may perform guidance based on the generated accident situation prediction information. In addition, when a second driving situation in which an accident occurred during driving of the vehicle is detected, the controller195may control the communication unit110to transmit ADAS data, location data, driving data, and a driving image before or after a predetermined period of time from a point in time of the detected second driving situation. 
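One way to picture the difference in what is transmitted for the two situations is the following sketch, which assumes the terminal keeps a time-indexed buffer of recent records. The buffer structure and the window lengths are assumptions introduced for illustration, not values taken from the disclosure.

```python
def select_upload_window(buffer, situation, detected_at, history_s=30.0, margin_s=10.0):
    """Pick which buffered records to transmit, following the rule described above:
    for the first driving situation, data before the detection time; for the second
    driving situation, data within a predetermined period before and after the accident.
    `buffer` is a list of (timestamp, record) tuples kept by the terminal (hypothetical)."""
    if situation == "first":
        lo, hi = detected_at - history_s, detected_at
    else:  # 'second': an accident has occurred
        lo, hi = detected_at - margin_s, detected_at + margin_s
    return [record for ts, record in buffer if lo <= ts <= hi]
```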
In this case, the service server300may generate accident legal evaluation information based on details of the accident of the vehicle in the second driving situation by analyzing the data and the driving image received from the vehicle terminal device100in the second driving situation of the vehicle. Here, the accident legal evaluation information generated by the service server300may be used to provide an accident legal guidance service. Specifically, the service server300may transmit the generated accident legal evaluation information to the vehicle terminal device100or the user terminal device200, and the vehicle terminal device100or the user terminal device200may perform guidance based on the accident legal evaluation information. For example, in the event of an accident, the service server300may generate accident legal evaluation information such as “Fault rate of the host vehicle is 30% and fault rate of the other vehicle is 70%” through analysis, and the vehicle terminal device100or the user terminal device200may perform guidance based on the generated accident legal evaluation information. That is, according to the present invention, the data transmitted to the service server300may be set to be different according to whether a driving situation of the vehicle is the first driving situation which involved a danger of accident or the second driving situation in which an accident has occurred, and a driving related guidance service according to each situation may be controlled to be provided. Meanwhile, when an expected destination of the vehicle driver is determined according to a change in a state of the vehicle from parking to driving in the service server300, the vehicle terminal device100according to an exemplary embodiment of the present invention may provide a destination prediction guidance of the vehicle driver. For example, when the state of the vehicle is changed from parking to driving, the service server300may generate destination prediction information such as “Expected destination of vehicle at a current parking location is the office (work)” through analysis, and the vehicle terminal device100or the user terminal device200may perform guidance based on the generated destination prediction information. In addition, when the service server compares location data of the vehicle with the previously stored ADAS data set for each driving location and detects ADAS data corresponding to a location of the vehicle driver, the vehicle terminal device100according to an exemplary embodiment of the present invention may provide the vehicle driver's past driving history guidance. For example, when ADAS data corresponding to the location of the vehicle driver is detected, the service server300may generate driver's past driving history information such as “Current location is the location where a blind spot notification was performed” through analysis, and the vehicle terminal device100or the user terminal device200may perform guidance based on the generated information. In addition, when the service server300compares location data of the vehicle with the previously stored ADAS data set for each driving location for another driver and detects ADAS data of the other driver corresponding to a location of the vehicle driver, the vehicle terminal device100according to an exemplary embodiment of the present invention may provide the other driver's past driving history guidance. 
As an example, when ADAS data of the other driver corresponding to the location of the vehicle driver is detected, the service server300may generate the other driver's past driving history information such as “Current location is the location where a sharp curve notification was performed from a number of drivers” through analysis, and the vehicle terminal device100or the user terminal device200may perform guidance based on the generated information. Hereinafter, a method for providing a driving related guidance service of the vehicle terminal device100according to an exemplary embodiment of the present invention will be described in more detail with reference toFIGS.4and5. FIG.4is a flowchart illustrating a method for providing a driving related guidance service of a vehicle terminal device according to an exemplary embodiment of the present invention. Referring toFIG.4, first, the vehicle terminal device100may acquire a driving image captured during driving of the vehicle (S110). The vehicle terminal device100may receive ADAS data of the vehicle from an ADAS module that assists driving of the vehicle, receive driving data of the vehicle from an ECU module of the vehicle, and receive location data of the vehicle from the location data acquisition unit150(S120). The vehicle terminal device100may detect a specific driving situation during driving of the vehicle (S130). If a first driving situation in which an accident did not occur while the vehicle was moving but which involved an accident likelihood is detected, the vehicle terminal device100may transmit ADAS data, location data, driving data, and a driving image before a point in time of the detected first driving situation to the service server300(S140). Also, when the service server300generates accident situation prediction information on an accident situation that might have occurred in the vehicle based on the data and the driving image received from the vehicle terminal device100, accident situation prediction information may be received from the service server300(S150). In this case, the vehicle terminal device100may perform guidance based on the received accident situation prediction information (S160). However, when a second driving situation in which an accident occurred during driving of the vehicle is detected, the vehicle terminal device100may transmit ADAS data, location data, driving data, and a driving image before or after a predetermined period of time from a point in time of the detected second driving situation to the service server300(S170). Also, when the service server300generates accident legal evaluation information based on details of the accident based on the data and the driving image received from the vehicle terminal device100, the accident legal evaluation information may be received from the service server300(S180). In this case, the vehicle terminal device100may perform guidance based on the received accident legal evaluation information (S190). Meanwhile, step (S110) of acquiring the driving image may include acquiring a front driving image captured by a front camera and a rear driving image captured by a rear camera. In this case, the method according to an exemplary embodiment of the present invention may further include combining the front driving image and the rear driving image and generating a top-view image in which a host vehicle object is placed at a location of the host vehicle in the combined image. 
Here, the generated top-view image may be transmitted to the service server300and used to generate accident situation prediction information and accident legal evaluation information. In addition, the top-view image may have a graphics interchange format (GIF) formed by extracting a plurality of frames. Meanwhile, the method according to an exemplary embodiment of the present invention described above may further include performing a destination prediction guidance, a driver's past driving history guidance, another driver's past driving history guidance in connection with the service server300. FIG.5is a flowchart illustrating a method for detecting a driving situation of a vehicle terminal device according to an exemplary embodiment of the present invention. Referring toFIG.5, step S130of detecting a specific driving situation during driving of a vehicle according to an exemplary embodiment of the present invention may include the following steps. First, the vehicle terminal device100may detect an impact level (S131). In addition, the vehicle terminal device100may determine whether the detected impact level exceeds a preset first impact level and is less than a preset second impact level (S132). If the detected impact level exceeds the preset second impact level (S132:N1), the vehicle terminal device100may determine the second driving situation in which an accident has occurred (S133). In addition, when the detected impact level is less than the preset first impact level (S132:N2), the vehicle terminal device100may determine that it is not a specific driving situation. However, if the detected impact level exceeds the preset first impact level and is less than the preset second impact level (S132:Y), it may be determined whether at least one of a lane departure notification, forward collision, a sharp curve notification, a sudden stop notification, a sudden turn notification, and a blind spot notification is detected based on ADAS data before the impact is detected (S134). If at least one of the lane departure notification, the forward collision, the sharp curve notification, the sudden stop notification, the sudden turn notification, and the blind spot notification is detected based on the ADAS data before the impact is detected (S134:Y), the vehicle terminal device100may determine the first driving situation which involves an accident likelihood (S135). However, if at least one of the lane departure notification, the forward collision, the sharp curve notification, the sudden stop notification, the sudden turn notification, and the blind spot notification is not detected based on the ADAS data before the impact is detected (S134:N), the vehicle terminal device100may determine that it is not the first driving situation which involves an accident likelihood (S136). Meanwhile, the aforementioned vehicle terminal device100may provide a driving related guidance service in connection with the service server300. Hereinafter, the service server300according to an exemplary embodiment of the present invention will be described in more detail with reference to the accompanying drawings. FIG.6is a block diagram showing a service server according to an exemplary embodiment of the present invention.FIG.7is a block diagram specifically showing the service providing unit330according to an exemplary embodiment of the present invention. 
Referring toFIGS.6and7, the service server300according to an exemplary embodiment of the present invention may include all or some of a communication unit310, a storage unit320, a service providing unit330, and a controller340. In addition, the service providing unit330may include all or some of an accident situation prediction guidance service providing unit331, an accident legal guidance service providing unit332, a destination prediction guidance service providing unit333, a driver's past driving history guidance service providing unit334, and another driver's past driving history guidance service providing unit335. The communication unit310may perform communication between the service server300and other devices. Specifically, the communication unit310may perform a function of transmitting and receiving data by communicating with the vehicle terminal device100and the user terminal device200. In particular, the communication unit310may receive a driving image, ADAS data, driving data, and location data acquired during driving of the vehicle from the vehicle terminal device100, and the service server300may generate guidance information related to driving of the vehicle based on the data received from the vehicle terminal device100. Also, the communication unit310may transmit the generated guidance information related to driving of the vehicle to the vehicle terminal device100and/or the user terminal device200. Here, the communication unit310may be implemented using various communication methods such as a type that is connected in a wireless or wired manner through a local area network (LAN) and the Internet, a type connected through a universal serial bus (USB) port, a type that is connected through a mobile communication network such as 3G and 4G, a type that is connected through a short-range wireless communication method such as near field communication (NFC), radio frequency identification (RFID), and Wi-Fi. The storage unit320functions to store various data and applications required for the operation of the service server300. In particular, the storage unit320may sort and store various types of data and driving images received from the vehicle terminal device100for each terminal device. In addition, the storage unit320may store various programs for the operation of the service providing unit330. Here, the storage unit320may be implemented not only as an internal storage element such as a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable and programmable ROM (EEPROM), a register, a hard disk, a removable disk, a memory card, or a universal subscriber identity module (USIM) but also as a removable storage such as a USB memory. This storage unit320may be implemented in the service server300or may be implemented in the form of an external database (DB) server connected to the service server300. The service providing unit330may generate guidance information related to a specific driving situation of the vehicle by analyzing the data and the driving image received from the vehicle terminal device100. Specifically, the service providing unit330may analyze and record location coordinates where a specific driving situation of the vehicle occurs using the location data received through the communication unit310. 
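A minimal server-side sketch of how a service providing unit such as330might route a received event to the appropriate providing unit is shown below. It assumes the first/second driving situation labels described earlier; the class and callable names are hypothetical and stand in for units331and332.

```python
class ServiceProvidingUnit:
    """Hypothetical dispatcher mirroring the providing units of the service server."""

    def __init__(self, predict_accident_situation, evaluate_accident_legally):
        self.predict_accident_situation = predict_accident_situation  # e.g. unit 331
        self.evaluate_accident_legally = evaluate_accident_legally    # e.g. unit 332

    def handle_event(self, situation, data, driving_image):
        """Generate guidance information for a specific driving situation so the
        communication unit can send it to the vehicle terminal or user terminal device."""
        if situation == "first":   # accident likelihood without an actual accident
            return self.predict_accident_situation(data, driving_image)
        if situation == "second":  # an accident has occurred
            return self.evaluate_accident_legally(data, driving_image)
        return None
```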
In addition, the accident situation prediction guidance service providing unit331may generate accident situation prediction information for service provision, the accident legal guidance service providing unit332may generate accident legal evaluation information for service provision, the destination prediction guidance service providing unit333may generate the driver's destination prediction information for service provision, the driver's past driving history guidance service providing unit334may generate the vehicle driver's past driving history information for service provision, and the other driver's past driving history guidance service providing unit335may generate the other driver's past driving history information for service provision. In addition, the service providing unit330may provide a vehicle driving related guidance service based on the generated guidance information. Specifically, the service providing unit330may perform a guidance service related to driving of the vehicle by transmitting the generated guidance information to the vehicle terminal device100and the user terminal device200. In this case, the vehicle terminal device100and the user terminal device200may provide driving related guidance through sound or a screen using the guidance information received from the service server300. The service providing unit330may include all or some of an accident situation prediction guidance service providing unit331, an accident legal guidance service providing unit332, a destination prediction guidance service providing unit333, a driver's past driving history guidance service providing unit334, and the other driver's past driving history guidance service providing unit335. Specifically, the accident situation prediction guidance service providing unit331may receive ADAS data, location data, driving data, and a driving image before a point in time of the first driving situation which involves an accident likelihood from the vehicle terminal device100and generate accident situation prediction information on an accident situation that might have occurred in the vehicle in the first driving situation by analyzing the received data and the driving image. Here, the driving image may be a top-view image. This driving image will be described in detail with reference toFIG.8. FIG.8is a diagram illustrating a top-view image according to an exemplary embodiment of the present invention. Referring toFIG.8, a top-view image is an image produced as a view from the top to the bottom. The top-view image may be generated by changing a camera view of a front driving image captured by the front camera of the vehicle and a camera view of a rear image captured by the rear camera to a view from the top to the bottom, and combining the changed front and rear images. In addition, a top-view image with high accuracy may be generated by additionally combining a left image and a right image captured by left and right cameras with the image captured by the front and rear cameras of the vehicle. As another example, a camera view of the front driving image captured by the front camera of the vehicle and a camera view of the rear image captured by the rear camera may be changed to a view from the top to the bottom, and the changed front and rear images may be sequentially connected in time order to create a top-view image. 
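The combination of warped front and rear views into a single top-view frame, and the saving of a few extracted frames as a GIF to keep the transmitted data size small, could look roughly like the following sketch using OpenCV and Pillow. The ground-plane points passed to the perspective transform are placeholders that would normally come from camera calibration, and the layout (front view stacked above a rotated rear view) is only one plausible arrangement, not the disclosed method.

```python
import cv2
import numpy as np
from PIL import Image


def to_top_view(frame, src_pts, out_size=(400, 300)):
    """Warp one camera frame to a top-down (bird's-eye) view. `src_pts` are four
    image points of a ground rectangle; here they are calibration placeholders."""
    w, h = out_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    m = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, m, (w, h))


def combine_front_rear(front, rear, front_pts, rear_pts):
    """Stack the warped front view above the warped (rotated) rear view into one
    top-view frame; a host vehicle marker could be drawn at the seam."""
    top_front = to_top_view(front, front_pts)
    top_rear = cv2.rotate(to_top_view(rear, rear_pts), cv2.ROTATE_180)
    return np.vstack([top_front, top_rear])


def save_as_gif(bgr_frames, path, frame_ms=200):
    """Keep the data size small by saving only a few extracted frames as a GIF."""
    images = [Image.fromarray(cv2.cvtColor(f, cv2.COLOR_BGR2RGB)) for f in bgr_frames]
    images[0].save(path, save_all=True, append_images=images[1:],
                   duration=frame_ms, loop=0)
```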
In the top-view image, an object51corresponding to the host vehicle may be disposed at a location of the host vehicle in the image, and objects52and53corresponding to other vehicles may be disposed at locations of the other vehicles in the image. Here, the objects51,52, and53may be expressed as an actual captured image of the vehicle or may be expressed as an image obtained by converting the vehicles into figures through animation or the like. Meanwhile, the accident situation prediction guidance service providing unit331may detect vehicles in the front, rear, and sides of the host vehicle from the driving image, predicts an accident situation that might have occurred in the vehicle in the first driving situation using at least one of ADAS data and driving data of the host vehicle for the detected vehicles, and generate accident situation prediction information. Here, the ADAS data may include identification information of ADAS executed before the point in time of the first driving situation and data detected during the execution of ADAS (for example, a distance to a preceding vehicle, etc., in the case of FCWS). In addition, the driving data of the vehicle may include speed data of the vehicle before the point in time of the first driving situation, a steering angle of a steering device of the vehicle, steering torque data, fuel data of the vehicle, and the like. The operation of the accident situation prediction guidance service providing unit331will be described in more detail with reference toFIGS.9A and9B. FIGS.9A and9Bare diagrams showing a method for predicting an accident situation according to an exemplary embodiment of the present invention.FIG.9Ashows a top-view image of the first driving situation according to an exemplary embodiment of the present invention, in which the driver does not detect another vehicle53moving in a left lane while driving and finds the other vehicle while changing lanes and stops suddenly, and this may be the first situation which involved an accident likelihood even if the vehicle driven by the driver did not have an actual accident. In this case, the vehicle terminal device100may transmit ADAS data, location data, driving data, and the top-view image before a point in time of the first driving situation as shown inFIG.9Ato the service server300. Here, the accident situation prediction guidance service providing unit331may detect the vehicles52and53near the host vehicle51from the driving image and predict an accident situation that might have occurred in the vehicle in the first driving situation using at least one of the ADAS data and the driving data of the host vehicle51regarding the detected vehicles52and53. Specifically, in the situation ofFIG.9A, the ADAS data may be the executed blind spot notification identification information and the data (distance to the rear left vehicle, etc.) detected in the process of executing the blind spot notification, and since a steering direction of the host vehicle51, which is driving data, is the left, the accident situation prediction guidance service providing unit331may determine that the left vehicle53had an accident likelihood with the host vehicle51in the first driving situation. In addition, the accident situation prediction guidance service providing unit331may predict the accident situation that might have occurred between the host vehicle51and the left vehicle53. 
Specifically, the accident situation prediction guidance service providing unit331may generate accident situation prediction information of predicting an accident if the host vehicle51had moved as it is by analyzing the speed and the steering data of the host vehicle51and the distance data to the rear left vehicle detected in the process of executing the blind spot notification. For example, in the situation as shown inFIG.9A, the accident situation prediction guidance service providing unit331may generate accident situation prediction information such as “Left rear collision could have occurred after about 3 seconds”. FIG.9Bshows a top-view image of the first driving situation according to another exemplary embodiment of the present invention. A case where the driver of the vehicle51drives without recognizing the vehicle52in front thereof and suddenly stops upon finding the preceding vehicle52may be the first driving situation in which the vehicle51driven by the driver did not have an accident actually but which involved an accident likelihood. In this case, the vehicle terminal device100may transmit ADAS data, location data, driving data, and the top-view image as shown inFIG.9Bbefore a point in time of the first driving situation to the service server300. Also, the accident situation prediction guidance service providing unit331may detect the neighbor vehicles52and53near the host vehicle51from the driving image and predict an accident situation that might have occurred in the vehicle in the first driving situation using at least one of the ADAS data and the driving data of the host vehicle51regarding the detected vehicles52and53. Specifically, in the situation ofFIG.9B, the ADAS data is executed FCWS identification information and data (distance to the preceding vehicle, etc.) detected in the process of executing the FCWS and a steering direction of the host vehicle51as driving data is driving straight, and thus, the accident situation prediction guidance service providing unit331may determine that the preceding vehicle52had an accident likelihood with the host vehicle51in the first driving situation. Also, the accident situation prediction guidance service providing unit331may predict the accident situation that might have occurred between the host vehicle51and the preceding vehicle52. Specifically, the accident situation prediction guidance service providing unit331may generate accident situation prediction information that predicts an accident if the host vehicle51had moved as it is based on the speed and the steering data of the host vehicle51, the distance data to the preceding vehicle detected in the process of executing the FCWS, and the speed of the preceding vehicle. For example, in the situation as shown inFIG.9B, it is possible to generate information for predicting an accident situation such as “Forward collision could have occurred after about 3 seconds.” Meanwhile, the accident situation prediction guidance service providing unit331may predict an accident likelihood of the vehicle using an artificial neural network and provide information on a high accident likelihood to a vehicle expected to meet a specific condition. To this end, the accident situation prediction guidance service providing unit331may further include a learning data collecting unit (not shown), a learning unit (not shown), and a memory storing an accident likelihood evaluation model generated according to learning. 
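Before turning to the learning-based prediction described next, a back-of-the-envelope sketch of the kind of closing-time estimate behind messages such as “collision could have occurred after about 3 seconds”: divide the detected distance by the closing speed. The inputs are assumed to come from the ADAS data (distance to the detected vehicle) and the driving data (speeds); the function and its example values are illustrative only.

```python
def estimate_time_to_collision(distance_m, host_speed_mps, other_speed_mps=0.0):
    """Rough closing-time estimate used to phrase messages such as
    'collision could have occurred after about N seconds'."""
    closing_speed = host_speed_mps - other_speed_mps
    if closing_speed <= 0:
        return None  # the gap is not closing, so no collision time is estimated
    return distance_m / closing_speed


# Example: a 25 m gap, host vehicle at 14 m/s, preceding vehicle at 6 m/s -> about 3.1 s
print(estimate_time_to_collision(25.0, 14.0, 6.0))
```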
The learning data collecting unit may collect driving images, driving data, ADAS data, location data, and accident data for each driving situation from each vehicle. Specifically, the learning data collecting unit may collect various driving related data such as driving images, driving data, ADAS data, location data, accident data, and road condition information from vehicles moving on the entire road using vehicle communication infrastructure such as V2V, V2I, and V2X. The learning unit performs learning using various driving related data, such as driving images, driving data, ADAS data, location data, accident data, road condition information, etc. collected by the learning data collecting unit, and generates an accident likelihood prediction model as a result of learning. Here, the accident likelihood prediction model is an algorithm or program that predicts whether there was a possibility of an accident occurring in the vehicle through the collected data. The accident likelihood prediction model may take, as an input, the data received during driving of the vehicle and generate, as an output, whether there is an accident likelihood and, when there is an accident likelihood, accident situation prediction information, an accident likelihood numerical value, etc. Specifically, the accident likelihood prediction model may predict an accident likelihood for each driving situation of the vehicle and each road section and generate and provide accident situation prediction information including the accident likelihood to vehicles in a state that meets conditions (vehicle speed, driver's driving habit, driving information of neighbor vehicles, information of a condition of a road in which the vehicle is moving or located in a driving route of the vehicle, etc.) of the predicted result. In addition, the learning unit may further train the accident likelihood prediction model using the output value. In addition, when the output result is an incorrect answer, the driver may input a response to the output result, and the learning unit may train the accident likelihood prediction model based on the driver's response. That is, according to the present invention, the accident likelihood prediction model may be generated by performing machine learning or deep learning, an accident likelihood of the vehicle may be predicted using the generated model, and a resultant value according to the prediction may be provided to the driver. Here, for deep learning, a convolutional neural network (CNN) algorithm, which is one of the neural network models, may be applied. In this case, deep learning may be performed on data augmented by assuming various conditions of a driving image. Here, each condition defines how an image collected as learning data is converted to generate data for learning of the neural network model. Specifically, since various aspects may be exhibited by factors such as shift, rotation, brightness change, blur, etc. of the image, data may be augmented in consideration of the various aspects. Meanwhile, the accident legal guidance service providing unit332may generate accident legal evaluation information based on details of the accident of the vehicle by receiving and analyzing the ADAS data, the location data, the driving data, and the driving image before or after a predetermined period of time from a point in time of the second driving situation in which the vehicle accident occurred.
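The augmentation of driving images described above (shift, rotation, brightness change, blur) might be implemented along the following lines. This is only a sketch of the general idea using common image operations; the parameter ranges and the choice of the Pillow library are assumptions and are not details taken from the specification.

import random
from PIL import Image, ImageChops, ImageEnhance, ImageFilter

def augment(image: Image.Image) -> Image.Image:
    """Return a randomly shifted, rotated, brightness-adjusted, and blurred copy."""
    # Shift: translate by up to +/-5% of the image size (wraps around at the edges).
    dx = int(image.width * random.uniform(-0.05, 0.05))
    dy = int(image.height * random.uniform(-0.05, 0.05))
    out = ImageChops.offset(image, dx, dy)
    # Rotation: small angle so the road geometry stays plausible.
    out = out.rotate(random.uniform(-5, 5))
    # Brightness change: simulate different lighting conditions.
    out = ImageEnhance.Brightness(out).enhance(random.uniform(0.7, 1.3))
    # Blur: simulate motion blur or a dirty lens.
    return out.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.0, 1.5)))

# Each collected frame can then be expanded into several training samples, for example:
# samples = [augment(frame) for _ in range(4)]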
Specifically, the accident legal guidance service providing unit332may detect preceding, following, and side vehicles of the host vehicle from the top-view image, analyze details of the accident using at least one of the ADAS data and the driving data of the host vehicle regarding the detected vehicle, and generate accident legal evaluation information including an accident negligence rate according to the analysis. Here, the driving image may be a top-view image. In addition, the accident legal guidance service providing unit332may additionally display speeds, distances, time, etc. of the vehicles located near the vehicle on the driving image, so that an accident fault may be determined based on the facts. The operation of the accident legal guidance service providing unit332according to an exemplary embodiment of the present invention will be described in more detail with reference toFIGS.10A and10B. FIGS.10A and10Bare diagrams showing a method for generating legal evaluation information according to an exemplary embodiment of the present invention. Referring toFIGS.10A and10B,FIG.10Ais a top-view image of the second driving situation according to an exemplary embodiment of the present invention, illustrating the second driving situation in which, while the driver of the vehicle51was changing to the left lane and the lane change was almost completed, the vehicle53located in the rear collided with the vehicle51and caused an accident because the blind spot notification was not performed. In this case, the vehicle terminal device100may transmit ADAS data, location data, driving data, and the top-view image as shown inFIG.10Ato the service server300before or after a predetermined period of time from a point in time of the second driving situation. Also, the accident legal guidance service providing unit332may detect the vehicles52and53near the vehicle51from the driving image and analyze details of the accident that occurred in the vehicle in the second driving situation using at least one of the ADAS data and driving data of the vehicle51regarding the detected vehicles52and53. Specifically, in the situation ofFIG.10A, the accident legal guidance service providing unit332may detect, as the accident vehicle53, a vehicle closest to the vehicle51among the detected vehicles52and53. Also, in the situation ofFIG.10A, the accident legal guidance service providing unit332may generate accident detail information indicating that the accident occurred due to the fault of the vehicle53in a state where the driver of the vehicle51did not perceive the vehicle53, by analyzing data indicating that a blind spot notification was not executed before the lane change of the vehicle51and the fact that the speed of the vehicle51was not reduced until the point in time at which the accident of the vehicle occurred. In addition, the accident legal guidance service providing unit332may generate accident legal evaluation information including an accident fault rate based on the accident detail information. FIG.10Bis a top-view image of the second driving situation according to another exemplary embodiment of the present invention, illustrating the second driving situation in which a blind spot notification was performed while the driver of the vehicle51was changing to the left lane, but the vehicle53located at the rear collided with the vehicle51attempting to change lanes, causing an accident.
In this case, the vehicle terminal device100may transmit the ADAS data, the location data, the driving data, and the top-view image as shown inFIG.10Bto the service server300before or after a predetermined period of time from the point in time of the second driving situation. Also, the accident legal guidance service providing unit332may detect the vehicles52and53near the vehicle51from the driving image and analyze details of the accident that occurred in the vehicle in the second driving situation using at least one of the ADAS data and the driving data of the vehicle51regarding the detected vehicles52and53. Specifically, in the situation ofFIG.10B, the accident legal guidance service providing unit332may detect, as the accident vehicle53, a vehicle closest to the vehicle51among the detected vehicles52and53. Also, in the situation ofFIG.10B, the accident legal guidance service providing unit332may generate accident detail information indicating that the accident occurred due to a fault of the vehicle51, in a state where the driver of the vehicle51was able to perceive the vehicle53until the accident occurred between the accident vehicles51and53, based on the data indicating that the blind spot notification included in the ADAS data was executed, the distance data between the accident vehicles, and the speed data and steering data of the vehicle51up to the point in time of the occurrence of the vehicle accident. In addition, the accident legal guidance service providing unit332may generate accident legal evaluation information including an accident fault rate based on the accident detail information. Meanwhile, when a state of the vehicle is changed from parking to driving, the destination prediction guidance service providing unit333may determine an expected destination of the vehicle driver by comparing vehicle location data with a previously stored destination data set for each parking location. The storage unit320may store a table in which parking location data of each vehicle driver and destination data of destinations traveled to from the corresponding parking location are matched. In this case, when the state of the vehicle is changed from parking to driving, the destination prediction guidance service providing unit333may detect a destination candidate group corresponding to the parking location of the vehicle from the storage unit320based on the parking location data of the vehicle and determine an expected destination of the vehicle driver. For example, the destination prediction guidance service providing unit333may determine an expected destination of the vehicle driver from among the destination candidate group in consideration of time, weather, and the like at a time when the state of the vehicle is changed from parking to driving. The destination prediction guidance service providing unit333may provide a destination prediction guidance service using the determined expected destination. Specifically, the destination prediction guidance service providing unit333may generate destination prediction information such as “Expected destination of the vehicle at the current parking location is the office (work)” and transmit the destination prediction information to the vehicle terminal device100or the user terminal device200. In this case, the vehicle terminal device100or the user terminal device200may perform guidance based on the received destination prediction information.
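The contrast betweenFIGS.10A and10Bessentially reduces to a rule over the ADAS execution record: whether the blind spot notification had been executed before the lane change, together with whether the host vehicle slowed down. A toy rule of that kind might look like the following; the data fields, the fault percentages, and the function name are illustrative assumptions only and are not taken from the specification.

def assess_lane_change_fault(blind_spot_notified: bool, host_slowed_down: bool) -> dict:
    """Toy fault split for a lane-change collision between the host and a rear vehicle."""
    if not blind_spot_notified:
        # FIG. 10A style: the host driver could not have perceived the rear vehicle,
        # so the rear vehicle carries most of the fault.
        return {"host_vehicle_51": 20, "rear_vehicle_53": 80,
                "detail": "Blind spot notification was not executed before the lane change."}
    # FIG. 10B style: the notification was executed, so the host driver was able to
    # perceive the rear vehicle and the host carries most of the fault.
    detail = "Blind spot notification was executed before the lane change."
    if not host_slowed_down:
        detail += " Host speed was not reduced up to the point of impact."
    return {"host_vehicle_51": 80, "rear_vehicle_53": 20, "detail": detail}

print(assess_lane_change_fault(blind_spot_notified=True, host_slowed_down=False))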
The driver's past driving history guidance service providing unit334may store the ADAS data set for each driving location for the vehicle driver and detect ADAS data corresponding to the location of the vehicle driver by comparing the location data of the vehicle with the previously stored ADAS data set for each driving location. Specifically, the storage unit320may store ADAS data for each driving location of the vehicle driver (e.g., executed ADAS identification information and detection information of executed ADAS). In this case, the driver's past driving history guidance service providing unit334may detect the ADAS data corresponding to the driving location of the vehicle from the storage unit320based on the location data during driving of the vehicle. In this case, the driver's past driving history guidance service providing unit334may provide the driver's past driving history guidance service using the detected ADAS data. Specifically, the driver's past driving history guidance service providing unit334may generate past ADAS history information such as “Current location is the location where the blind spot notification was performed” and transmit the past ADAS history information to the vehicle terminal device100or the user terminal device200, and in this case, the vehicle terminal device100or the user terminal device200may perform guidance based on the received past ADAS history information. The other driver's past driving history guidance service providing unit335may store the ADAS data set for each driving location for another driver and detect ADAS data corresponding to a location of the vehicle driver by comparing location data of the vehicle with the previously stored ADAS data set for each driving location. Specifically, the storage unit320may store ADAS data for each driving location of another driver (e.g., executed ADAS identification information, executed ADAS detection information, and the number of times). In this case, the other driver's past driving history guidance service providing unit335may detect the ADAS data of the other driver corresponding to the driving location of the vehicle from the storage unit320based on the location data during driving of the vehicle. In this case, the other driver's past driving history guidance service providing unit335may provide the other driver's past driving history guidance service using the detected ADAS data. Specifically, the other driver's past driving history guidance service providing unit335may generate past ADAS history information of the other driver such as “Current location is the location where a sharp curve notification was performed for multiple drivers” and transmit the information to the vehicle terminal device100or the user terminal device200, and in this case, the vehicle terminal device100or the user terminal device200may perform guidance based on the received past ADAS history information of the other driver. The controller340controls the overall operation of the service server300. Specifically, the controller340may control all or some of the communication unit310, the storage unit320, and the service providing unit330. In particular, the controller340may provide a driving related guidance service of a vehicle in connection with the vehicle terminal device100and the user terminal device200.
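The destination prediction guidance and the past driving history guidance described above both amount to location-keyed lookups against data accumulated in the storage unit320. The sketch below is offered only to make that shape concrete; the in-memory tables, the distance threshold, the nearest-key search, and the time-of-day tie-break are assumptions introduced for illustration rather than details of the disclosed system.

from math import hypot

# Hypothetical stored tables: parking location -> candidate destinations,
# and driving location -> past ADAS events (for the driver or for other drivers).
DESTINATIONS_BY_PARKING = {(37.501, 127.036): ["office", "gym"]}
ADAS_EVENTS_BY_LOCATION = {(37.502, 127.040): ["blind spot notification"]}

def _nearest_key(table, location, max_dist=0.001):
    """Return the stored key closest to 'location', if any lies within max_dist degrees."""
    key = min(table, key=lambda k: hypot(k[0] - location[0], k[1] - location[1]))
    return key if hypot(key[0] - location[0], key[1] - location[1]) <= max_dist else None

def expected_destination(parking_location, hour_of_day):
    key = _nearest_key(DESTINATIONS_BY_PARKING, parking_location)
    if key is None:
        return None
    candidates = DESTINATIONS_BY_PARKING[key]
    # Toy tie-break: morning departures favor the first (most frequent) candidate.
    return candidates[0] if hour_of_day < 12 else candidates[-1]

def past_adas_history(current_location):
    key = _nearest_key(ADAS_EVENTS_BY_LOCATION, current_location)
    return ADAS_EVENTS_BY_LOCATION.get(key, []) if key else []

print("Expected destination of the vehicle at the current parking location is the",
      expected_destination((37.5011, 127.0361), hour_of_day=8))
print("Past ADAS events at the current location:", past_adas_history((37.5019, 127.0401)))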
Here, the driving related guidance service of the vehicle may include at least one of an accident situation prediction guidance service, an accident legal guidance service, a vehicle driver's destination prediction guidance service, a vehicle driver's past driving history guidance service, and another driver's past driving history guidance service. Hereinafter, a method for providing a driving related guidance service by the service server300according to an exemplary embodiment of the present invention will be described in detail with reference toFIGS.11and12. FIG.11is a flowchart illustrating a method for providing an accident situation prediction guidance service according to an exemplary embodiment of the present invention. Referring toFIG.11, first, the service server300may receive ADAS data, location data, driving data, and a driving image before a point in time of the first driving situation in which an accident did not occur while the vehicle was moving but which involved an accident likelihood (S201). Here, the received driving image may be a top-view image in which a front driving image and a rear driving image are combined and a host vehicle object is placed at a location of the host vehicle in the combined image. Then, the service server300may detect a neighbor vehicle of the host vehicle from the driving image (S202). In addition, the service server300may detect another vehicle which involved an accident likelihood using at least one of ADAS data and driving data of the host vehicle regarding the detected vehicle (S203). In addition, the service server300may predict an accident situation using at least one of ADAS data and driving data for the other vehicle which involved an accident likelihood, and generate accident situation prediction information (S204). Also, the service server300may provide an accident situation prediction guidance service based on the generated accident situation prediction information (S205). FIG.12is a flowchart illustrating a method for providing an accident legal guidance service according to an exemplary embodiment of the present invention. Referring toFIG.12, first, the service server300may receive ADAS data, location data, driving data, and a driving image before or after a predetermined period of time from a point in time of the second driving situation in which an accident occurred in the vehicle (S301). Also, the service server300may detect neighbor vehicles of the host vehicle from the driving image (S302). Also, the service server300may detect the other vehicle in which an accident occurred using at least one of the ADAS data and the driving data of the host vehicle regarding the detected vehicle (S303). In addition, the service server300may analyze details of the accident of the vehicle using at least one of the ADAS data and the driving data of the host vehicle regarding the detected vehicle, and generate accident legal evaluation information (S304). Also, the service server300may provide an accident legal guidance service based on the accident legal evaluation information (S305). 
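The two flows ofFIGS.11and12share the same server-side shape: receive the uploaded data, detect neighbor vehicles, single out the relevant other vehicle, analyze, and return guidance. The outline below is a non-authoritative sketch of that shape only; the function names, the placeholder vehicle detection, and the toy fault rule are assumptions rather than an implementation taken from the specification.

from typing import Dict, List

def detect_neighbor_vehicles(driving_image) -> List[dict]:
    """Placeholder for vehicle detection on the received top-view image (S202/S302)."""
    return [{"id": 53, "position": "rear-left", "distance_m": 4.0}]

def handle_first_driving_situation(adas_data: Dict, driving_data: Dict, driving_image) -> Dict:
    """S201-S205: predict an accident that might have occurred and return guidance."""
    neighbors = detect_neighbor_vehicles(driving_image)                                   # S202
    risky = [v for v in neighbors if v["position"] in adas_data.get("warned_zones", [])]  # S203
    prediction = ("%s collision could have occurred" % risky[0]["position"]
                  if risky else "no accident likelihood detected")                        # S204
    return {"service": "accident situation prediction guidance", "message": prediction}   # S205

def handle_second_driving_situation(adas_data: Dict, driving_data: Dict, driving_image) -> Dict:
    """S301-S305: analyze an accident that did occur and return legal guidance."""
    neighbors = detect_neighbor_vehicles(driving_image)                                   # S302
    other = min(neighbors, key=lambda v: v["distance_m"])                                 # S303: closest vehicle
    host_fault = 80 if adas_data.get("blind_spot_notified") else 20                       # S304 (toy rule)
    return {"service": "accident legal guidance",                                         # S305
            "other_vehicle": other["id"], "host_fault_percent": host_fault}

print(handle_first_driving_situation({"warned_zones": ["rear-left"]}, {}, None))
print(handle_second_driving_situation({"blind_spot_notified": True}, {}, None))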
Meanwhile, the service providing method according to an exemplary embodiment of the present invention may further include storing a destination data set for each parking location for a driver of a vehicle, determining an expected destination of the vehicle driver by comparing location data of the vehicle with the previously stored destination data set for each parking location when a state of the vehicle is changed from parking to driving, and providing a destination prediction guidance service using the determined expected destination. In addition, the service providing method according to an exemplary embodiment of the present invention may further include storing an ADAS data set for each driving location for a vehicle driver, detecting ADAS data corresponding to a location of the vehicle driver by comparing location data of the vehicle with the previously stored ADAS data set for each driving location, and providing the driver's past driving history guidance service using the detected ADAS data. In addition, the service providing method according to an exemplary embodiment of the present invention may further include storing an ADAS data set for each driving location for another driver, detecting ADAS data corresponding to a location of the vehicle driver by comparing location data of the vehicle with the previously stored ADAS data set for each driving location, and providing the other driver's past driving history guidance service using the detected ADAS data. Meanwhile, the functions according to an exemplary embodiment of the present invention described above may be implemented to be executed by a data processing device implemented as a module. That is, the data processing device according to the present invention may receive and analyze ADAS data, driving data, location data, and a driving image, and perform accident situation prediction, an accident legal evaluation, an expected destination determination, the driver's past driving history determination, and the other driver's past driving history determination according to the analysis. Here, the data processing device may be implemented using software, hardware, or a combination thereof. For example, the hardware may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, micro-processors, and electrical units for performing other functions. The data processing device may be mounted on the vehicle terminal device100, the user terminal device200, and the service server300, and may perform accident situation prediction, an accident legal evaluation, an expected destination determination, the driver's past driving history determination, and the other driver's past driving history determination by analyzing various received data. According to various exemplary embodiments of the present invention, by predicting an accident situation that may occur in a vehicle in the first driving situation which involves an accident likelihood, generating accident situation prediction information, and providing an accident situation prediction service, the driver may be reminded of a danger and the driver's safe driving may be promoted.
In addition, according to various exemplary embodiments of the present invention, by analyzing details of an accident in the second driving situation in which the vehicle accident occurred, generating accident legal evaluation information including an accident fault rate according to the analysis, and providing an accident legal guidance service, legal services relating to financial factors such as vehicle repair costs and advice related to insurance may be automatically supported, thereby increasing the driver's convenience. In addition, according to various exemplary embodiments of the present disclosure, the driver's convenience may be improved through a destination prediction guidance service, the driver's past driving history guidance service, and another driver's past driving history guidance service. Meanwhile, the method for providing a driving related guidance service according to an exemplary embodiment of the present invention may further include storing a destination data set for each parking location for a driver of a vehicle, determining an expected destination of the vehicle driver by comparing location data of the vehicle with the previously stored destination data set for each parking location when a state of the vehicle is changed from parking to driving, and providing a destination prediction guidance service using the determined expected destination. In addition, the method for providing a driving related guidance service according to an exemplary embodiment of the present invention may further include storing an ADAS data set for each driving location for a vehicle driver, detecting ADAS data corresponding to a location of the vehicle driver by comparing location data of the vehicle with the previously stored ADAS data set for each driving location, and providing the driver's past driving history guidance service using the detected ADAS data. In addition, the method for providing a driving related guidance service according to an exemplary embodiment of the present invention may further include storing an ADAS data set for each driving location for another driver, detecting ADAS data corresponding to a location of the vehicle driver by comparing location data of the vehicle with the previously stored ADAS data set for each driving location, and providing the other driver's past driving history guidance service using the detected ADAS data. Meanwhile, in the specification and the claims, terms such as “first”, “second”, “third”, “fourth” and the like, if any, will be used to distinguish similar components from each other and may be used to describe a specific sequence or a generation sequence, but are not necessarily limited thereto. It may be understood that these terms are compatible with each other under an appropriate environment so that exemplary embodiments of the present invention to be described below may be operated in a sequence different from a sequence shown or described herein. Likewise, in the present specification, in the case in which it is described that a method includes a series of steps, the sequence of these steps suggested herein is not necessarily the only sequence in which these steps may be executed. That is, any described step may be omitted and/or any other step that is not described herein may be added to the method. In addition, in the specification and the claims, terms such as “left”, “right”, “front”, “rear”, “top”, “bottom”, “over”, “under”, and the like do not necessarily indicate relative positions that are not changed, but are used for explanation.
It will be understood that these terms are compatible with each other under an appropriate environment so that exemplary embodiments of the present invention set forth herein may be operated in a direction different from a direction illustrated or described herein. The term “connected” as used herein is defined as being connected directly or indirectly in an electrical or non-electrical manner. Here, targets described as being “adjacent to” each other may physically contact each other, be close to each other, or be in the same general range or region, in the context in which the above phrase is used. Here, appearances of the phrase “in an exemplary embodiment” may refer to the same exemplary embodiment, but are not necessarily limited thereto. In addition, in the specification and the claims, terms such as “connected”, “connecting”, “linked”, “linking”, “coupled”, “coupling”, and the like, and various modifications of these terms may be used with the meaning that one component is directly connected to another component or is indirectly connected to another component through still another component. In addition, the terms “module” and “unit” for components used in the present specification are used only for ease of preparing the specification. Therefore, these terms do not in themselves have meanings or roles that distinguish them from each other. Terms used in the present disclosure are for explaining exemplary embodiments rather than limiting the present invention. Unless explicitly described to the contrary, a singular form includes a plural form in the present specification. The word “comprise” and variations such as “comprises” or “comprising” will be understood to imply the inclusion of stated constituents, steps, operations and/or elements but not the exclusion of any other constituents, steps, operations and/or elements. Hereinabove, the present invention has been described with reference to the exemplary embodiments thereof. All exemplary embodiments and conditional illustrations disclosed in the present specification have been described with the intention of assisting those skilled in the art to which the present invention pertains in understanding the principle and the concept of the present invention. Therefore, it will be understood by those skilled in the art to which the present invention pertains that the present invention may be implemented in modified forms without departing from the spirit and scope of the present invention. Therefore, the exemplary embodiments disclosed herein should be considered in an illustrative aspect rather than a restrictive aspect. The scope of the present invention is shown in the claims rather than the foregoing description, and all differences within the equivalent range should be interpreted as being included in the present invention. Meanwhile, the method for providing a driving related guidance service according to various exemplary embodiments of the present invention described above may be implemented as programs and be provided to servers or devices. Therefore, the respective apparatuses may access the servers or the devices in which the programs are stored to download the programs. In addition, the method according to various exemplary embodiments of the present invention described above may be implemented as a program and stored in various non-transitory computer readable media and provided.
The non-transitory computer readable medium is not a medium that stores data for a short time such as a register, a cache, a memory, or the like, but means a machine readable medium that semi-permanently stores data. Specifically, various applications or programs described above may be stored and provided in the non-transitory computer readable medium such as a compact disk (CD), a digital versatile disk (DVD), a hard disk, a Blu-ray disk, a universal serial bus (USB), a memory card, a read only memory (ROM), or the like. Although the exemplary embodiments of the present invention have been illustrated and described hereinabove, the present invention is not limited to the specific exemplary embodiments described above, but may be variously modified by those skilled in the art to which the present invention pertains without departing from the scope and spirit of the present invention as claimed in the claims. These modifications should also be understood to fall within the technical spirit and scope of the present invention.
82,782
11861755
DETAILED DESCRIPTION Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without parting from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be references to the same embodiment or any embodiment; and, such references mean at least one of the embodiments. Reference to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification. Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions will control. Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein. 
Overview According to at least one example of the present technology, a location beacon system can determine a physical location for transferring a delivery from a delivery person to a customer. Further, the location beacon system can identify a physical beacon associated with the physical location. Also, the location beacon system can detect an interaction between the physical beacon and the delivery person. In some examples, the interaction occurs based on the physical beacon and the delivery person being in physical proximity to each other. As follows, the location beacon system can facilitate transmission of a sensory signal to the delivery person in response to the interaction to provide perceivable direction information associated with the physical location to the delivery person. A system can include one or more processors and at least one computer-readable storage medium storing instructions which, when executed by the one or more processors, cause the one or more processors to determine a physical location for transferring a delivery from a delivery person to a customer. The instructions can also cause the one or more processors to identify a physical beacon associated with the physical location. Also, the instructions can cause the one or more processors to detect an interaction between the physical beacon and the delivery person. In some examples, the interaction can occur based on the physical beacon and the delivery person being in physical proximity to each other. As follows, the instructions can cause the one or more processors to facilitate transmission of a sensory signal to the delivery person in response to the interaction to provide perceivable direction information associated with the physical location to the delivery person. A non-transitory computer-readable storage medium having stored therein instructions which, when executed by one or more processors, cause the one or more processors to determine a physical location for transferring a delivery from a delivery person to a customer. The instructions can also cause the one or more processors to identify a physical beacon associated with the physical location. Also, the instructions can cause the one or more processors to detect an interaction between the physical beacon and the delivery person. In some examples, the interaction can occur based on the physical beacon and the delivery person being in physical proximity to each other. As follows, the instructions can cause the one or more processors to facilitate transmission of a sensory signal to the delivery person in response to the interaction to provide perceivable direction information associated with the physical location to the delivery person. According to another example of the present technology, a location beacon system can determine a collection location for collecting, by an agent, an item. Further, the location beacon system can identify a physical beacon associated with the collection location. Also, the location beacon system can detect an interaction between the physical beacon and the agent. In some examples, the interaction can occur based on the physical beacon and the agent being in physical proximity to each other during a process of the agent collecting the item at the collection location. As follows, the location beacon system can facilitate transmission of a sensory signal to the agent in response to the interaction to provide perceivable direction information associated with the collection location of the item to the agent. 
The sensory signal can be specific to the agent based on the interaction between the physical beacon and the agent. A system can include one or more processors and at least one computer-readable storage medium storing instructions which, when executed by the one or more processors, cause the one or more processors to determine a collection location for collecting, by an agent, an item. The instructions can also cause the one or more processors to identify a physical beacon associated with the collection location. Also, the instructions can cause the one or more processors to detect an interaction between the physical beacon and the agent. In some examples, the interaction can occur based on the physical beacon and the agent being in physical proximity to each other during a process of the agent collecting the item at the collection location. As follows, the instructions can cause the one or more processors to facilitate transmission of a sensory signal to the agent in response to the interaction to provide perceivable direction information associated with the collection location of the item to the agent. The sensory signal can be specific to the agent based on the interaction between the physical beacon and the agent. A non-transitory computer-readable storage medium having stored therein instructions which, when executed by one or more processors, cause the one or more processors to determine a collection location for collecting, by an agent, an item. The instructions can also cause the one or more processors to identify a physical beacon associated with the collection location. Also, the instructions can cause the one or more processors to detect an interaction between the physical beacon and the agent. In some examples, the interaction can occur based on the physical beacon and the agent being in physical proximity to each other during a process of the agent collecting the item at the collection location. As follows, the instructions can cause the one or more processors to facilitate transmission of a sensory signal to the agent in response to the interaction to provide perceivable direction information associated with the collection location of the item to the agent. The sensory signal can be specific to the agent based on the interaction between the physical beacon and the agent. Description As previously described, a large amount of time can be lost due to a deliverer being unable to find a point of delivery (e.g., a residence). This can occur because residence numbers (e.g., at the beginning of an address) are not always displayed prominently or visibly, e.g., due to a lack of proper lighting. Delivery data shows a clear increase in time between the deliverer entering a customer geofence and completing the delivery, after sunset. This increase in time delay also corresponds with sundown throughout every season and daylight saving time. Some delivery locations are hard to find even during the day as the residence numbers are not readily visible to the deliverer. Further, it is often difficult for a delivery person to quickly and efficiently collect an item for delivery. In particular, when a specific item is at a location with various other items, e.g. items for collection and delivery, it can be difficult for a delivery person to find the specific item at the location.
In turn, this can further cause time delays in delivering items. Therefore, there exists a need for systems that facilitate more efficient collection and/or delivery of items by a delivery person. In particular, there exists a need for beacon devices that can facilitate more efficient collection of items from a merchant and/or more efficient delivery of items to a customer. The present technology includes systems, methods, and computer-readable media for solving these problems and discrepancies. Specifically, the present technology involves systems, methods, and computer-readable media for using an interaction between a physical beacon and a delivery person to facilitate a delivery by a delivery person to a customer. Additionally, the present technology involves systems, methods, and computer-readable media for using an interaction between a physical beacon and a delivery person to facilitate collection of an item for delivery. FIGS.1A and1Billustrate perspective views of an example location beacon100, according to some aspects of the disclosed technology. In some implementations, the location beacon100can include a housing110, attachment extensions112, apertures114, suction cups116, a light emitting diode (LED) strip118, a transceiver122, and a processor124. The location beacon100(e.g., an internet connected lighting appliance) can be installed at a customer destination. For example, the location beacon100can be installed on a window or door, in a front yard, or at any location readily perceivable from the street or by a driver in a passing car. When the deliverer crosses a customer geofence (e.g., Cx geofence158ofFIG.10), i.e., approaches a delivery destination, the location beacon100can be triggered by a platform server144ofFIG.7to illuminate and indicate its location to the deliverer. It is noted that while visual signals are used in examples described herein, any applicable forms of signals (e.g., visual, audible, etc.) that can be perceived by a human can be used to provide direction information. In some implementations, the location beacon100can be enclosed in a weatherproof housing110(e.g., a weatherproof case) having one or more external facing surfaces of the location beacon100. The housing110of the location beacon100can also include an opaque light diffusing window to provide sufficient illumination to attract the attention of the deliverer. In some examples, the other surfaces of the location beacon100can also be opaque to signal the customer as well as the deliverer. The housing110of the location beacon100can be made of plastic, Polyethylene Terephthalate, High-Density Polyethylene, Polyvinyl Chloride, Low-Density Polyethylene, or any other material suitable for the intended purpose and understood by a person of ordinary skill in the art. In other implementations, the attachment extensions112can include apertures114that can be configured to receive the suction cups116. In some aspects, the location beacon100can support multiple mounting methods, indoors or outdoors, such that the location beacon100can be positioned on a window136as shown inFIG.3(e.g., by utilizing the suction cups116), in a front yard142as shown inFIG.6(e.g., by utilizing a stake140ofFIG.6), on a door, or at any other location suitable for the intended purpose of being seen from the street or from a moving car. WhileFIGS.1A and1Billustrate the location beacon100as having two sets of attachment extensions112and suction cups116, other combinations and variations are contemplated in this disclosure.
For example, a larger sized location beacon100can include more sets of attachment extensions112and suction cups116. In some examples, the LED strip118of the location beacon100can further pulse, show an animated pattern, or display a specific color that is readily visible to the approaching deliverer. By illuminating, the location beacon100eliminates the need for the deliverer to search for the delivery destination address (e.g., a house number). The location beacon100can also simultaneously act as a signal to a customer that a delivery is approaching. For example, when the deliverer enters the customer's geofence, the location beacon100can begin to flash or illuminate, indicating to the customer that the deliverer is nearby. In some implementations, the illuminated indication provided by the LED strip118of the location beacon100can be accomplished by one or more multi-colored LEDs that can be controlled in brightness and color, allowing for multiple colors, sequences, and animations (e.g., in the case of a multi-element display). Animations of a multi-segment display can also convey the readiness of an order or distance from the delivery destination, i.e., used as a progress bar: the more bars that are illuminated, the closer the deliverer is to the location beacon100. Moreover, whileFIG.1Aillustrates the LED strip118as having 24 LED pixels, more or fewer LED pixels can be utilized. For example, a larger sized location beacon100can have more LED pixels, or, in an application where brighter illumination is needed, more LED pixels can be included in the location beacon100. In other examples, the transceiver122of the location beacon100can connect the location beacon100wirelessly, either through a persistent cellular, Wi-Fi, other wireless, or Bluetooth Low Energy (BLE) connection to the internet directly or through an intermediate mobile device. For example, the location beacon100can receive an illumination trigger from the platform server144over the internet from a wireless connection. In some examples, the location beacon100can be connected to the internet via a cell tower, base station, or other country-wide wireless network suitable for the intended purpose and understood by a person of ordinary skill in the art. In some aspects, the processor124as described herein can include a microcontroller, a local processor, or any other processor suitable for the intended purpose and understood by a person of ordinary skill in the art. The processor124of the location beacon100can further be configured to control color, animation, and timing of the illuminations presented by the LED strip118. Once the delivery is completed, the platform server144can trigger the location beacon100to disengage and turn off. In other implementations, the location beacon100can be utilized at a merchant location where multiple orders may need to be differentiated from one another. For example, if a restaurant has multiple orders, a location beacon100can be utilized for each order. Each of the location beacons100can then be associated with a corresponding color that differentiates itself from the other orders' location beacons100. In this example, when the deliverer arrives to collect the order, the deliverer can be provided with a color on their mobile application via the platform server144that is associated with the order to be picked up. For example, the deliverer can receive the color blue on their mobile application from the platform server144that is associated with the order to be picked up from the corresponding restaurant.
When the deliverer enters the geofence of the restaurant, the platform server144can send a signal (e.g., instructions) to the location beacon100to activate the LED strip118with the color blue, which is the color that was provided to the mobile application of the deliverer. In this example, the deliverer can readily determine which order to collect among the many orders sitting on a counter of the restaurant. In some implementations, when many location beacons100are present next to one another, different variations of illumination may be utilized. For example, instead of a solid color, the location beacon100may flash, pulse, stream, intensify from side to side, animate, or provide any other type of illumination to differentiate itself from other location beacons100proximate to its location. In other examples, the location beacons100can also guide merchants to a customer's drop off location at a curbside zone. For example, the customer can have a location beacon100positioned on a windshield of their vehicle that can illuminate a signal to the merchant. The location beacons100can also guide merchants to stage items at a locker or a cubby (e.g., a smart shelf). In some implementations, the location beacons100can be utilized by a merchant to signal to a deliverer that an order is ready for pick up at a counter or a locker/cubby (e.g., smart shelf). The location beacon100can also notify a deliverer where to collect items when there are multiple pick up stations. The location beacon100can also assist the deliverer by providing a drop off point of goods to be delivered. In other implementations, multiple location beacons100can be “strung” together (e.g., as breadcrumbs) to lead the deliverer along a predetermined path from parking to collection to indoor navigation. For example, multiple location beacons100can be positioned in a line that leads to the front door of the restaurant and illuminate when the deliverer enters a geofence of the restaurant. A color/pattern can be assigned to a specific order for use from collection through drop off. In some implementations, the location beacon100can guide a customer via a mobile application on a user equipment to collect an order from/within an autonomous vehicle. The location beacon100can further notify a customer when an order is ready for pickup at a counter or a locker/cubby (e.g., smart shelf). The location beacon100can also inform a customer on the status of their order and whether it is delayed or canceled. For example, if an order is delayed or canceled, the platform server144can relay a message to a mobile application of the customer's user equipment via the location beacon100or provide the message directly to the customer's user equipment. In some examples, the location beacon100can assist a customer in finding their order in a locker/cubby (e.g., smart shelf). The location beacon100can also guide a customer to the correct spot at a curbside zone. The location beacon100can further notify a customer if a deliverer is attempting to reach them. For example, the location beacon100can flash red to indicate to the customer that the deliverer is attempting to call or text the customer.
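In the restaurant example above, the platform server144 essentially keeps a mapping from each open order to a beacon and a color, shares that color with the deliverer's mobile application, and activates the matching beacon when the deliverer's geofence entry is reported. The following is a minimal sketch of that bookkeeping only; the class and method names are hypothetical, and the callables standing in for the beacon and courier-app channels are placeholders for whatever transport the platform actually uses.

import itertools

class OrderBeaconCoordinator:
    """Toy stand-in for platform-server logic that pairs orders with beacon colors."""
    _palette = itertools.cycle(["blue", "green", "purple", "orange"])

    def __init__(self, send_to_beacon, send_to_courier_app):
        self._send_to_beacon = send_to_beacon          # e.g., push over the beacon's WAN link
        self._send_to_courier_app = send_to_courier_app
        self._orders = {}                              # order_id -> (beacon_id, color)

    def assign(self, order_id, beacon_id, courier_id):
        color = next(self._palette)
        self._orders[order_id] = (beacon_id, color)
        self._send_to_courier_app(courier_id, "Look for the %s light for order %s" % (color, order_id))

    def on_geofence_entry(self, order_id):
        beacon_id, color = self._orders[order_id]
        self._send_to_beacon(beacon_id, {"action": "illuminate", "color": color})

    def on_delivery_completed(self, order_id):
        beacon_id, _ = self._orders.pop(order_id)
        self._send_to_beacon(beacon_id, {"action": "off"})

coordinator = OrderBeaconCoordinator(print, lambda courier, msg: print(courier, msg))
coordinator.assign("order-42", "beacon-7", "courier-1")
coordinator.on_geofence_entry("order-42")
coordinator.on_delivery_completed("order-42")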
In other implementations, the location beacon100can include additional inputs: a switch/button to activate the location beacon100; a geofence being entered (e.g., at drop off or at a merchant); a customer arriving at a merchant for pick up; a deliverer pick up; a deliverer arrival; a delivery being completed; on-time status (e.g., delta to predicted time); a deliverer call to the customer; location awareness for the location beacon100; and a microphone. The location beacon100can further include software intelligence such as customized parameters for when and what to trigger, when to turn off, etc. In some examples, the location beacon100can include additional outputs: if this, then that (IFTTT) integration with other hardware and external software; LED lights (e.g., color, animation, live progress, etc.); and sound (e.g., alerts). The location beacon100can further support mesh self-organizing networks that can connect multiple close proximity devices to a single IoT node. In some implementations, the location beacon100can further include inertial navigation (e.g., an inertial measurement unit (IMU)), a triangulation spatial awareness grid, 3D location services, inventory management (e.g., 3D scan of produce and software as a service (SaaS)), and scales. FIG.2illustrates an example location beacon100along with a wireless remote control126, according to some aspects of the disclosed technology. In some implementations, the location beacon100can be controlled by a wireless remote controller126. For example, where global positioning system (GPS) capabilities are insufficient in a given area to provide a geofence to a customer, the customer may utilize the wireless remote control126to activate the location beacon100. The location beacon100may further include a switch (not shown) to manually activate and deactivate the location beacon100. FIG.3illustrates a side view of an example location beacon100attached to a surface136(e.g., a window, a wall, etc.), according to some aspects of the disclosed technology. In some implementations, the location beacon100may be fastened to an interior side of a home window by utilizing the suction cups116on the home window. By positioning the location beacon100as shown inFIG.3, the light138illuminated by the LED strip118will project to the outside of the house to catch the attention of the deliverer. FIG.4illustrates an example location beacon100along with a USB micro connection132, according to some aspects of the disclosed technology. In some implementations, the location beacon100can include an interface130(e.g., a micro USB interface) to connect a battery128of the location beacon100to a power source (e.g., a 110 V AC wall power source). In some implementations, the location beacon100can further include a battery128, which may be a non-rechargeable or rechargeable battery, and can be powered via the internal battery128, an external power source, or a solar cell134ofFIG.5. FIG.5illustrates an example location beacon100along with a solar panel134, according to some aspects of the disclosed technology. In some implementations, the location beacon100can include a solar panel/cell134to power the battery128of the location beacon100. As shown inFIG.5, even though the solar panel134is positioned inside of the house, the window136allows sunrays to pass through and charge the solar panel134. FIG.6illustrates an example location beacon100along with a stake140, according to some aspects of the disclosed technology.
In some implementations, the location beacon100can include a stake140that can position the location beacon100on a lawn142. For example, the front door or front side of the customer's house may not be readily visible from the street. In such an example, the customer can stake the location beacon100further out from their house at a location that is visible to the deliverer. In some examples, the stake140and the housing110of the location beacon100can be one homogenous unit. FIG.7illustrates an example location beacon100having a WAN receiver148, according to some aspects of the disclosed technology. In some implementations, the location beacon100can include a wide area network (WAN) receiver148, a local processor124, and an LED display118. The location beacon100can further communicate with the platform server144via a WAN connection146(e.g., a cellular connection, a low power WAN protocol (e.g., LoRa), etc.). In other implementations, a user or the platform server144can associate a unique appliance serial number of the location beacon100with a user account, and the location beacon100can be connected to the platform server144. The platform server144can also provide low bandwidth, IoT cell service plans to location beacons100to provide wireless internet access so that the location beacon100can communicate with a user or the platform server144. In some examples, the location beacon100can be connected to the internet via a low bandwidth cellular connection or other country-wide wireless WAN services. A user can onboard the location beacon100by entering a serial number or taking a photo of a QR code on the location beacon100. An appliance serial number of the location beacon100can be paired with an account on the platform server144and establish a control path with the platform server144. A specific appliance ID can also be paired with a specific address within an account with multiple addresses. FIG.8illustrates an example location beacon100having a Bluetooth receiver152, according to some aspects of the disclosed technology. In some implementations, the location beacon100can include a Bluetooth receiver152, a local processor124, and an LED display118. The location beacon100can further communicate with the platform server144via a mobile application150on a user equipment of a customer. In other implementations, the location beacon100can be paired with a customer's mobile phone and a mobile application150via Bluetooth Low Energy (BLE). An intermediate repeater device may also be implemented to extend BLE range. FIG.9illustrates an example location beacon100having a Wi-Fi and Bluetooth receiver122, according to some aspects of the disclosed technology. In some implementations, the location beacon100can include a Wi-Fi/Bluetooth receiver122, a local processor124, and an LED display118. The location beacon100can further communicate with the platform server144via a Wi-Fi local area network (LAN)146. The location beacon100can also utilize the Bluetooth receiver122for an initial Wi-Fi onboarding via a mobile application150on a user equipment of a customer. In some examples, a user's wireless internet connectivity (e.g., Wi-Fi) can provide internet connectivity to the location beacon100. For example, the location beacon100can include an onboarding process involving entering network credentials into the location beacon100, thereby providing internet connectivity to the location beacon100.
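At its core, the onboarding described forFIG.7is a pairing of an appliance serial number (entered manually or decoded from a QR code) with a user account and, optionally, a specific address within that account. The sketch below illustrates only that record-keeping step; the registry class, its in-memory storage, and the assumption that the QR code simply encodes the serial number are hypothetical and not part of the disclosed system.

class BeaconRegistry:
    """Toy account-side registry pairing beacon serial numbers with accounts/addresses."""
    def __init__(self):
        self._pairings = {}   # serial_number -> {"account": ..., "address": ...}

    def onboard(self, serial_number: str, account_id: str, address: str = None):
        """Pair a beacon with an account (and optionally one address of that account)."""
        self._pairings[serial_number] = {"account": account_id, "address": address}
        return self._pairings[serial_number]

    def onboard_from_qr(self, qr_payload: str, account_id: str, address: str = None):
        """Assume the QR code simply encodes the appliance serial number."""
        return self.onboard(qr_payload.strip(), account_id, address)

    def beacon_for_address(self, account_id: str, address: str):
        """Find the beacon paired with a specific address within an account."""
        for serial, info in self._pairings.items():
            if info["account"] == account_id and info["address"] == address:
                return serial
        return None

registry = BeaconRegistry()
registry.onboard_from_qr(" SN-0042 ", "customer-123", "12 Elm St")
print(registry.beacon_for_address("customer-123", "12 Elm St"))   # -> SN-0042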
In other examples, the onboarding process can include entering network credentials via Bluetooth from a mobile application or via a USB connection with a user equipment (e.g., a computer, a laptop, or a tablet). In another example, the user can connect to the location beacon100via a mobile application on a user equipment (e.g., a smartphone, computer, tablet) that utilizes Bluetooth Low Energy (BLE). The location beacon100can utilize both types of connectivity (e.g., Wi-Fi and BLE) to maintain a connection with the user equipment or the platform server144. In other examples, a user equipment and a mobile application150can connect to the location beacon100via BLE to specify credentials to connect to a local Wi-Fi network. Once initial onboarding is complete, the user equipment is not necessary and the location beacon100can be connected with full internet access via a Wi-Fi connection. FIG.10illustrates an example diagram of activating a location beacon100, according to some aspects of the disclosed technology. In some implementations, a virtual Cx (e.g., customer) geofence zone158can be established around a customer delivery destination (Cx)156. A deliverer (Dx)154approaching the Cx destination156can cross the geofence158. Crossing of the geofence158by the deliverer154can trigger a signal to be sent to the location beacon100of the customer156, which can then illuminate. The location beacon100can illuminate an animation, blink, or present a specific color, which makes it easier for the deliverer154to spot the customer's location156from their vehicle, thereby guiding the deliverer154to the customer156in a more efficient manner than having to look for a house number or door. The location beacon100can also be illuminated the entire time the deliverer154is within the geofence158, and until the location beacon100is triggered to be turned off by the delivery being marked as “delivered” in the platform144. FIG.11illustrates an example diagram of utilizing location beacons170,172,176,178,182,184in a commercial setting1100, according to some aspects of the disclosed technology. In some implementations, the commercial setting1100can include workers164positioned at two stations to place and retrieve products from shelves166: an induction station160and a pick and pack station162. At the induction station160, the workers164can travel along path168by following illuminated location beacon170(e.g., an aisle marker) to illuminated location beacon172(e.g., a shelf marker) to place a product on the shelf166. At the pick and pack station162, the workers164can travel along paths174,180by following illuminated location beacons176,182(e.g., aisle markers) to illuminated location beacons178,184(e.g., shelf markers) to retrieve a product from the shelf166. In other implementations, location devices170,172,176,178,182,184can be utilized throughout a commercial setting (e.g., DashMart) to single out (or track) a specific product/SKU on a shelf for a shopper or deliverer to collect, also referred to in this disclosure as a pick-to-light system. Using the location devices170,172,176,178,182,184in this application can provide a low cost and efficient solution to the problem of being unable to find a product, which can also be maintained by the platform server144. FIG.12illustrates an example diagram of a location beacon system1200including a plurality of location beacons170,172A,172B,172C that are associated with corresponding SKUs, according to some aspects of the disclosed technology.
FIG.12illustrates an example diagram of a location beacon system1200including a plurality of location beacons170,172A,172B,172C that are associated with corresponding SKUs, according to some aspects of the disclosed technology. In some implementations, a group of SKUs186can be linked to two types of marker lights, aisle marker170and shelf marker172A/172B/172C. The group of SKUs186can include various SKUs. For example, aisle marker170can be utilized for macro navigation, and shelf markers172A,172B,172C can be positioned within the aisle to provide a more specific location. A subset of SKUs188A/188B/188C can also be linked to each of shelf markers172A,172B,172C. For example, a shelf marker172A,172B,172C can direct a worker164to a general area where a cluster of products (e.g., a subset of SKUs188A,188B,188C) may be located. The location beacon system1200can be based on linking a marker serial number to the SKUs186within a database of the location beacon system1200. In some examples, the location beacon system1200can leverage hardware features such as wireless connectivity (e.g., LAN and WAN), platform integration, and RGB multi-pixel LEDs (e.g., colors and animations).

Having disclosed some example system components and concepts, the disclosure now turns toFIG.13, which illustrates a flowchart of a method1300for utilizing a signaling location beacon (e.g., location beacon100as previously described). The method1300shown inFIG.13is provided by way of example, as there are a variety of ways to carry out the method. Additionally, while the example method1300is illustrated with a particular order of steps, those of ordinary skill in the art will appreciate thatFIG.13and the modules shown therein can be executed in any order and can include fewer or more modules than illustrated. Each module shown inFIG.13represents one or more steps, processes, methods or routines in the method1300.

At step1302, method1300can include determining when a user equipment of a deliverer crosses a geofence associated with a customer account. As will be discussed in greater detail later, such geofence crossing can be part of an interaction between a signaling location beacon, otherwise referred to as a "physical beacon" (e.g., location beacon100as illustrated inFIGS.1A-10), and the deliverer. At step1304, method1300can include providing a first set of instructions to a location beacon to activate a lighting signal to indicate a position of the location beacon. For example and as will be discussed in greater detail later, the location beacon can be illuminated based on the interaction to provide a sensory signal to the delivery person. At step1306, method1300can include providing a second set of instructions to the location beacon to deactivate the lighting signal once delivery has been completed. Alternatively, any sensory signal that is provided to the delivery person based on the interaction can be deactivated. Such deactivation can occur based on the passing of a specific amount of time or based on completion of item delivery or pickup.
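A minimal, platform-side Python sketch of the method1300flow described above might look like the following; the controller, transport, and command names are assumptions for illustration only, and the geofence-crossing test that produces the triggering event is sketched separately further below.

# Sketch of the method 1300 activate/deactivate flow (names are assumptions).

class BeaconController:
    """Platform-side controller that turns a beacon's lighting signal on and off."""

    def __init__(self, beacon_serial: str, transport):
        self.beacon_serial = beacon_serial
        self.transport = transport      # e.g., a WAN/LAN control-path client
        self.lit = False

    def on_geofence_crossed(self) -> None:
        # Step 1304: first set of instructions to activate the lighting signal.
        if not self.lit:
            self.transport.send(self.beacon_serial, {"action": "activate", "pattern": "blink"})
            self.lit = True

    def on_delivery_completed(self) -> None:
        # Step 1306: second set of instructions to deactivate the lighting signal.
        if self.lit:
            self.transport.send(self.beacon_serial, {"action": "deactivate"})
            self.lit = False


class PrintTransport:
    """Stand-in transport that just logs commands for this sketch."""

    def send(self, serial, command):
        print(f"-> {serial}: {command}")


controller = BeaconController("BCN-000123", PrintTransport())
controller.on_geofence_crossed()     # deliverer crosses the geofence (step 1302)
controller.on_delivery_completed()   # order marked "delivered"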
FIG.14illustrates a flowchart for an example method1400of facilitating transmission of a sensory signal of direction information to a delivery person based on an interaction between the delivery person and a physical beacon, according to some aspects of the disclosed technology. The method1400shown inFIG.14is provided by way of example, as there are a variety of ways to carry out the method. Additionally, while the example method1400is illustrated with a particular order of steps, those of ordinary skill in the art will appreciate thatFIG.14and the modules shown therein can be executed in any order and can include fewer or more modules than illustrated. Each module shown inFIG.14represents one or more steps, processes, methods or routines in the method1400.

At step1410, a physical location, e.g. a delivery location, for transferring a delivery from a delivery person to a customer is determined. The physical location can be a location of the customer or a rendezvous point for picking up the delivery. For example, the physical location can be a home or office space of a customer. A delivery, as used herein, includes an item that can be transported and transferred to a customer. More specifically, a delivery can include an item that can be picked up, transported to a location, and transferred to a customer at the location. For example, a delivery can include food items that are picked up from a restaurant and transferred to a customer. In another example, a delivery can include a grocery item that is picked up from a merchant and transferred to a customer.

At step1420, a physical beacon associated with the physical location is identified. A physical beacon can include an applicable physical device for facilitating the transmission of signals to a delivery person for guiding the delivery person to a location, such as the beacons described herein. The physical beacon can be used in facilitating transmission of a sensory signal to a delivery person, as will be described in greater detail later. A sensory signal, as used herein, can include a signal that is capable of being perceived by a human, e.g. the delivery person. Specifically, a sensory signal can include a signal that is capable of being perceived by the delivery person while the delivery person is involved in a step associated with either or both picking up the delivery and transferring the delivery to the customer. For example, a sensory signal can include a visual signal that can be seen by the delivery person while they are delivering a package. In another example, a sensory signal can include an auditory signal that can be heard by the delivery person when they are picking up an item for delivery.

The physical beacon can be associated with the physical location based on a position of the beacon in relation to the physical location. Specifically, the physical beacon can be at the physical location. For example, the physical beacon can illuminate at the physical location to signify the presence of the physical location. Further, the physical beacon can be in proximity to the physical location but not actually at the physical location. For example, the physical beacon can illuminate to indicate a direction of the physical location. Additionally, the physical beacon can be part of a plurality of physical beacons that are at the physical location or in proximity to the physical location. For example, the physical beacon can be part of a plurality of physical beacons that form a trail to or towards the physical location. Further in the example, the physical beacons can be illuminated to create a visible trail to or towards the physical location.

The physical beacon can be physically placed at a position in relation to the physical location as part of being associated with the physical location. For example, the physical beacon can be placed in a front yard of a house in order to facilitate the transfer of deliveries at the house. In another example, the physical beacon can be placed at an entrance of a street to facilitate the transfer of deliveries to one or more houses on the street. Additionally, the physical beacon can be associated with a customer, e.g.
for transferring deliveries to the customer. For example, the physical beacon can be associated with a customer at a home location and used in facilitating transfer of deliveries to the customer at the home location. Additionally, the physical beacon can be associated with a plurality of different customers. For example, the physical beacon can be at an office location and associated with different customers at the office location. As follows, when the different customers place orders for deliveries, the physical beacon can be used in facilitating transfer of the deliveries to the different customers at the office location.

The physical beacon can be associated with one or more customers based on an anticipated transfer of one or more deliveries to the one or more customers, e.g. at a physical location that is associated with the physical beacon. For example, the physical beacon can be physically placed at a home of a customer and associated with the customer, e.g. based on being physically placed at the home of the customer. In turn, the physical beacon can be used in facilitating transfer of deliveries to the customer at the home based on the association of the beacon with the customer. In being associated with a plurality of customers for facilitating transfer of different deliveries to the customers, the physical beacon can be customer agnostic. Customer agnostic, as used herein, means that the physical beacon can be associated with a plurality of different customers without being specific to each of the customers. In particular, any of the customers that are associated with the physical beacon can use the physical beacon to facilitate transfer of different deliveries in a shared manner, e.g. potentially at the same time.

A peripheral device to the physical beacon can be used in associating the physical beacon with either or both a physical location and a customer. For example, a GPS-enabled device can be operated in relation to the physical beacon to determine and/or input a physical location of the physical beacon. In another example, a delivery application-enabled device can be operated to scan or otherwise detect the physical beacon and associate the beacon with a customer. More specifically, the physical beacon can be onboarded to a network, e.g. through an applicable connection to the network, and a customer can be associated/paired with the physical beacon, e.g. in a delivery application, through the network.

Returning to the flowchart shown inFIG.14, at step1430, an interaction is detected between the physical beacon and the delivery person. An interaction, as used herein, includes an event that occurs based on the physical beacon and the delivery person being in physical proximity to each other. Physical proximity, as used herein with respect to an interaction, can include the physical beacon and the delivery person being separated by a physical distance such that the physical beacon can be used to guide the delivery person to a specific location. For example, the physical beacon and the delivery person can be close enough so that the delivery person can view the physical beacon as part of the beacon guiding the delivery person to a specific location. In another example, the physical beacon and the delivery person can be close enough so that the beacon can trigger the transmission of a sensory signal through a device used by the delivery person.
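One simple way to model the proximity-based interaction described above is a distance test between the deliverer's reported position and the beacon's position against a configurable radius; the following Python sketch uses assumed coordinates and an illustrative 50-meter boundary, and the function names are not part of the disclosure.

# Sketch: treat an "interaction" as the deliverer's device coming within a
# configurable radius (a geofence) of the physical beacon (assumed values).
from math import radians, sin, cos, asin, sqrt


def distance_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))


def interaction_detected(deliverer_pos, beacon_pos, radius_m=50.0) -> bool:
    """True once the deliverer's device is inside the beacon's geofence."""
    return distance_m(*deliverer_pos, *beacon_pos) <= radius_m


# Example: a deliverer a few tens of meters from the beacon triggers an interaction.
print(interaction_detected((37.7750, -122.4195), (37.77525, -122.41925)))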
An interaction between the physical beacon and the delivery person can occur during a delivery process in which the delivery person is transferring a delivery to a customer. Further, an interaction between the physical beacon and the delivery person can occur during a pickup/collection process in which the delivery person is picking up an item to be delivered to a customer. An interaction between the physical beacon and the delivery person can be triggered based on a boundary that is defined with respect to the physical beacon, otherwise referred to as a geofence. Specifically, an interaction between the physical beacon and the delivery person can be initiated when the delivery person crosses a boundary surrounding the physical beacon. For example, when the delivery person comes within a boundary defined as 50 meters away from the physical beacon, an interaction between the physical beacon and the delivery person can be triggered. While reference is made to the interaction occurring between the delivery person and the physical beacon, an interaction can actually occur between a device of the delivery person and the physical beacon. More specifically, an interaction can occur between the physical beacon and a device that is physically near the delivery person to serve as an accurate representation or approximation of the location of the delivery person. For example, a smart device in a car driven by the delivery person can serve to trigger an interaction with the physical beacon.

At step1440, transmission of a sensory signal to the delivery person is facilitated in response to the interaction occurring between the physical beacon and the delivery person. The sensory signal can provide perceivable direction information associated with the physical location to the delivery person. The direction information can include applicable information that can be perceived by the delivery person to provide directions to the physical location. Specifically, the direction information can include information that provides directions to the physical location in relation to a location of the physical beacon. For example, the direction information can inform the delivery person that the physical beacon is at the delivery location. In another example, the direction information can indicate, to the delivery person, a direction to the physical location, e.g. in relation to the physical beacon. The sensory signal can be transmitted to the delivery person during a delivery process in which the delivery person is transferring a delivery to a customer or during a pickup/collection process in which the delivery person is picking up an item to be delivered to a customer.

In facilitating transmission of the sensory signal to the delivery person, the physical beacon itself can generate and transmit the sensory signal to the delivery person. The sensory signal can be generated and/or transmitted by the physical beacon in one or more applicable forms. For example, the physical beacon can illuminate to create a visual signal that is viewable by the delivery person. In another example, the physical beacon can generate an auditory signal that the delivery person can hear. Further, a device that is in proximity to the delivery person and separate from the physical beacon can be controlled to generate and transmit the sensory signal to the delivery person.
Specifically, instructions to generate a visual representation of directions to the physical location can be sent to a smart device that is viewable by the delivery person. As follows, the smart device can generate the visual representation to provide the directions to the delivery person. The physical beacon can be used in controlling the device that is in proximity to the delivery person to generate and transmit the sensory signal to the delivery person. For example, the physical beacon can transmit control instructions for generating the sensory signal to a smart phone in proximity to the delivery person. In controlling the device to generate and transmit the sensory signal, the physical beacon can either send the control instructions to the device over a LAN or a WAN. For example, the physical beacon can be connected to the device over a short range wireless connection and transmit the instructions to the device over the short range wireless connection. In another example, the physical beacon can be connected to the device over a WAN and transmit the instructions to the device through the internet. The sensory signal can be transmitted to the delivery person in response to both the interaction occurring between the physical beacon and the delivery person and an association of the physical beacon with a specific customer. In particular, the sensory signal can be transmitted to the delivery person based on the interaction and the physical beacon being associated with a specific customer who is the subject of the delivery. As a result, a number of interaction events can occur between different delivery drivers and the physical beacon. However, the delivery people can be delivering items to customers who are not associated with the physical beacon, e.g. the physical beacon is not indicative of the physical locations of the customers. Accordingly, even though the interactions do occur with the physical beacon, sensory signals are not transmitted based solely on these interactions. This is particularly advantageous in dense traffic environments where a number of delivery people pass in proximity to deployed physical beacons. The sensory signal can be generated and transmitted to the delivery person based on the location of the physical beacon in relation to the physical location. Specifically, variable characteristics of the sensory signal can be selected based on the location of the physical beacon in relation to the physical location. For example, the color of the physical beacon can be adjusted as a delivery person approaches the delivery location. In another example, a specific color can be used to illuminate the physical beacon if the physical beacon is actually at a delivery location. Likewise, a different color can be used to illuminate the physical beacon if the physical beacon is positioned away from the physical location, e.g. as part of a group of physical beacons that form a path towards the physical location. The physical beacon can be location aware. Specifically, the physical beacon can include hardware and/or software for identifying a position or location of the physical beacon, e.g. in relation to the delivery location or location for picking up a delivery item. As follows, the physical beacon can generate and transmit the sensory signal based on the self-identified location of the physical beacon. For example, the physical beacon can identify that the current location of the beacon is 50 feet north of a delivery location. 
Further in the example, the physical beacon can generate a sensory signal that points south for directing a delivery person to the delivery location.

FIG.15illustrates an example schematic diagram of a physical beacon1500, according to some aspects of the disclosed technology. The physical beacon1500can be used in facilitating delivery of an item to a delivery location according to the various embodiments described herein. Further, the physical beacon1500can be used in facilitating collection of an item for delivery according to the various embodiments described herein. The physical beacon1500includes an interaction detector1502, a signal controller1504, and optionally a signal transmitter1506.

The interaction detector1502includes an applicable system for detecting an interaction between the physical beacon1500and a delivery person. Specifically, the interaction detector1502includes either or both software and hardware for detecting an interaction that occurs between the physical beacon1500and a delivery person based on proximity of the delivery person to the physical beacon1500. For example, the interaction detector1502can include a sensor configured to sense that a delivery person is within proximity of the physical beacon1500based on a signal received from a device associated with the delivery person.

The signal controller1504controls generation and transmission of a sensory signal to the delivery person. Specifically, the signal controller1504controls generation and transmission of a sensory signal to the delivery person based on an interaction between the physical beacon1500and the delivery person. In controlling generation and transmission of a sensory signal to the delivery person, the signal controller1504can control a device in proximity to the delivery person through a WAN or LAN. For example, the signal controller1504can cause a delivery application running on a smart device of the delivery person to generate a sensory signal and transmit the sensory signal to the delivery person.

The signal transmitter1506includes one or more systems for generating and transmitting a sensory signal from the physical beacon1500to the delivery person. In particular, the signal transmitter1506includes either or both software and hardware for generating and transmitting a sensory signal from the physical beacon1500to the delivery person. The signal transmitter1506can generate and transmit a sensory signal based on instructions received from the signal controller1504.
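The component structure of the physical beacon1500described above could be sketched in code as follows; the class and method names are assumptions chosen for illustration and do not represent the disclosed design.

# Sketch of the physical beacon 1500's components (assumed names): an
# interaction detector, a signal controller, and an optional signal transmitter.

from typing import Optional


class InteractionDetector:
    """Detects proximity-based interactions, e.g. from a signal sent by the
    delivery person's device."""

    def detect(self, nearby_device_ids) -> bool:
        return len(nearby_device_ids) > 0


class SignalTransmitter:
    """Generates the sensory signal on the beacon itself (light, sound, etc.)."""

    def emit(self, form: str) -> None:
        print(f"beacon emits {form} signal")


class SignalController:
    """Decides when and how a sensory signal is generated, either on the beacon
    or on a device in proximity to the delivery person."""

    def __init__(self, transmitter: Optional[SignalTransmitter] = None):
        self.transmitter = transmitter

    def on_interaction(self) -> None:
        if self.transmitter is not None:
            self.transmitter.emit("visual")
        else:
            # e.g., instruct a delivery application on a nearby smart device instead
            print("send signal instructions to nearby device over LAN/WAN")


class PhysicalBeacon:
    def __init__(self):
        self.detector = InteractionDetector()
        self.controller = SignalController(SignalTransmitter())

    def handle_location_report(self, nearby_device_ids) -> None:
        if self.detector.detect(nearby_device_ids):
            self.controller.on_interaction()


PhysicalBeacon().handle_location_report(["deliverer-device-42"])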
FIG.16illustrates a flowchart for an example method1600of facilitating transmission of a sensory signal of direction information to an agent based on an interaction between the agent and a physical beacon, according to some aspects of the disclosed technology. The method1600shown inFIG.16is provided by way of example, as there are a variety of ways to carry out the method. Additionally, while the example method1600is illustrated with a particular order of steps, those of ordinary skill in the art will appreciate thatFIG.16and the modules shown therein can be executed in any order and can include fewer or more modules than illustrated. Each module shown inFIG.16represents one or more steps, processes, methods or routines in the method1600.

At step1610, a collection location for an item that needs to be picked up by an agent can be determined. The item can be for a delivery, which needs to be picked up by an agent and transferred to a customer. In some examples, the item that needs to be picked up can be any merchandise, grocery items, or a food order. In other examples, the item can be a product in a warehouse or a storehouse. As follows, the collection location can be a grocery store, a restaurant, a warehouse, a store, or any business place for a merchant.

At step1620, a physical beacon associated with the collection location can be identified. A physical beacon can include an applicable physical device for facilitating the transmission of signals to an agent for guiding the agent to the collection location. More specifically, the physical beacon can include a location beacon as described herein and as illustrated inFIGS.1A-10. The physical beacon can be used in facilitating transmission of a sensory signal to an agent, as will be described in greater detail later. A sensory signal, as previously described with respect toFIG.14, can include a signal that can be perceived by a human, e.g., the agent. Specifically, a sensory signal can include any type of signal, visual (e.g., lights) or audible (e.g., sirens, bells, etc.), that is perceivable by the agent while the agent is involved in a step associated with either or both collecting the item and transferring the item to the customer.

The physical beacon can be associated with the collection location based on a position of the beacon in relation to the collection location. Specifically, the physical beacon can be at the collection location. Further, the physical beacon can be in proximity to the collection location but not actually at the collection location. For example, the physical beacon can illuminate to indicate a direction of the collection location. Additionally, the physical beacon can be part of a plurality of physical beacons that are at the collection location or in proximity to the collection location. The plurality of physical beacons can form a trail to or towards the collection location. For example, as illustrated inFIG.12, the physical beacon can be an aisle marker to guide the agent to a general area where the item for collection is located. In another example, as illustrated inFIG.12, the physical beacon can be a shelf marker where the item is placed, to guide the agent to the collection location for the item.

In some implementations, the physical beacon can be associated with one or more items with an identical SKU. For example, in a warehouse setting, the physical beacon can be more specifically associated with a specific SKU so that an agent can locate the item with the corresponding SKU based on the direction information provided by the physical beacon. The physical beacon can be linked to a SKU within a database that stores product information such as SKUs. In other implementations, the physical beacon can be associated with a plurality of items. The physical beacon can be used to indicate a general area where more than a single item is located. Further, the sensory signal can be specific to each of the plurality of items. For example, a different color can be used to indicate each item (e.g., red for Item A and yellow for Item B when Items A and B are located close enough together, for example on the same shelf, that a single physical beacon is used to indicate their locations).

At step1630, an interaction between the physical beacon and the agent can be detected. An interaction, as used herein, can include an event that occurs based on the physical beacon and the agent being in physical proximity to each other.
Physical proximity, as previously described with respect toFIG.14, can include the physical beacon and the agent being separated by a physical distance such that the physical beacon can be used to guide the agent to a specific location (e.g., a collection location). For example, the physical beacon and the agent can be close enough so that the agent can view the physical beacon as part of the physical beacon guiding the agent to a specific location. In another example, the physical beacon and the agent can be close enough so that the physical beacon can trigger the transmission of a sensory signal through a device used by the agent. An interaction between the physical beacon and the agent can be triggered based on a boundary that is defined with respect to the physical beacon (i.e., a geofence). Specifically, an interaction between the physical beacon and the agent can be initiated when the agent crosses a boundary surrounding the physical beacon, as illustrated inFIG.10. While reference is made to the interaction occurring between the agent and the physical beacon, an interaction can actually occur between a device of the agent and the physical beacon. More specifically, an interaction can occur between the physical beacon and a device that is physically near the agent to serve as an accurate representation or approximation of the location of the agent. For example, a smart device in a car driven by the agent or that is carried by the agent can serve to trigger an interaction with the physical beacon.

At step1640, transmission of a sensory signal to the agent is facilitated in response to the interaction between the physical beacon and the agent. As previously described, the sensory signal can provide perceivable direction information associated with the collection location to the agent. The direction information can include any applicable information that can be perceived by the agent to provide directions to the collection location, more specifically, to the collection location in relation to a location of the physical beacon. In facilitating transmission of the sensory signal to the agent, the physical beacon itself can generate and transmit the sensory signal to the agent. The sensory signal can be generated and/or transmitted by the physical beacon in one or more applicable forms.

The sensory signal can be specific to one or more agents. In particular, the sensory signal can be specific to one or more agents based on the interaction between physical beacon(s) and the one or more agents. For example, a certain sensory signal can be generated specifically for the agent in response to the interaction between the physical beacon and the agent. In generating a sensory signal that is specific to an agent, characteristics of the sensory signal can be modified to correspond to the agent. For example, signals of certain colors can be displayed for specific agents. This is advantageous in areas with high densities of physical beacons and agents to reduce confusion amongst agents and provide for efficient item collection. Further, a device in proximity to the agent and separate from the physical beacon can be controlled to generate and transmit the sensory signal to the agent. Specifically, instructions to generate a visual/audible representation of directions to the collection location can be sent to a smart device that is associated with the agent (e.g., viewable by the agent or sensed by the agent via a buzz, etc.).
As follows, the smart device can generate the representation of the directions to the collection location for the agent. The physical beacon can be used in controlling the device that is in proximity to the agent to generate and transmit the sensory signal to the agent. For example, the physical beacon can transmit control instructions for generating the sensory signal to a smart phone in proximity to the agent. In controlling the device to generate and transmit the sensory signal, the physical beacon can send the control instructions to the device over either a LAN or a WAN, as previously described with respect toFIG.14. For example, the physical beacon can be connected to the device over a short range wireless connection and transmit the instructions to the device over the short range wireless connection. In another example, the physical beacon can be connected to the device over a WAN and transmit the instructions to the device through the internet.

The sensory signal can be generated and transmitted to the agent based on the location of the physical beacon in relation to the collection location. Specifically, variable characteristics of the sensory signal (e.g., colors, blinking, brightness, a combination of visual and audible signals, etc.) can be selected based on the location of the physical beacon in relation to the collection location. For example, the color of the physical beacon can be adjusted as an agent approaches the collection location. In another example, a specific color can be used to illuminate the physical beacon if the physical beacon is actually at the collection location. Likewise, a different color can be used to illuminate the physical beacon if the physical beacon is positioned away from the collection location, e.g. as part of a group of physical beacons that form a path towards the collection location.

Further, the physical beacon can be location aware. Specifically, the physical beacon can include hardware and/or software for identifying a position or location of the physical beacon, e.g. in relation to the collection location. As follows, the physical beacon can generate and transmit the sensory signal based on the self-identified location of the physical beacon as previously illustrated with respect toFIG.14. Additionally, the physical beacon can transmit the sensory signal the entire time the agent is within the geofence. Once it is determined that the item has been picked up by the agent (e.g., via a confirmation signal), the physical beacon is triggered to be turned off or discontinues transmission of the sensory signal.

The disclosure now turns toFIG.17, which illustrates an example of a bus computing system1700wherein the components of the system are in electrical communication with each other using a bus1705. The computing system1700can include a processing unit (CPU or processor)1710and a system bus1705that may couple various system components including the system memory1715, such as read only memory (ROM)1720and random access memory (RAM)1725, to the processor1710. The computing system1700can include a cache1712of high-speed memory connected directly with, in close proximity to, or integrated as part of the processor1710. The computing system1700can copy data from the memory1715, ROM1720, RAM1725, and/or storage device1730to the cache1712for quick access by the processor1710. In this way, the cache1712can provide a performance boost that avoids processor delays while waiting for data.
These and other modules can control the processor1710to perform various actions. Other system memory1715may be available for use as well. The memory1715can include multiple different types of memory with different performance characteristics. The processor1710can include any general purpose processor and a hardware module or software module, such as module 11732, module 21734, and module 31736stored in the storage device1730, configured to control the processor1710as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor1710may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing system1700, an input device1745can represent any number of input mechanisms, such as a microphone for speech, a touch-protected screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device1735can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing system1700. The communications interface1740can govern and manage the user input and system output. There may be no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. The storage device1730can be a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memory, read only memory, and hybrids thereof. As discussed above, the storage device1730can include the software modules1732,1734,1736for controlling the processor1710. Other hardware or software modules are contemplated. The storage device1730can be connected to the system bus1705. In some embodiments, a hardware module that performs a particular function can include a software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor1710, bus1705, output device1735, and so forth, to carry out the function. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. 
Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on. Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures. Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims. Claim language reciting “at least one of” refers to at least one of a set and indicates that one member of the set or multiple members of the set satisfy the claim. For example, claim language reciting “at least one of A and B” means A, B, or A and B.
DETAILED DESCRIPTION The following description discloses several embodiments of a computer-implemented system for automatically analyzing large quantities of credit-related consumer data on a daily or other frequent basis in accordance with processing instructions that are customized to suit a client's promotional campaign. In a preferred embodiment, the system is configured to generate a daily list (or multiple lists per day) of consumer names and related data useful for efficiently executing an advertising campaign based on recent inputs to a database of consumer activity.

FIG.1Ais a block diagram that depicts a high-level overview of one embodiment of a system for automatically analyzing consumer data to generate prospect notifications based on trigger events. Credit-providers may want to offer credit to consumers with whom they do not currently have a business relationship. In particular, credit-providers would like to identify consumers who are both credit-worthy and currently interested in obtaining additional credit. The credit-providers undertake advertising campaigns, in which a specific type of credit offer is promoted, to reach such consumers. Although a credit-provider can currently purchase or otherwise acquire lists of consumer names and their contact information, these names may not represent likely prospects for the credit-provider's offers if the consumers on the list are not currently interested in obtaining additional credit. However, if the credit-provider is able to identify one or more consumer activities that signal a greater likelihood that a consumer is in the market for credit, and if the credit-provider is able to receive timely information about the occurrence of such consumer activities, then the credit-provider can more effectively make use of resources expended during a promotional campaign by targeting consumers identified as having recently been involved in such activities. In other words, automatic recognition of an occurrence of such an identified consumer activity may serve as a trigger to notify a credit-provider or other interested party that an identified consumer is currently a good prospect for their promotional offer.

A computer system that is capable of processing massive quantities of data with the speed needed to identify daily triggers provides an important informational advantage to a credit-provider or other client of the system. A credit-provider may possess such a computer system, or, more frequently, may become a client of a business entity that has access to massive computer resources and to credit-related information and that offers such daily prospect trigger notifications. Current government regulations that protect consumers from unwarranted financial surveillance and from unfair use of personal information may impose additional restrictions on the computer-implemented system for providing daily notifications based on prospect triggers. For example, current federal regulations require that a firm offer of credit be extended to every consumer whose name is included in a file that is generated by monitoring daily credit-related consumer activities.
In order to be of commercial value to credit-providers, while at the same time complying with government regulations, the computer-implemented prospect trigger notification system first analyzes stored consumer data, which is frequently about consumers with whom the credit-provider does not currently have a business relationship, in order to exclude those consumers who do not meet a set of criteria that define consumers to whom the client is willing to extend a firm offer of credit.

FIG.1Adepicts a trigger notification system100that receives one or more lists140,145which identify consumers who meet a client's set of pre-screen criteria for receiving a firm offer of credit. The trigger notification system100also receives, from an online database120that stores information about the credit-related activities of millions of consumers, a set of recent updates to the online database120. The trigger notification system100compares the list of identified consumers with the list of recent activity updates, as will be described in greater detail in the present disclosure, to generate prospect notifications based on trigger events.

As depicted inFIG.1A, an online database120stores data about a large population of consumers, for example, two hundred and sixty million consumers. The online database120dynamically receives and stores input from financial institutions122, from merchants124, from lenders126, and from government entities128around the clock. In other embodiments, the online database120may additionally or alternatively receive input from other sources. The credit-related input may include information associated with credit relationships, credit inquiries, and public records. For example, entries in the online database120may include information about: changes to account balances, account payment histories including notices of overdue accounts, credit rating inquiries, new lines of credit opened, credit line limit increases, credit line over-limits, address changes, judgments, liens, and bankruptcies. In one embodiment, the database120serves as a primary source of information for generating consumer credit ratings. The online database120may be implemented using one or more mainframe computers, mini-computers, personal computers configured as a server farm, or other suitably configured set of computers with sufficient storage and processing capacities. In a preferred embodiment, the online database120is configured as a relational database comprising a plurality of tables, as will be described in greater detail with reference toFIG.1B.

Information from the online database120is processed and used to generate a data warehouse130for a population of consumers. The information may represent a "snapshot" of the information in the online database120and may be periodically updated, such as monthly, weekly, twice weekly, or according to another desired schedule. The data warehouse130may process the data from the online database120, and may include additional data not found in the online database120, in order to facilitate in-depth analysis of the data for a variety of credit-related and other purposes without disturbing normal functioning of the online database120. For example, some or all of the data from the online database120may be verified for accuracy before being entered into the data warehouse130.
Additional information associated with individual consumers, such as demographic information, employment information, and other information of interest for credit-related purposes, may be added to the data warehouse130. In a preferred embodiment, the data warehouse130is implemented as a relational database, although data in the data warehouse130may be organized differently than data in the online database120. The data warehouse130may be implemented using one or more mainframe computers or other suitably configured set of computers with sufficient storage and processing capacities. Furthermore, although the online database120and the data warehouse130have each been depicted inFIG.1Aas a single, unified database, in various embodiments, one or both of the repositories120,130may be distributed across a variety of hardware and/or software components that may be located in one location or in a plurality of physical locations. One embodiment of the data warehouse130is described in the co-owned and co-pending U.S. patent application Ser. No. 11/103,659, filed on Apr. 11, 2005, and entitled SYSTEMS AND METHODS FOR OPTIMIZING DATABASE QUERIES, the disclosure of which is hereby incorporated herein by reference in its entirety.

As described above, a client160may be a business entity that wishes to undertake a sales campaign or other promotional activity. In order to generate a list of consumers with whom a credit-provider does not currently have a business relationship, but to whom the credit-provider is willing to make a firm offer of credit, a sub-population of interest is identified from the records of the data warehouse130. In some embodiments, the sub-population of interest may be identified in order to generate a list of existing customers with whom a new credit relationship is desired. For example, in a preferred embodiment, the client identifies a set of "pre-screen" criteria that define consumers who qualify for a firm offer of credit, such as for a car, home equity or other type of loan from the client. To continue the example, the client's pre-screen criteria may specify that consumers with credit ratings above a threshold value and who have had no repossessions on automobile loans are eligible for a firm offer of credit on an automobile loan. More frequently, clients may wish to specify pre-screen criteria that are much more complex in order to identify a desired target sub-set of the population.

Applying the client's pre-screen criteria to records in the data warehouse130generates a subset list140that includes a subset of consumer names from the data warehouse130, for example, fifty million consumers out of two hundred and thirty million, who meet the client's pre-screen criteria. The subset list140may be regenerated monthly, bi-weekly, or according to another periodic or other schedule, and may be based on an updated set of pre-screen criteria provided by the client160. In some jurisdictions, government regulations require that pre-screen lists140be updated at or above a minimum frequency, such as at least every thirty or ninety days, in order to ensure that consumers are being selected for credit or other types of offers based on credit-related data that is current.
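A minimal Python sketch of applying such pre-screen criteria to data warehouse records follows; the record fields, threshold value, and function names are assumptions that mirror the automobile-loan example above rather than the system's actual schema.

# Sketch: filter data warehouse records against a client's pre-screen criteria
# (assumed fields), mirroring "credit rating above a threshold and no
# repossessions on automobile loans".

def meets_pre_screen(record: dict, min_credit_rating: int = 700) -> bool:
    return (
        record.get("credit_rating", 0) > min_credit_rating
        and record.get("auto_loan_repossessions", 0) == 0
    )


def build_subset_list(warehouse_records) -> list:
    """Return the consumer identifiers (PINs) that satisfy the pre-screen criteria."""
    return [r["pin"] for r in warehouse_records if meets_pre_screen(r)]


warehouse_records = [
    {"pin": "0001", "credit_rating": 720, "auto_loan_repossessions": 0},
    {"pin": "0002", "credit_rating": 640, "auto_loan_repossessions": 0},
    {"pin": "0003", "credit_rating": 780, "auto_loan_repossessions": 1},
]
print(build_subset_list(warehouse_records))  # -> ['0001']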
The trigger notification system100, which is preferably implemented as a software package that is configured to run on general purpose computer equipment and to access records within the online database120, receives a copy of a client's pre-screen subset list140and a set of one or more trigger events146of interest to the client160. A trigger event is typically an event or occurrence that is logged in the online database120of daily consumer activities and that the client160wishes to use to identify consumers who may be actively shopping for specific products and/or services. For example, an inquiry regarding a consumer's credit score from a home mortgage provider may be an indication that the consumer is actively shopping for a mortgage. The trigger notification system100uses the subset list140and the client-provided set of trigger events146to monitor updates to the daily credit-related activity database120that are associated with consumers included in the pre-screen subset list140. A business entity that operates the trigger notification system100preferably serves many different clients160, each interested in conducting its own promotional campaign(s), with its own pre-screen criteria and resultant subset list140, as well as each with its own set of trigger events146and other campaign-related instructions. For ease of description, however, the descriptions of the systems and methods provided herein frequently refer to the client160in the singular. It will be appreciated that the business entity operating the trigger notification system100may provide the services described herein to a plurality of clients160at the same time. The trigger notification system100monitors updates to the online database120associated with consumers on the subset list140, as is described in greater detail with reference toFIGS.6A and6B. In particular, the trigger notification system100compares the updates from the online database120to the pre-screen subset list140and identifies those consumers from the subset list140who have been associated with a trigger event since a last monitoring of the online database120. In some embodiments, the trigger notification system100makes use of date-stamps on records in the online database120in order to identify newly-occurring trigger events. In other embodiments, the trigger notification system100maintains records for consumers on the subset list140, so that changes to a consumer's record may be noted. The trigger notification system100compiles a list150of consumers from the subset list140whose associated records in the online database120indicate a current trigger event. The trigger notification system100preferably provides the client160with a daily, or more frequent, list of names150triggered within a recent short period of time, such as within the last twenty-four hours, so that the client160may quickly make use of the information that the identified consumers are currently good prospects for an offer of credit. In various embodiments, if requested by the client, the daily list of triggered names150may include, in addition to consumer names, identification of the one or more trigger events that occurred with respect to each consumer, as well as other identifying and/or contact information for the consumers on the list150. 
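The core daily matching step described above, comparing recent database updates against the pre-screen subset and the client's trigger events, might be sketched as follows; the update format, event names, and function names are illustrative assumptions.

# Sketch: compile a daily list of triggered names by intersecting recent
# database updates with the pre-screen subset and the client's trigger events
# (assumed record format).

def compile_daily_triggered_list(recent_updates, subset_pins, trigger_events):
    """Return (pin, trigger_event) pairs for pre-screened consumers whose recent
    activity matches one of the client's trigger events."""
    subset = set(subset_pins)
    triggered = []
    for update in recent_updates:
        if update["pin"] in subset and update["event"] in trigger_events:
            triggered.append((update["pin"], update["event"]))
    return triggered


recent_updates = [
    {"pin": "0001", "event": "mortgage_inquiry"},
    {"pin": "0002", "event": "mortgage_inquiry"},   # not on the pre-screen list
    {"pin": "0001", "event": "address_change"},     # not a requested trigger
]
print(compile_daily_triggered_list(recent_updates, ["0001"], {"mortgage_inquiry"}))
# -> [('0001', 'mortgage_inquiry')]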
In some jurisdictions, legal regulations may further stipulate that, along with the daily list of triggered names150, the trigger notification system100provide the client160with a consumer statement file containing consumer statements of explanation associated with the contents of their credit files for any consumers included on the daily list of triggered names150. If requested by the client160, the trigger notification system100may filter, sort, or otherwise modify the daily list of triggered names150. For example, if a client160requests that the trigger notification system100monitor more than one event trigger, the trigger notification system100may group the identified consumer names150by the associated trigger. In situations where a given consumer is associated with more than one trigger event on the same day, the trigger notification system100may list the consumer in all appropriate groups or may list the consumer in only one group. For example, the client160may identify a hierarchy or other prioritized list of the trigger events and may request that triggered consumers be listed only with the trigger event of highest rank with which the consumer is associated on that day. As another example, the client160may request that the trigger notification system100filter the daily set of triggered consumer names to exclude or identify a consumer who is repeatedly associated with the same trigger event within a given time span. For example, the client160may request that the trigger notification system100include the consumer's name in the daily list150only once or only once per week for a given trigger. Thus, if the online database120includes multiple inquiries associated with car loans for a given consumer over the span of two weeks, the consumer's name may appear on the daily list of triggered names150only the first time. Furthermore, the client160may request that the trigger notification system100limit the daily list of triggered names150to only a pre-determined number of names, such as, for example, if the client160does not have the capacity to contact or otherwise make use of the full set of names in a timely manner. These and other modifications to the operation of the trigger notification system100will be appreciated by a skilled artisan as being within the scope of the invention as described herein.

In some embodiments, the client160communicates with the trigger notification system100via a computer network, such as the Internet, and may be provided with a secure user interface, such as one or more web pages of a secure website that allow the client to input and/or modify triggers for use by the trigger notification system100. In some embodiments, the client160may additionally or alternatively use a secure user interface, such as one or more web pages of a secure website, to input and/or update the pre-screen criteria. In some embodiments, the client160may also receive the daily list of triggered names150via a secure Internet connection. In other embodiments, the client160and the trigger notification system100may communicate using T-1 or T-3 lines, or other dedicated or non-dedicated high-speed communications lines. Alternatively, clients160and the trigger notification system100may communicate using other data transmission systems and protocols. For example, clients160may receive their daily list of triggered names150as a text document or as a comma-delimited file sent via a file transfer protocol (FTP) transmission that may be downloaded into a spreadsheet application.
In some embodiments, a portion of the communications between the client160and the trigger notification system100may be conducted in person, in writing, via telephone, or using other communication methods. In some embodiments, the client160may provide a list of prospect names145for use by the trigger notification system100. For example, the client160may provide a list145of current customers for whom the client160would like to identify additional credit relationship possibilities using the trigger notification system100. As another example, the client160may provide a list145of consumers who have recently contacted them with credit-related questions but who have not entered into any business relationship with the client160. As a third example, the client160may provide a list145that the client has purchased or otherwise acquired from another vendor. The trigger notification system100may use the client-provided list of prospect names145in addition to or as an alternative to the pre-screen subset list140as the list of names for whom triggered monitoring of the online database120is requested.

Government and other regulations may specify that consumers who wish not to be contacted for advertising purposes must be left off of contact lists generated for advertising purposes. In some jurisdictions, such consumers may express their desire by adding their name to an "opt-out/pander list" of people explicitly requesting not to be contacted with advertising offers. In various embodiments of the systems and methods described herein, verifying that such consumer names do not appear on the daily list of triggered names150supplied to the client160may be carried out by the trigger notification system100and/or as part of the generation of the pre-screen subset list140. Similarly, compliance with other regulations and legal requirements may be carried out by the trigger notification system100and/or by other components described herein.

In addition to or as an alternative to event-based triggers, the client160may identify other types of trigger occurrences of interest that may appear in the records of the online database120. For example, the client160may be interested in identifying consumers whose credit balance is within a given amount or percentage of their credit limit or whose debt ratio has reached a threshold value. The client160may be interested in identifying consumers whose credit score has changed in value by a certain number of points or by a pre-determined percentage within a given time. Furthermore, in some embodiments, the client160may categorize consumers according to "credit score bands," to which they belong by virtue of their credit score, such that a consumer may belong to the "600-650" band or to the "650-700" band based on their credit score. In such embodiments, the client160may wish to be notified of consumers who have moved from one credit score band to another within the last twenty-four hours or other recent period.
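The threshold- and band-based triggers described above could be detected with logic along the following lines; the band boundaries, score values, and function names are assumptions used only to illustrate the idea.

# Sketch: detect a credit-score-band transition between a consumer's previous
# and current scores (assumed band boundaries).

BANDS = [(0, 600), (600, 650), (650, 700), (700, 850)]


def band_of(score: int) -> int:
    """Index of the credit score band containing the score."""
    for i, (low, high) in enumerate(BANDS):
        if low <= score < high:
            return i
    return len(BANDS) - 1


def band_transition_trigger(previous_score: int, current_score: int) -> bool:
    """True when the consumer has moved from one credit score band to another."""
    return band_of(previous_score) != band_of(current_score)


print(band_transition_trigger(645, 655))  # -> True: moved from 600-650 to 650-700
print(band_transition_trigger(610, 640))  # -> False: still within 600-650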
In such embodiments, the automated decisioning or learning may be carried out by the trigger notification system100, by another system component in communication with the trigger notification system100, by the client160, by another entity associated with the client160, or by a combination of the above. For example, in one embodiment, the clients160provide feedback data to the trigger notification system100regarding the success rates of their consumer contact campaigns that are based on trigger notifications150. The feedback data provided by a given client160is preferably time-stamped or segmented to permit the success rate information to be correlated with the trigger criteria, and possibly with the pre-screening criteria, used by that client160to generate the associated list or lists of prospects150. A software-based analysis component of the trigger notification system100analyzes the collected feedback data, collectively and/or on a client-specific basis, to identify the trigger criteria, and optionally the pre-screen criteria, that produce the “best” results (e.g., the highest success rate, as measured based on the percentage of the contacted prospects that accept the associated offer from the client). Results of this analysis may be disseminated to the clients160via auto-generated, periodic reports to assist the clients160in modifying their trigger criteria and/or pre-screen criteria over time so as to improve their respective success rates. The reports may, for example, separately identify, for each of a plurality of different products and services, those criteria that tend to produce the highest success rates, as determined based on a collective analysis of all the feedback data and associated criteria settings of many different clients (e.g., tens to hundreds of different clients). The reports may also include client-specific recommendations based on the feedback data provided by the particular client. The business entity that operates the trigger notification system100may also provide a computer-implemented service for enabling clients to request and obtain mutually exclusive lists of prospects, such that two competing clients160will not be notified of the same prospect at the same time. This feature may, for example, be implemented using a round-robin protocol in which, each time a consumer matches the trigger criteria of multiple competing clients, that consumer is added only to the prospect list of the next client in sequence. This feature, which may be implemented within the software of the trigger notification system100, can significantly increase the success rates of the clients' campaigns, while reducing the likelihood that the consumers will be overwhelmed by the volume of contacts made by clients160. FIG.1Bdepicts one example of a set of relations or tables121-127that store consumer credit-related data in a relational online database120. In the example depicted inFIG.1B, some or all of the tables121-127of the relational database120may be linked to one another using a unique personal identification number (PIN) that is assigned to each consumer in the database120. A consumer table127of the consumer activity database120includes identifying and other personal information for each consumer in the database120. The consumer's record may include, by way of example, the consumer's PIN and full legal name, driver's license information, and the like. A trade table121stores information about existing credit card relationships associated with each consumer.
For example, inFIG.1B, the consumer with PIN number ‘0001’ has one Sears charge account and one Visa credit card, and up-to-date information about those accounts is stored in the table121. An inquiry history table122stores information about credit score inquiries that have been submitted to the online database120. For example, inFIG.1B, credit inquiries regarding the consumer with PIN number ‘0001’ have been made within the last few years by Midtown Bank, Chevron Credit Card, and First USA. An address table123stores information about known addresses associated with consumers, which may be indexed by an Address Identification Number (AIN). A public records table126stores information about consumers that may be relevant to a consumer's credit rating and that is typically available to the public. For example, information about bankruptcies, liens, property titles, and the like may be stored in the public records table126. An employment table125stores information about a consumer's employment history. In other embodiments, other tables may be additionally or alternatively used to store data about the consumers' credit-related activities. As depicted inFIG.1B, many of the tables121-127of the relational credit-activities database120use the consumer PIN number as a primary key to link the tables121-127and to facilitate various database query and sorting operations, both simple and complex, that are implemented to carry out the functions of the trigger notification system100. As will be familiar to one of skill in the design and use of relational databases, the information stored in the tables121-127of the database120may be organized as a relational database according to a wide variety of other organizational schemes. Furthermore, in other embodiments, the database120may be organized as a type of information repository different from a relational database. FIG.2is a block diagram that provides a more detailed view of one embodiment of a trigger notification system100. As shown inFIG.2, a selection system200of the trigger notification system100may receive several types of information, including: a set of daily credit-related occurrences210; campaign pre-screen lists230for individual clients, which may be combined into a master pre-screen list240; client campaign criteria220; one or more opt-out/pander lists250; and a historical log270of generated prospect triggers. These types of information will be described in greater detail below. The selection system200processes the information210,220,240,250,270, and a prospect list generation system260prepares a daily list of prospect trigger names150to send to clients160for each client campaign. A pre-screen list230of consumers who meet a client's criteria for a firm offer of credit is obtained for each client campaign. As was described with reference toFIG.1A, the client160may compile and provide the pre-screen list145to the trigger notification system100, or the client may request that the pre-screen list140be compiled from suitable consumer names identified in the database of consumer files130. Either of these types of lists, or a combination of the two, may be used as the pre-screen list230of consumers for use by the selection system200of the trigger notification system100. To efficiently serve many clients160simultaneously, the trigger notification system100may compile a master pre-screen list240from the various campaign pre-screen lists230received from the clients160.
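A minimal Python sketch of how per-campaign pre-screen lists might be merged into a single master pre-screen list keyed by consumer PIN is shown below; the function name build_master_prescreen and the example campaign identifiers and PINs are hypothetical and are used only to illustrate the idea of combining the lists.

def build_master_prescreen(campaign_lists):
    """Merge per-campaign pre-screen lists into one master pre-screen list.

    campaign_lists: dict mapping campaign identifier -> iterable of consumer PINs
                    that meet that campaign's pre-screen criteria.
    Returns a dict mapping PIN -> set of campaign identifiers, so a consumer who
    qualifies for several campaigns appears only once in the master list.
    """
    master = {}
    for campaign_id, pins in campaign_lists.items():
        for pin in pins:
            master.setdefault(pin, set()).add(campaign_id)
    return master

# Example usage with made-up campaign identifiers and PINs.
master = build_master_prescreen({
    "platinum_card": ["0001", "0002"],
    "gold_card": ["0002", "0003"],
})
# master == {"0001": {"platinum_card"},
#            "0002": {"platinum_card", "gold_card"},
#            "0003": {"gold_card"}}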
The master pre-screen list240advantageously takes into account the fact that a given consumer may meet the campaign criteria for more than one client and/or for more than one campaign. Thus, by combining the various campaign pre-screen lists230, the trigger notification system100is able to more efficiently monitor the daily credit-related occurrence information210received from the online database120and to provide the list of prospect triggered names150to clients160on a timely, preferably daily, schedule. An example of a master pre-screen list240is described in greater detail with reference toFIG.3. The trigger notification system100receives information about daily credit-related occurrences210that were reported to and logged in the online database120. In a preferred embodiment, the online database120receives information about credit-related activities around the clock, seven days a week. In general, client campaigns that make use of prospect triggers are especially interested in credit-related inquiries associated with a given consumer. For example, the client may wish to be notified when information in the database120indicates that the consumer has made an inquiry about a home equity loan, a car loan, or a mortgage. However, some campaigns may be interested in events such as credit balance changes and the like. Information about inquiries newly logged in the database120may be provided to the selection system200of the trigger notification system100once daily or at more frequent intervals. When the daily occurrences information210is provided to the trigger notification system100two or more times during the day, the selection system200may process the available portion of the incoming occurrence information210at various times throughout the day, and may provide the information to the prospect list generation system260, in order to compile a daily prospect trigger list150, as will be described in greater detail below. In one simple embodiment, the selection system200collects information about credit inquiry occurrences associated with consumers on the master pre-screen list240, and sends the information to the prospect list generation system260for separation according to individual client campaigns and for transmission to the appropriate clients160. In another embodiment, the prospect list generation system260simply forwards a list received from the selection system200to an applicable client. The prospect list generation system260may also send a record to the historical log270of the list150that was sent to the client160, as will be described in greater detail with reference toFIG.5. In other preferred embodiments, the selection system200accesses additional information before forwarding the triggered consumer names and other information to be sent to the client, in order to provide additional screening of the occurrence information210. This additional processing may provide further assurance that the list of consumer names150sent to the client160contains only bona fide qualified consumers, which is of particular advantage to clients in jurisdictions in which government regulations specify that every consumer whose name is received by the client160on a prospect trigger list150must be extended a firm offer of credit. In various embodiments, therefore, the selection system200receives further information about client campaign criteria220, about consumer opt-out/pander lists250, and/or about the historical log270of previous trigger notifications sent to clients160for a given consumer.
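The simple embodiment described above, in which the selection system collects the day's credit inquiry occurrences for consumers on the master pre-screen list and separates them by campaign, might be sketched in Python as follows; the argument names and data shapes are assumptions made for illustration only and build on the master mapping sketched earlier.

def select_triggered(daily_occurrences, master_prescreen, trigger_types):
    """Collect the day's occurrences for consumers on the master pre-screen list.

    daily_occurrences: iterable of (pin, occurrence_type) pairs reported in the last day.
    master_prescreen: dict mapping PIN -> set of campaign identifiers.
    trigger_types: set of occurrence types treated as triggers (e.g., credit inquiries).
    Returns a dict mapping campaign identifier -> list of triggered PINs.
    """
    per_campaign = {}
    for pin, occ_type in daily_occurrences:
        if occ_type not in trigger_types:
            continue  # not an occurrence type any campaign is interested in
        for campaign_id in master_prescreen.get(pin, ()):
            per_campaign.setdefault(campaign_id, []).append(pin)
    return per_campaign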
The opt-out/pander list250includes the names of consumers who have specifically requested that they not be contacted with advertisements of various types, including credit-related offers. In some jurisdictions, consumers may register with one or more government programs that maintain consumer opt-out/pander lists250. Government agencies may further undertake to enforce compliance with the opt-out/pander lists250by levying fines on businesses that contact consumers on the list(s). Although opt-out/pander lists250are frequently consulted in compiling a client's original campaign pre-screen list230, clients may request that consumers on a given day's prospect trigger list be again compared to the opt-out/pander lists250, in part to verify that the consumer has not been placed on the opt-out/pander list250since the campaign pre-screen list230was compiled. Consumers whose names appear on the master pre-screen list240, who are associated with a daily trigger, and who are identified by the selection system200as appearing on an opt-out/pander list250, will frequently be removed from the daily prospect list150before the list150is sent to the client160. As has been described above, clients160may also specify additional types of campaign-specific criteria220to be applied to consumers associated with daily occurrences that serve to filter the daily set of prospect triggers being compiled by the trigger notification system100. For example, because a client's pre-screen list230is frequently regenerated only monthly or even quarterly, some consumer data of interest to the client160may have changed in the interim, and the client may wish to have critical data re-verified before a consumer's name is placed on the daily prospect trigger list150that will be supplied to the client160. For example, a client160may wish to have one or more of the consumer's credit scores re-calculated using up-to-date information before being sent the consumer's name and contact information. Furthermore, in an effort to avoid creating a negative impression for a consumer by making multiple offers of the same credit product or service within a short time period, a client may specify that a consumer who has been contacted by the client based on a prospect trigger notification should not be included on another prospect trigger list150for a specified period of time, such as for thirty, sixty, or ninety days. Such a period of non-contact may be known as a “cool-off” period. The selection system200may consult the historical log270of trigger notification activity to determine if the consumer is still within a cool-off period based on a previous contact by the client. The selection system200may also receive additional information from the client160as part of the client campaign criteria220. For example, in addition to information about the trigger events in which the client is interested, the client may send information about any desired hierarchy of campaigns, such that a consumer for whom a trigger event is identified for more than one campaign may be put on a list of triggered names150for a campaign with a higher ranking and not on a list150for a campaign with a lower ranking. For example, a client who is a credit card provider may instruct the trigger notification system100to implement a hierarchy that includes a rule stipulating: if a pre-screened consumer is triggered for a “Platinum Card” campaign and for a “Gold Card” campaign, put the consumer name on the “Platinum Card” list only.
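The “Platinum Card”/“Gold Card” hierarchy rule described above might be expressed in Python roughly as follows; the structures used (a per-campaign dict of triggered PINs and an ordered list of campaign identifiers) are illustrative assumptions rather than a description of the actual system.

def apply_campaign_hierarchy(triggered_per_campaign, hierarchy):
    """Keep each consumer only on the highest-ranking campaign list.

    triggered_per_campaign: dict mapping campaign identifier -> list of triggered PINs.
    hierarchy: list of campaign identifiers ordered from highest to lowest rank,
               e.g., ["platinum_card", "gold_card"].
    """
    rank = {c: i for i, c in enumerate(hierarchy)}
    best = {}  # pin -> best (lowest) rank seen so far
    for campaign_id, pins in triggered_per_campaign.items():
        for pin in pins:
            r = rank.get(campaign_id, len(hierarchy))
            if pin not in best or r < best[pin]:
                best[pin] = r
    return {
        campaign_id: [pin for pin in pins
                      if rank.get(campaign_id, len(hierarchy)) == best[pin]]
        for campaign_id, pins in triggered_per_campaign.items()
    }

# Example: a consumer triggered for both campaigns stays only on the higher-ranked list.
# apply_campaign_hierarchy({"platinum_card": ["0002"], "gold_card": ["0002", "0003"]},
#                          ["platinum_card", "gold_card"])
# -> {"platinum_card": ["0002"], "gold_card": ["0003"]}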
The campaign criteria220may also include a request to append additional data to each consumer name included on the trigger notification list150, as will be described in greater detail with reference toFIG.3. Furthermore, the campaign criteria220may provide additional instructions to the selection system200for processing names identified as being associated with trigger events. Once the selection system200has processed the information210,220,240,250,270, the selection system200sends the resulting data to the prospect list generation system260for further processing and for generating the individual lists of triggered names150to be made available to the clients160. FIG.3depicts an example of a master pre-screen list240. In the example shown, a record is generated for each consumer whose name appears on at least one client campaign pre-screen list230. In the example shown, each consumer name in the master pre-screen list240is associated with a consumer identifier used by the trigger notification system100, such as the PIN described with reference toFIG.1B. Indications are also entered into each consumer record to identify client campaigns for which the consumer meets the pre-screen criteria. Finally, in some embodiments, each consumer record stores information about additional data requested by clients. For example, a client who wishes to carry out a telephone advertising campaign may wish to have a contact telephone number for each consumer on the client's pre-screen list230. The requested information may be appended onto the consumer's record, and may be delivered to the client160together with the consumer name, if and when the consumer's name is triggered by a credit-related occurrence identified by the trigger notification system100. Although, for ease of description, each record in the master pre-screen list240ofFIG.3is associated with one set of appended consumer data, in other embodiments, separate sets of appended consumer data may be stored for the individual campaigns. Furthermore, in some embodiments, the master pre-screen list240may identify consumers by personal identification number (PIN) without including the consumer names. Thus, the master pre-screen list240forms a master list of consumers for whom the trigger notification system100is requested to monitor daily credit-related occurrences210. FIG.4depicts a very simplified example of a list of daily occurrences210. In one embodiment, the online database120transmits the list of daily occurrences210to the trigger notification system100on a daily basis or more frequently. The list of daily occurrences210lists new consumer credit-related activity that has been reported to the online database120by financial institutions122, merchants124, lenders, governments128, or other informants. In the example shown inFIG.4, each row, or record, represents an occurrence, and the records are organized according to the source of the information, or informant. In order to facilitate processing by the trigger notification system100, the records are further organized by type of occurrence, such as by the type of credit product associated with each credit score inquiry for the consumers. In other embodiments, the daily occurrences list210may be organized according to any of a variety of other schemes. For example, records may be ordered in simple chronological order according to the time at which they were reported to the online database120.
Furthermore, in other embodiments, the daily occurrences list210may additionally or alternatively include any of a variety of other types of information that allow the trigger notification system100to identify trigger events that have occurred for consumers who are on one or more client pre-screen lists230. FIG.5depicts a simplified example of a prospect triggers historical log270. As shown inFIG.5, the prospect triggers historical log270keeps a record of prospect trigger notifications150that have been sent to clients160. The information stored in the prospect triggers historical log270may be used to verify whether a consumer has been previously included in a prospect trigger list150for a given client campaign, and if so, when. As described in greater detail with reference toFIG.2andFIG.6B, the prospect triggers historical log270may be used to provide additional filtering to a list of names from a client's pre-screen list230for whom a trigger event has occurred. In particular, the prospect triggers historical log270may assist the prospect list generation system260in implementing a “cool-off” period, if requested by the client160. In other embodiments, other methods of implementing a historical log for prospect trigger notifications may be used by the trigger notification system100. FIG.6Ais a flow chart that depicts one embodiment of a process600for generating prospect trigger notifications150. InFIG.6A, the prospect trigger notifications150are generated on a daily basis, based on information that has been received by the online database120within the last twenty-four hours. In other embodiments, the prospect trigger notifications150may be generated at another frequency and/or may be based on data received by the online database120or other source of information within another recent period of time. As depicted inFIG.6A, the process600begins in Block610with the generation of a client's pre-screen list230. As described with reference toFIG.2, the client's pre-screen list230may comprise a list of names of consumers who match a set of criteria provided by the client160. For example, a client160may wish to offer home equity loans to consumers who: (a) have a credit score over a threshold value, (b) do not have a foreclosure on their record, (c) live in the greater Chicago area, and (d) have been at the same job for over three years. A client160may alternatively wish to specify a much more complex set of pre-screen criteria for identifying consumers qualified to receive the client's offer of credit or other products or services. In some embodiments, the client160may request that the business entity offering the prospect trigger notification service also generate the pre-screen list230of consumers that match the client's specified criteria. As was described with reference toFIG.1A, the business entity may search a data warehouse130to identify consumers that meet the client's pre-screen criteria. Alternatively or additionally, the client160may generate, purchase, or otherwise acquire a list145of consumers that are deemed to be acceptable for a firm offer of credit and may provide the list145for use by the trigger notification system100. In Block620, a master pre-screen list240is created for use by the trigger notification system100. The master pre-screen list240combines information from a plurality of client campaign pre-screen lists230, as exemplified in the sample master pre-screen list ofFIG.3.
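The pre-screen criteria in the home equity example of Block 610 might, under one set of assumptions, be expressed as a query against a relational store like the one described with reference to FIG. 1B. The table and column names below (consumer, credit_score, employment, public_records, metro_area, and so on) are invented for this sketch and are not taken from the figures.

# Hypothetical schema loosely modeled on the tables of FIG. 1B; all identifiers
# are illustrative only.
PRESCREEN_QUERY = """
SELECT c.pin
FROM consumer AS c
JOIN credit_score AS s ON s.pin = c.pin
JOIN employment AS e ON e.pin = c.pin
WHERE s.score > :min_score                        -- (a) credit score over a threshold
  AND NOT EXISTS (                                -- (b) no foreclosure on record
        SELECT 1 FROM public_records AS p
        WHERE p.pin = c.pin AND p.record_type = 'foreclosure')
  AND c.metro_area = 'Chicago'                    -- (c) greater Chicago area
  AND e.years_at_current_job >= 3                 -- (d) three or more years at same job
"""

params = {"min_score": 700}
# rows = connection.execute(PRESCREEN_QUERY, params).fetchall()  # e.g., via sqlite3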
As was described with reference toFIG.3, combining the various pre-screen lists230from the clients160using the trigger notification system100on a given day allows the trigger notification system100to more efficiently process the incoming daily occurrences data210and therefore more quickly provide the clients160with their requested daily list of triggered names150. In Block630, the trigger notification system100receives the list of new occurrences210that have been reported and entered into the online database120within a recent period of time, such as within the last day. A simplified example of a daily occurrences list210is depicted and described with reference toFIG.4. In some embodiments, the daily occurrences list210may be processed before being sent to the trigger notification system100, for example to reduce the processing burden involved in monitoring the daily occurrences list210. For example, the set of all reported occurrences may be filtered to include only occurrences of interest to the clients160using the trigger notification system100, for example, only credit score inquiries. As another example, the set of all reported occurrences may be filtered to include only occurrences associated with consumers on the pre-screen list140,145. As was described with reference toFIG.4, the set of all reported occurrences may additionally or alternatively be sorted or otherwise organized in a manner so as to allow for efficient processing on the part of the selection system200. In Block640, the selection system200of the trigger notification system100filters the list of daily occurrences210to identify, for each client campaign, the consumers who (a) meet the client's pre-screen criteria, (b) are associated with a trigger event of interest to the client that occurred within a recent time period of interest to the client, and (c) also meet any additional criteria220for the campaign that have been specified by the client, as was described with reference toFIG.2. For example, a client160may specify that some or all of the pre-screen criteria that allowed the consumer to be placed on the pre-screen list230should be verified as still being accurate. The client160may request that the trigger notification system100implement a “cool-off” period and/or that the trigger notification system100confirm that the consumers listed in the daily list of occurrences210are currently not on an opt-out/pander list250. In some embodiments, as has been described with reference toFIG.6A, filtering the day's master event list210is carried out by the selection system200. In other embodiments, the process of filtering the day's master event list210may be carried out, in whole or in part, by the prospect list generation system260.FIG.6Bis a more detailed flowchart that depicts a sample implementation of the filtering process of Block640, as carried out by the selection system200, the prospect list generation system260, or by another system included in or associated with the trigger notification system100. Once the triggered consumer names have been filtered according to the client's campaign criteria220, the daily list of prospect triggered names150may be compiled by the prospect list generation system260, together with any appended data220requested by the client, and sent, or otherwise made available, to the client160. The prospect list generation system260may also record in the historical log270the list of triggered names150sent to the client.
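One possible shape for the record that the prospect list generation system might write to the historical log is sketched below in Python; the keying scheme (campaign identifier plus PIN mapped to the date sent) is an assumption chosen so that the log can later support the cool-off check described with reference to FIG. 6B.

from datetime import date

def record_sent_list(historical_log, campaign_id, pins, sent_on=None):
    """Record, for each name sent to a client, when it was last sent for a campaign.

    historical_log: dict mapping (campaign_id, pin) -> date last sent; this is only
    an illustrative stand-in for the historical log of trigger notifications.
    """
    sent_on = sent_on or date.today()
    for pin in pins:
        historical_log[(campaign_id, pin)] = sent_on
    return historical_log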
FIG.6Bis a flow chart that depicts a more detailed view of an embodiment of a process for filtering a master event list for generating a list of daily prospect triggered names. The flowchart depicts a filtering process that may be applied to each trigger event occurrence that has been reported for a consumer name on the master pre-screen list240, and, since campaign criteria are frequently different for different client campaigns, may be applied for each applicable client campaign, as well. The process640, as depicted inFIG.6B, begins in Block641, where the selection system200determines if the consumer name is on an opt-out/pander list250, as has been described earlier with reference toFIG.2and elsewhere in the disclosure. If the consumer name is on an opt-out/pander list250, the process moves to Block646, where the selection system200deletes this trigger event occurrence from the client's daily list of triggered names150for this campaign. In some embodiments, if the consumer name is on an opt-out/pander list250, in Block646the selection system200deletes the consumer name from all client campaigns. If the consumer name is not on an opt-out/pander list250, the process moves to Block642, where consumer names that are not on the client's pre-screen list for the campaign are deleted from, or not included in, the list of triggered names150for the campaign. In Block643, any campaign-specific criteria220with regard to campaign hierarchies or “cool-off” periods provided by the client are used to further process the daily list of occurrences210. For example, if the consumer name has already been added to a list150for a campaign with a higher ranking in the provided hierarchy of campaigns, the consumer name may be deleted, in Block646, from the list150for any lower-ranking campaign of the client's. As another example, if information from the historical log270indicates that the consumer's name has been put on a prospect trigger list150within a recent period designated by the client as a “cool-off” period for the campaign, then the selection system200may, in Block646, delete the consumer name from the list of triggered names150for this campaign. In Block644, the selection system200may check the consumer's trigger event with regard to one or more additional filters provided with the campaign criteria220. For example, the consumer's credit score, address information, or employment information may be re-checked for accuracy. If this occurrence of a trigger event for the consumer passes all of the above tests, in Blocks641-644, then the consumer name for this trigger event may be included in a processed version of the list of triggered names150for the campaign. As noted above, in various embodiments, the process640may be carried out, in whole or in part, by the selection system200and/or by the prospect list generation system260. Thus, although the process640is described as being carried out by the selection system200, various embodiments of the trigger notification system100may carry out the functions of the process640in a variety of different ways. Although the foregoing systems and methods have been described in terms of certain preferred embodiments, other embodiments will be apparent to those of ordinary skill in the art from the disclosure herein.
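A condensed Python sketch of the per-occurrence checks of Blocks 641-644 is given below. The argument names and data structures are assumptions (an opt-out set, a per-campaign pre-screen set, and the historical log keyed as in the earlier sketch), and the re-verification step of Block 644 is reduced to a list of caller-supplied predicate functions.

from datetime import date, timedelta

def passes_campaign_filters(pin, campaign_id, opt_out, prescreen, historical_log,
                            cooloff_days=None, extra_checks=()):
    """Return True if a triggered consumer should remain on a campaign's list."""
    if pin in opt_out:                      # Block 641: opt-out/pander list check
        return False
    if pin not in prescreen:                # Block 642: must be on the campaign pre-screen list
        return False
    if cooloff_days is not None:            # Block 643: cool-off period check
        last_sent = historical_log.get((campaign_id, pin))
        if last_sent is not None and (date.today() - last_sent) < timedelta(days=cooloff_days):
            return False
    return all(check(pin) for check in extra_checks)  # Block 644: additional re-verification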
On a very basic level, although many of the lists, repositories, and various data sets have been described herein as including consumer names, it will be readily apparent to one of skill in the art that the lists, repositories, various data sets, and other applicable components may alternatively, and even preferably, be implemented using one or more identifiers for the consumers other than their names. As another example, while the embodiments herein have been described with respect to an online database120and a data warehouse130, in other embodiments, the two databases120,130may be implemented as a single database configured to provide the functionality described herein with reference to the online database120and the data warehouse130. Furthermore, while the trigger notification system100has been described as monitoring updates to the online database120, in other embodiments the trigger notification system100additionally or alternatively monitors updates to the data warehouse130. Additionally, other combinations, omissions, substitutions, and modifications will be apparent to the skilled artisan in view of the disclosure herein. While certain embodiments of the inventions have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Accordingly, the accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the invention.
49,434
11861757
The techniques introduced here may be better understood by referring to the following Detailed Description in conjunction with the accompanying drawings, in which like reference numerals indicate identical or functionally similar elements. DETAILED DESCRIPTION Embodiments for providing a self representation of a user in an artificial reality environment based on an identified self portion of images of the user are described herein. A self representation is a representation of a user, in an artificial reality environment, that the user can themselves see. For example, when a user looks down or at her hands when in an artificial reality environment and sees a representation of herself, that is the user's self representation. The self representation may or may not be visible to other users in the artificial reality environment. A self portion of an image is a portion of an image that depicts the user from that user's perspective, excluding other parts of the image that do not depict that user. For example, in an image taken from near a user's perspective, including that user's hands and chest, another user, and a table, the self portion is just the depicted user's hands and chest. An artificial reality system can generate the self representation by capturing images of the user, in real time, and applying a machine learning model to classify a self portion of each of the images. The artificial reality system can display a version of the self portions as a self representation in the artificial reality environment by positioning the version in the artificial reality environment relative to the user's perspective view into the artificial reality environment. In some implementations, an artificial reality system can contemporaneously capture images, in real time, using one or more cameras. As used herein, contemporaneous means events that occur at the same time or within a threshold time of each other. For example, contemporaneously captured images refer to images captured at the same time or within a threshold time of each other. As also used herein, images captured in real time are images that are captured and processed according to the algorithms described herein, with the results provided to create a self representation, all within a particular time limit that permits the self representation to accurately reflect the user's current body posture to a threshold level. Examples of this time limit are A) a set number of nanoseconds, B) the amount of time to produce one frame of video, or C) other time limits that keep the lag of the self representation below a threshold level. The artificial reality system can merge the contemporaneously captured images into a single image and adjust them to be from the user's perspective. This can include determining distances between the center of the user's eye and each of the cameras and using these distances to warp the images to be from the user's perspective, instead of from the viewpoint of the cameras that captured them. The artificial reality system can also match features between the images to determine overlap and stitch the images together. These two steps, which may be performed in either order, produce a single image of the real-world environment from the user's perspective. In some instances, only a single camera is used, in which case no image stitching is used but perspective warping may still be applied. Depending on the angle of the camera(s), the resulting image may include a self portion depicting at least part of the artificial reality system user.
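The merge-and-warp step described above is, in greatly simplified form, sketched below using OpenCV in Python; treating the eye-to-camera offset as a simple two-dimensional pixel shift and relying on OpenCV's generic stitcher are simplifying assumptions made for this sketch, not a description of the warping actually performed by the system.

import cv2
import numpy as np

def warp_to_eye_perspective(image, eye_offset_px):
    """Approximate re-projection of a camera image toward the user's eye position.

    A real system would use calibrated camera intrinsics and extrinsics; here the
    eye-to-camera offset is reduced to a 2D pixel shift purely for illustration.
    """
    dx, dy = eye_offset_px
    h, w = image.shape[:2]
    shift = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(image, shift, (w, h))

def build_user_view(images, eye_offsets_px):
    """Warp each camera image toward the eye position, then stitch into one view."""
    warped = [warp_to_eye_perspective(img, off)
              for img, off in zip(images, eye_offsets_px)]
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(warped)
    if status != 0:  # 0 corresponds to a successful stitch (cv2.Stitcher_OK)
        raise RuntimeError("stitching failed with status %s" % status)
    return panorama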
The artificial reality system can identify the self portion by applying a machine learning model to the image. This machine learning model can be of various types, such as a neural network, a support vector machine, a Bayes classifier, a decision tree, etc. The machine learning model can be trained to identify self portions in images based on a set of training images, with portions (e.g., set areas, pixels, etc.) tagged as either depicting a user from a self-perspective or not. The model can be trained by applying these training images to the model and adjusting the model based on how close the model output is to the correct output for each portion of the image. For example, where the machine learning model is a neural network, parameters or edge weights can be adjusted such that the output of the model more closely matches the correct classifications for the image portions. Once trained, this machine learning model can then be applied to new images to classify which parts of the image depict the user of the artificial reality system. The artificial reality system can use the classifications from the machine learning model to create a mask, which then can be applied to the original image to extract the self portion from the image. The artificial reality system can then display this self portion relative to the perspective of the user in the artificial reality environment, e.g., below the user's perspective, creating a self representation of the user in the artificial reality environment. As an example of the disclosed processes and systems in use, a user may be wearing an artificial reality headset of an artificial reality system with five front- and side-facing cameras. Within a 2 ms timeframe, the cameras can each capture an image, which the artificial reality system can warp to be from the user's perspective based on the distance of each camera from the user's eye, and can stitch these five images into a single image. The artificial reality system can then identify a self portion of the image that depicts part of the user's torso, hands, arms, legs, and feet by applying a trained machine learning model. The area of the identified self portion can be used as a mask to extract the self portion from the image. The artificial reality system can then display the extracted self portion in the artificial reality environment relative to the user's point of view, thus allowing the user to see a self representation showing her real-world torso, hands, arms, legs, and feet in the artificial reality environment. The artificial reality system can also identify movements of the user, e.g., by tracking a controller or a body part of the user. Based on this movement, instead of having to capture a new self portion of the user and create a new self representation, the artificial reality system can adjust the self representation to match the user's movement. This can provide more accurate self representations. For example, a controller may be able to report its position to an artificial reality system headset more quickly than the artificial reality system can capture images and create a new self representation. By warping the existing self representation to match the movement until a new self representation can be created from more recently captured self portions of images, the artificial reality system can keep the self representation spatially accurate according to the user's body position. Embodiments of the disclosed technology may include or be implemented in conjunction with an artificial reality system.
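Before turning to the underlying artificial reality platforms, the mask-and-extract step described above can be illustrated with the short NumPy sketch below; the classifier that produces the self/not-self mask is assumed to already exist and is outside the scope of the sketch.

import numpy as np

def extract_self_portion(image, self_mask):
    """Apply a binary self/not-self mask to keep only pixels classified as the user.

    image: HxWx3 uint8 array (the merged, perspective-adjusted image).
    self_mask: HxW array of 0/1 values produced by a trained classifier
               (the classifier itself is not shown here).
    Returns an RGBA image whose alpha channel is zero outside the self portion,
    so the extracted portion can be composited into the artificial reality view.
    """
    mask = (self_mask > 0).astype(np.uint8)
    rgba = np.dstack([image, mask * 255])  # add an alpha channel derived from the mask
    rgba[mask == 0, :3] = 0                # clear the color of non-self pixels as well
    return rgba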
Artificial reality or extra reality (XR) is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured content (e.g., real-world photographs). The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect for the viewer). Additionally, in some embodiments, artificial reality may be associated with applications, products, accessories, services, or some combination thereof, that are, e.g., used to create content in an artificial reality and/or used in (e.g., perform activities in) an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a head-mounted display (HMD) connected to a host computer system, a standalone HMD, a mobile device or computing system, a “cave” environment or other projection system, or any other hardware platform capable of providing artificial reality content to one or more viewers. “Virtual reality” or “VR,” as used herein, refers to an immersive experience where a user's visual input is controlled by a computing system. “Augmented reality” or “AR” refers to systems where a user views images of the real world after they have passed through a computing system. For example, a tablet with a camera on the back can capture images of the real world and then display the images on the screen on the opposite side of the tablet from the camera. The tablet can process and adjust or “augment” the images as they pass through the system, such as by adding virtual objects. “Mixed reality” or “MR” refers to systems where light entering a user's eye is partially generated by a computing system and partially composed of light reflected off objects in the real world. For example, an MR headset could be shaped as a pair of glasses with a pass-through display, which allows light from the real world to pass through a waveguide that simultaneously emits light from a projector in the MR headset, allowing the MR headset to present virtual objects intermixed with the real objects the user can see. “Artificial reality,” “extra reality,” or “XR,” as used herein, refers to any of VR, AR, MR, or any combination or hybrid thereof. Existing artificial reality systems fail to accurately display representations of a user in an artificial reality environment. Representations of an artificial reality system user created by existing artificial reality systems are based on tracking body parts of the user and mapping those to parts of an avatar created in the artificial reality environment. This analysis of captured images to identify parts of a user, determine spatial relationships of the user, and generate an avatar accordingly positioned is a computationally expensive procedure. In addition, such existing systems tend to lag behind the user's actual movements and/or, due to inaccuracies in body tracking, do not correctly position parts of the avatar in the artificial reality environment to match the user's movements.
Further, due to graphic system limitations, computer-generated avatars often fail to provide rich detail and can distract users from their artificial reality experience. Yet users tend to find artificial reality environments without a representation of themselves disconcerting, and such environments can even make some users nauseous. The real-world self representation system and processes described herein overcome these problems associated with existing artificial reality systems and are expected to provide self representations that are less computationally expensive, more accurate, and more detailed than those provided by existing systems. Specifically, the process of capturing images, applying a machine learning model to extract a self portion, and displaying the self portion as a self representation can be performed with significantly less computing power than that required by existing systems to track part of a user, map determined body positions into a virtual space, and render an avatar positioned according to the determined body positions. Further, by taking images of the user and using them directly as the self representation, this process more accurately reflects the user's movements and does not rely on inaccurate position tracking systems, making the disclosed artificial reality system much more accurate than existing artificial reality systems. Finally, by using real-world images of the user instead of computer-generated avatars, the self representations provided by the disclosed system can be much more detailed than those provided by existing artificial reality systems, while still being malleable, e.g., through the use of filters and composites. Several implementations are discussed below in more detail in reference to the figures.FIG.1is a block diagram illustrating an overview of devices on which some implementations of the disclosed technology can operate. The devices can comprise hardware components of a computing system100that can provide real-world self representations of a user in an artificial reality environment. In various implementations, computing system100can include a single computing device103or multiple computing devices (e.g., computing device101, computing device102, and computing device103) that communicate over wired or wireless channels to distribute processing and share input data. In some implementations, computing system100can include a stand-alone headset capable of providing a computer-created or augmented experience for a user without the need for external processing or sensors. In other implementations, computing system100can include multiple computing devices such as a headset and a core processing component (such as a console, mobile device, or server system) where some processing operations are performed on the headset and others are offloaded to the core processing component. Example headsets are described below in relation toFIGS.2A and2B. In some implementations, position and environment data can be gathered only by sensors incorporated in the headset device, while in other implementations one or more of the non-headset computing devices can include sensor components that can track environment or position data. Computing system100can include one or more processor(s)110(e.g., central processing units (CPUs), graphical processing units (GPUs), holographic processing units (HPUs), etc.).
Processors110can be a single processing unit or multiple processing units in a device or distributed across multiple devices (e.g., distributed across two or more of computing devices101-103). In some implementations, computing system100can execute instructions, stored on a non-transitory computer-readable storage medium, causing computing system100to perform operations for providing a user self representation in an artificial reality environment, as described further herein. Computing system100can include one or more input devices120that provide input to the processors110, notifying them of actions. The actions can be mediated by a hardware controller that interprets the signals received from the input device and communicates the information to the processors110using a communication protocol. Input devices120can include, for example, a mouse, a keyboard, a touchscreen, a touchpad, a wearable input device (e.g., a haptics glove, a bracelet, a ring, an earring, a necklace, a watch, etc.), a camera (or other light-based input device, e.g., an infrared sensor), a microphone, or other user input devices. Processors110can be coupled to other hardware devices, for example, with the use of an internal or external bus, such as a PCI bus, SCSI bus, or wireless connection. The processors110can communicate with a hardware controller for devices, such as for a display130. Display130can be used to display text and graphics. In some implementations, display130includes the input device as part of the display, such as when the input device is a touchscreen or is equipped with an eye direction monitoring system. In some implementations, the display is separate from the input device. Examples of display devices are: an LCD display screen, an LED display screen, a projected, holographic, or augmented reality display (such as a heads-up display device or a head-mounted device), and so on. Other I/O devices140can also be coupled to the processor, such as a network chip or card, video chip or card, audio chip or card, USB, FireWire, or other external device, camera, printer, speakers, CD-ROM drive, DVD drive, disk drive, etc. Computing system100can include a communication device capable of communicating wirelessly or wire-based with other local computing devices or a network node. The communication device can communicate with another device or a server through a network using, for example, TCP/IP protocols. Computing system100can utilize the communication device to distribute operations across multiple network devices. The processors110can have access to a memory150, which can be contained on one of the computing devices of computing system100or can be distributed across the multiple computing devices of computing system100or other external devices. A memory includes one or more hardware devices for volatile or non-volatile storage, and can include both read-only and writable memory. For example, a memory can include one or more of random access memory (RAM), various caches, CPU registers, read-only memory (ROM), and writable non-volatile memory, such as flash memory, hard drives, floppy disks, CDs, DVDs, magnetic storage devices, tape drives, and so forth. A memory is not a propagating signal divorced from underlying hardware; a memory is thus non-transitory. Memory150can include program memory160that stores programs and software, such as an operating system162, an artificial reality self-presence module164, and other application programs166.
Memory150can also include data memory170that can include, for example, trained machine learning models, user images, extracted self portions, warping models, configuration data, settings, user options or preferences, etc., which can be provided to the program memory160or any element of the computing system100. Some implementations can be operational with numerous other computing system environments or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, XR headsets, personal computers, server computers, handheld or laptop devices, cellular telephones, wearable electronics, gaming consoles, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, or the like. FIG.2Ais a wire diagram of a virtual reality head-mounted display (HMD)200, in accordance with some embodiments. The HMD200includes a front rigid body205and a band210. The front rigid body205includes one or more electronic display elements of an electronic display245, an inertial motion unit (IMU)215, one or more position sensors220, locators225, and one or more compute units230. The position sensors220, the IMU215, and compute units230may be internal to the HMD200and may not be visible to the user. In various implementations, the IMU215, position sensors220, and locators225can track movement and location of the HMD200in the real world and in a virtual environment in three degrees of freedom (3DoF) or six degrees of freedom (6DoF). For example, the locators225can emit infrared light beams which create light points on real objects around the HMD200. One or more cameras (not shown) integrated with the HMD200can detect the light points. Compute units230in the HMD200can use the detected light points to extrapolate position and movement of the HMD200as well as to identify the shape and position of the real objects surrounding the HMD200. The electronic display245can be integrated with the front rigid body205and can provide image light to a user as dictated by the compute units230. In various embodiments, the electronic display245can be a single electronic display or multiple electronic displays (e.g., a display for each user eye). Examples of the electronic display245include: a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode display (AMOLED), a display including one or more quantum dot light-emitting diode (QOLED) sub-pixels, a projector unit (e.g., microLED, LASER, etc.), some other display, or some combination thereof. In some implementations, the HMD200can be coupled to a core processing component such as a personal computer (PC) (not shown) and/or one or more external sensors (not shown). The external sensors can monitor the HMD200(e.g., via light emitted from the HMD200) which the PC can use, in combination with output from the IMU215and position sensors220, to determine the location and movement of the HMD200. In some implementations, the HMD200can be in communication with one or more other external devices, such as controllers (not shown) which a user can hold in one or both hands. The controllers can have their own IMU units, position sensors, and/or can emit further light points. The HMD200or external sensors can track these controller light points. 
The compute units230in the HMD200or the core processing component can use this tracking, in combination with IMU and position output, to monitor hand positions and motions of the user. The controllers can also include various buttons a user can actuate to provide input and interact with virtual objects. In various implementations, the HMD200can also include additional subsystems, such as an eye tracking unit, an audio system, various network components, etc. In some implementations, instead of or in addition to controllers, one or more cameras included in the HMD200or external to it can monitor the positions and poses of the user's hands to determine gestures and other hand and body motions. FIG.2Bis a wire diagram of a mixed reality HMD system250which includes a mixed reality HMD252and a core processing component254. The mixed reality HMD252and the core processing component254can communicate via a wireless connection (e.g., a 60 GHz link) as indicated by link256. In other implementations, the mixed reality system250includes a headset only, without an external compute device or includes other wired or wireless connections between the mixed reality HMD252and the core processing component254. The mixed reality HMD252includes a pass-through display258and a frame260. The frame260can house various electronic components (not shown) such as light projectors (e.g., LASERs, LEDs, etc.), cameras, eye-tracking sensors, MEMS components, networking components, etc. The projectors can be coupled to the pass-through display258, e.g., via optical elements, to display media to a user. The optical elements can include one or more waveguide assemblies, reflectors, lenses, mirrors, collimators, gratings, etc., for directing light from the projectors to a user's eye. Image data can be transmitted from the core processing component254via link256to HMD252. Controllers in the HMD252can convert the image data into light pulses from the projectors, which can be transmitted via the optical elements as output light to the user's eye. The output light can mix with light that passes through the display258, allowing the output light to present virtual objects that appear as if they exist in the real world. Similarly to the HMD200, the HMD system250can also include motion and position tracking units, cameras, light sources, etc., which allow the HMD system250to, e.g., track itself in 3DoF or 6DoF, track portions of the user (e.g., hands, feet, head, or other body parts), map virtual objects to appear as stationary as the HMD252moves, and have virtual objects react to gestures and other real-world objects. FIG.3is a block diagram illustrating an overview of an environment300in which some implementations of the disclosed technology can operate. Environment300can include one or more client computing devices305A-D, examples of which can include computing system100. In some implementations, some of the client computing devices (e.g., client computing device305B) can be the HMD200or the HMD system250. Client computing devices305can operate in a networked environment using logical connections through network330to one or more remote computers, such as a server computing device. In some implementations, server310can be an edge server which receives client requests and coordinates fulfillment of those requests through other servers, such as servers320A-C. Server computing devices310and320can comprise computing systems, such as computing system100. 
Though each server computing device310and320is displayed logically as a single server, server computing devices can each be a distributed computing environment encompassing multiple computing devices located at the same or at geographically disparate physical locations. Client computing devices305and server computing devices310and320can each act as a server or client to other server/client device(s). Server310can connect to a database315. Servers320A-C can each connect to a corresponding database325A-C. As discussed above, each server310or320can correspond to a group of servers, and each of these servers can share a database or can have their own database. Though databases315and325are displayed logically as single units, databases315and325can each be a distributed computing environment encompassing multiple computing devices, can be located within their corresponding server, or can be located at the same or at geographically disparate physical locations. Network330can be a local area network (LAN), a wide area network (WAN), a mesh network, a hybrid network, or other wired or wireless networks. Network330may be the Internet or some other public or private network. Client computing devices305can be connected to network330through a network interface, such as by wired or wireless communication. While the connections between server310and servers320are shown as separate connections, these connections can be any kind of local, wide area, wired, or wireless network, including network330or a separate public or private network. FIG.4is a block diagram illustrating components400which, in some implementations, can be used in a system employing the disclosed technology. Components400can be included in one device of computing system100or can be distributed across multiple of the devices of computing system100. The components400include hardware410, mediator420, and specialized components430. As discussed above, a system implementing the disclosed technology can use various hardware including processing units412, working memory414, input and output devices416(e.g., cameras, displays, IMU units, network connections, etc.), and storage memory418. In various implementations, storage memory418can be one or more of: local devices, interfaces to remote storage devices, or combinations thereof. For example, storage memory418can be one or more hard drives or flash drives accessible through a system bus or can be a cloud storage provider (such as in storage315or325) or other network storage accessible via one or more communications networks. In various implementations, components400can be implemented in a client computing device such as client computing devices305or on a server computing device, such as server computing device310or320. Mediator420can include components which mediate resources between hardware410and specialized components430. For example, mediator420can include an operating system, services, drivers, a basic input output system (BIOS), controller circuits, or other hardware or software systems. Specialized components430can include software or hardware configured to perform operations for creating and updating a self representation, in an artificial reality environment, based on images depicting part of a user. 
Specialized components430can include perspective adjuster434, image combiner436, self portion classifier and extractor438, self representation creator440, self representation adjuster442, and components and APIs which can be used for providing user interfaces, transferring data, and controlling the specialized components, such as interfaces432. In some implementations, components400can be in a computing system that is distributed across multiple computing devices or can be an interface to a server-based application executing one or more of specialized components430. Perspective adjuster434can determine one or more distances and directions between a user's eye and a camera (or averages with multiple cameras) and use these to adjust an image to be from the user's perspective. In some implementations, perspective adjuster434can accomplish this by applying a vector transformation, to the image, that warps the image based on the determined distances and/or directions. In other implementations, perspective adjuster434can accomplish this by converting the image to a 3D representation (e.g., using machine learning models) and modifying the image to be a view of the 3D representation with a virtual camera moved according to the determined distances and/or directions. Additional details on adjusting an image to be from the user's perspective are provided below in relation to block504ofFIG.5, blocks606and608ofFIG.6, andFIG.10C. Image combiner436can identify overlaps or matching features between multiple images and combine them into a single image. In some implementations, this includes modifications to the images such as resizing, warping, rotating, etc., to get the images to form a cohesive single image. Additional details on combining multiple images are provided below in relation to block504ofFIG.5, block604ofFIG.6, andFIG.10B. Self portion classifier and extractor438can use a machine learning model to classify areas of an image as either depicting a self portion of a user or not. In some implementations, the machine learning classifier can also label parts of the self portion with body part identifiers (e.g., fingers, hands, forearms, upper arms, chest, stomach, upper legs, lower legs, feet, etc.). The machine learning classifier can be a type of neural network (e.g., a traditional neural network, a deep neural network, a convolutional neural network, a recurrent neural network, combinations of these, etc.) or can be another type of machine learning model. The machine learning model can be trained using images where at least some of the images have portions labeled as either being part of a self portion or not and/or with body part identifiers. Using the result of the machine learning model classification of an image, the self portion classifier and extractor438can extract the self portion from the image. For example, the self portion classifier and extractor438can do this by using the identified areas as a mask for the image and filtering out the portions of the image that are not covered by the mask. Additional details on classifying a self portion of an image and extracting it are provided below in relation to block506ofFIG.5,FIG.7,FIGS.9A-9C, andFIGS.11A-11B. Self representation creator440can take the self portion extracted by the self portion classifier and extractor438and insert it into an artificial reality environment as a self representation. 
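By way of a non-limiting illustration of the vector transformation applied by the perspective adjuster434, the following sketch warps a camera image toward the user's eye position under a planar-scene approximation (a homography induced by a pure translation at an assumed scene depth). The OpenCV calls, the camera intrinsics, and the fixed depth are assumptions made for this example and are not taken from the embodiments above.

    // Hedged sketch: warp an image from the capturing camera's viewpoint toward the
    // user's eye, assuming the scene is roughly a plane at depth `depthMeters`.
    // H = K * (I - t * n^T / d) * K^-1 is the standard planar homography for a pure
    // translation t between the two viewpoints (rotation assumed negligible).
    #include <opencv2/opencv.hpp>

    cv::Mat adjustToEyePerspective(const cv::Mat& cameraImage,
                                   const cv::Vec3d& cameraToEyeMeters,  // measured offset (assumed input)
                                   double fx, double fy, double cx, double cy,
                                   double depthMeters = 1.0)            // assumed scene depth
    {
        cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx,
                                               0, fy, cy,
                                               0,  0,  1);
        cv::Mat t = (cv::Mat_<double>(3, 1) << cameraToEyeMeters[0],
                                               cameraToEyeMeters[1],
                                               cameraToEyeMeters[2]);
        cv::Mat n = (cv::Mat_<double>(3, 1) << 0, 0, 1);   // plane normal facing the camera
        cv::Mat H = K * (cv::Mat::eye(3, 3, CV_64F) - t * n.t() / depthMeters) * K.inv();

        cv::Mat fromEye;
        cv::warpPerspective(cameraImage, fromEye, H, cameraImage.size());
        return fromEye;
    }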
In some instances, the self representation creator440can do this by overwriting the self representation onto a portion of a frame buffer to which an application controlling the artificial reality environment is writing. The overwritten portion can correspond to an area in the artificial reality environment that is below the current virtual camera position representing the user's point of view. In other implementations, the self representation can be provided to the application controlling the artificial reality environment to include in the artificial reality environment as a normal virtual object. Additional details on adding a self representation to an artificial reality environment based on a self portion extracted from an image are provided below in relation to block508ofFIG.5andFIG.9C. Self representation adjuster442can identify a user movement (e.g., direction and distance) and can adjust a self representation in the artificial reality environment to match the movement. In some implementations, self representation adjuster442can achieve this by applying a warping algorithm to the self representation to move and/or resize a portion of the self representation that matches the part of the user that moved. In some implementations, the adjustment can be based on the user body parts identified by the self portion classifier and extractor438. For example, if the artificial reality system identifies that the user's leg has moved out and up by seven inches, the self representation adjuster442can warp the portion of the self representation showing the leg to be slightly longer and larger, accounting for more of the leg being shown and being closer to the user's viewpoint. In other implementations, the adjustment to the self representation can include converting the self representation into a 3D object in the artificial reality environment, which can then be moved to match the user's movement. Additional details on adjusting an existing self representation based on user movements are provided below in relation toFIG.8andFIGS.12A and12B. Those skilled in the art will appreciate that the components illustrated inFIGS.1-4described above, and in each of the flow diagrams discussed below, may be altered in a variety of ways. For example, the order of the logic may be rearranged, substeps may be performed in parallel, illustrated logic may be omitted, other logic may be included, etc. In some implementations, one or more of the components described above can execute one or more of the processes described below. FIG.5is a flow diagram illustrating a process500used in some implementations for displaying a self representation in an artificial reality environment based on one or more images of a user. In some implementations, process500can be performed repeatedly during use of an artificial reality system. For example, process500can be performed by an operating system of an artificial reality system or by another application that is able to insert virtual objects into the artificial reality environment. In some implementations, process500can be distributed between multiple applications, e.g., where capturing images and extracting the self portions is performed by an operating system of the artificial reality system, which passes them to an application which creates the self representation and inserts the self representation into the artificial reality environment. At block502, process500can receive one or more images from the hardware of the artificial reality system. 
In various implementations, this can be a single image or multiple images. Where there are multiple images, these can be images that the artificial reality system captured contemporaneously. These one or more images can be captured in real time. In some implementations, a subset of the artificial reality system's cameras can be used to capture these one or more images, e.g., only using cameras that are at least partially front facing. In some implementations, the cameras used to capture these one or more images can be based on a determination of which cameras are pointing at least partially forward and/or downward relative to the user's body, e.g., based on rotation and/or orientation sensors in the artificial reality system and/or position sensors in other artificial reality system controllers. At block504, process500can adjust at least part of the one or more images to be from the user's perspective. In addition, where there are multiple captured images, these images can also be stitched together to form a single image. Adjusting the image(s) to be from the user's perspective can be based on determined distances between the user's eyes and the cameras that captured the image(s). Stitching the images together into a single image can be performed by matching features between the images to determine overlap and combining the images at the determined overlaps. Additional details on adjusting images for user perspective and stitching images together are described below in relation toFIGS.6and10A-C. At block506, process500can classify a self portion of the single image or the combined image by applying a machine learning model trained to identify user self portions to the image. The model can be a neural network or other machine learning model. The model can be trained using images with portions labeled as either depicting the user of the artificial reality system or not depicting that user, where the labels are comparison factors used to adjust model parameters to minimize a loss function. In various implementations, the machine learning model can be applied to the image on a pixel by pixel basis or to groups of pixels. The machine learning model can provide an identification of an area or mask specifying the part of the image that is a self portion. Process500can use this mask to extract the self portion from the image. Additional details on using machine learning results to create a mask and extract a self portion from an image are described below in relation toFIGS.7,9A,9B,11A, and11B. At block508, process500can display the self portion of the image, extracted at block506, as a self representation in the artificial reality environment. In some implementations, the self representation can be displayed in the artificial reality environment at a location relative to, e.g., a specified amount below, the perspective of the user of the artificial reality system. This amount can be a typical distance between a user's viewpoint and the top of where the user can usually see his or her own body. In some implementations, this distance can be set based on the height of the user. In some implementations, the self portion can be displayed as the self representation by overwriting data for the self portion into a portion of a frame buffer that is being written to by an application controlling part of the artificial reality environment. 
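By way of a non-limiting illustration of blocks506and508, the sketch below assumes a binary mask has already been produced by the classifier, uses it to extract the self portion, and composites the result a fixed number of rows below the user's viewpoint in the frame being rendered. The OpenCV types, the row offset, and the assumption that the frame buffer is at least as large as the camera image are illustrative only.

    // Hedged sketch of blocks 506/508: extract the self portion with a mask and
    // composite it below the user's viewpoint in the rendered frame.
    // Assumes frame and eyeImage share the same pixel type and the frame is at
    // least as large as the camera image.
    #include <opencv2/opencv.hpp>
    #include <algorithm>

    void compositeSelfRepresentation(const cv::Mat& eyeImage,   // image already adjusted to the eye perspective
                                     const cv::Mat& selfMask,   // 8-bit mask from the classifier (assumed given)
                                     cv::Mat& frame,            // frame buffer contents being rendered
                                     int rowsBelowViewpoint)    // offset below the viewpoint (assumed)
    {
        // Keep only pixels the classifier marked as the user ("filter out" everything else).
        cv::Mat selfPortion(eyeImage.size(), eyeImage.type(), cv::Scalar::all(0));
        eyeImage.copyTo(selfPortion, selfMask);

        // Write the self portion into the lower part of the frame, masked so the
        // surrounding virtual content is left untouched.
        int y = std::max(0, std::min(rowsBelowViewpoint, frame.rows - selfPortion.rows));
        cv::Rect dst(0, y, selfPortion.cols, selfPortion.rows);
        selfPortion.copyTo(frame(dst), selfMask);
    }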
Overwriting the frame buffer in this way prevents the application controlling part of the artificial reality environment (e.g., a third party application) from having access to the data for the self portion, i.e., it does not have access to real images of the user. In some implementations, the artificial reality system can modify the self portion before it is displayed in the artificial reality environment. For example, the artificial reality system can apply a filter to the self portion to make it match a genre of the artificial reality environment. For example, the artificial reality system can apply a filter to change the color scheme or shading, a filter that modifies dimensions of the self portion, a filter that changes the drawing style of the self portion (e.g., making it a line drawing, a cartoon, etc.), a filter that warps the self portion or applies a distortion field to the self portion, etc. In some implementations, the machine learning model used at block506or another machine learning model can further classify the parts of the self image as parts of a user's body. An application controlling the artificial reality environment can specify an effect to apply to these parts individually, e.g., by mapping a composite layer over each part (or over the self portion as a whole) or applying one of the filters to the indicated part of the self portion. In some implementations, the artificial reality system can modify the self representation after it is initially displayed in the artificial reality environment. For example, the artificial reality system can identify a motion of a user based on tracking a body part of the user or tracking hardware controllers of the artificial reality system. When a user motion is identified before a new self portion is ready to be used to update the self representation, a distance and direction of the movement can be determined. These values can be used to warp the self representation to conform to the current user position. In this manner, the self representation can be kept consistent with user movements, even when the image capture and extraction processes lag behind the movements. Additional details on warping a self representation to conform to user movements are described below in relation toFIGS.8,12A, and12B. FIG.6is a flow diagram illustrating a process600used in some implementations for merging multiple user images and adjusting them to be from a user's perspective. In some implementations, process600can be performed as a sub-process of process500, e.g., called at block504. At block602, process600can obtain multiple contemporaneously taken images. These images can be taken by a single camera of the artificial reality system (e.g., a successive burst of image captures) or by multiple cameras positioned at different locations on the artificial reality system (e.g., multiple cameras mounted on a headset of the artificial reality system). In some implementations, these can be the images obtained at block502. In some implementations only a single image is taken, in which case process600can begin at block606instead of602. An example of taking multiple contemporaneous images is described below in relation toFIG.10A. At block604, process600can match features between the multiple images and, based on matched features indicating overlaps between the images, merge the images into a single image. An example of matching features between images and merging them is described below in relation toFIG.10B. 
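One non-limiting way to realize block604's matching and merging is the classic keypoint-and-homography approach sketched below. ORB features, a brute-force matcher, and RANSAC homography estimation are choices made for this example (at least four good matches are assumed); the embodiments above only require that overlapping features be matched and the images combined at the overlaps.

    // Hedged sketch of block 604: match features between two contemporaneous images
    // and merge them into one image at the estimated overlap.
    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Mat mergeTwoImages(const cv::Mat& left, const cv::Mat& right)
    {
        // Detect and describe keypoints in both images.
        cv::Ptr<cv::ORB> orb = cv::ORB::create();
        std::vector<cv::KeyPoint> kpL, kpR;
        cv::Mat descL, descR;
        orb->detectAndCompute(left,  cv::noArray(), kpL, descL);
        orb->detectAndCompute(right, cv::noArray(), kpR, descR);

        // Match descriptors (cross-check keeps only mutual best matches).
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(descR, descL, matches);

        std::vector<cv::Point2f> ptsR, ptsL;
        for (const cv::DMatch& m : matches) {
            ptsR.push_back(kpR[m.queryIdx].pt);
            ptsL.push_back(kpL[m.trainIdx].pt);
        }

        // Homography mapping the right image into the left image's coordinates.
        cv::Mat H = cv::findHomography(ptsR, ptsL, cv::RANSAC);

        // Composite: warp the right image onto a wider canvas, then paste the left image.
        cv::Mat canvas(left.rows, left.cols + right.cols, left.type(), cv::Scalar::all(0));
        cv::warpPerspective(right, canvas, H, canvas.size());
        left.copyTo(canvas(cv::Rect(0, 0, left.cols, left.rows)));
        return canvas;
    }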
At block606, process600can determine one or more distances between each camera that captured one of the multiple images and one of the user's eyes (e.g., the eye closest to that camera) or to a center of vision between the user's eyes. These distances can also include direction (e.g., vectors) specifying a shift from where the camera is located to where the user's eye is. At block608, process600can warp the single image created at block604, or an individual one of the multiple images obtained at block602, based on the determined distances. This warping can occur using known procedures to modify the image to be from a perspective offset from the actual point at which the camera is from the user's eye, modifying the image to be from the user's perspective. In some implementations, this can be accomplished by warping the single combined image by an average of the distances and directions determined at block606. In some implementations, each part of the combined single image can be warped based on the source image that part was obtained from and the distance and direction obtained for that source image. In some implementations, images are first warped to be from the user's perspective, performing blocks606and608for each image, before matching and merging the warped images at block604. The single image, now from the user's perspective, can then be returned by process600and process600can end. An example of determining distances between cameras and a user's eye and using the distances to warp an image to be from the user's perspective is described below in relation toFIG.10C. FIG.7is a flow diagram illustrating a process700used in some implementations for obtaining a self portion of an image. In some implementations, process700can be performed as a sub-process of process500, e.g., called at block506. At block702, process700can receive output of a machine learning (ML) model classifying which parts of an image are a self portion. In various implementations, this classification can be provided on a pixel by pixel basis or for another quantity of the image, such as for a square millimeter, for five or ten square pixels, etc. In various implementations, this classification can be accomplished by applying a machine learning model, such as a deep neural network, trained to identify a self portion of an image. In some implementations, the classification of each area can be based on analysis of areas around it and/or for areas of larger or smaller portions of the image, e.g., by using a recurrent and/or convolutional neural network or another progressive type of machine learning model. One way to train such models is by using a set of training images where a portion of at least some of the images are labeled as a self portion. In some implementations, the machine learning output classification can be more specific than just a binary indication of being or not being part of a self portion. In these cases, the classification can also specify a body part to which each part of the self portion corresponds. An example of classifying portions of an image as self portions is described below in relation toFIG.11A. At block704, process700can generate an image mask based on the self portion classification(s) from block702. The image mask can be one or more identified areas for which each internal part of those areas have been classified as a self portion. 
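Returning to the per-group classification mentioned for block702, the following non-limiting sketch shows one way a coarse score grid (e.g., one "is self" score per block of pixels, as a model might emit) could be turned into the full-resolution mask of block704. The score grid itself is assumed to come from whichever classifier is used, and the 0.5 decision threshold is an assumption.

    // Hedged sketch for blocks 702/704: upsample a coarse per-cell score grid to a
    // full-resolution binary mask.
    #include <opencv2/opencv.hpp>

    cv::Mat scoresToMask(const cv::Mat& cellScores,   // CV_32F, one "is self" score per cell (assumed input)
                         const cv::Size& imageSize,   // resolution of the original image
                         float threshold = 0.5f)      // assumed decision threshold
    {
        cv::Mat upsampled;
        cv::resize(cellScores, upsampled, imageSize, 0, 0, cv::INTER_LINEAR);

        cv::Mat mask;                                  // 255 where classified as self portion
        cv::threshold(upsampled, mask, threshold, 255.0, cv::THRESH_BINARY);
        mask.convertTo(mask, CV_8U);
        return mask;
    }

One library routine that could support the further mask clean-up described next is cv::connectedComponentsWithStats, which labels contiguous regions and reports their areas so that small or disconnected regions can be dropped before the mask is applied at block706.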
In some implementations, generating the mask can include further processing, such as by excluding contiguous areas of the image, each classified as a self portion, that together are below a threshold size; excluding all but the largest contiguous set of areas classified as self portions; and/or, where the classifications identify specific body parts, excluding any areas that are not connected to areas classified as hands, feet, legs, or a torso of the user. An example of generating a mask based on identified self portions is described below in relation toFIG.11A. At block706, process700can obtain the self portion of the image by applying to it the mask generated at block704. Applying a mask can extract any portion of the image with which the mask overlaps. Process700can then return the extracted self portion and end. An example of applying a mask to extract a self portion from an image is described below in relation toFIG.11B. FIG.8is a flow diagram illustrating a process800used in some implementations for updating a self representation based on an identified user movement. In some implementations, process800can be performed in between executions of process500, e.g., to update a self representation from process500before process500can be performed a second time to produce a new self representation. At block802, process800can identify a user movement. In some implementations, process800can accomplish this by tracking a user body part, e.g., a hand, arm, leg, or torso, using a camera of the artificial reality system. In other implementations, process800can accomplish this by tracking a controller of the artificial reality system. For example, the controller can have a positioning system, e.g., based on tracked light points emitted by the controller, inertial motion or position sensors, etc. An example of identifying a user movement is described below in relation toFIG.12A. At block804, process800can warp the self representation to conform to the identified movement. Warping algorithms can change an image based on an identified positional change of a part of the image. For example, the positional change can be represented by a set of vectors, which can be applied to move portions of the image. As another example, the movement can correspond to a particular body part. In this case, the image can be analyzed to identify body parts in the image (or can be previously identified, e.g., as discussed above in relation to blocks702and704) and the parts of the image corresponding to the identified moved body part can be modified (e.g., rotated, re-sized, or otherwise warped), to correspond to the user movement. In some implementations, only user motions below a threshold size are used to warp a self representation, as larger warping can cause the self representation to no longer resemble the user, creating a jarring experience for the user. Process800can show the modified self representation in the artificial reality environment and can then end. FIGS.9A-9Care conceptual diagrams illustrating an example of extracting a self portion from an image and using it to create a self representation in an artificial reality environment. Referring toFIGS.9A-9Ctogether, this example begins at900showing an input image902taken by a camera mounted on a headset of an artificial reality system. This example continues at930where the shaded self portion932of image902has been classified by a machine learning model (as shown in the example inFIG.11). 
Next, at960, the self portion932has been incorporated into an artificial reality environment as a self representation. FIGS.10A-10Care conceptual diagrams illustrating an example of an artificial reality system capturing multiple images, merging them, and warping them to be from the user's perspective. Referring toFIGS.10A-10Ctogether, at1000, the example shows a headset1002of an artificial reality system with multiple cameras capturing multiple contemporaneous images1004-1012. These images depict features1014-1028, some of which are at least partially depicted in more than one of the images1004-1012. At1050, the example continues where the features1014-1028are matched between the images1004-1012. In particular, at1052feature1016A is matched with the partial feature1016B, at1054feature1018A is matched with the partial feature1018B, at1056feature1022A is matched with the partial feature1022B, and at1058feature1026A is matched with the partial feature1026B. As indicated by arrow1060, based on these matched features, the images1004-1012are combined into a single image1062, including each of the features1014-1028. At1070, the example illustrates the artificial reality system determining distances1083-1090(with directions) between eye1072and cameras1074-1082. These distances (e.g., vectors) are provided to the perspective adjuster1092. Also provided to the perspective adjuster1092is the combined image1062. The perspective adjuster1092modifies the combined image1062, based on the determined distances with directions1083-1090, to be an image1094, which is from the perspective of the user's eye1072. FIGS.11A and11Bare conceptual diagrams illustrating an example of creating an image mask for an image and using the image mask to extract a self portion from the image. Referring toFIGS.11A and11Btogether, this example begins at1100, where a machine learning model classifies portions of image902(the portions shown as a grid of areas of image902), such as portions1102and1104as either being part of a self portion (the shaded portions such as portion1104) or not (the unshaded portions such as portion1102). As indicated by arrow1106, the portions classified as self portions are used to create a mask1108. Next, at1150, the example continues by applying the mask1108to the image902, as shown by arrow1152. Applying the mask1108to the image902extracts the self portion1156from the image902, as shown by arrow1154. FIGS.12A and12Bare conceptual diagrams illustrating an example of warping a self representation based on an identified user movement. Referring toFIGS.12A and12Btogether, this example starts at1200where the movements of a user1202are being monitored. As shown by arrow1206, the artificial reality system identifies that the user1202's arm1204has moved up and out. Next, at1250, the example continues by modifying, as indicated by arrow1254, a representation of the user's arm1252(which is part of a self representation in an artificial reality environment) to show the user's arm1256adjusted for the movement1206. Thus, the example modifies the self representation to conform to the movement1206quickly, without having to capture new user images, extract self portions, and create new self representations for each small user movement. Reference in this specification to “implementations” (e.g., “some implementations,” “various implementations,” “one implementation,” “an implementation,” etc.) 
means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the disclosure. The appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation, nor are separate or alternative implementations mutually exclusive of other implementations. Moreover, various features are described which may be exhibited by some implementations and not by others. Similarly, various requirements are described which may be requirements for some implementations but not for other implementations. As used herein, being above a threshold means that a value for an item under comparison is above a specified other value, that an item under comparison is among a certain specified number of items with the largest value, or that an item under comparison has a value within a specified top percentage value. As used herein, being below a threshold means that a value for an item under comparison is below a specified other value, that an item under comparison is among a certain specified number of items with the smallest value, or that an item under comparison has a value within a specified bottom percentage value. As used herein, being within a threshold means that a value for an item under comparison is between two specified other values, that an item under comparison is among a middle-specified number of items, or that an item under comparison has a value within a middle-specified percentage range. Relative terms, such as high or unimportant, when not otherwise defined, can be understood as assigning a value and determining how that value compares to an established threshold. For example, the phrase “selecting a fast connection” can be understood to mean selecting a connection that has a value assigned corresponding to its connection speed that is above a threshold. As used herein, the word “or” refers to any possible permutation of a set of items. For example, the phrase “A, B, or C” refers to at least one of A, B, C, or any combination thereof, such as any of: A; B; C; A and B; A and C; B and C; A, B, and C; or multiple of any item such as A and A; B, B, and C; A, A, B, C, and C; etc. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Specific embodiments and implementations have been described herein for purposes of illustration, but various modifications can be made without deviating from the scope of the embodiments and implementations. The specific features and acts described above are disclosed as example forms of implementing the claims that follow. Accordingly, the embodiments and implementations are not limited except as by the appended claims. Any patents, patent applications, and other references noted above are incorporated herein by reference. Aspects can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further implementations. If statements or subject matter in a document incorporated by reference conflicts with statements or subject matter of this application, then this application shall control.
55,225
11861758
DETAILED DESCRIPTION FIG.1illustrates example components of a system100that uses a graphics processing unit (“GPU”) to process data received (e.g., over a network106) in packets Pinputand output processed data (e.g., over the network106) in packets Poutput. The system100includes a first computing device102having a first central processing unit (“CPU”)110, first system memory112, and a first network interface controller (“NIC”)114. By way of non-limiting examples, the first NIC114may be implemented as a network interface card, a network adapter, a Local Area Network (“LAN”) adapter, a physical network interface, a host channel adapter (“HCA”), an Ethernet NIC, and the like. By way of another non-limiting example, the first NIC114may be implemented as a Mellanox network card, such as a Mellanox network card CX4 or newer. The first NIC114is connected to a first Parallel Processing Unit (“PPU”), such as a first GPU device115, which includes a first GPU116connected to first GPU memory118. Each of the first GPU116and the first GPU memory118may be a component of the first computing device102or may be an external component connected thereto. By way of a non-limiting example, the first GPU device115may be implemented as a NVIDIA GPU Kepler or a newer GPU device. A signal transmission distance between the first GPU116and the first GPU memory118may be shorter than a signal transmission distance between the first CPU110and the first GPU memory118. Further, a signal travel time between the first GPU116and the first GPU memory118may be shorter than a signal travel time between the first CPU110and the first GPU memory118. Likewise, a signal travel time between the first CPU110and the first system memory112may be shorter than a signal travel time between the first GPU116and the first system memory112. The first NIC114is connected to the first GPU116by a connection120, such as a bus, a serial computer expansion bus, a Peripheral Component Interconnect Express (“PCIe”) bus, and the like. The connection120may also connect the first GPU116to the first CPU110. Further, the connection120may be connected directly to the first system memory112and/or the first GPU memory118. Optionally, the connection120may include a direct PCIe external switch (not shown) positioned between the first NIC114and the first GPU116but this is not a requirement of the system100. The first computing device102may be connected to a second computing device104over a wired and/or wireless connection108(e.g., including the network106). The second computing device104may be implemented as any device capable of transmitting the packets Pinputto the first NIC114over the connection108and/or receiving the packets Poutputfrom the first NIC114over the connection108. For ease of illustration, inFIG.1, the second computing device104has been illustrated as being a computing device, but this is not a requirement of the system100. Further, the second computing device104has been illustrated as both transmitting the packets Pinputto the first NIC114and/or receiving the packets Poutputfrom the first NIC114but one of these tasks may be performed by a different computing device (not shown). For example, the second computing device104may transmit the packets Pinputto the first NIC114and the first NIC114may transmit the packets Poutputto a different third computing device (not shown). In the embodiment illustrated, the second computing device104includes a second CPU130, second system memory132, and a second NIC134. 
By way of a non-limiting example, the second CPU130, the second system memory132, and the second NIC134may be substantially identical to the first CPU110, the first system memory112, and the first NIC114, respectively. The second NIC134is capable of transmitting the packets Pinputto the first NIC114over the connection108and receiving the packets Poutputfrom the first NIC114over the connection108. The second NIC134is connected to a second PPU, such as a second GPU device135, which includes a second GPU136connected to second GPU memory138. Each of the second GPU136and the second GPU memory138may be a component of the second computing device104or may be an external component connected thereto. The second NIC134is connected to the second GPU136by a connection140(e.g., a PCIe connection). The connection140may also connect the second GPU136to the second CPU130. Further, the connection140may be connected directly to the second system memory132and/or the second GPU memory138. Optionally, the connection140may include a direct PCIe external switch (not shown) positioned between the second GPU136and the second NIC134but this is not a requirement of the system100. By way of a non-limiting example, the second GPU136, the second GPU memory138, and the connection140may be substantially identical to the first GPU116, the first GPU memory118, and the connection120, respectively. In the embodiment illustrated, a portion of the first GPU memory118, referred to as a GPU memory pool142, may be visible to both the first GPU116and devices connected to the connection120, such as the first CPU110and the first NIC114. In other words, data stored in the GPU memory pool142is exposed over the connection120. Thus, the first GPU116may exchange data stored in the GPU memory pool142with the first NIC114over the connection120. For example, the first GPU116may store data in the GPU memory pool142. The first NIC114may obtain such data from the GPU memory pool142and share that data with another device (e.g., the second computing device104) over the connection108and/or other components of the first computing device102, such as the first system memory112and/or the first CPU110. When the connection120is a PCIe bus, the first GPU device115is assigned a first physical address that identifies Base Address Registers (“BARs”) stored by the first GPU device115in a BAR region of the first GPU device115. One or more memory addresses associated with the GPU memory pool142may be assigned to one or more of the BARs by the first CPU110. For example, the first CPU110may allocate a block of memory (e.g., using a cudaMalloc function) in the first GPU memory118and store a memory address of the block of memory (returned by the allocation) in a particular BAR. Registration is a process whereby the first CPU110stores the assignment of the memory address in the particular BAR thereby associating the memory address with the particular BAR. The first NIC114may access the block of memory by sending commands to the physical address of the first GPU device115and indicating that the command is to be applied to the memory address associated with the particular BAR. For example, the command may be sent in a message addressed to the first physical address, which includes bits that identify the particular BAR. When the first GPU device115receives the message, the first GPU device115reads the particular BAR to obtain the memory address and applies the command to the memory address stored in the particular BAR. 
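The allocation-and-registration sequence described above may be pictured with the non-limiting sketch below: a buffer is allocated in GPU memory with cudaMalloc and then registered with the NIC through the standard verbs call ibv_reg_mr, which accepts the device pointer and pins the corresponding GPU pages behind a BAR mapping when a GPUDirect peer-memory module (such as nv_peer_memory) is loaded. Error handling is omitted and the protection domain is assumed to have been created elsewhere (e.g., with ibv_alloc_pd); this illustrates the general pattern rather than the exact code of any embodiment.

    // Hedged sketch: expose a region of GPU memory to the NIC (GPUDirect RDMA pattern).
    // Assumes a peer-memory kernel module is loaded so ibv_reg_mr accepts device pointers.
    #include <cuda_runtime.h>
    #include <infiniband/verbs.h>
    #include <cstddef>

    struct GpuPool {
        void*          devPtr;   // device address of the GPU memory pool (cf. pool 142)
        struct ibv_mr* mr;       // memory region handle the NIC uses for reads/writes
    };

    GpuPool registerGpuPool(struct ibv_pd* pd, size_t bytes)   // pd from ibv_alloc_pd() (setup omitted)
    {
        GpuPool pool{};
        cudaMalloc(&pool.devPtr, bytes);                       // allocate the pool in GPU memory

        pool.mr = ibv_reg_mr(pd, pool.devPtr, bytes,
                             IBV_ACCESS_LOCAL_WRITE |
                             IBV_ACCESS_REMOTE_WRITE |
                             IBV_ACCESS_REMOTE_READ);          // pins GPU pages and maps them for the NIC
        return pool;
    }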
Thus, the GPU memory pool142may be exposed to the first NIC114by the BARs. For example, the first NIC114may access the GPU memory pool142by issuing a read operation and/or a write operation to the BAR(s) associated with the memory address(es) of the GPU memory pool142. Optionally, data stored in the GPU memory pool144may be exposed over the connection140in a manner that is substantially identical to the manner in which data stored in the GPU memory pool142is exposed over the connection120. Thus, the GPU memory pool144may be visible to both the second GPU136and devices connected to the connection140, such as the second CPU130and the second NIC134. For ease of illustration, the first NIC114will be described as receiving the packets Pinputand storing GPU input data150obtained from the packets Pinputin the GPU memory pool142for processing by a GPU function, process, or kernel204(seeFIGS.2and4-8) executed by the first GPU116in a GPU processing stream210(seeFIGS.2,4, and6). The GPU kernel204may be implemented as a Compute Unified Device Architecture (“CUDA”) kernel. The GPU input data150may include the entire contents of each of the packets Pinputor a portion of the contents of each of the packets Pinput. By way of a non-limiting example, the GPU input data150may include data obtained based at least in part on the contents of the packets Pinput. Optionally, the first CPU110obtains input data from the packets Pinputthat includes both the GPU input data150and host input data152. The GPU input data150may be stored in the GPU memory pool142and the host input data152may be stored in a host memory pool146. For example, the host input data152may include a header portion of each of the packets Pinputand/or a first portion of a payload transported by each of the packets Pinput. The GPU input data150may include a second portion of the payload transported by each of the packets Pinput. By way of a non-limiting example, referring toFIG.2, the GPU kernel204may be implemented as a persistent GPU kernel, which is a function that was launched previously and is presently executing in the GPU processing stream210. Referring toFIG.1, the persistent kernel waits for the GPU input data150and, after the GPU input data150is received, processes the GPU input data150stored in the GPU memory pool142. After the first GPU116processes the GPU input data150to produce GPU output data154and stores the GPU output data154in the GPU memory pool142, the first NIC114retrieves the GPU output data154from the GPU memory pool142and forwards the GPU output data154to a data recipient, such as the device from which the packets Pinputwere received. For example, the first NIC114may transmit the GPU output data154to the second computing device104in the packets Poutput. The first GPU116and/or the first CPU110may provide any information to the first NIC114that is necessary for the first NIC114to prepare and/or transmit the packets Poutput. The packets Poutputmay include the entire contents of the GPU output data154. Referring toFIG.2, the first system memory112stores computer-executable instructions230that cause the first computing device102to perform operations described below. In at least one embodiment, the computer-executable instructions230may be stored in one or more non-transitory computer readable media. The computer-executable instructions230may include a NIC driver module232, a GPU driver module233, a Remote Direct Memory Access (“RDMA”) module234, a packet processing module236, and an application module238. 
When executed by the first CPU110, the NIC driver module232enables communication between the first CPU110and the first NIC114. For example, the NIC driver module232may translate communications between the first NIC114and an operating system being executed by the first CPU110. Thus, the NIC driver module232allows the first CPU110to send commands to the first NIC114, which the first NIC114performs. The NIC driver module232may implement an Application Programming Interface (“API”). By way of a non-limiting example, the NIC driver module232may include a Mellanox Open Fabrics Enterprise Distribution (“MOFED”) (e.g., version 4.x or newer), for example, when the first NIC114is a Mellanox network card. The first CPU110may implement a NIC driver by executing at least a portion of the NIC driver module232. When executed by the first CPU110, the GPU driver module233enables communication between the first CPU110and the first GPU116. For example, the GPU driver module233may translate communications between the first GPU116and the operating system being executed by the first CPU110. Thus, the GPU driver module233allows the first CPU110to send commands to the first GPU116, which the first GPU116performs. The GPU driver module233may implement an API. By way of a non-limiting example, the GPU driver module233may at least partially implement CUDA. By way of another non-limiting example, the GPU driver module233may include the CUDA Toolkit (e.g., version 8.0 or newer). By way of yet another non-limiting example, the GPU driver module233may be at least partially implemented by a NVIDIA kernel module. The first CPU110may implement a GPU driver and/or a GPU process driver (e.g., a CUDA driver) by executing at least a portion of the GPU driver module233. The RDMA module234may include a set of libraries and/or drivers that implement a peer-to-peer data path between the first GPU memory118directly to and from any devices connected to the connection120, such as the first NIC114. In other words, the RDMA module234makes the first GPU memory118visible to those devices (e.g., the first NIC114) that are connected to the first GPU memory118via the connection120. Further, the RDMA module234provides a bridge between the NIC driver (e.g., implemented by the NIC driver module232) and the GPU process driver (e.g., implemented by the GPU driver module233). For example, the first GPU device115may include a register region280with a plurality of registers281-286. The first CPU110(e.g., via the GPU process driver) allocates the GPU memory pool142(e.g., using the cudaMalloc function) and instructs the GPU process driver to initiate execution of the GPU kernel204. After the first CPU110allocates the GPU memory pool142, the first CPU110registers the memory address of the GPU memory pool142to make it visible to and accessible by the first NIC114. The first CPU110registers the memory address by storing it in one or more of the registers281-286. For ease of illustration, the first CPU110will be described as storing the memory address of the GPU memory pool142in the register282(e.g., BAR1). Then, the first NIC114may use the register282(e.g., BAR1) to access the GPU memory pool142. The GPU process driver associates the register282with a virtual address (e.g., a memory mapped I/O (“MMIO”) address) in a user or kernel address space. The GPU process driver may pass the virtual address to the GPU kernel204that the first CPU110uses to refer to the memory address of the GPU memory pool142. 
To access the memory address of the GPU memory pool142, communications sent to the first GPU device115include a reference to the register282, which stores the memory address of the GPU memory pool142. But, devices outside the first GPU device115use the virtual address to refer to the GPU memory pool142. Therefore, when the first NIC114receives the packets Pinput, they are addressed to the virtual address. To store the packets Pinputin the GPU memory pool142, the first NIC114must identify the register282. The NIC driver may contact a Memory Management Unit (“MMU”) component of the first CPU110, which may be able to contact the GPU process driver and identify the register282associated with the virtual address. However, some operating systems cannot exchange the virtual addresses between the GPU process driver and the NIC driver. For such operating systems, the RDMA module234may identify the register282by contacting the GPU process driver and the RDMA module234may provide the identification to the NIC driver. Then, the NIC driver instructs the first NIC114to write the GPU input data150(seeFIGS.1,4, and6) in the packets Pinputto the register282. When the first GPU device115receives the GPU input data150, the first GPU device115looks up the memory address associated with the register282and writes the data to that memory location, which is the GPU memory pool142. For example, when the first NIC114is a Mellanox network card, the first CPU110may call an IB Verbs function in the NIC driver module232(e.g., implemented as a MOFED software (“SW”) stack), which in turn queries the RDMA module234(e.g., implemented as the nv_peer_memory kernel module). The RDMA module234may return an opaque handle (e.g., a BAR1 “virtual address”) that will be used by the first NIC114to access the GPU memory pool142. By way of a non-limiting example, when the connection120is a PCIe bus, the register282is a particular BAR (e.g., BAR1). The first NIC114may issue read operations and/or write operations to the BARs of the first GPU device115in the same way the first NIC114may issue these operations to the first system memory112. The GPU process driver associates the particular BAR with a virtual address (e.g., a MMIO address) and may return that virtual address, which can be used by devices outside the first GPU device115to access the GPU memory pool142. As mentioned above, some operating systems may not be able to exchange MMIO addresses between the NIC driver and the GPU process driver. For such operating systems, the RDMA module234may identify the particular BAR by contacting the GPU process driver, which identifies the particular BAR to the RDMA module234. Then, the RDMA module234may provide the identification to the NIC driver. For example, the second GPU device135may send the packets Pinputaddressed to the virtual address (e.g., a MMIO address) corresponding to the GPU memory pool142to the first NIC114. The NIC driver may contact the RDMA module234to determine which BAR (e.g., BAR1) corresponds to the MMIO address. The RDMA module234may contact the GPU process driver to identify the particular BAR and return the identification of the particular BAR to the NIC driver. Then, the NIC driver instructs the first NIC114to write the GPU input data150(seeFIGS.1,4, and6) in the packets Pinputto the particular BAR. 
When the first GPU device115receives the GPU input data150, the first GPU device115looks up the memory address associated with the particular BAR and writes the data to that memory location, which is the GPU memory pool142. For example, the first GPU device115may store a page table that is used to translate each BAR to a memory address in the first GPU memory118. By using the RDMA module234, the first NIC114may read the GPU output data154(seeFIG.1) from and/or write the GPU input data150(seeFIGS.1,4, and6) to the GPU memory pool142without intervention by the first CPU110or the first system memory112. For example, when executed by the first CPU110, the application module238may implement an application260that may monitor the first NIC114for newly received packets. For example, the first NIC114may receive the packets Pinputin a receive queue160and the application260may poll the receive queue160for newly received packets. When the application260detects the packets Pinputhave been received, the application260may use a Message Passing Interface (“MPI”) function (e.g., a MPI_Send function) or similar function to instruct the first NIC114to send the GPU input data150to the virtual address of the GPU memory pool142identified in the packets Pinput. By way of a non-limiting example, the first NIC114may retrieve the GPU output data154and place the packets Poutputin a send queue162. The application260may poll the GPU memory pool142and, when the application260detects the GPU output data154is ready to retrieve, the application260may use a MPI function (e.g., a MPI_Recv function) or similar function to instruct the first NIC114to retrieve the GPU output data154from the virtual address of the GPU memory pool142. The RDMA module234may implement an API. By way of a non-limiting example, the RDMA module234may include an nv_peer_memory kernel module. The RDMA module234may be characterized as being a bridge between the first NIC114and the first GPU memory118because the RDMA module234exposes the first GPU memory118through the registers281-286(e.g., BAR1) and provides a virtual address that the first NIC114may use to access the first GPU memory118. The packet processing module236may include a set of libraries and/or drivers that implement fast packet processing and communication directly with the first NIC114. For example, the packet processing module236may allow the application260(e.g., implemented by the application module238) to communicate directly with (e.g., send commands to) the first NIC114and bypass operating system routines and kernel routines during such communication. The packet processing module236may be characterized as providing a middle layer between the software application level and the driver level. By way of a non-limiting example, the packet processing module236may include a Data Plane Development Kit (“DPDK”) or similar instructions. The packet processing module236may implement an API. When executed by the first CPU110, the packet processing module236may receive the virtual address of the memory region (e.g., the GPU memory pool142) from the packets Pinput. The packet processing module236may configure the memory region (e.g., allocated by the GPU kernel204) to function as the GPU memory pool142. By way of a non-limiting example, the GPU memory pool142may be implemented as a ring buffer. The GPU memory pool142may be divided into a plurality of GPU memory buffers, referred to as GPU mbufs240A-240D, that each include a GPU content portion244. 
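By way of a non-limiting illustration of the ring-buffer organization, the sketch below lays out a small pool of fixed-size GPU mbufs in GPU memory and computes, on the host, the device address of each mbuf's content portion. The sizes, the struct layout, and the use of plain cudaMalloc are assumptions; an actual DPDK integration would more likely describe the same memory to DPDK as an external buffer area rather than define its own struct.

    // Hedged sketch: a GPU memory pool organised as a ring of fixed-size GPU mbufs,
    // each holding a content portion for one packet's payload (cf. mbufs 240A-240D).
    #include <cuda_runtime.h>
    #include <cstdint>
    #include <cstddef>

    constexpr size_t kNumMbufs    = 4;      // PA-PD in the example; real pools would be larger
    constexpr size_t kContentSize = 2048;   // assumed maximum payload bytes per mbuf

    struct GpuMbuf {
        uint32_t length;                    // valid bytes written by the NIC
        uint8_t  content[kContentSize];     // GPU content portion 244
    };

    struct GpuMbufRing {
        GpuMbuf* base;                      // device pointer to mbuf 0 ("item zero in the list")
        size_t   head = 0;                  // next mbuf to hand to the NIC (host-side index)
    };

    GpuMbufRing allocateRing()
    {
        GpuMbufRing ring{};
        cudaMalloc(reinterpret_cast<void**>(&ring.base),
                   kNumMbufs * sizeof(GpuMbuf));       // the pool lives in GPU memory
        return ring;
    }

    // Device address to which packet i's payload should be written; only pointer
    // arithmetic is performed here, GPU memory is never dereferenced on the host.
    inline uint8_t* contentAddress(const GpuMbufRing& ring, size_t i)
    {
        return reinterpret_cast<uint8_t*>(ring.base + (i % kNumMbufs)) + offsetof(GpuMbuf, content);
    }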
The GPU content portion244stores the GPU input data150(seeFIGS.1,4, and6) obtained from the payload transmitted by the packets Pinput. For example, in the embodiment illustrated, the packets Pinputinclude packets PA-PD. In such embodiments, the GPU mbufs240A-240D may store at least a portion of the payload transmitted by the packets PA-PD, respectively. When executed by the first CPU110, the packet processing module236may implement a memory pool handler that stores the GPU input data150(e.g., the packets Pinputor portions thereof) in the GPU mbufs240A-240D, respectively. The first GPU116may process the GPU input data150stored by the GPU mbufs240A-240D in parallel. Thus, the first GPU116may be characterized as processing the packets PA-PD or portions thereof in parallel. The packet processing module236may configure a portion of the first system memory112to function as the host memory pool146. By way of a non-limiting example, the host memory pool146may be implemented as a ring buffer. The host memory pool146may be divided into a plurality of host memory buffers, referred to as host mbufs250A-250D. Each of the host mbufs250A-250D may include a host metadata portion252and a host content portion254. The host metadata portion252of the host mbufs250A-250D may store information about the packets PA-PD, respectively, such as packet size, one or more timestamps, and the like. The host metadata portion252may store information that is useful when handling a packet but such information does not necessary travel through the network106. The host content portion254of each of the host mbufs250A-250D stores a payload of the host mbuf (e.g., the host input data152illustrated inFIG.1). For example, the host content portion254may store the locations (e.g., virtual addresses) where the first NIC114and/or the first CPU110stored the host input data152(seeFIG.1). In other words, the host content portion254may store links to the host input data152. The host metadata portion252and/or the host content portion254stores links (e.g., virtual addresses) to the GPU mbufs240A-240D that store the GPU input data150(seeFIG.1). The GPU memory pool142may be characterized as being a list of the GPU mbufs240A-240D and the GPU kernel204and the first CPU110may each be characterized as knowing the location (e.g., the virtual address) of item zero in the list. Therefore, the GPU kernel204and the first CPU110may communicate with one another using the GPU memory pool142and/or one or more memory flags270that each reside in a shared memory or shared memory location that is shared between the first GPU116and the first CPU110. For example, the memory flag(s)270may include a ready flag272(e.g., a GDRCopy pinned flag) and a done flag274. The first GPU116may poll the ready flag272. The ready flag272may reside in the first GPU memory118so that the first GPU116may poll the ready flag272without having to access the connection120(seeFIG.1). This reduces an amount of traffic on the connection120and reduces processing time by reducing the time required to poll the ready flag272. In such embodiments, the first CPU110may access the connection120only once each time the first CPU110sets the ready flag272to TRUE. Further, a signal travel time between the first GPU116and the ready flag272may be shorter than a signal travel time between the first CPU110and the ready flag272. 
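The ready/done handshake just described may be pictured with the following non-limiting CUDA sketch: a persistent kernel spins on the ready flag272in GPU memory, processes the mbufs (a per-byte transform stands in for the real workload), publishes its writes with a system-wide fence, and raises the done flag274, which the CPU polls from system memory. The flag placement shown in the host-side comments, the single-block launch, and the toy workload are illustrative assumptions.

    // Hedged sketch of a persistent kernel using the ready/done flags (cf. flags 272/274).
    #include <cuda_runtime.h>
    #include <cstdint>

    struct GpuMbuf { uint32_t length; uint8_t content[2048]; };   // same illustrative layout as above

    __global__ void persistentKernel(volatile int* ready,   // lives in GPU memory (cf. ready flag 272)
                                     volatile int* done,    // mapped system memory (cf. done flag 274)
                                     GpuMbuf* mbufs, int numMbufs)
    {
        for (;;) {                                           // persistent: the kernel never exits
            if (threadIdx.x == 0) {
                while (*ready == 0) { }                      // wait until the CPU sets ready to TRUE
            }
            __syncthreads();

            // Toy workload: each thread walks one or more mbufs and transforms the payload in place.
            for (int i = threadIdx.x; i < numMbufs; i += blockDim.x) {
                for (uint32_t b = 0; b < mbufs[i].length; ++b)
                    mbufs[i].content[b] ^= 0xFF;
            }
            __syncthreads();

            if (threadIdx.x == 0) {
                *ready = 0;
                __threadfence_system();                      // make results visible over PCIe before signalling
                *done = 1;                                   // CPU polls this from system memory
            }
            __syncthreads();
        }
    }

    // Host side (sketch): ready flag in GPU memory, done flag in mapped, pinned system memory.
    //   int* readyDev;  cudaMalloc(reinterpret_cast<void**>(&readyDev), sizeof(int));
    //                   cudaMemset(readyDev, 0, sizeof(int));
    //   int* doneHost;  cudaHostAlloc(reinterpret_cast<void**>(&doneHost), sizeof(int), cudaHostAllocMapped);
    //   int* doneDev;   cudaHostGetDevicePointer(reinterpret_cast<void**>(&doneDev), doneHost, 0);
    //   persistentKernel<<<1, 128>>>(readyDev, doneDev, mbufsDev, 4);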
Additionally or alternatively, a signal transmission distance between the first GPU116and the ready flag272stored in the first GPU memory118may be shorter than a signal transmission distance between the first CPU110and the ready flag272. While the done flag274is illustrated as residing in the first system memory112, both the ready flag272and the done flag274may instead reside in the first GPU memory118. The first CPU110may poll the done flag274. When the done flag274resides in the first system memory112, processing time is reduced by reducing the time required to poll the done flag274. In such embodiments, the first GPU116may access the connection120only once each time the first GPU116sets the done flag274to TRUE. Additionally or alternatively, a signal travel time between the first CPU110and the done flag274may be shorter than a signal travel time between the first GPU116and the done flag274. When executed by the first CPU110, the application module238implements the application260in a CPU processing stream261. The application260in turn implements a receive thread262and/or a transmit thread264. The receive thread262calls a receive function implemented by the packet processing module236. When called, the receive function queries the first NIC114for packets. For example, as mentioned above, the first NIC114may receive the packets Pinputin the receive queue160. When the first NIC114has received packets PA-PD, the receive function detects them in the receive queue160, and instructs the first NIC114to obtain the GPU input data150from the packets PA-PD, and write the GPU input data150into the GPU memory pool142. For example, the first NIC114may write at least a portion of each of the packets PA-PD into the GPU content portions244of the GPU mbufs240A-240D, respectively. The receive function also writes information to the host metadata portions252of the host mbufs250A-250D. The receive function writes in the host metadata portions252and/or the content portions254the virtual addresses of the host mbufs250A-250D where the receive function is storing the GPU input data150(seeFIGS.1,4, and6) obtained from the packets PA-PD, respectively. Optionally, the receive function obtains the host input data152(seeFIG.1) from the packets PA-PD and writes the host input data152into the host content portions254of the host mbufs250A-250D, respectively, of the host memory pool146. The host input data152may include at least a portion of the packets PA-PD. For example, the receive function may write headers of the packets PA-PD into the content portions254of the host mbufs250A-250D, respectively, and payloads of the packets PA-PD into the GPU content portions244of the GPU mbufs240A-240D, respectively. Thus, the application260may use the first CPU110, instead of the first GPU116, to make decisions (e.g., perform protocol termination) using the packet headers, which may improve performance. The receive function creates descriptors DA-DD in a receive queue266in the first system memory118identifying the packets PA-PD and linking (e.g., using pointers) the descriptors DA-DD to the host mbufs250A-250D, respectively. As mentioned above, the virtual addresses where the receive function stored the GPU input data150(seeFIGS.1,4, and6) obtained from the packets PA-PD are stored in the host mbufs250A-250D, respectively. 
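A non-limiting sketch of the receive path just described, using standard DPDK calls, follows: packets are pulled from the receive queue160with rte_eth_rx_burst, the header bytes stay on the host side, and the payload is pushed to the GPU mbuf's content portion. Because no GPUDirect path is wired up in this sketch, the payload is copied with cudaMemcpy as a stand-in for the direct NIC write described above; the port and queue numbers, the burst size, the header length, and the prior EAL and port initialization are all assumptions.

    // Hedged sketch of the receive function: header to the host, payload toward GPU memory.
    // Assumes rte_eal_init() and port/queue setup have already been performed, and that
    // hostMeta and gpuContent each hold at least kBurst entries.
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <cuda_runtime.h>
    #include <cstdint>
    #include <cstring>

    constexpr uint16_t kPort = 0, kQueue = 0, kBurst = 4;   // assumed port/queue and burst size
    constexpr uint32_t kHdrLen = 42;                        // assumed Ethernet+IPv4+UDP header length

    struct HostMeta { uint32_t pktLen; uint8_t header[kHdrLen]; uint8_t* gpuContent; };

    int receiveBurst(HostMeta* hostMeta, uint8_t** gpuContent)  // gpuContent[i]: device address of mbuf i's content
    {
        struct rte_mbuf* pkts[kBurst];
        const uint16_t n = rte_eth_rx_burst(kPort, kQueue, pkts, kBurst);

        for (uint16_t i = 0; i < n; ++i) {
            const uint8_t* data = rte_pktmbuf_mtod(pkts[i], const uint8_t*);
            const uint32_t len  = rte_pktmbuf_data_len(pkts[i]);

            // Host input data: header bytes plus bookkeeping (cf. host mbufs 250A-250D).
            hostMeta[i].pktLen     = len;
            hostMeta[i].gpuContent = gpuContent[i];
            std::memcpy(hostMeta[i].header, data, kHdrLen < len ? kHdrLen : len);

            // GPU input data: payload into the GPU content portion. A real deployment would
            // have the NIC write this directly over PCIe; the copy is only a stand-in.
            if (len > kHdrLen)
                cudaMemcpy(gpuContent[i], data + kHdrLen, len - kHdrLen, cudaMemcpyHostToDevice);

            rte_pktmbuf_free(pkts[i]);
        }
        return n;
    }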
The receive function provides information to the application260indicating that the packets PA-PD have been received and providing the descriptors DA-DD to the application260, which allows the first CPU110to process the packets PA-PD using the links (e.g., pointers) to the host mbufs250A-250D and the virtual addresses stored by the host metadata portions252and/or the host content portions254of the host mbufs250A-250D, respectively. The transmit thread264may call a send function implemented by the packet processing module236. The send function may instruct the first NIC114to retrieve the GPU output data154from the GPU mbufs240A-240D (e.g., store the GPU output data154in the send queue162). The send function may instruct the first NIC114to transmit the GPU output data154(e.g., to the second computing device104illustrated inFIGS.1,4, and6) in the packets Poutput. FIG.3is a flow diagram of a method300that may be performed by the first computing device102(seeFIGS.1,2,4, and6) of the system100(seeFIG.1). For example, the first CPU110may perform the method300when executing the application260(seeFIG.2). In first block302, the first NIC114(seeFIGS.1,2,4, and6) receives the packets Pinput(seeFIGS.1,2,4, and6). Referring toFIG.2, the receive thread262, (seeFIG.2) may query or poll the first NIC114until the receive thread262discovers the packets Pinput(e.g., in the receive queue). Referring toFIG.3, in block304, the receive thread262(seeFIGS.2and5-7) instructs the first NIC114to obtain the GPU input data150(seeFIG.1) from the packets Pinput(seeFIGS.1,2,4, and6). As mentioned above, the GPU input data150may include the entire contents of the packets Pinputor a portion thereof. Optionally, the receive thread262may instruct the first NIC114to obtain the host input data152(seeFIG.1) from the packets Pinput. The host input data152may include the entire contents of the packets Pinputor a portion thereof. For example, the host input data152may include the headers of the packets Pinputand the GPU input data150may include portions of the packets Pinputother than the headers. Then, in block306, the receive thread262instructs the first NIC114to write the GPU input data150into the GPU memory pool142. Referring toFIG.2, the receive thread262writes information into the host metadata portions252of the host mbufs250A-250D and the virtual addresses of the GPU mbufs240A-240D in the host metadata portions252and/or the host content portions254of the host mbufs250A-250D, respectively. Optionally, the receive thread262may write or instruct the first NIC114to write the host input data152into the host content portions254of the host mbufs250A-250D. The receive thread262creates the descriptors DA-DD in the receive queue266and links (e.g., using pointers) the descriptors DA-DD to the host mbufs250A-250D, respectively. The receive thread262may process the packets PA-PD using the links (e.g., pointers) in the descriptors DA-DD to the host mbufs250A-250D, respectively, and the addresses of the GPU mbufs240A-240D, respectively, stored by the host metadata portions252and/or the host content portions254of the host mbufs250A-250D, respectively. In block308(seeFIG.3), the receive thread262causes the GPU kernel204(seeFIGS.2and4-8) to process the GPU input data150stored in the GPU mbufs240A-240D and produce the GPU output data154, which the GPU kernel204stores in the GPU mbufs240A-240D. The GPU kernel204may process the GPU input data150stored in the GPU mbufs240A-240D in parallel. 
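Block308's parallel processing of the GPU mbufs may be pictured with the non-limiting sketch below, which launches one thread block per mbuf and has the host wait on the stream before the transmit side proceeds. The one-block-per-mbuf mapping and the placeholder per-byte transform are assumptions standing in for whatever workload the GPU kernel204actually performs.

    // Hedged sketch of block 308: process the GPU mbufs in parallel, one block per mbuf.
    #include <cuda_runtime.h>
    #include <cstdint>

    struct GpuMbuf { uint32_t length; uint8_t content[2048]; };   // same illustrative layout as above

    __global__ void processMbufs(GpuMbuf* mbufs)
    {
        GpuMbuf& m = mbufs[blockIdx.x];                  // each block owns one mbuf (PA-PD)
        for (uint32_t b = threadIdx.x; b < m.length; b += blockDim.x)
            m.content[b] ^= 0xFF;                        // placeholder per-byte transform
    }

    void runBatch(GpuMbuf* mbufsDev, int numMbufs, cudaStream_t stream)
    {
        processMbufs<<<numMbufs, 256, 0, stream>>>(mbufsDev);
        cudaStreamSynchronize(stream);                   // transmit thread proceeds once this returns
    }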
In block310(seeFIG.3), the transmit thread264instructs the first NIC114to retrieve the GPU output data154from the GPU mbufs240A-240D. The transmit thread264may instruct the first NIC114to transmit the GPU output data154(e.g., to the second computing device104illustrated inFIGS.1,4, and6) in the packets Poutput. Then, the method300returns to block302. As mentioned above, the first NIC114performs one or more write operations that write the GPU input data150into the GPU memory pool142. For example, after the first NIC114receives the packets Pinput, the first NIC114may issue a series of write operations to the first GPU memory118. Some GPUs may not complete the write operation(s) before the GPU kernel204reads the data stored in the GPU mbufs240A-240D when the GPU kernel204is a persistent GPU kernel that was executing before the write operation(s) were initiated. If, before the write operation(s) are performed, the GPU kernel204reads the memory whereat the packets are supposed to be stored, the GPU kernel204may read the incorrect information. This is referred to as a memory consistency issue because the write operation(s) performed by the first NIC114were not completed before the GPU kernel204attempted to read the GPU input data150from the GPU mbufs240A-240D. The reason for this is that write-after-write (“WaW”) ordering (e.g., PCIe WaW ordering) may not be guaranteed inside the first GPU device115. Thus, after the first NIC114performs the write operation and writes the GPU input data150to the GPU mbufs240A-240D, the first CPU110cannot be certain that the GPU kernel204will obtain and process the GPU input data150. The memory consistency issue may be avoided by using an explicit I/O consistency mechanism, such as triggering a CUDA operation (e.g., launching the GPU kernel204) that reads the GPU memory after being triggered, or reading GPU memory (e.g., the video memory) from a device (e.g., an external PCIe device, such as the first CPU110) that is external to the first GPU device115before the first GPU116attempts to read the GPU memory. One approach to avoiding the memory consistency issue is to launch the GPU kernel204after the GPU input data150is written to the GPU mbufs240A-240D.FIG.4is a block diagram illustrating such a process, which may be performed in block308(seeFIG.3) of the method300(seeFIG.3), in at least one embodiment. Referring toFIG.4, in block404, the receive thread262launches the GPU kernel204(e.g., using the GPU process driver), which, in block406, processes the GPU input data150stored in the GPU mbufs240A-240D. For example, the receive thread262may execute the receive function (e.g., a function gpu_trigger_workload( )) that launches the GPU kernel204. The transmit thread264(seeFIGS.2and5-7) may execute the send function (e.g., a function gpu_wait_workload( )), which waits for an event or other GPU synchronization mechanism that indicates completion of the GPU kernel204before retrieving (e.g., in a burst) the GPU output data154(seeFIG.1) from the GPU mbufs240A-240D. FIG.5is a diagram500illustrating sub-processes that may be performed by the receive thread262, the GPU kernel204, and the transmit thread264during the method300illustrated inFIG.3, in at least one embodiment. Referring toFIG.5, the receive thread262detects that the first NIC114has received the packets Pinputand instructs the first NIC114to obtain the GPU input data150in block502, which is performed during blocks302and304(seeFIG.3) of the method300(seeFIG.3). 
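A minimal sketch of the launch-per-burst approach of FIG.4 is shown below, using only the standard CUDA runtime API. The function names follow the example names gpu_trigger_workload( ) and gpu_wait_workload( ) mentioned above, but the kernel body (a placeholder byte-wise transform), the one-thread-block-per-mbuf launch configuration, and the use of a CUDA event to signal completion are assumptions for illustration; the kernel launch itself serves as the CUDA operation that reads the GPU memory after being triggered.

```cuda
// Sketch of the launch-per-burst approach of FIG. 4, assuming the GPU mbufs
// form a contiguous array of fixed-size payload buffers in GPU memory.
#include <cuda_runtime.h>
#include <cstdint>

constexpr int kBurst    = 4;
constexpr int kMbufSize = 2048;

__global__ void process_burst(uint8_t *gpu_pool, const uint32_t *lens) {
    int pkt = blockIdx.x;                        // one thread block per GPU mbuf
    uint8_t *payload = gpu_pool + pkt * kMbufSize;
    for (uint32_t i = threadIdx.x; i < lens[pkt]; i += blockDim.x)
        payload[i] ^= 0xFF;                      // placeholder per-packet work
}

// Roughly corresponds to gpu_trigger_workload(): launch the kernel after the
// GPU input data has been written to the GPU memory pool.
void gpu_trigger_workload(uint8_t *gpu_pool, const uint32_t *d_lens,
                          cudaStream_t stream, cudaEvent_t done) {
    process_burst<<<kBurst, 128, 0, stream>>>(gpu_pool, d_lens);
    cudaEventRecord(done, stream);               // marks completion of the burst
}

// Roughly corresponds to gpu_wait_workload(): the transmit thread waits for
// the completion event before instructing the NIC to retrieve the output.
void gpu_wait_workload(cudaEvent_t done) {
    cudaEventSynchronize(done);
}
```

In use, the receive thread would call gpu_trigger_workload( ) once per burst after block 306, and the transmit thread would call gpu_wait_workload( ) before block 310.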
Then, in block504, the receive thread262instructs the first NIC114to store the GPU input data150in the GPU mbufs240A-240D, which is performed during block306(seeFIG.3) of the method300(seeFIG.3). Next, in block506, the receive thread262launches the GPU kernel204, which is performed during block308(seeFIG.3) of the method300(seeFIG.3). In block508, the GPU kernel204processes the GPU input data150, produces the GPU output data154, and stores the GPU output data154in the GPU mbufs240A-240D, which occurs during block308(seeFIG.3) of the method300(seeFIG.3). As the GPU kernel204processes the GPU input data150, in block510, the transmit thread264waits for the GPU kernel204to complete the processing. Finally, in block512, the transmit thread264instructs the first NIC114to retrieve the GPU output data154from the GPU mbufs240A-240D, which occurs during block310(seeFIG.3) of the method300(seeFIG.3). The transmit thread264may instruct the first NIC114to forward the GPU output data154to a data recipient, such as the device from which the packets Pinputwere received. For example, the first NIC114may transmit the GPU output data154in the packets Poutput(seeFIGS.1and2) to the second computing device104. Referring toFIG.2, another approach to avoiding the memory consistency issue is to flush the GPU memory (e.g., video memory, VRAM, and the like) on the first GPU device115after the first NIC114writes the GPU input data150to the GPU mbufs240A-240D. For example, the first CPU110or the first NIC114may flush the GPU memory (e.g., video memory, VRAM, and the like) on the first GPU device115. One way to accomplish this in GPUs in which read-after-write ("RaW") ordering is guaranteed is to have the first CPU110or the first NIC114perform a read operation after the first NIC114writes the GPU input data150to the GPU mbufs240A-240D. Flushing the GPU memory may be used when the GPU kernel204is a persistent kernel, which is a function launched and executing in the GPU processing stream210before the first NIC114writes the GPU input data150to the GPU mbufs240A-240D. In other words, the persistent kernel is waiting to obtain data from the GPU memory pool142. For GPUs in which simply notifying the persistent kernel that the GPU input data150has been written to the GPU mbufs240A-240D is insufficient to ensure the write operation has actually been performed, the GPU memory may be flushed before the notification is provided to the persistent kernel. Optionally, a gdrcopy library, which may be used to transfer data between the first system memory112and the first GPU memory118, may be used to flush the GPU memory. FIG.6is a block diagram illustrating a process600that flushes the GPU memory (e.g., video memory, VRAM, and the like) on the first GPU device115, and may be performed in block308(seeFIG.3) of the method300(seeFIG.3), in at least one embodiment. For example, the process600may be performed when the GPU kernel204is a persistent kernel. Referring toFIG.6, in block602, the receive thread262flushes the GPU memory (e.g., video memory, VRAM, and the like) on the first GPU device115. For example, the receive thread262may perform or instruct the first NIC114to perform a read operation on the GPU memory pool142. Then, in block604, the receive thread262unblocks the GPU kernel204by setting the ready flag272to TRUE. In other words, the receive function (e.g., the function gpu_trigger_workload( )) executed by the receive thread262may notify the GPU kernel204of the arrival of the packets Pinputthat have yet to be processed.
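The receive-thread side of blocks 602 and 604 might be sketched as follows, assuming the ready flag 272 resides in GPU memory. The gdrcopy mapping mentioned above is replaced here by plain asynchronous copies so the sketch stays within the standard CUDA runtime API; the copies are issued on a dedicated stream (created with cudaStreamNonBlocking) so that they do not serialize behind the persistent kernel, which never completes on its own.

```cuda
// CPU-side sketch of block 602 (flush) and block 604 (set ready flag to TRUE).
// The copies use a dedicated stream so they do not wait behind the persistent
// kernel, which occupies its own stream indefinitely.
#include <cuda_runtime.h>
#include <cstdint>

void notify_persistent_kernel(int *d_ready_flag,        // ready flag in GPU memory
                              const uint8_t *gpu_pool,  // GPU memory pool
                              cudaStream_t copy_stream) {
    // Block 602: a read of GPU memory issued from outside the GPU device
    // flushes the NIC's pending writes on GPUs that guarantee read-after-write
    // ordering for external accesses.
    uint8_t probe;
    cudaMemcpyAsync(&probe, gpu_pool, 1, cudaMemcpyDeviceToHost, copy_stream);
    cudaStreamSynchronize(copy_stream);

    // Block 604: unblock the persistent GPU kernel by setting the ready flag.
    const int kTrue = 1;
    cudaMemcpyAsync(d_ready_flag, &kTrue, sizeof(kTrue),
                    cudaMemcpyHostToDevice, copy_stream);
    cudaStreamSynchronize(copy_stream);
}
```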
In block606, the GPU kernel204polls the ready flag272to determine when the GPU input data150is ready for processing (e.g., the ready flag272is set to TRUE). In block608, when the GPU kernel204determines the GPU input data150is ready for processing, the GPU kernel204processes the GPU input data150in the GPU mbufs240A-240D. The GPU kernel204may also toggle the ready flag272to FALSE (which is illustrated by arrow610) to indicate that data stored in the GPU memory pool142is no longer ready for processing. After the GPU kernel204finishes processing the GPU input data150and writes the GPU output data154to the GPU mbufs240A-240D (seeFIG.2), the GPU kernel204sets the done flag274to TRUE, which is illustrated by arrow612. The send function (e.g., the function gpu_wait_workload( )) executed by the transmit thread264polls the done flag274(which is illustrated by arrow614) to determine when the GPU kernel204has completed processing the packets Pinputand has stored the GPU output data154to the GPU mbufs240A-240D. After the transmit thread264instructs the first NIC114to retrieve the GPU output data154and the first NIC114retrieves the GPU output data154, the transmit thread264may toggle the done flag274to FALSE (which is illustrated by arrow616) to indicate that data stored in the GPU memory pool142is no longer waiting to be retrieved. FIG.7is a block diagram700illustrating sub-processes performed by the receive thread262, the GPU kernel204, and the transmit thread264during the method300illustrated inFIG.3, in at least one embodiment. Referring toFIG.7, in block702, the receive thread262detects that the first NIC114has received the packets Pinputand instructs the first NIC114to obtain the GPU input data150. Block702is performed during blocks302and304(seeFIG.3) of the method300(seeFIG.3). Then, in block704, the receive thread262instructs the first NIC114to store the GPU input data150in the GPU mbufs240A-240D, which is performed during block306(seeFIG.3) of the method300(seeFIG.3). Next, in block706, the receive thread262flushes or instructs the first NIC114to flush the GPU memory (e.g., video memory, VRAM, and the like) on the first GPU device115, which may be performed during block306or block308(seeFIG.3) of the method300(seeFIG.3). Then, in block708, the receive thread262notifies the GPU kernel204that the GPU input data150is ready for processing by setting the ready flag272to TRUE, which is performed during block308(seeFIG.3) of the method300(seeFIG.3). In block714, the GPU kernel204is busy or waiting. During block714, the GPU kernel204monitors (e.g., polls) the ready flag272. Thus, CPU-GPU communication710occurs in blocks708and714. When the GPU kernel204is no longer busy and determines (e.g., via polling) that the ready flag272has been set to TRUE, the GPU kernel204begins processing the GPU input data150, and stores the GPU output data154in the GPU mbufs240A-240D in block716. Block716occurs during block308(seeFIG.3) of the method300(seeFIG.3). After the GPU kernel204writes the GPU output data154to the GPU mbufs240A-240D (seeFIG.2), in block718, the GPU kernel204sets the done flag274to TRUE. In block724, the transmit thread264is busy or waiting. During block724, the transmit thread264monitors (e.g., polls) the done flag274. Thus, CPU-GPU communication720occurs in blocks718and724. Finally, in block726, the transmit thread264instructs the first NIC114to retrieve the GPU output data154from the GPU mbufs240A-240D, which occurs during block310(seeFIG.3) of the method300(seeFIG.3).
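A single-thread-block sketch of the flag protocol of FIGS.6 and7 follows. The flag encoding (0 for FALSE, 1 for TRUE), the placeholder per-byte work, and the placement of the done flag 274 in mapped pinned host memory are assumptions for illustration; a multi-block variant would additionally require the inter-block synchronization discussed below in connection with FIG.8.

```cuda
// Persistent-kernel sketch of blocks 606-608 and arrows 610-616. Launched
// once (e.g., persistent_kernel<<<1, 128, 0, kernel_stream>>>(...)) before
// the NIC writes any packets; shown here for a single burst.
#include <cuda_runtime.h>
#include <cstdint>

constexpr int kBurst    = 4;
constexpr int kMbufSize = 2048;

__global__ void persistent_kernel(volatile int *ready,   // flag in GPU memory
                                  volatile int *done,    // mapped host memory
                                  uint8_t *gpu_pool) {
    // Block 606: poll the ready flag until the receive thread sets it to TRUE.
    if (threadIdx.x == 0) {
        while (*ready == 0) { /* busy wait */ }
    }
    __syncthreads();

    // Block 608: process the GPU input data (placeholder per-byte work).
    for (int i = threadIdx.x; i < kBurst * kMbufSize; i += blockDim.x)
        gpu_pool[i] ^= 0xFF;
    __syncthreads();

    if (threadIdx.x == 0) {
        *ready = 0;               // arrow 610: input no longer marked ready
        __threadfence_system();   // make results visible before signaling
        *done = 1;                // arrow 612: notify the transmit thread
    }
}

// Transmit-thread side (arrows 614 and 616): poll the done flag in pinned
// host memory, retrieve the output, then clear the flag for the next burst.
void wait_for_done_and_clear(volatile int *h_done) {
    while (*h_done == 0) { /* poll */ }
    // ... instruct the NIC to retrieve the GPU output data here ...
    *h_done = 0;
}
```

In this sketch the done flag would be allocated with cudaHostAlloc using the cudaHostAllocMapped flag, with the GPU kernel writing through the corresponding device pointer (obtained via cudaHostGetDevicePointer) while the first CPU 110 polls the host pointer directly.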
The transmit thread264may instruct the first NIC114to forward the GPU output data154to a data recipient, such as the device from which the packets Pinputwere received. For example, the first NIC114may transmit the GPU output data154in the packets Poutput(seeFIGS.1and2) to the second computing device104. FIG.8is a diagram800illustrating sub-processes performed by a CPU thread 1 (e.g., the receive thread262), the GPU kernel204, and a CPU thread 2 (e.g., the transmit thread264) during the method300illustrated inFIG.3, in at least one embodiment. In block802, the GPU kernel204waits for the ready flag272to be set to TRUE. In block804, the CPU thread 1 prepares the GPU input data150that the GPU kernel204should process (e.g., accumulating a burst of network packets) and stores the GPU input data150in a packet list805(e.g., the GPU mbufs240A-240D). In block806, the CPU thread 1 flushes the GPU memory (e.g., the video memory), for example, using a GDRCopy function. In block808, the CPU thread 1 updates the ready flag272(e.g., a GDRCopy pinned flag in GPU memory) to TRUE to unblock the GPU kernel204(e.g., to notify the GPU kernel204that the GPU input data150obtained from the new burst of packets is ready to be processed). In block810, the CPU thread 2 waits for the done flag274(e.g., in CPU pinned memory) to be updated to TRUE. In the GPU kernel204, at least one CUDA thread detects the ready flag272has been updated and propagates the information to the other CUDA threads. By way of a non-limiting example, each CUDA thread may be used to process a different one of the GPU mbufs240A-240D and, therefore, data received in a different packet. The CUDA threads synchronize in block812, perform their work in parallel in block814, and synchronize again in block816. In block818, one of the CUDA threads updates the done flag274to notify the first CPU110that processing has completed. In block820, the CPU thread 2 detects the done flag274has been updated to TRUE, which indicates to the CPU thread 2 that the processing performed by the GPU kernel204has been completed. Thus, the CPU thread 2 may perform other tasks, such as cleanup operations. Then, the process may return to block802. A group of threads is commonly referred to as a CUDA block. Multiple CUDA blocks may be grouped together into a CUDA grid. The GPU kernel204may be executed as a CUDA grid of blocks of threads. When the GPU kernel204includes multiple CUDA blocks, synchronization among the CUDA threads (e.g., in blocks812and816) may be performed via inter-block barrier synchronization using atomics. The first CPU110and the GPU kernel204(e.g., CUDA kernel) may share a list830of items (e.g., a CPU pinned memory list) that may be treated as a ring buffer. InFIG.8, the CPU thread 1 performs an iteration (e.g., Iteration 0, Iteration 1, etc.) for each plurality of packets received. The CPU thread 2 performs a subsequent iteration (e.g., Iteration 0, Iteration 1, etc.) for each iteration performed by the CPU thread 1. The GPU kernel204also performs a subsequent iteration (e.g., Iteration 0, Iteration 1, etc.) for each iteration performed by the CPU thread 1. Further, each iteration performed by the CPU thread 1 results in a new item (e.g., Item 0, Item 1, etc.) stored in the shared list830. The items each store the packet list805as well as the values of the ready flag272and the done flag274. The packet list805, referring toFIG.2, may be implemented as the GPU mbufs240A-240D, the descriptors DA-DD in the receive queue266, the host mbufs250A-250D, and the like.
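By way of a non-limiting illustration, the shared list 830 might be declared as follows. The ring depth, the field layout, and the representation of the packet list 805 as GPU mbuf addresses and lengths are assumptions for illustration.

```cuda
// Sketch of the shared list 830 treated as a ring buffer of items, each
// holding a packet list plus the ready and done flags, mirroring FIG. 8.
#include <cuda_runtime.h>
#include <cstdint>
#include <atomic>

constexpr int kRingDepth = 16;   // number of in-flight items (assumed)
constexpr int kBurst     = 4;

struct Item {                    // one entry of the shared list
    void    *gpu_mbuf[kBurst];   // packet list: addresses of the GPU mbufs
    uint32_t len[kBurst];
    volatile int ready;          // ready flag (set by CPU thread 1)
    volatile int done;           // done flag (set by the GPU kernel)
};

// The list lives in CPU pinned, mapped memory so that the CPU threads and
// the CUDA kernel (through its device pointer) can both access it.
Item *alloc_shared_list() {
    Item *ring = nullptr;
    cudaHostAlloc(reinterpret_cast<void **>(&ring),
                  kRingDepth * sizeof(Item), cudaHostAllocMapped);
    for (int i = 0; i < kRingDepth; ++i) { ring[i].ready = 0; ring[i].done = 0; }
    return ring;
}

// CPU thread 1, one iteration: fill the next item and mark it ready.
void produce(Item *ring, int iteration,
             void *const gpu_mbufs[kBurst], const uint32_t lens[kBurst]) {
    Item &it = ring[iteration % kRingDepth];   // index wraps around the ring
    for (int i = 0; i < kBurst; ++i) {
        it.gpu_mbuf[i] = gpu_mbufs[i];
        it.len[i]      = lens[i];
    }
    it.done = 0;
    std::atomic_thread_fence(std::memory_order_release);  // publish fields first
    it.ready = 1;                              // unblocks the GPU kernel for this item
}
```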
Optionally, a new item in the shared list830may overwrite a previous item. Referring toFIG.2, yet another approach to avoiding the memory consistency issue is to use a CUDA graph created at setup time. The CUDA graph may be composed of a plurality of CUDA kernels, each dedicated to processing a different set (e.g., burst) of packets. Each CUDA kernel in the same CUDA graph may wait for a specific ready flag dedicated to a different set of packets to be set to TRUE. When all of the CUDA kernels in the same CUDA graph have been executed, a new instance of the CUDA graph may be launched. In the following description, numerous specific details are set forth to provide a more thorough understanding of at least one embodiment. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details.
Data Center
FIG.9illustrates an exemplary data center900, in accordance with at least one embodiment. In at least one embodiment, data center900includes, without limitation, a data center infrastructure layer910, a framework layer920, a software layer930and an application layer940. In at least one embodiment, as shown inFIG.9, data center infrastructure layer910may include a resource orchestrator912, grouped computing resources914, and node computing resources ("node C.R.s")916(1)-916(N), where "N" represents any whole, positive integer. In at least one embodiment, node C.R.s916(1)-916(N) may include, but are not limited to, any number of central processing units ("CPUs") or other processors (including accelerators, field programmable gate arrays ("FPGAs"), data processing units ("DPUs") in network devices, graphics processors, etc.), memory devices (e.g., dynamic read-only memory), storage devices (e.g., solid state or disk drives), network input/output ("NW I/O") devices, network switches, virtual machines ("VMs"), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s916(1)-916(N) may be a server having one or more of above-mentioned computing resources. In at least one embodiment, grouped computing resources914may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). Separate groupings of node C.R.s within grouped computing resources914may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination. In at least one embodiment, resource orchestrator912may configure or otherwise control one or more node C.R.s916(1)-916(N) and/or grouped computing resources914. In at least one embodiment, resource orchestrator912may include a software design infrastructure ("SDI") management entity for data center900. In at least one embodiment, resource orchestrator912may include hardware, software or some combination thereof. In at least one embodiment, as shown inFIG.9, framework layer920includes, without limitation, a job scheduler932, a configuration manager934, a resource manager936and a distributed file system938.
In at least one embodiment, framework layer920may include a framework to support software952of software layer930and/or one or more application(s)942of application layer940. In at least one embodiment, software952or application(s)942may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer920may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter "Spark") that may utilize distributed file system938for large-scale data processing (e.g., "big data"). In at least one embodiment, job scheduler932may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center900. In at least one embodiment, configuration manager934may be capable of configuring different layers such as software layer930and framework layer920, including Spark and distributed file system938for supporting large-scale data processing. In at least one embodiment, resource manager936may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system938and job scheduler932. In at least one embodiment, clustered or grouped computing resources may include grouped computing resources914at data center infrastructure layer910. In at least one embodiment, resource manager936may coordinate with resource orchestrator912to manage these mapped or allocated computing resources. In at least one embodiment, software952included in software layer930may include software used by at least portions of node C.R.s916(1)-916(N), grouped computing resources914, and/or distributed file system938of framework layer920. One or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software. In at least one embodiment, application(s)942included in application layer940may include one or more types of applications used by at least portions of node C.R.s916(1)-916(N), grouped computing resources914, and/or distributed file system938of framework layer920. In at least one embodiment, one or more types of applications may include, without limitation, CUDA applications. In at least one embodiment, any of configuration manager934, resource manager936, and resource orchestrator912may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a data center operator of data center900from making possibly bad configuration decisions and possibly avoiding underutilized and/or poor performing portions of a data center. In at least one embodiment, the data center900may be used to implement the system100(seeFIG.1). For example, the first and second computing devices102and104(seeFIGS.1,4, and6) may each be implemented by one or more of grouped computing resources914or node C.R.s916(1)-916(N).
Computer-Based Systems
The following figures set forth, without limitation, exemplary computer-based systems that can be used to implement at least one embodiment. FIG.10illustrates a processing system1000, in accordance with at least one embodiment.
In at least one embodiment, processing system1000includes one or more processors1002and one or more graphics processors1008, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors1002or processor cores1007. In at least one embodiment, processing system1000is a processing platform incorporated within a system-on-a-chip ("SoC") integrated circuit for use in mobile, handheld, or embedded devices. In at least one embodiment, processing system1000can include, or be incorporated within a server-based gaming platform, a game console, a media console, a mobile gaming console, a handheld game console, or an online game console. In at least one embodiment, processing system1000is a mobile phone, smart phone, tablet computing device or mobile Internet device. In at least one embodiment, processing system1000can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In at least one embodiment, processing system1000is a television or set top box device having one or more processors1002and a graphical interface generated by one or more graphics processors1008. In at least one embodiment, one or more processors1002each include one or more processor cores1007to process instructions which, when executed, perform operations for system and user software. In at least one embodiment, each of one or more processor cores1007is configured to process a specific instruction set1009. In at least one embodiment, instruction set1009may facilitate Complex Instruction Set Computing ("CISC"), Reduced Instruction Set Computing ("RISC"), or computing via a Very Long Instruction Word ("VLIW"). In at least one embodiment, processor cores1007may each process a different instruction set1009, which may include instructions to facilitate emulation of other instruction sets. In at least one embodiment, processor core1007may also include other processing devices, such as a digital signal processor ("DSP"). In at least one embodiment, processor1002includes cache memory ("cache")1004. In at least one embodiment, processor1002can have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory is shared among various components of processor1002. In at least one embodiment, processor1002also uses an external cache (e.g., a Level 3 ("L3") cache or Last Level Cache ("LLC")) (not shown), which may be shared among processor cores1007using known cache coherency techniques. In at least one embodiment, register file1006is additionally included in processor1002which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). In at least one embodiment, register file1006may include general-purpose registers or other registers. In at least one embodiment, one or more processor(s)1002are coupled with one or more interface bus(es)1010to transmit communication signals such as address, data, or control signals between processor1002and other components in processing system1000. In at least one embodiment, interface bus1010can be a processor bus, such as a version of a Direct Media Interface ("DMI") bus.
In at least one embodiment, interface bus1010is not limited to a DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., “PCI,” PCI Express (“PCIe”)), memory buses, or other types of interface buses. In at least one embodiment processor(s)1002include an integrated memory controller1016and a platform controller hub1030. In at least one embodiment, memory controller1016facilitates communication between a memory device and other components of processing system1000, while platform controller hub (“PCH”)1030provides connections to Input/Output (“I/O”) devices via a local I/O bus. In at least one embodiment, memory device1020can be a dynamic random access memory (“DRAM”) device, a static random access memory (“SRAM”) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as processor memory. In at least one embodiment memory device1020can operate as system memory for processing system1000, to store data1022and instructions1021for use when one or more processors1002executes an application or process. In at least one embodiment, memory controller1016also couples with an optional external graphics processor1012, which may communicate with one or more graphics processors1008in processors1002to perform graphics and media operations. In at least one embodiment, a display device1011can connect to processor(s)1002. In at least one embodiment display device1011can include one or more of an internal display device, as in a mobile electronic device or a laptop device or an external display device attached via a display interface (e.g., DisplayPort, etc.). In at least one embodiment, display device1011can include a head mounted display (“HMD”) such as a stereoscopic display device for use in virtual reality (“VR”) applications or augmented reality (“AR”) applications. In at least one embodiment, platform controller hub1030enables peripherals to connect to memory device1020and processor1002via a high-speed I/O bus. In at least one embodiment, I/O peripherals include, but are not limited to, an audio controller1046, a network controller1034, a firmware interface1028, a wireless transceiver1026, touch sensors1025, a data storage device1024(e.g., hard disk drive, flash memory, etc.). In at least one embodiment, data storage device1024can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as PCI, or PCIe. In at least one embodiment, touch sensors1025can include touch screen sensors, pressure sensors, or fingerprint sensors. In at least one embodiment, wireless transceiver1026can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution (“LTE”) transceiver. In at least one embodiment, firmware interface1028enables communication with system firmware, and can be, for example, a unified extensible firmware interface (“UEFI”). In at least one embodiment, network controller1034can enable a network connection to a wired network. In at least one embodiment, a high-performance network controller (not shown) couples with interface bus1010. In at least one embodiment, audio controller1046is a multi-channel high definition audio controller. In at least one embodiment, processing system1000includes an optional legacy I/O controller1040for coupling legacy (e.g., Personal System 2 (“PS/2”)) devices to processing system1000. 
In at least one embodiment, platform controller hub1030can also connect to one or more Universal Serial Bus ("USB") controllers1042that connect input devices, such as keyboard and mouse1043combinations, a camera1044, or other USB input devices. In at least one embodiment, an instance of memory controller1016and platform controller hub1030may be integrated into a discrete external graphics processor, such as external graphics processor1012. In at least one embodiment, platform controller hub1030and/or memory controller1016may be external to one or more processor(s)1002. For example, in at least one embodiment, processing system1000can include an external memory controller1016and platform controller hub1030, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with processor(s)1002. In at least one embodiment, the processing system1000may be used to implement the first computing device102(seeFIGS.1,2,4, and6). In such embodiments, referring toFIG.2, processor(s)1002(seeFIG.10) may implement the first CPU110, memory device1020may implement the first system memory112, network controller1034may implement the first NIC114, graphics processor(s)1008may implement the first GPU device115, and interface bus1010may implement the connection120. In at least one embodiment, the processing system1000(seeFIG.10) may be used to implement the second computing device104. In such embodiments, processor(s)1002(seeFIG.10) may implement the second CPU130, memory device1020may implement the second system memory132, network controller1034may implement the second NIC134, graphics processor(s)1008may implement the second GPU device135, and interface bus1010may implement the connection140. FIG.11illustrates a computer system1100, in accordance with at least one embodiment. In at least one embodiment, computer system1100may be a system with interconnected devices and components, an SOC, or some combination. In at least one embodiment, computer system1100is formed with a processor1102that may include execution units to execute an instruction. In at least one embodiment, computer system1100may include, without limitation, a component, such as processor1102to employ execution units including logic to perform algorithms for processing data. In at least one embodiment, computer system1100may include processors, such as PENTIUM® Processor family, Xeon™, Itanium®, XScale™ and/or StrongARM™, Intel® Core™, or Intel® Nervana™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In at least one embodiment, computer system1100may execute a version of WINDOWS' operating system available from Microsoft Corporation of Redmond, Wash., although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. In at least one embodiment, computer system1100may be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants ("PDAs"), and handheld PCs.
In at least one embodiment, embedded applications may include a microcontroller, a digital signal processor (DSP), an SoC, network computers (“NetPCs”), set-top boxes, network hubs, wide area network (“WAN”) switches, or any other system that may perform one or more instructions. In at least one embodiment, computer system1100may include, without limitation, processor1102that may include, without limitation, one or more execution units1108that may be configured to execute a Compute Unified Device Architecture (“CUDA”) (CUDA® is developed by NVIDIA Corporation of Santa Clara, CA) program. In at least one embodiment, a CUDA program is at least a portion of a software application written in a CUDA programming language. In at least one embodiment, computer system1100is a single processor desktop or server system. In at least one embodiment, computer system1100may be a multiprocessor system. In at least one embodiment, processor1102may include, without limitation, a CISC microprocessor, a RISC microprocessor, a VLIW microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. In at least one embodiment, processor1102may be coupled to a processor bus1110that may transmit data signals between processor1102and other components in computer system1100. In at least one embodiment, processor1102may include, without limitation, a Level 1 (“L1”) internal cache memory (“cache”)1104. In at least one embodiment, processor1102may have a single internal cache or multiple levels of internal cache. In at least one embodiment, cache memory may reside external to processor1102. In at least one embodiment, processor1102may also include a combination of both internal and external caches. In at least one embodiment, a register file1106may store different types of data in various registers including, without limitation, integer registers, floating point registers, status registers, and instruction pointer register. In at least one embodiment, execution unit1108, including, without limitation, logic to perform integer and floating point operations, also resides in processor1102. Processor1102may also include a microcode (“ucode”) read only memory (“ROM”) that stores microcode for certain macro instructions. In at least one embodiment, execution unit1108may include logic to handle a packed instruction set1109. In at least one embodiment, by including packed instruction set1109in an instruction set of a general-purpose processor1102, along with associated circuitry to execute instructions, operations used by many multimedia applications may be performed using packed data in a general-purpose processor1102. In at least one embodiment, many multimedia applications may be accelerated and executed more efficiently by using full width of a processor's data bus for performing operations on packed data, which may eliminate a need to transfer smaller units of data across a processor's data bus to perform one or more operations one data element at a time. In at least one embodiment, execution unit1108may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system1100may include, without limitation, a memory1120. In at least one embodiment, memory1120may be implemented as a DRAM device, an SRAM device, flash memory device, or other memory device. 
Memory1120may store instruction(s)1119and/or data1121represented by data signals that may be executed by processor1102. In at least one embodiment, a system logic chip may be coupled to processor bus1110and memory1120. In at least one embodiment, the system logic chip may include, without limitation, a memory controller hub (“MCH”)1116, and processor1102may communicate with MCH1116via processor bus1110. In at least one embodiment, MCH1116may provide a high bandwidth memory path1118to memory1120for instruction and data storage and for storage of graphics commands, data and textures. In at least one embodiment, MCH1116may direct data signals between processor1102, memory1120, and other components in computer system1100and to bridge data signals between processor bus1110, memory1120, and a system I/O1122. In at least one embodiment, system logic chip may provide a graphics port for coupling to a graphics controller. In at least one embodiment, MCH1116may be coupled to memory1120through high bandwidth memory path1118and graphics/video card1112may be coupled to MCH1116through an Accelerated Graphics Port (“AGP”) interconnect1114. In at least one embodiment, computer system1100may use system I/O1122that is a proprietary hub interface bus to couple MCH1116to I/O controller hub (“ICH”)1130. In at least one embodiment, ICH1130may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, local I/O bus may include, without limitation, a high-speed I/O bus for connecting peripherals to memory1120, a chipset, and processor1102. Examples may include, without limitation, an audio controller1129, a firmware hub (“flash BIOS”)1128, a wireless transceiver1126, a data storage1124, a legacy I/O controller1123containing a user input interface1125and a keyboard interface, a serial expansion port1127, such as a USB, and a network controller1134. Data storage1124may comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. In at least one embodiment,FIG.11illustrates a system, which includes interconnected hardware devices or “chips.” In at least one embodiment,FIG.11may illustrate an exemplary SoC. In at least one embodiment, devices illustrated inFIG.11may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of system1100are interconnected using compute express link (“CXL”) interconnects. In at least one embodiment, the computer system1100may be used to implement the first computing device102(seeFIGS.1,2,4, and6). In such embodiments, referring toFIG.2, processor1102(seeFIG.11) may implement the first CPU110, memory1120may implement the first system memory112, network controller1134may implement the first NIC114, graphics/video card1112may implement the first GPU device115, and processor bus1110may implement the connection120. In at least one embodiment, the computer system1100(seeFIG.11) may be used to implement the second computing device104. In such embodiments, processor1102(seeFIG.11) may implement the second CPU130, memory1120may implement the second system memory132, network controller1134may implement the second NIC134, graphics/video card1112may implement the second GPU device135, and processor bus1110may implement the connection140. FIG.12illustrates a system1200, in accordance with at least one embodiment. 
In at least one embodiment, system1200is an electronic device that utilizes a processor1210. In at least one embodiment, system1200may be, for example and without limitation, a notebook, a tower server, a rack server, a blade server, an edge device communicatively coupled to one or more on-premise or cloud service providers, a laptop, a desktop, a tablet, a mobile device, a phone, an embedded computer, or any other suitable electronic device. In at least one embodiment, system1200may include, without limitation, processor1210communicatively coupled to any suitable number or kind of components, peripherals, modules, or devices. In at least one embodiment, processor1210is coupled using a bus or interface, such as an I2C bus, a System Management Bus (“SMBus”), a Low Pin Count (“LPC”) bus, a Serial Peripheral Interface (“SPI”), a High Definition Audio (“HDA”) bus, a Serial Advance Technology Attachment (“SATA”) bus, a USB (versions 1, 2, 3), or a Universal Asynchronous Receiver/Transmitter (“UART”) bus. In at least one embodiment,FIG.12illustrates a system which includes interconnected hardware devices or “chips.” In at least one embodiment,FIG.12may illustrate an exemplary SoC. In at least one embodiment, devices illustrated inFIG.12may be interconnected with proprietary interconnects, standardized interconnects (e.g., PCIe) or some combination thereof. In at least one embodiment, one or more components ofFIG.12are interconnected using CXL interconnects. In at least one embodiment,FIG.12may include a display1224, a touch screen1225, a touch pad1230, a Near Field Communications unit (“NFC”)1245, a sensor hub1240, a thermal sensor1246, an Express Chipset (“EC”)1235, a Trusted Platform Module (“TPM”)1238, BIOS/firmware/flash memory (“BIOS, FW Flash”)1222, a DSP1260, a Solid State Disk (“SSD”) or Hard Disk Drive (“HDD”)1220, a wireless local area network unit (“WLAN”)1250, a Bluetooth unit1252, a Wireless Wide Area Network unit (“WWAN”)1256, a Global Positioning System (“GPS”)1255, a camera (“USB 3.0 camera”)1254such as a USB 3.0 camera, or a Low Power Double Data Rate (“LPDDR”) memory unit (“LPDDR3”)1215implemented in, for example, LPDDR3 standard. These components may each be implemented in any suitable manner. In at least one embodiment, other components may be communicatively coupled to processor1210through components discussed above. In at least one embodiment, an accelerometer1241, an Ambient Light Sensor (“ALS”)1242, a compass1243, and a gyroscope1244may be communicatively coupled to sensor hub1240. In at least one embodiment, a thermal sensor1239, a fan1237, a keyboard1236, and a touch pad1230may be communicatively coupled to EC1235. In at least one embodiment, a speaker1263, a headphones1264, and a microphone (“mic”)1265may be communicatively coupled to an audio unit (“audio codec and class d amp”)1262, which may in turn be communicatively coupled to DSP1260. In at least one embodiment, audio unit1262may include, for example and without limitation, an audio coder/decoder (“codec”) and a class D amplifier. In at least one embodiment, a SIM card (“SIM”)1257may be communicatively coupled to WWAN unit1256. In at least one embodiment, components such as WLAN unit1250and Bluetooth unit1252, as well as WWAN unit1256may be implemented in a Next Generation Form Factor (“NGFF”). In at least one embodiment, the system1200may be used to implement the first computing device102(seeFIGS.1,2,4, and6). In such embodiments, processor1210may implement the first CPU110(seeFIGS.1,2,4, and6). 
The system1200may be used to implement the second computing device104(seeFIGS.1,4, and6). In such embodiments, processor1210may implement the second CPU130(seeFIG.1). FIG.13illustrates an exemplary integrated circuit1300, in accordance with at least one embodiment. In at least one embodiment, exemplary integrated circuit1300is an SoC that may be fabricated using one or more IP cores. In at least one embodiment, integrated circuit1300includes one or more application processor(s)1305(e.g., CPUs, DPUs), at least one graphics processor1310, and may additionally include an image processor1315and/or a video processor1320, any of which may be a modular IP core. In at least one embodiment, integrated circuit1300includes peripheral or bus logic including a USB controller1325, a UART controller1330, an SPI/SDIO controller1335, and an I2S/I2C controller1340. In at least one embodiment, integrated circuit1300can include a display device1345coupled to one or more of a high-definition multimedia interface (“HDMI”) controller1350and a mobile industry processor interface (“MIPI”) display interface1355. In at least one embodiment, storage may be provided by a flash memory subsystem1360including flash memory and a flash memory controller. In at least one embodiment, a memory interface may be provided via a memory controller1365for access to SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits additionally include an embedded security engine1370. In at least one embodiment, the integrated circuit1300may be used to implement the first computing device102(seeFIGS.1,2,4, and6). In such embodiments, referring toFIG.2, application processor(s)1305(seeFIG.13) may implement the first CPU110, and graphics processor1310may implement the first GPU device115. In at least one embodiment, the integrated circuit1300(seeFIG.13) may be used to implement the second computing device104. In such embodiments, application processor(s)1305(seeFIG.13) may implement the second CPU130, and graphics processor1310may implement the second GPU device135. FIG.14illustrates a computing system1400, according to at least one embodiment; In at least one embodiment, computing system1400includes a processing subsystem1401having one or more processor(s)1402and a system memory1404communicating via an interconnection path that may include a memory hub1405. In at least one embodiment, memory hub1405may be a separate component within a chipset component or may be integrated within one or more processor(s)1402. In at least one embodiment, memory hub1405couples with an I/O subsystem1411via a communication link1406. In at least one embodiment, I/O subsystem1411includes an I/O hub1407that can enable computing system1400to receive input from one or more input device(s)1408. In at least one embodiment, I/O hub1407can enable a display controller, which may be included in one or more processor(s)1402, to provide outputs to one or more display device(s)1410A. In at least one embodiment, one or more display device(s)1410A coupled with I/O hub1407can include a local, internal, or embedded display device. In at least one embodiment, processing subsystem1401includes one or more parallel processor(s)1412coupled to memory hub1405via a bus or other communication link1413. In at least one embodiment, communication link1413may be one of any number of standards based communication link technologies or protocols, such as, but not limited to PCIe, or may be a vendor specific communications interface or communications fabric. 
In at least one embodiment, one or more parallel processor(s)1412form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core processor. In at least one embodiment, one or more parallel processor(s)1412form a graphics processing subsystem that can output pixels to one of one or more display device(s)1410A coupled via I/O Hub1407. In at least one embodiment, one or more parallel processor(s)1412can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s)1410B. In at least one embodiment, a system storage unit1414can connect to I/O hub1407to provide a storage mechanism for computing system1400. In at least one embodiment, an I/O switch1416can be used to provide an interface mechanism to enable connections between I/O hub1407and other components, such as a network adapter1418and/or wireless network adapter1419that may be integrated into a platform, and various other devices that can be added via one or more add-in device(s)1420. In at least one embodiment, network adapter1418can be an Ethernet adapter or another wired network adapter. In at least one embodiment, wireless network adapter1419can include one or more of a Wi-Fi, Bluetooth, NFC, or other network device that includes one or more wireless radios. In at least one embodiment, computing system1400can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, that may also be connected to I/O hub1407. In at least one embodiment, communication paths interconnecting various components inFIG.14may be implemented using any suitable protocols, such as PCI based protocols (e.g., PCIe), or other bus or point-to-point communication interfaces and/or protocol(s), such as NVLink high-speed interconnect, or interconnect protocols. In at least one embodiment, one or more parallel processor(s)1412incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a graphics processing unit ("GPU"). In at least one embodiment, one or more parallel processor(s)1412incorporate circuitry optimized for general purpose processing. In at least one embodiment, components of computing system1400may be integrated with one or more other system elements on a single integrated circuit. For example, in at least one embodiment, one or more parallel processor(s)1412, memory hub1405, processor(s)1402, and I/O hub1407can be integrated into an SoC integrated circuit. In at least one embodiment, components of computing system1400can be integrated into a single package to form a system in package ("SIP") configuration. In at least one embodiment, at least a portion of the components of computing system1400can be integrated into a multi-chip module ("MCM"), which can be interconnected with other multi-chip modules into a modular computing system. In at least one embodiment, I/O subsystem1411and display devices1410B are omitted from computing system1400. In at least one embodiment, the computing system1400may be used to implement the first computing device102(seeFIGS.1,2,4, and6).
In such embodiments, referring toFIG.2, processor(s)1402(seeFIG.14) may implement the first CPU110, system memory1404may implement the first system memory112, I/O subsystem1411may implement the first NIC114, parallel processor(s)1412may implement the first GPU device115, and communication link1406and/or communication link1413may implement the connection120. In at least one embodiment, the computing system1400(seeFIG.14) may be used to implement the second computing device104. In such embodiments, processor(s)1402(seeFIG.14) may implement the second CPU130, system memory1404may implement the second system memory132, I/O subsystem1411may implement the second NIC134, parallel processor(s)1412may implement the second GPU device135, and communication link1406and/or communication link1413may implement the connection140.
Processing Systems
The following figures set forth, without limitation, exemplary processing systems that can be used to implement at least one embodiment. FIG.15illustrates an accelerated processing unit ("APU")1500, in accordance with at least one embodiment. In at least one embodiment, APU1500is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, APU1500can be configured to execute an application program, such as a CUDA program. In at least one embodiment, APU1500includes, without limitation, a core complex1510, a graphics complex1540, fabric1560, I/O interfaces1570, memory controllers1580, a display controller1592, and a multimedia engine1594. In at least one embodiment, APU1500may include, without limitation, any number of core complexes1510, any number of graphics complexes1540, any number of display controllers1592, and any number of multimedia engines1594in any combination. For explanatory purposes, multiple instances of like objects are denoted herein with reference numbers identifying the object and parenthetical numbers identifying the instance where needed. In at least one embodiment, core complex1510is a CPU, graphics complex1540is a GPU, and APU1500is a processing unit that integrates, without limitation, core complex1510and graphics complex1540onto a single chip. In at least one embodiment, some tasks may be assigned to core complex1510and other tasks may be assigned to graphics complex1540. In at least one embodiment, core complex1510is configured to execute main control software associated with APU1500, such as an operating system. In at least one embodiment, core complex1510is the master processor of APU1500, controlling and coordinating operations of other processors. In at least one embodiment, core complex1510issues commands that control the operation of graphics complex1540. In at least one embodiment, core complex1510can be configured to execute host executable code derived from CUDA source code, and graphics complex1540can be configured to execute device executable code derived from CUDA source code. In at least one embodiment, core complex1510includes, without limitation, cores1520(1)-1520(4) and an L3 cache1530. In at least one embodiment, core complex1510may include, without limitation, any number of cores1520and any number and type of caches in any combination. In at least one embodiment, cores1520are configured to execute instructions of a particular instruction set architecture ("ISA"). In at least one embodiment, each core1520is a CPU core. In at least one embodiment, each core1520includes, without limitation, a fetch/decode unit1522, an integer execution engine1524, a floating point execution engine1526, and an L2 cache1528.
In at least one embodiment, fetch/decode unit1522fetches instructions, decodes such instructions, generates micro-operations, and dispatches separate micro-instructions to integer execution engine1524and floating point execution engine1526. In at least one embodiment, fetch/decode unit1522can concurrently dispatch one micro-instruction to integer execution engine1524and another micro-instruction to floating point execution engine1526. In at least one embodiment, integer execution engine1524executes, without limitation, integer and memory operations. In at least one embodiment, floating point engine1526executes, without limitation, floating point and vector operations. In at least one embodiment, fetch-decode unit1522dispatches micro-instructions to a single execution engine that replaces both integer execution engine1524and floating point execution engine1526. In at least one embodiment, each core1520(i), where i is an integer representing a particular instance of core1520, may access L2 cache1528(i) included in core1520(i). In at least one embodiment, each core1520included in core complex1510(j), where j is an integer representing a particular instance of core complex1510, is connected to other cores1520included in core complex1510(j) via L3 cache1530(j) included in core complex1510(j). In at least one embodiment, cores1520included in core complex1510(j), where j is an integer representing a particular instance of core complex1510, can access all of L3 cache1530(j) included in core complex1510(j). In at least one embodiment, L3 cache1530may include, without limitation, any number of slices. In at least one embodiment, graphics complex1540can be configured to perform compute operations in a highly-parallel fashion. In at least one embodiment, graphics complex1540is configured to execute graphics pipeline operations such as draw commands, pixel operations, geometric computations, and other operations associated with rendering an image to a display. In at least one embodiment, graphics complex1540is configured to execute operations unrelated to graphics. In at least one embodiment, graphics complex1540is configured to execute both operations related to graphics and operations unrelated to graphics. In at least one embodiment, graphics complex1540includes, without limitation, any number of compute units1550and an L2 cache1542. In at least one embodiment, compute units1550share L2 cache1542. In at least one embodiment, L2 cache1542is partitioned. In at least one embodiment, graphics complex1540includes, without limitation, any number of compute units1550and any number (including zero) and type of caches. In at least one embodiment, graphics complex1540includes, without limitation, any amount of dedicated graphics hardware. In at least one embodiment, each compute unit1550includes, without limitation, any number of SIMD units1552and a shared memory1554. In at least one embodiment, each SIMD unit1552implements a SIMD architecture and is configured to perform operations in parallel. In at least one embodiment, each compute unit1550may execute any number of thread blocks, but each thread block executes on a single compute unit1550. In at least one embodiment, a thread block includes, without limitation, any number of threads of execution. In at least one embodiment, a workgroup is a thread block. In at least one embodiment, each SIMD unit1552executes a different warp. 
In at least one embodiment, a warp is a group of threads (e.g., 16 threads), where each thread in the warp belongs to a single thread block and is configured to process a different set of data based on a single set of instructions. In at least one embodiment, predication can be used to disable one or more threads in a warp. In at least one embodiment, a lane is a thread. In at least one embodiment, a work item is a thread. In at least one embodiment, a wavefront is a warp. In at least one embodiment, different wavefronts in a thread block may synchronize together and communicate via shared memory1554. In at least one embodiment, fabric1560is a system interconnect that facilitates data and control transmissions across core complex1510, graphics complex1540, I/O interfaces1570, memory controllers1580, display controller1592, and multimedia engine1594. In at least one embodiment, APU1500may include, without limitation, any amount and type of system interconnect in addition to or instead of fabric1560that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to APU1500. In at least one embodiment, I/O interfaces1570are representative of any number and type of I/O interfaces (e.g., PCI, PCI-Extended ("PCI-X"), PCIe, gigabit Ethernet ("GBE"), USB, etc.). In at least one embodiment, various types of peripheral devices are coupled to I/O interfaces1570. In at least one embodiment, peripheral devices that are coupled to I/O interfaces1570may include, without limitation, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth. In at least one embodiment, display controller1592displays images on one or more display device(s), such as a liquid crystal display ("LCD") device. In at least one embodiment, multimedia engine1594includes, without limitation, any amount and type of circuitry that is related to multimedia, such as a video decoder, a video encoder, an image signal processor, etc. In at least one embodiment, memory controllers1580facilitate data transfers between APU1500and a unified system memory1590. In at least one embodiment, core complex1510and graphics complex1540share unified system memory1590. In at least one embodiment, APU1500implements a memory subsystem that includes, without limitation, any amount and type of memory controllers1580and memory devices (e.g., shared memory1554) that may be dedicated to one component or shared among multiple components. In at least one embodiment, APU1500implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 caches1528, L3 cache1530, and L2 cache1542) that may each be private to or shared between any number of components (e.g., cores1520, core complex1510, SIMD units1552, compute units1550, and graphics complex1540). In at least one embodiment, the APU1500may be used to implement the first computing device102(seeFIGS.1,2,4, and6). In such embodiments, referring toFIG.2, core complex1510(seeFIG.15) may implement the first CPU110, unified system memory1590may implement the first system memory112, I/O interfaces1570may implement the first NIC114, graphics complex1540may implement the first GPU device115, and I/O interfaces1570and/or fabric1560may implement the connection120. In at least one embodiment, the APU1500(seeFIG.15) may be used to implement the second computing device104.
In embodiments in which the APU1500(seeFIG.15) implements the second computing device104, core complex1510may implement the second CPU130, unified system memory1590may implement the second system memory132, I/O interfaces1570may implement the second NIC134, graphics complex1540may implement the second GPU device135, and I/O interfaces1570and/or fabric1560may implement the connection140. FIG.16illustrates a CPU1600, in accordance with at least one embodiment. In at least one embodiment, CPU1600is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, CPU1600can be configured to execute an application program. In at least one embodiment, CPU1600is configured to execute main control software, such as an operating system. In at least one embodiment, CPU1600issues commands that control the operation of an external GPU (not shown). In at least one embodiment, CPU1600can be configured to execute host executable code derived from CUDA source code, and an external GPU can be configured to execute device executable code derived from such CUDA source code. In at least one embodiment, CPU1600includes, without limitation, any number of core complexes1610, fabric1660, I/O interfaces1670, and memory controllers1680. In at least one embodiment, core complex1610includes, without limitation, cores1620(1)-1620(4) and an L3 cache1630. In at least one embodiment, core complex1610may include, without limitation, any number of cores1620and any number and type of caches in any combination. In at least one embodiment, cores1620are configured to execute instructions of a particular ISA. In at least one embodiment, each core1620is a CPU core. In at least one embodiment, each core1620includes, without limitation, a fetch/decode unit1622, an integer execution engine1624, a floating point execution engine1626, and an L2 cache1628. In at least one embodiment, fetch/decode unit1622fetches instructions, decodes such instructions, generates micro-operations, and dispatches separate micro-instructions to integer execution engine1624and floating point execution engine1626. In at least one embodiment, fetch/decode unit1622can concurrently dispatch one micro-instruction to integer execution engine1624and another micro-instruction to floating point execution engine1626. In at least one embodiment, integer execution engine1624executes, without limitation, integer and memory operations. In at least one embodiment, floating point engine1626executes, without limitation, floating point and vector operations. In at least one embodiment, fetch/decode unit1622dispatches micro-instructions to a single execution engine that replaces both integer execution engine1624and floating point execution engine1626. In at least one embodiment, each core1620(i), where i is an integer representing a particular instance of core1620, may access L2 cache1628(i) included in core1620(i). In at least one embodiment, each core1620included in core complex1610(j), where j is an integer representing a particular instance of core complex1610, is connected to other cores1620in core complex1610(j) via L3 cache1630(j) included in core complex1610(j). In at least one embodiment, cores1620included in core complex1610(j), where j is an integer representing a particular instance of core complex1610, can access all of L3 cache1630(j) included in core complex1610(j). In at least one embodiment, L3 cache1630may include, without limitation, any number of slices.
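As a non-limiting illustration of host executable code derived from CUDA source code, in the sense described above for a CPU such as CPU1600and an external GPU, the following sketch shows a host program that allocates device memory, copies data to the GPU, launches a kernel, and copies the result back; the kernel, buffer names, and sizes are assumptions made for the example.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Device executable code: runs on the GPU.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Host executable code: runs on the CPU and issues commands that control the GPU.
int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d = NULL;
    cudaMalloc(&d, bytes);                               // allocate GPU memory
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);     // CPU -> GPU transfer

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);         // CPU launches work on the GPU
    cudaDeviceSynchronize();                             // CPU waits for the GPU to finish

    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);     // GPU -> CPU transfer
    printf("h[0] = %f\n", h[0]);

    cudaFree(d);
    free(h);
    return 0;
}

A compiler such as nvcc splits such a source file into host executable code for the CPU and device executable code for the GPU.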
In at least one embodiment, fabric1660is a system interconnect that facilitates data and control transmissions across core complexes1610(1)-1610(N) (where N is an integer greater than zero), I/O interfaces1670, and memory controllers1680. In at least one embodiment, CPU1600may include, without limitation, any amount and type of system interconnect in addition to or instead of fabric1660that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to CPU1600. In at least one embodiment, I/O interfaces1670are representative of any number and type of I/O interfaces (e.g., PCI, PCI-X, PCIe, GBE, USB, etc.). In at least one embodiment, various types of peripheral devices are coupled to I/O interfaces1670. In at least one embodiment, peripheral devices that are coupled to I/O interfaces1670may include, without limitation, displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth. In at least one embodiment, memory controllers1680facilitate data transfers between CPU1600and a system memory1690. In at least one embodiment, core complexes1610share system memory1690. In at least one embodiment, CPU1600implements a memory subsystem that includes, without limitation, any amount and type of memory controllers1680and memory devices that may be dedicated to one component or shared among multiple components. In at least one embodiment, CPU1600implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 caches1628and L3 caches1630) that may each be private to or shared between any number of components (e.g., cores1620and core complexes1610). In at least one embodiment, the CPU1600may be used to implement the first CPU110(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second CPU130(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). The system memory1690may be used to implement the first system memory112and/or the second system memory132. FIG.17illustrates an exemplary accelerator integration slice1790, in accordance with at least one embodiment. As used herein, a “slice” comprises a specified portion of processing resources of an accelerator integration circuit. In at least one embodiment, the accelerator integration circuit provides cache management, memory access, context management, and interrupt management services on behalf of multiple graphics processing engines included in a graphics acceleration module. The graphics processing engines may each comprise a separate GPU. Alternatively, the graphics processing engines may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In at least one embodiment, the graphics acceleration module may be a GPU with multiple graphics processing engines. In at least one embodiment, the graphics processing engines may be individual GPUs integrated on a common package, line card, or chip. An application effective address space1782within system memory1714stores process elements1783. In one embodiment, process elements1783are stored in response to GPU invocations1781from applications1780executed on processor1707. A process element1783contains process state for corresponding application1780.
A work descriptor (“WD”)1784contained in process element1783can be a single job requested by an application or may contain a pointer to a queue of jobs. In at least one embodiment, WD1784is a pointer to a job request queue in application effective address space1782. Graphics acceleration module1746and/or individual graphics processing engines can be shared by all or a subset of processes in a system. In at least one embodiment, an infrastructure for setting up process state and sending WD1784to graphics acceleration module1746to start a job in a virtualized environment may be included. In at least one embodiment, a dedicated-process programming model is implementation-specific. In this model, a single process owns graphics acceleration module1746or an individual graphics processing engine. Because graphics acceleration module1746is owned by a single process, a hypervisor initializes an accelerator integration circuit for an owning partition and an operating system initializes an accelerator integration circuit for an owning process when graphics acceleration module1746is assigned. In operation, a WD fetch unit1791in accelerator integration slice1790fetches next WD1784which includes an indication of work to be done by one or more graphics processing engines of graphics acceleration module1746. Data from WD1784may be stored in registers1745and used by a memory management unit (“MMU”)1739, interrupt management circuit1747and/or context management circuit1748as illustrated. For example, one embodiment of MMU1739includes segment/page walk circuitry for accessing segment/page tables1786within OS virtual address space1785. Interrupt management circuit1747may process interrupt events (“INT”)1792received from graphics acceleration module1746. When performing graphics operations, an effective address1793generated by a graphics processing engine is translated to a real address by MMU1739. In one embodiment, a same set of registers1745are duplicated for each graphics processing engine and/or graphics acceleration module1746and may be initialized by a hypervisor or operating system. Each of these duplicated registers may be included in accelerator integration slice1790. Exemplary registers that may be initialized by a hypervisor are shown in Table 1.
TABLE 1
Hypervisor Initialized Registers
1 Slice Control Register
2 Real Address (RA) Scheduled Processes Area Pointer
3 Authority Mask Override Register
4 Interrupt Vector Table Entry Offset
5 Interrupt Vector Table Entry Limit
6 State Register
7 Logical Partition ID
8 Real Address (RA) Hypervisor Accelerator Utilization Record Pointer
9 Storage Description Register
Exemplary registers that may be initialized by an operating system are shown in Table 2.
TABLE 2
Operating System Initialized Registers
1 Process and Thread Identification
2 Effective Address (EA) Context Save/Restore Pointer
3 Virtual Address (VA) Accelerator Utilization Record Pointer
4 Virtual Address (VA) Storage Segment Table Pointer
5 Authority Mask
6 Work Descriptor
In one embodiment, each WD1784is specific to a particular graphics acceleration module1746and/or a particular graphics processing engine. It contains all information required by a graphics processing engine to do work or it can be a pointer to a memory location where an application has set up a command queue of work to be completed. In at least one embodiment, the system illustrated inFIG.17may be used to implement the first computing device102(seeFIGS.1,2,4, and6).
In such embodiments, referring toFIG.2, processor1707(seeFIG.17) may implement the first CPU110, system memory1714may implement the first system memory112, and graphics acceleration module1746may implement the first GPU device115. The system illustrated inFIG.17may be used to implement the second computing device104. In such embodiments, processor1707(seeFIG.17) may implement the second CPU130, system memory1714may implement the second system memory132, and graphics acceleration module1746may implement the second GPU device135. FIGS.18A-18Billustrate exemplary graphics processors, in accordance with at least one embodiment. In at least one embodiment, any of the exemplary graphics processors may be fabricated using one or more IP cores. In addition to what is illustrated, other logic and circuits may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. In at least one embodiment, the exemplary graphics processors are for use within an SoC. FIG.18Aillustrates an exemplary graphics processor1810of an SoC integrated circuit that may be fabricated using one or more IP cores, in accordance with at least one embodiment.FIG.18Billustrates an additional exemplary graphics processor1840of an SoC integrated circuit that may be fabricated using one or more IP cores, in accordance with at least one embodiment. In at least one embodiment, graphics processor1810ofFIG.18Ais a low power graphics processor core. In at least one embodiment, graphics processor1840ofFIG.18Bis a higher performance graphics processor core. In at least one embodiment, each of graphics processors1810,1840can be variants of graphics processor1310ofFIG.13. In at least one embodiment, graphics processor1810includes a vertex processor1805and one or more fragment processor(s)1815A-1815N (e.g.,1815A,1815B,1815C,1815D, through1815N−1, and1815N). In at least one embodiment, graphics processor1810can execute different shader programs via separate logic, such that vertex processor1805is optimized to execute operations for vertex shader programs, while one or more fragment processor(s)1815A-1815N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor1805performs a vertex processing stage of a 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, fragment processor(s)1815A-1815N use primitive and vertex data generated by vertex processor1805to produce a framebuffer that is displayed on a display device. In at least one embodiment, fragment processor(s)1815A-1815N are optimized to execute fragment shader programs as provided for in an OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in a Direct 3D API. In at least one embodiment, graphics processor1810additionally includes one or more MMU(s)1820A-1820B, cache(s)1825A-1825B, and circuit interconnect(s)1830A-1830B. In at least one embodiment, one or more MMU(s)1820A-1820B provide for virtual to physical address mapping for graphics processor1810, including for vertex processor1805and/or fragment processor(s)1815A-1815N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in one or more cache(s)1825A-1825B. 
In at least one embodiment, one or more MMU(s)1820A-1820B may be synchronized with other MMUs within a system, including one or more MMUs associated with one or more application processor(s)1305, image processors1315, and/or video processors1320ofFIG.13, such that each processor1305-1320can participate in a shared or unified virtual memory system. In at least one embodiment, one or more circuit interconnect(s)1830A-1830B enable graphics processor1810to interface with other IP cores within an SoC, either via an internal bus of the SoC or via a direct connection. In at least one embodiment, graphics processor1840includes one or more MMU(s)1820A-1820B, caches1825A-1825B, and circuit interconnects1830A-1830B of graphics processor1810ofFIG.18A. In at least one embodiment, graphics processor1840includes one or more shader core(s)1855A-1855N (e.g.,1855A,1855B,1855C,1855D,1855E,1855F, through1855N−1, and1855N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, a number of shader cores can vary. In at least one embodiment, graphics processor1840includes an inter-core task manager1845, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores1855A-1855N and a tiling unit1858to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches. In at least one embodiment, either of the example graphics processors illustrated inFIGS.18A and18Bmay be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.19Aillustrates a graphics core1900, in accordance with at least one embodiment. In at least one embodiment, graphics core1900may be included within graphics processor1310ofFIG.13. In at least one embodiment, graphics core1900may be a unified shader core1855A-1855N as inFIG.18B. In at least one embodiment, graphics core1900includes a shared instruction cache1902, a texture unit1918, and a cache/shared memory1920that are common to execution resources within graphics core1900. In at least one embodiment, graphics core1900can include multiple slices1901A-1901N or partitions for each core, and a graphics processor can include multiple instances of graphics core1900. Slices1901A-1901N can include support logic including a local instruction cache1904A-1904N, a thread scheduler1906A-1906N, a thread dispatcher1908A-1908N, and a set of registers1910A-1910N. In at least one embodiment, slices1901A-1901N can include a set of additional function units (“AFUs”)1912A-1912N, floating-point units (“FPUs”)1914A-1914N, integer arithmetic logic units (“ALUs”)1916A-1916N, address computational units (“ACUs”)1913A-1913N, double-precision floating-point units (“DPFPUs”)1915A-1915N, and matrix processing units (“MPUs”)1917A-1917N. In at least one embodiment, FPUs1914A-1914N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while DPFPUs1915A-1915N perform double precision (64-bit) floating point operations.
In at least one embodiment, ALUs1916A-1916N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. In at least one embodiment, MPUs1917A-1917N can also be configured for mixed precision matrix operations, including half-precision floating point and 8-bit integer operations. In at least one embodiment, MPUs1917A-1917N can perform a variety of matrix operations to accelerate CUDA programs, including enabling support for accelerated general matrix to matrix multiplication (“GEMM”). In at least one embodiment, AFUs1912A-1912N can perform additional logic operations not supported by floating-point or integer units, including trigonometric operations (e.g., Sine, Cosine, etc.). In at least one embodiment, the graphics core1900may be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.19Billustrates a general-purpose graphics processing unit (“GPGPU”)1930, in accordance with at least one embodiment. In at least one embodiment, GPGPU1930is highly-parallel and suitable for deployment on a multi-chip module. In at least one embodiment, GPGPU1930can be configured to enable highly-parallel compute operations to be performed by an array of GPUs. In at least one embodiment, GPGPU1930can be linked directly to other instances of GPGPU1930to create a multi-GPU cluster to improve execution time for CUDA programs. In at least one embodiment, GPGPU1930includes a host interface1932to enable a connection with a host processor. In at least one embodiment, host interface1932is a PCIe interface. In at least one embodiment, host interface1932can be a vendor specific communications interface or communications fabric. In at least one embodiment, GPGPU1930receives commands from a host processor and uses a global scheduler1934to distribute execution threads associated with those commands to a set of compute clusters1936A-1936H. In at least one embodiment, compute clusters1936A-1936H share a cache memory1938. In at least one embodiment, cache memory1938can serve as a higher-level cache for cache memories within compute clusters1936A-1936H. In at least one embodiment, GPGPU1930includes memory1944A-1944B coupled with compute clusters1936A-1936H via a set of memory controllers1942A-1942B. In at least one embodiment, memory1944A-1944B can include various types of memory devices including DRAM or graphics random access memory, such as synchronous graphics random access memory (“SGRAM”), including graphics double data rate (“GDDR”) memory. In at least one embodiment, compute clusters1936A-1936H each include a set of graphics cores, such as graphics core1900ofFIG.19A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for computations associated with CUDA programs. For example, in at least one embodiment, at least a subset of floating point units in each of compute clusters1936A-1936H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of floating point units can be configured to perform 64-bit floating point operations. In at least one embodiment, multiple instances of GPGPU1930can be configured to operate as a compute cluster.
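By way of illustration of the mixed-precision matrix operations and GEMM acceleration described above with reference to MPUs1917A-1917N, the following CUDA sketch computes C = A x B with half-precision inputs accumulated in single precision. The naive loop is for exposition only; the matrix names and layout are assumptions, and a production CUDA program would more commonly call a tuned library routine or dedicated matrix hardware.

#include <cuda_fp16.h>

// Naive mixed-precision GEMM: half-precision inputs, single-precision accumulation.
// A is M x K, B is K x N, C is M x N, all in row-major order.
__global__ void gemmHalfNaive(const __half *A, const __half *B, float *C,
                              int M, int N, int K)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= M || col >= N)
        return;

    float acc = 0.0f;
    for (int k = 0; k < K; ++k)
        acc += __half2float(A[row * K + k]) * __half2float(B[k * N + col]);

    C[row * N + col] = acc;
}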
Compute clusters1936A-1936H may implement any technically feasible communication techniques for synchronization and data exchange. In at least one embodiment, multiple instances of GPGPU1930communicate over host interface1932. In at least one embodiment, GPGPU1930includes an I/O hub1939that couples GPGPU1930with a GPU link1940that enables a direct connection to other instances of GPGPU1930. In at least one embodiment, GPU link1940is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of GPGPU1930. In at least one embodiment GPU link1940couples with a high speed interconnect to transmit and receive data to other GPGPUs1930or parallel processors. In at least one embodiment, multiple instances of GPGPU1930are located in separate data processing systems and communicate via a network device that is accessible via host interface1932. In at least one embodiment GPU link1940can be configured to enable a connection to a host processor in addition to or as an alternative to host interface1932. In at least one embodiment, GPGPU1930can be configured to execute a CUDA program. In at least one embodiment, the GPGPU1930may be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.20Aillustrates a parallel processor2000, in accordance with at least one embodiment. In at least one embodiment, various components of parallel processor2000may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (“ASICs”), or FPGAs. In at least one embodiment, parallel processor2000includes a parallel processing unit2002. In at least one embodiment, parallel processing unit2002includes an I/O unit2004that enables communication with other devices, including other instances of parallel processing unit2002. In at least one embodiment, I/O unit2004may be directly connected to other devices. In at least one embodiment, I/O unit2004connects with other devices via use of a hub or switch interface, such as memory hub2005. In at least one embodiment, connections between memory hub2005and I/O unit2004form a communication link. In at least one embodiment, I/O unit2004connects with a host interface2006and a memory crossbar2016, where host interface2006receives commands directed to performing processing operations and memory crossbar2016receives commands directed to performing memory operations. In at least one embodiment, when host interface2006receives a command buffer via I/O unit2004, host interface2006can direct work operations to perform those commands to a front end2008. In at least one embodiment, front end2008couples with a scheduler2010, which is configured to distribute commands or other work items to a processing array2012. In at least one embodiment, scheduler2010ensures that processing array2012is properly configured and in a valid state before tasks are distributed to processing array2012. In at least one embodiment, scheduler2010is implemented via firmware logic executing on a microcontroller. In at least one embodiment, microcontroller implemented scheduler2010is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on processing array2012. 
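The direct GPU-to-GPU communication described above for multiple instances of GPGPU1930connected by GPU link1940can be approximated in CUDA host code with the peer-access API, as in the following sketch; the device ordinals (0 and 1) and the transfer size are assumptions, and the sketch presumes a system with at least two GPUs.

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);     // can device 0 map device 1's memory?
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);     // can device 1 map device 0's memory?

    const size_t bytes = 1 << 20;
    float *buf0 = NULL, *buf1 = NULL;

    cudaSetDevice(0);
    if (canAccess01) cudaDeviceEnablePeerAccess(1, 0);   // enable 0 -> 1 access
    cudaMalloc(&buf0, bytes);

    cudaSetDevice(1);
    if (canAccess10) cudaDeviceEnablePeerAccess(0, 0);   // enable 1 -> 0 access
    cudaMalloc(&buf1, bytes);

    // Copy directly between the two GPUs; the runtime stages the transfer
    // through host memory if a direct path is not available.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    printf("peer access 0->1: %d, 1->0: %d\n", canAccess01, canAccess10);

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}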
In at least one embodiment, host software can provide workloads for scheduling on processing array2012via one of multiple graphics processing doorbells. In at least one embodiment, workloads can then be automatically distributed across processing array2012by scheduler2010logic within a microcontroller including scheduler2010. In at least one embodiment, processing array2012can include up to “N” clusters (e.g., cluster2014A, cluster2014B, through cluster2014N). In at least one embodiment, each cluster2014A-2014N of processing array2012can execute a large number of concurrent threads. In at least one embodiment, scheduler2010can allocate work to clusters2014A-2014N of processing array2012using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. In at least one embodiment, scheduling can be handled dynamically by scheduler2010, or can be assisted in part by compiler logic during compilation of program logic configured for execution by processing array2012. In at least one embodiment, different clusters2014A-2014N of processing array2012can be allocated for processing different types of programs or for performing different types of computations. In at least one embodiment, processing array2012can be configured to perform various types of parallel processing operations. In at least one embodiment, processing array2012is configured to perform general-purpose parallel compute operations. For example, in at least one embodiment, processing array2012can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations. In at least one embodiment, processing array2012is configured to perform parallel graphics processing operations. In at least one embodiment, processing array2012can include additional logic to support execution of such graphics processing operations, including, but not limited to, texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, processing array2012can be configured to execute graphics processing related shader programs such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. In at least one embodiment, parallel processing unit2002can transfer data from system memory via I/O unit2004for processing. In at least one embodiment, during processing, transferred data can be stored to on-chip memory (e.g., a parallel processor memory2022), then written back to system memory. In at least one embodiment, when parallel processing unit2002is used to perform graphics processing, scheduler2010can be configured to divide a processing workload into approximately equal sized tasks, to better enable distribution of graphics processing operations to multiple clusters2014A-2014N of processing array2012. In at least one embodiment, portions of processing array2012can be configured to perform different types of processing. For example, in at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display.
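As a non-limiting example of the general-purpose parallel compute operations mentioned above, such as filtering of audio data or other data transformations, the following CUDA sketch applies a three-tap moving-average filter to a one-dimensional signal; the kernel name and the tap count are illustrative assumptions.

// Three-tap moving-average filter over a one-dimensional signal: each output
// sample is the mean of the input sample and its two neighbors, with the
// endpoints clamped at the array boundaries.
__global__ void movingAverage3(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    float left  = (i > 0)     ? in[i - 1] : in[i];
    float right = (i < n - 1) ? in[i + 1] : in[i];
    out[i] = (left + in[i] + right) / 3.0f;
}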
In at least one embodiment, intermediate data produced by one or more of clusters2014A-2014N may be stored in buffers to allow intermediate data to be transmitted between clusters2014A-2014N for further processing. In at least one embodiment, processing array2012can receive processing tasks to be executed via scheduler2010, which receives commands defining processing tasks from front end2008. In at least one embodiment, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how data is to be processed (e.g., what program is to be executed). In at least one embodiment, scheduler2010may be configured to fetch indices corresponding to tasks or may receive indices from front end2008. In at least one embodiment, front end2008can be configured to ensure processing array2012is configured to a valid state before a workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated. In at least one embodiment, each of one or more instances of parallel processing unit2002can couple with parallel processor memory2022. In at least one embodiment, parallel processor memory2022can be accessed via memory crossbar2016, which can receive memory requests from processing array2012as well as I/O unit2004. In at least one embodiment, memory crossbar2016can access parallel processor memory2022via a memory interface2018. In at least one embodiment, memory interface2018can include multiple partition units (e.g., a partition unit2020A, partition unit2020B, through partition unit2020N) that can each couple to a portion (e.g., memory unit) of parallel processor memory2022. In at least one embodiment, a number of partition units2020A-2020N is configured to be equal to a number of memory units, such that a first partition unit2020A has a corresponding first memory unit2024A, a second partition unit2020B has a corresponding memory unit2024B, and an Nth partition unit2020N has a corresponding Nth memory unit2024N. In at least one embodiment, a number of partition units2020A-2020N may not be equal to a number of memory devices. In at least one embodiment, memory units2024A-2024N can include various types of memory devices, including DRAM or graphics random access memory, such as SGRAM, including GDDR memory. In at least one embodiment, memory units2024A-2024N may also include 3D stacked memory, including but not limited to high bandwidth memory (“HBM”). In at least one embodiment, render targets, such as frame buffers or texture maps may be stored across memory units2024A-2024N, allowing partition units2020A-2020N to write portions of each render target in parallel to efficiently use available bandwidth of parallel processor memory2022. In at least one embodiment, a local instance of parallel processor memory2022may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory. In at least one embodiment, any one of clusters2014A-2014N of processing array2012can process data that will be written to any of memory units2024A-2024N within parallel processor memory2022. In at least one embodiment, memory crossbar2016can be configured to transfer an output of each cluster2014A-2014N to any partition unit2020A-2020N or to another cluster2014A-2014N, which can perform additional processing operations on an output. 
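The parallel, bandwidth-oriented layout of data across memory units2024A-2024N described above is most effective when neighboring threads touch neighboring addresses. The following CUDA sketch contrasts a coalesced copy with a strided gather; it is an illustrative comparison under assumed names, not a statement about any particular partitioning of parallel processor memory2022.

// Coalesced copy: consecutive threads read and write consecutive addresses,
// so each warp's accesses can be served by wide, contiguous memory transactions.
__global__ void copyCoalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];
}

// Strided gather: consecutive threads read addresses `stride` elements apart,
// scattering each warp's reads and typically wasting available bandwidth.
__global__ void gatherStrided(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i * stride < n)
        out[i] = in[i * stride];
}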
In at least one embodiment, each cluster2014A-2014N can communicate with memory interface2018through memory crossbar2016to read from or write to various external memory devices. In at least one embodiment, memory crossbar2016has a connection to memory interface2018to communicate with I/O unit2004, as well as a connection to a local instance of parallel processor memory2022, enabling processing units within different clusters2014A-2014N to communicate with system memory or other memory that is not local to parallel processing unit2002. In at least one embodiment, memory crossbar2016can use virtual channels to separate traffic streams between clusters2014A-2014N and partition units2020A-2020N. In at least one embodiment, multiple instances of parallel processing unit2002can be provided on a single add-in card, or multiple add-in cards can be interconnected. In at least one embodiment, different instances of parallel processing unit2002can be configured to interoperate even if different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, in at least one embodiment, some instances of parallel processing unit2002can include higher precision floating point units relative to other instances. In at least one embodiment, systems incorporating one or more instances of parallel processing unit2002or parallel processor2000can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems. In at least one embodiment, one or more instances of parallel processing unit2002or parallel processor2000may be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). One or more instances of parallel processor memory2022may be used to implement the first GPU memory118(seeFIGS.1,2,4, and6) of the first computing device102and/or the second GPU memory138(seeFIG.2) of the second computing device104. The I/O unit2004may be connected to the connection120in the first computing device102. The I/O unit2004may be connected to the connection140in the second computing device104. FIG.20Billustrates a processing cluster2094, in accordance with at least one embodiment. In at least one embodiment, processing cluster2094is included within a parallel processing unit. In at least one embodiment, processing cluster2094is one of processing clusters2014A-2014N ofFIG.20. In at least one embodiment, processing cluster2094can be configured to execute many threads in parallel, where the term “thread” refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single instruction, multiple data (“SIMD”) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In at least one embodiment, single instruction, multiple thread (“SIMT”) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each processing cluster2094. 
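To illustrate the SIMT issue technique described above, in which a common instruction unit drives a group of generally synchronized threads, the following CUDA sketch contains a data-dependent branch; threads of a warp that do not take the branch are masked off (predicated) while the taken path executes. The kernel and buffer names are assumptions.

// All threads of a warp receive the same instruction stream. For the branch
// below, threads whose element is non-negative are masked off while the
// remaining threads of the warp execute the assignment.
__global__ void clampNegativesToZero(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    if (data[i] < 0.0f)
        data[i] = 0.0f;
}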
In at least one embodiment, operation of processing cluster2094can be controlled via a pipeline manager2032that distributes processing tasks to SIMT parallel processors. In at least one embodiment, pipeline manager2032receives instructions from scheduler2010ofFIG.20and manages execution of those instructions via a graphics multiprocessor2034and/or a texture unit2036. In at least one embodiment, graphics multiprocessor2034is an exemplary instance of a SIMT parallel processor. However, in at least one embodiment, various types of SIMT parallel processors of differing architectures may be included within processing cluster2094. In at least one embodiment, one or more instances of graphics multiprocessor2034can be included within processing cluster2094. In at least one embodiment, graphics multiprocessor2034can process data and a data crossbar2040can be used to distribute processed data to one of multiple possible destinations, including other shader units. In at least one embodiment, pipeline manager2032can facilitate distribution of processed data by specifying destinations for processed data to be distributed via data crossbar2040. In at least one embodiment, each graphics multiprocessor2034within processing cluster2094can include an identical set of functional execution logic (e.g., arithmetic logic units, load/store units (“LSUs”), etc.). In at least one embodiment, functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. In at least one embodiment, functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In at least one embodiment, same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present. In at least one embodiment, instructions transmitted to processing cluster2094constitute a thread. In at least one embodiment, a set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, a thread group executes a program on different input data. In at least one embodiment, each thread within a thread group can be assigned to a different processing engine within graphics multiprocessor2034. In at least one embodiment, a thread group may include fewer threads than a number of processing engines within graphics multiprocessor2034. In at least one embodiment, when a thread group includes fewer threads than a number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. In at least one embodiment, a thread group may also include more threads than a number of processing engines within graphics multiprocessor2034. In at least one embodiment, when a thread group includes more threads than the number of processing engines within graphics multiprocessor2034, processing can be performed over consecutive clock cycles. In at least one embodiment, multiple thread groups can be executed concurrently on graphics multiprocessor2034. In at least one embodiment, graphics multiprocessor2034includes an internal cache memory to perform load and store operations. In at least one embodiment, graphics multiprocessor2034can forego an internal cache and use a cache memory (e.g., L1 cache2048) within processing cluster2094. 
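The relationship described above between the size of a thread group and the number of processing engines available to execute it can be examined at run time with the CUDA occupancy query shown below; the kernel body and the block size of 256 are illustrative assumptions.

#include <cuda_runtime.h>
#include <stdio.h>

__global__ void fmaKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * data[i] + 1.0f;
}

int main(void)
{
    const int blockSize = 256;      // threads per thread group
    int blocksPerSM = 0;

    // How many thread blocks of this size can be resident on one multiprocessor?
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, fmaKernel,
                                                  blockSize, 0 /* dynamic shared mem */);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("resident blocks per multiprocessor: %d\n", blocksPerSM);
    printf("resident threads per multiprocessor: %d\n", blocksPerSM * blockSize);
    printf("multiprocessors on device 0: %d\n", prop.multiProcessorCount);
    return 0;
}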
In at least one embodiment, each graphics multiprocessor2034also has access to Level 2 (“L2”) caches within partition units (e.g., partition units2020A-2020N ofFIG.20A) that are shared among all processing clusters2094and may be used to transfer data between threads. In at least one embodiment, graphics multiprocessor2034may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit2002may be used as global memory. In at least one embodiment, processing cluster2094includes multiple instances of graphics multiprocessor2034that can share common instructions and data, which may be stored in L1 cache2048. In at least one embodiment, each processing cluster2094may include an MMU2045that is configured to map virtual addresses into physical addresses. In at least one embodiment, one or more instances of MMU2045may reside within memory interface2018ofFIG.20. In at least one embodiment, MMU2045includes a set of page table entries (“PTEs”) used to map a virtual address to a physical address of a tile and optionally a cache line index. In at least one embodiment, MMU2045may include address translation lookaside buffers (“TLBs”) or caches that may reside within graphics multiprocessor2034or L1 cache2048or processing cluster2094. In at least one embodiment, a physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. In at least one embodiment, a cache line index may be used to determine whether a request for a cache line is a hit or miss. In at least one embodiment, processing cluster2094may be configured such that each graphics multiprocessor2034is coupled to a texture unit2036for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering texture data. In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within graphics multiprocessor2034and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. In at least one embodiment, each graphics multiprocessor2034outputs a processed task to data crossbar2040to provide the processed task to another processing cluster2094for further processing or to store the processed task in an L2 cache, a local parallel processor memory, or a system memory via memory crossbar2016. In at least one embodiment, a pre-raster operations unit (“preROP”)2042is configured to receive data from graphics multiprocessor2034, direct data to ROP units, which may be located with partition units as described herein (e.g., partition units2020A-2020N ofFIG.20). In at least one embodiment, PreROP2042can perform optimizations for color blending, organize pixel color data, and perform address translations. In at least one embodiment, the processing cluster2094may be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.20Cillustrates a graphics multiprocessor2096, in accordance with at least one embodiment. In at least one embodiment, graphics multiprocessor2096is graphics multiprocessor2034ofFIG.20B. In at least one embodiment, graphics multiprocessor2096couples with pipeline manager2032of processing cluster2094. 
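Loosely analogous to the virtual-to-physical address mapping performed by MMU2045, CUDA managed memory exposes a single pointer that is valid on both the host and the device, with the underlying translation and page migration handled transparently. The following sketch uses assumed names and sizes and is illustrative only.

#include <cuda_runtime.h>
#include <stdio.h>

__global__ void addOne(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += 1;
}

int main(void)
{
    const int n = 1024;
    int *data = NULL;
    cudaMallocManaged(&data, n * sizeof(int));   // one virtual range, visible to CPU and GPU

    for (int i = 0; i < n; ++i)                  // touched by the CPU
        data[i] = i;

    addOne<<<(n + 255) / 256, 256>>>(data, n);   // the same pointer is used by the GPU
    cudaDeviceSynchronize();

    printf("data[0] = %d, data[%d] = %d\n", data[0], n - 1, data[n - 1]);
    cudaFree(data);
    return 0;
}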
In at least one embodiment, graphics multiprocessor2096has an execution pipeline including but not limited to an instruction cache2052, an instruction unit2054, an address mapping unit2056, a register file2058, one or more GPGPU cores2062, and one or more LSUs2066. GPGPU cores2062and LSUs2066are coupled with cache memory2072and shared memory2070via a memory and cache interconnect2068. In at least one embodiment, instruction cache2052receives a stream of instructions to execute from pipeline manager2032. In at least one embodiment, instructions are cached in instruction cache2052and dispatched for execution by instruction unit2054. In at least one embodiment, instruction unit2054can dispatch instructions as thread groups (e.g., warps), with each thread of a thread group assigned to a different execution unit within GPGPU core2062. In at least one embodiment, an instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. In at least one embodiment, address mapping unit2056can be used to translate addresses in a unified address space into a distinct memory address that can be accessed by LSUs2066. In at least one embodiment, register file2058provides a set of registers for functional units of graphics multiprocessor2096. In at least one embodiment, register file2058provides temporary storage for operands connected to data paths of functional units (e.g., GPGPU cores2062, LSUs2066) of graphics multiprocessor2096. In at least one embodiment, register file2058is divided between each of functional units such that each functional unit is allocated a dedicated portion of register file2058. In at least one embodiment, register file2058is divided between different thread groups being executed by graphics multiprocessor2096. In at least one embodiment, GPGPU cores2062can each include FPUs and/or integer ALUs that are used to execute instructions of graphics multiprocessor2096. GPGPU cores2062can be similar in architecture or can differ in architecture. In at least one embodiment, a first portion of GPGPU cores2062include a single precision FPU and an integer ALU while a second portion of GPGPU cores2062include a double precision FPU. In at least one embodiment, FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. In at least one embodiment, graphics multiprocessor2096can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In at least one embodiment, one or more of GPGPU cores2062can also include fixed or special function logic. In at least one embodiment, GPGPU cores2062include SIMD logic capable of performing a single instruction on multiple sets of data. In at least one embodiment, GPGPU cores2062can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for GPGPU cores2062can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (“SPMD”) or SIMT architectures. In at least one embodiment, multiple threads of a program configured for an SIMT execution model can be executed via a single SIMD instruction.
For example, in at least one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit. In at least one embodiment, memory and cache interconnect2068is an interconnect network that connects each functional unit of graphics multiprocessor2096to register file2058and to shared memory2070. In at least one embodiment, memory and cache interconnect2068is a crossbar interconnect that allows LSU2066to implement load and store operations between shared memory2070and register file2058. In at least one embodiment, register file2058can operate at a same frequency as GPGPU cores2062, thus data transfer between GPGPU cores2062and register file2058has very low latency. In at least one embodiment, shared memory2070can be used to enable communication between threads that execute on functional units within graphics multiprocessor2096. In at least one embodiment, cache memory2072can be used as a data cache, for example to cache texture data communicated between functional units and texture unit2036. In at least one embodiment, shared memory2070can also be used as a program-managed cache. In at least one embodiment, threads executing on GPGPU cores2062can programmatically store data within shared memory in addition to automatically cached data that is stored within cache memory2072. In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. In at least one embodiment, a GPU may be communicatively coupled to host processor/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink). In at least one embodiment, a GPU may be integrated on the same package or chip as cores and communicatively coupled to cores over a processor bus/interconnect that is internal to a package or a chip. In at least one embodiment, regardless of the manner in which a GPU is connected, processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a WD. In at least one embodiment, the GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions. In at least one embodiment, the graphics multiprocessor2096may be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.21illustrates a graphics processor2100, in accordance with at least one embodiment. In at least one embodiment, graphics processor2100includes a ring interconnect2102, a pipeline front-end2104, a media engine2137, and graphics cores2180A-2180N. In at least one embodiment, ring interconnect2102couples graphics processor2100to other processing units, including other graphics processors or one or more general-purpose processor cores. In at least one embodiment, graphics processor2100is one of many processors integrated within a multi-core processing system. In at least one embodiment, graphics processor2100receives batches of commands via ring interconnect2102. In at least one embodiment, incoming commands are interpreted by a command streamer2103in pipeline front-end2104.
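To illustrate how a group of SIMT threads executing the same instruction can cooperate entirely in registers, without passing through shared memory2070or cache memory2072 as described above, the following CUDA sketch reduces 32 values with warp shuffle operations; the kernel name and the assumption that each block is launched with exactly 32 threads are illustrative.

// Warp-level reduction: each of the 32 threads of a warp holds one value in a
// register and repeatedly adds in the value held by a lane `offset` positions
// higher, halving `offset` each step, so lane 0 ends up with the warp-wide sum.
// Assumes a launch such as warpSum<<<numWarps, 32>>>(in, out).
__global__ void warpSum(const float *in, float *out)
{
    float v = in[blockIdx.x * 32 + threadIdx.x];

    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffffu, v, offset);   // register-to-register exchange

    if (threadIdx.x == 0)
        out[blockIdx.x] = v;
}

When a data set spans more than one warp, such warp-level partial sums are typically combined through shared memory, which then serves as the program-managed cache described above.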
In at least one embodiment, graphics processor2100includes scalable execution logic to perform 3D geometry processing and media processing via graphics core(s)2180A-2180N. In at least one embodiment, for 3D geometry processing commands, command streamer2103supplies commands to geometry pipeline2136. In at least one embodiment, for at least some media processing commands, command streamer2103supplies commands to a video front end2134, which couples with a media engine2137. In at least one embodiment, media engine2137includes a Video Quality Engine (“VQE”)2130for video and image post-processing and a multi-format encode/decode (“MFX”) engine2133to provide hardware-accelerated media data encode and decode. In at least one embodiment, geometry pipeline2136and media engine2137each generate execution threads for thread execution resources provided by at least one graphics core2180A. In at least one embodiment, graphics processor2100includes scalable thread execution resources featuring modular graphics cores2180A-2180N (sometimes referred to as core slices), each having multiple sub-cores2150A-2150N,2160A-2160N (sometimes referred to as core sub-slices). In at least one embodiment, graphics processor2100can have any number of graphics cores2180A through2180N. In at least one embodiment, graphics processor2100includes a graphics core2180A having at least a first sub-core2150A and a second sub-core2160A. In at least one embodiment, graphics processor2100is a low power processor with a single sub-core (e.g., sub-core2150A). In at least one embodiment, graphics processor2100includes multiple graphics cores2180A-2180N, each including a set of first sub-cores2150A-2150N and a set of second sub-cores2160A-2160N. In at least one embodiment, each sub-core in first sub-cores2150A-2150N includes at least a first set of execution units (“EUs”)2152A-2152N and media/texture samplers2154A-2154N. In at least one embodiment, each sub-core in second sub-cores2160A-2160N includes at least a second set of execution units2162A-2162N and samplers2164A-2164N. In at least one embodiment, each sub-core2150A-2150N,2160A-2160N shares a set of shared resources2170A-2170N. In at least one embodiment, shared resources2170A-2170N include shared cache memory and pixel operation logic. In at least one embodiment, the graphics processor2100may be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.22illustrates a processor2200, in accordance with at least one embodiment. In at least one embodiment, processor2200may include, without limitation, logic circuits to perform instructions. In at least one embodiment, processor2200may perform instructions, including x86 instructions, ARM instructions, specialized instructions for ASICs, etc. In at least one embodiment, processor2200may include registers to store packed data, such as 64-bit wide MMX™ registers in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, Calif. In at least one embodiment, MMX registers, available in both integer and floating point forms, may operate with packed data elements that accompany SIMD and streaming SIMD extensions (“SSE”) instructions. In at least one embodiment, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, AVX, or beyond (referred to generically as “SSEx”) technology may hold such packed data operands.
In at least one embodiment, processor2200may perform instructions to accelerate CUDA programs. In at least one embodiment, processor2200includes an in-order front end (“front end”)2201to fetch instructions to be executed and prepare instructions to be used later in processor pipeline. In at least one embodiment, front end2201may include several units. In at least one embodiment, an instruction prefetcher2226fetches instructions from memory and feeds instructions to an instruction decoder2228which in turn decodes or interprets instructions. For example, in at least one embodiment, instruction decoder2228decodes a received instruction into one or more operations called “micro-instructions” or “micro-operations” (also called “micro ops” or “uops”) for execution. In at least one embodiment, instruction decoder2228parses instruction into an opcode and corresponding data and control fields that may be used by micro-architecture to perform operations. In at least one embodiment, a trace cache2230may assemble decoded uops into program ordered sequences or traces in a uop queue2234for execution. In at least one embodiment, when trace cache2230encounters a complex instruction, a microcode ROM2232provides uops needed to complete an operation. In at least one embodiment, some instructions may be converted into a single micro-op, whereas others need several micro-ops to complete full operation. In at least one embodiment, if more than four micro-ops are needed to complete an instruction, instruction decoder2228may access microcode ROM2232to perform instruction. In at least one embodiment, an instruction may be decoded into a small number of micro-ops for processing at instruction decoder2228. In at least one embodiment, an instruction may be stored within microcode ROM2232should a number of micro-ops be needed to accomplish operation. In at least one embodiment, trace cache2230refers to an entry point programmable logic array (“PLA”) to determine a correct micro-instruction pointer for reading microcode sequences to complete one or more instructions from microcode ROM2232. In at least one embodiment, after microcode ROM2232finishes sequencing micro-ops for an instruction, front end2201of machine may resume fetching micro-ops from trace cache2230. In at least one embodiment, out-of-order execution engine (“out of order engine”)2203may prepare instructions for execution. In at least one embodiment, out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down a pipeline and get scheduled for execution. Out-of-order execution engine2203includes, without limitation, an allocator/register renamer2240, a memory uop queue2242, an integer/floating point uop queue2244, a memory scheduler2246, a fast scheduler2202, a slow/general floating point scheduler (“slow/general FP scheduler”)2204, and a simple floating point scheduler (“simple FP scheduler”)2206. In at least one embodiment, fast scheduler2202, slow/general floating point scheduler2204, and simple floating point scheduler2206are also collectively referred to herein as “uop schedulers2202,2204,2206.” Allocator/register renamer2240allocates machine buffers and resources that each uop needs in order to execute. In at least one embodiment, allocator/register renamer2240renames logic registers onto entries in a register file.
In at least one embodiment, allocator/register renamer2240also allocates an entry for each uop in one of two uop queues, memory uop queue2242for memory operations and integer/floating point uop queue2244for non-memory operations, in front of memory scheduler2246and uop schedulers2202,2204,2206. In at least one embodiment, uop schedulers2202,2204,2206determine when a uop is ready to execute based on readiness of their dependent input register operand sources and availability of execution resources that uops need to complete their operation. In at least one embodiment, fast scheduler2202may schedule on each half of a main clock cycle while slow/general floating point scheduler2204and simple floating point scheduler2206may schedule once per main processor clock cycle. In at least one embodiment, uop schedulers2202,2204,2206arbitrate for dispatch ports to schedule uops for execution. In at least one embodiment, execution block2211includes, without limitation, an integer register file/bypass network2208, a floating point register file/bypass network (“FP register file/bypass network”)2210, address generation units (“AGUs”)2212and2214, fast ALUs2216and2218, a slow ALU2220, a floating point ALU (“FP”)2222, and a floating point move unit (“FP move”)2224. In at least one embodiment, integer register file/bypass network2208and floating point register file/bypass network2210are also referred to herein as “register files2208,2210.” In at least one embodiment, AGUs2212and2214, fast ALUs2216and2218, slow ALU2220, floating point ALU2222, and floating point move unit2224are also referred to herein as “execution units2212,2214,2216,2218,2220,2222, and2224.” In at least one embodiment, an execution block may include, without limitation, any number (including zero) and type of register files, bypass networks, address generation units, and execution units, in any combination. In at least one embodiment, register files2208,2210may be arranged between uop schedulers2202,2204,2206, and execution units2212,2214,2216,2218,2220,2222, and2224. In at least one embodiment, integer register file/bypass network2208performs integer operations. In at least one embodiment, floating point register file/bypass network2210performs floating point operations. In at least one embodiment, each of register files2208,2210may include, without limitation, a bypass network that may bypass or forward just completed results that have not yet been written into register file to new dependent uops. In at least one embodiment, register files2208,2210may communicate data with each other. In at least one embodiment, integer register file/bypass network2208may include, without limitation, two separate register files, one register file for low-order thirty-two bits of data and a second register file for high order thirty-two bits of data. In at least one embodiment, floating point register file/bypass network2210may include, without limitation, 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width. In at least one embodiment, execution units2212,2214,2216,2218,2220,2222,2224may execute instructions. In at least one embodiment, register files2208,2210store integer and floating point data operand values that micro-instructions need to execute. In at least one embodiment, processor2200may include, without limitation, any number and combination of execution units2212,2214,2216,2218,2220,2222,2224.
In at least one embodiment, floating point ALU2222and floating point move unit2224may execute floating point, MMX, SIMD, AVX and SSE, or other operations. In at least one embodiment, floating point ALU2222may include, without limitation, a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro ops. In at least one embodiment, instructions involving a floating point value may be handled with floating point hardware. In at least one embodiment, ALU operations may be passed to fast ALUs2216,2218. In at least one embodiment, fast ALUs2216,2218may execute fast operations with an effective latency of half a clock cycle. In at least one embodiment, most complex integer operations go to slow ALU2220as slow ALU2220may include, without limitation, integer execution hardware for long-latency type of operations, such as a multiplier, shifts, flag logic, and branch processing. In at least one embodiment, memory load/store operations may be executed by AGUs2212,2214. In at least one embodiment, fast ALU2216, fast ALU2218, and slow ALU2220may perform integer operations on 64-bit data operands. In at least one embodiment, fast ALU2216, fast ALU2218, and slow ALU2220may be implemented to support a variety of data bit sizes including sixteen, thirty-two, 128, 256, etc. In at least one embodiment, floating point ALU2222and floating point move unit2224may be implemented to support a range of operands having bits of various widths. In at least one embodiment, floating point ALU2222and floating point move unit2224may operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions. In at least one embodiment, uop schedulers2202,2204,2206dispatch dependent operations before parent load has finished executing. In at least one embodiment, as uops may be speculatively scheduled and executed in processor2200, processor2200may also include logic to handle memory misses. In at least one embodiment, if a data load misses in a data cache, there may be dependent operations in flight in pipeline that have left a scheduler with temporarily incorrect data. In at least one embodiment, a replay mechanism tracks and re-executes instructions that use incorrect data. In at least one embodiment, dependent operations might need to be replayed and independent ones may be allowed to complete. In at least one embodiment, schedulers and replay mechanisms of at least one embodiment of a processor may also be designed to catch instruction sequences for text string comparison operations. In at least one embodiment, the term “registers” may refer to on-board processor storage locations that may be used as part of instructions to identify operands. In at least one embodiment, registers may be those that may be usable from outside of a processor (from a programmer's perspective). In at least one embodiment, registers might not be limited to a particular type of circuit. Rather, in at least one embodiment, a register may store data, provide data, and perform functions described herein. In at least one embodiment, registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In at least one embodiment, integer registers store 32-bit integer data. 
A register file of at least one embodiment also contains eight multimedia SIMD registers for packed data. In at least one embodiment, the processor2200may be used to implement the first CPU110(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second CPU130(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.23illustrates a processor2300, in accordance with at least one embodiment. In at least one embodiment, processor2300includes, without limitation, one or more processor cores (“cores”)2302A-2302N, an integrated memory controller2314, and an integrated graphics processor2308. In at least one embodiment, processor2300can include additional cores up to and including additional processor core2302N represented by dashed lined boxes. In at least one embodiment, each of processor cores2302A-2302N includes one or more internal cache units2304A-2304N. In at least one embodiment, each processor core also has access to one or more shared cache units2306. In at least one embodiment, internal cache units2304A-2304N and shared cache units2306represent a cache memory hierarchy within processor2300. In at least one embodiment, cache memory units2304A-2304N may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as an L2, L3, Level 4 (“L4”), or other levels of cache, where a highest level of cache before external memory is classified as an LLC. In at least one embodiment, cache coherency logic maintains coherency between various cache units2306and2304A-2304N. In at least one embodiment, processor2300may also include a set of one or more bus controller units2316and a system agent core2310. In at least one embodiment, one or more bus controller units2316manage a set of peripheral buses, such as one or more PCI or PCI express buses. In at least one embodiment, system agent core2310provides management functionality for various processor components. In at least one embodiment, system agent core2310includes one or more integrated memory controllers2314to manage access to various external memory devices (not shown). In at least one embodiment, one or more of processor cores2302A-2302N include support for simultaneous multi-threading. In at least one embodiment, system agent core2310includes components for coordinating and operating processor cores2302A-2302N during multi-threaded processing. In at least one embodiment, system agent core2310may additionally include a power control unit (“PCU”), which includes logic and components to regulate one or more power states of processor cores2302A-2302N and graphics processor2308. In at least one embodiment, processor2300additionally includes graphics processor2308to execute graphics processing operations. In at least one embodiment, graphics processor2308couples with shared cache units2306, and system agent core2310, including one or more integrated memory controllers2314. In at least one embodiment, system agent core2310also includes a display controller2311to drive graphics processor output to one or more coupled displays. In at least one embodiment, display controller2311may also be a separate module coupled with graphics processor2308via at least one interconnect, or may be integrated within graphics processor2308. In at least one embodiment, a ring based interconnect unit2312is used to couple internal components of processor2300. 
In at least one embodiment, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques. In at least one embodiment, graphics processor2308couples with ring interconnect2312via an I/O link2313. In at least one embodiment, I/O link2313represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module2318, such as an eDRAM module. In at least one embodiment, each of processor cores2302A-2302N and graphics processor2308use embedded memory modules2318as a shared LLC. In at least one embodiment, processor cores2302A-2302N are homogeneous cores executing a common instruction set architecture. In at least one embodiment, processor cores2302A-2302N are heterogeneous in terms of ISA, where one or more of processor cores2302A-2302N execute a common instruction set, while one or more other cores of processor cores2302A-2302N execute a subset of a common instruction set or a different instruction set. In at least one embodiment, processor cores2302A-2302N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more cores having a lower power consumption. In at least one embodiment, processor2300can be implemented on one or more chips or as an SoC integrated circuit. In at least one embodiment, the processor2300may be used to implement the first CPU110(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second CPU130(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.24illustrates a graphics processor core2400, in accordance with at least one embodiment described. In at least one embodiment, graphics processor core2400is included within a graphics core array. In at least one embodiment, graphics processor core2400, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. In at least one embodiment, graphics processor core2400is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. In at least one embodiment, each graphics core2400can include a fixed function block2430coupled with multiple sub-cores2401A-2401F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic. In at least one embodiment, fixed function block2430includes a geometry/fixed function pipeline2436that can be shared by all sub-cores in graphics processor2400, for example, in lower performance and/or lower power graphics processor implementations. In at least one embodiment, geometry/fixed function pipeline2436includes a 3D fixed function pipeline, a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers. In at least one embodiment, fixed function block2430also includes a graphics SoC interface2437, a graphics microcontroller2438, and a media pipeline2439. Graphics SoC interface2437provides an interface between graphics core2400and other processor cores within an SoC integrated circuit. 
In at least one embodiment, graphics microcontroller2438is a programmable sub-processor that is configurable to manage various functions of graphics processor2400, including thread dispatch, scheduling, and pre-emption. In at least one embodiment, media pipeline2439includes logic to facilitate decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. In at least one embodiment, media pipeline2439implements media operations via requests to compute or sampling logic within sub-cores2401A-2401F. In at least one embodiment, SoC interface2437enables graphics core2400to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared LLC memory, system RAM, and/or embedded on-chip or on-package DRAM. In at least one embodiment, SoC interface2437can also enable communication with fixed function devices within an SoC, such as camera imaging pipelines, and enables use of and/or implements global memory atomics that may be shared between graphics core2400and CPUs within an SoC. In at least one embodiment, SoC interface2437can also implement power management controls for graphics core2400and enable an interface between a clock domain of graphics core2400and other clock domains within an SoC. In at least one embodiment, SoC interface2437enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. In at least one embodiment, commands and instructions can be dispatched to media pipeline2439, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline2436, geometry and fixed function pipeline2414) when graphics processing operations are to be performed. In at least one embodiment, graphics microcontroller2438can be configured to perform various scheduling and management tasks for graphics core2400. In at least one embodiment, graphics microcontroller2438can perform graphics and/or compute workload scheduling on various graphics parallel engines within execution unit (EU) arrays2402A-2402F,2404A-2404F within sub-cores2401A-2401F. In at least one embodiment, host software executing on a CPU core of an SoC including graphics core2400can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on an appropriate graphics engine. In at least one embodiment, scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In at least one embodiment, graphics microcontroller2438can also facilitate low-power or idle states for graphics core2400, providing graphics core2400with an ability to save and restore registers within graphics core2400across low-power state transitions independently from an operating system and/or graphics driver software on a system. In at least one embodiment, graphics core2400may have greater than or fewer than illustrated sub-cores2401A-2401F, up to N modular sub-cores. 
For each set of N sub-cores, in at least one embodiment, graphics core2400can also include shared function logic2410, shared and/or cache memory2412, a geometry/fixed function pipeline2414, as well as additional fixed function logic2416to accelerate various graphics and compute processing operations. In at least one embodiment, shared function logic2410can include logic units (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each N sub-cores within graphics core2400. Shared and/or cache memory2412can be an LLC for N sub-cores2401A-2401F within graphics core2400and can also serve as shared memory that is accessible by multiple sub-cores. In at least one embodiment, geometry/fixed function pipeline2414can be included instead of geometry/fixed function pipeline2436within fixed function block2430and can include same or similar logic units. In at least one embodiment, graphics core2400includes additional fixed function logic2416that can include various fixed function acceleration logic for use by graphics core2400. In at least one embodiment, additional fixed function logic2416includes an additional geometry pipeline for use in position only shading. In position-only shading, at least two geometry pipelines exist, which are a full geometry pipeline within geometry/fixed function pipeline2414,2436, and a cull pipeline, which is an additional geometry pipeline that may be included within additional fixed function logic2416. In at least one embodiment, cull pipeline is a trimmed down version of a full geometry pipeline. In at least one embodiment, a full pipeline and a cull pipeline can execute different instances of an application, each instance having a separate context. In at least one embodiment, position only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, in at least one embodiment, cull pipeline logic within additional fixed function logic2416can execute position shaders in parallel with a main application and generally generates critical results faster than a full pipeline, as a cull pipeline fetches and shades position attribute of vertices, without performing rasterization and rendering of pixels to a frame buffer. In at least one embodiment, a cull pipeline can use generated critical results to compute visibility information for all triangles without regard to whether those triangles are culled. In at least one embodiment, a full pipeline (which in this instance may be referred to as a replay pipeline) can consume visibility information to skip culled triangles to shade only visible triangles that are finally passed to a rasterization phase. In at least one embodiment, additional fixed function logic2416can also include general purpose processing acceleration logic, such as fixed function matrix multiplication logic, for accelerating CUDA programs. In at least one embodiment, each graphics sub-core2401A-2401F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. In at least one embodiment, graphics sub-cores2401A-2401F include multiple EU arrays2402A-2402F,2404A-2404F, thread dispatch and inter-thread communication (“TD/IC”) logic2403A-2403F, a 3D (e.g., texture) sampler2405A-2405F, a media sampler2406A-2406F, a shader processor2407A-2407F, and shared local memory (“SLM”)2408A-2408F. 
EU arrays2402A-2402F,2404A-2404F each include multiple execution units, which are GPGPUs capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. In at least one embodiment, TD/IC logic2403A-2403F performs local thread dispatch and thread control operations for execution units within a sub-core and facilitate communication between threads executing on execution units of a sub-core. In at least one embodiment, 3D sampler2405A-2405F can read texture or other 3D graphics related data into memory. In at least one embodiment, 3D sampler can read texture data differently based on a configured sample state and texture format associated with a given texture. In at least one embodiment, media sampler2406A-2406F can perform similar read operations based on a type and format associated with media data. In at least one embodiment, each graphics sub-core2401A-2401F can alternately include a unified 3D and media sampler. In at least one embodiment, threads executing on execution units within each of sub-cores2401A-2401F can make use of shared local memory2408A-2408F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory. In at least one embodiment, the graphics processor core2400may be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). FIG.25illustrates a parallel processing unit (“PPU”)2500, in accordance with at least one embodiment. In at least one embodiment, PPU2500is configured with machine-readable code that, if executed by PPU2500, causes PPU2500to perform some or all of processes and techniques described herein. In at least one embodiment, PPU2500is a multi-threaded processor that is implemented on one or more integrated circuit devices and that utilizes multithreading as a latency-hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) on multiple threads in parallel. In at least one embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by PPU2500. In at least one embodiment, PPU2500is a GPU configured to implement a graphics rendering pipeline for processing three-dimensional (“3D”) graphics data in order to generate two-dimensional (“2D”) image data for display on a display device such as an LCD device. In at least one embodiment, PPU2500is utilized to perform computations such as linear algebra operations and machine-learning operations.FIG.25illustrates an example parallel processor for illustrative purposes only and should be construed as a non-limiting example of a processor architecture that may be implemented in at least one embodiment. In at least one embodiment, one or more PPUs2500are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications. In at least one embodiment, one or more PPUs2500are configured to accelerate CUDA programs. 
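By way of a non-limiting illustration, and not as a definitive implementation of any embodiment described herein, the following CUDA C++ sketch shows the general shape of a CUDA program that a PPU such as PPU2500may accelerate by executing many threads in parallel. The kernel name vecAdd, the problem size, and the launch configuration are hypothetical placeholders chosen only for this sketch.

// Hypothetical sketch: an element-wise addition kernel executed by many parallel threads.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique thread index
    if (i < n) c[i] = a[i] + b[i];                  // each thread handles one element
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory accessible to host and device
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);        // launch one thread per element
    cudaDeviceSynchronize();                        // wait for the device to finish
    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

In such a sketch, each thread derives a unique index from its block and thread identifiers and processes one element, which is one way the thread-level parallelism described above may be expressed in source code.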
In at least one embodiment, PPU2500includes, without limitation, an I/O unit2506, a front-end unit2510, a scheduler unit2512, a work distribution unit2514, a hub2516, a crossbar (“Xbar”)2520, one or more general processing clusters (“GPCs”)2518, and one or more partition units (“memory partition units”)2522. In at least one embodiment, PPU2500is connected to a host processor or other PPUs2500via one or more high-speed GPU interconnects (“GPU interconnects”)2508. In at least one embodiment, PPU2500is connected to a host processor or other peripheral devices via a system bus or interconnect2502. In at least one embodiment, PPU2500is connected to a local memory comprising one or more memory devices (“memory”)2504. In at least one embodiment, memory devices2504include, without limitation, one or more dynamic random access memory (DRAM) devices. In at least one embodiment, one or more DRAM devices are configured and/or configurable as high-bandwidth memory (“HBM”) subsystems, with multiple DRAM dies stacked within each device. In at least one embodiment, high-speed GPU interconnect2508may refer to a wire-based multi-lane communications link that is used by systems to scale and include one or more PPUs2500combined with one or more CPUs, supports cache coherence between PPUs2500and CPUs, and CPU mastering. In at least one embodiment, data and/or commands are transmitted by high-speed GPU interconnect2508through hub2516to/from other units of PPU2500such as one or more copy engines, video encoders, video decoders, power management units, and other components which may not be explicitly illustrated inFIG.25. In at least one embodiment, I/O unit2506is configured to transmit and receive communications (e.g., commands, data) from a host processor (not illustrated inFIG.25) over system bus2502. In at least one embodiment, I/O unit2506communicates with host processor directly via system bus2502or through one or more intermediate devices such as a memory bridge. In at least one embodiment, I/O unit2506may communicate with one or more other processors, such as one or more of PPUs2500via system bus2502. In at least one embodiment, I/O unit2506implements a PCIe interface for communications over a PCIe bus. In at least one embodiment, I/O unit2506implements interfaces for communicating with external devices. In at least one embodiment, I/O unit2506decodes packets received via system bus2502. In at least one embodiment, at least some packets represent commands configured to cause PPU2500to perform various operations. In at least one embodiment, I/O unit2506transmits decoded commands to various other units of PPU2500as specified by commands. In at least one embodiment, commands are transmitted to front-end unit2510and/or transmitted to hub2516or other units of PPU2500such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly illustrated inFIG.25). In at least one embodiment, I/O unit2506is configured to route communications between and among various logical units of PPU2500. In at least one embodiment, a program executed by host processor encodes a command stream in a buffer that provides workloads to PPU2500for processing. In at least one embodiment, a workload comprises instructions and data to be processed by those instructions. 
In at least one embodiment, buffer is a region in a memory that is accessible (e.g., read/write) by both a host processor and PPU2500—a host interface unit may be configured to access buffer in a system memory connected to system bus2502via memory requests transmitted over system bus2502by I/O unit2506. In at least one embodiment, a host processor writes a command stream to a buffer and then transmits a pointer to the start of the command stream to PPU2500such that front-end unit2510receives pointers to one or more command streams and manages one or more command streams, reading commands from command streams and forwarding commands to various units of PPU2500. In at least one embodiment, front-end unit2510is coupled to scheduler unit2512that configures various GPCs2518to process tasks defined by one or more command streams. In at least one embodiment, scheduler unit2512is configured to track state information related to various tasks managed by scheduler unit2512where state information may indicate which of GPCs2518a task is assigned to, whether task is active or inactive, a priority level associated with task, and so forth. In at least one embodiment, scheduler unit2512manages execution of a plurality of tasks on one or more of GPCs2518. In at least one embodiment, scheduler unit2512is coupled to work distribution unit2514that is configured to dispatch tasks for execution on GPCs2518. In at least one embodiment, work distribution unit2514tracks a number of scheduled tasks received from scheduler unit2512and work distribution unit2514manages a pending task pool and an active task pool for each of GPCs2518. In at least one embodiment, pending task pool comprises a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC2518; active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by GPCs2518such that as one of GPCs2518completes execution of a task, that task is evicted from active task pool for GPC2518and one of other tasks from pending task pool is selected and scheduled for execution on GPC2518. In at least one embodiment, if an active task is idle on GPC2518, such as while waiting for a data dependency to be resolved, then the active task is evicted from GPC2518and returned to a pending task pool while another task in the pending task pool is selected and scheduled for execution on GPC2518. In at least one embodiment, work distribution unit2514communicates with one or more GPCs2518via XBar2520. In at least one embodiment, XBar2520is an interconnect network that couples many units of PPU2500to other units of PPU2500and can be configured to couple work distribution unit2514to a particular GPC2518. In at least one embodiment, one or more other units of PPU2500may also be connected to XBar2520via hub2516. In at least one embodiment, tasks are managed by scheduler unit2512and dispatched to one of GPCs2518by work distribution unit2514. GPC2518is configured to process task and generate results. In at least one embodiment, results may be consumed by other tasks within GPC2518, routed to a different GPC2518via XBar2520, or stored in memory2504. In at least one embodiment, results can be written to memory2504via partition units2522, which implement a memory interface for reading and writing data to/from memory2504. In at least one embodiment, results can be transmitted to another PPU2504or CPU via high-speed GPU interconnect2508. 
In at least one embodiment, PPU2500includes, without limitation, a number U of partition units2522that is equal to number of separate and distinct memory devices2504coupled to PPU2500. In at least one embodiment, a host processor executes a driver kernel that implements an application programming interface (“API”) that enables one or more applications executing on host processor to schedule operations for execution on PPU2500. In at least one embodiment, multiple compute applications are simultaneously executed by PPU2500and PPU2500provides isolation, quality of service (“QoS”), and independent address spaces for multiple compute applications. In at least one embodiment, an application generates instructions (e.g., in the form of API calls) that cause a driver kernel to generate one or more tasks for execution by PPU2500and the driver kernel outputs tasks to one or more streams being processed by PPU2500. In at least one embodiment, each task comprises one or more groups of related threads, which may be referred to as a warp. In at least one embodiment, a warp comprises a plurality of related threads (e.g., 32 threads) that can be executed in parallel. In at least one embodiment, cooperating threads can refer to a plurality of threads including instructions to perform a task and that exchange data through shared memory. In at least one embodiment, the PPU2500may be used to implement the first GPU device115(seeFIGS.1,2,4, and6) of the first computing device102(seeFIGS.1,2,4, and6) and/or the second GPU device135(seeFIG.2) of the second computing device104(seeFIGS.1,4, and6). In at least one embodiment, the system interconnect2502and/or the high-speed GPU interconnect2508may be used to implement the connection120of the first computing device102. In at least one embodiment, the system interconnect2502and/or high-speed GPU interconnect2508may be used to implement the connection140of the second computing device104. In at least one embodiment, memory devices2504may be used to implement the first GPU memory118of the first computing device102. In at least one embodiment, memory devices2504may be used to implement the second GPU memory138of the second computing device104. FIG.26illustrates a GPC2600, in accordance with at least one embodiment. In at least one embodiment, GPC2600is GPC2518ofFIG.25. In at least one embodiment, each GPC2600includes, without limitation, a number of hardware units for processing tasks and each GPC2600includes, without limitation, a pipeline manager2602, a pre-raster operations unit (“PROP”)2604, a raster engine2608, a work distribution crossbar (“WDX”)2616, an MMU2618, one or more Data Processing Clusters (“DPCs”)2606, and any suitable combination of parts. In at least one embodiment, operation of GPC2600is controlled by pipeline manager2602. In at least one embodiment, pipeline manager2602manages configuration of one or more DPCs2606for processing tasks allocated to GPC2600. In at least one embodiment, pipeline manager2602configures at least one of one or more DPCs2606to implement at least a portion of a graphics rendering pipeline. In at least one embodiment, DPC2606is configured to execute a vertex shader program on a programmable streaming multiprocessor (“SM”)2614. 
In at least one embodiment, pipeline manager2602is configured to route packets received from a work distribution unit to appropriate logical units within GPC2600and, in at least one embodiment, some packets may be routed to fixed function hardware units in PROP2604and/or raster engine2608while other packets may be routed to DPCs2606for processing by a primitive engine2612or SM2614. In at least one embodiment, pipeline manager2602configures at least one of DPCs2606to implement a computing pipeline. In at least one embodiment, pipeline manager2602configures at least one of DPCs2606to execute at least a portion of a CUDA program. In at least one embodiment, PROP unit2604is configured to route data generated by raster engine2608and DPCs2606to a Raster Operations (“ROP”) unit in a partition unit, such as memory partition unit2522described in more detail above in conjunction withFIG.25. In at least one embodiment, PROP unit2604is configured to perform optimizations for color blending, organize pixel data, perform address translations, and more. In at least one embodiment, raster engine2608includes, without limitation, a number of fixed function hardware units configured to perform various raster operations and, in at least one embodiment, raster engine2608includes, without limitation, a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, a tile coalescing engine, and any suitable combination thereof. In at least one embodiment, a setup engine receives transformed vertices and generates plane equations associated with geometric primitive defined by vertices; plane equations are transmitted to a coarse raster engine to generate coverage information (e.g., an x, y coverage mask for a tile) for a primitive; the output of the coarse raster engine is transmitted to a culling engine where fragments associated with a primitive that fail a z-test are culled, and transmitted to a clipping engine where fragments lying outside a viewing frustum are clipped. In at least one embodiment, fragments that survive clipping and culling are passed to a fine raster engine to generate attributes for pixel fragments based on plane equations generated by a setup engine. In at least one embodiment, the output of raster engine2608comprises fragments to be processed by any suitable entity such as by a fragment shader implemented within DPC2606. In at least one embodiment, each DPC2606included in GPC2600comprise, without limitation, an M-Pipe Controller (“MPC”)2610; primitive engine2612; one or more SMs2614; and any suitable combination thereof. In at least one embodiment, MPC2610controls operation of DPC2606, routing packets received from pipeline manager2602to appropriate units in DPC2606. In at least one embodiment, packets associated with a vertex are routed to primitive engine2612, which is configured to fetch vertex attributes associated with vertex from memory; in contrast, packets associated with a shader program may be transmitted to SM2614. In at least one embodiment, SM2614comprises, without limitation, a programmable streaming processor that is configured to process tasks represented by a number of threads. In at least one embodiment, SM2614is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular group of threads concurrently and implements a SIMD architecture where each thread in a group of threads (e.g., a warp) is configured to process a different set of data based on same set of instructions. 
In at least one embodiment, all threads in group of threads execute same instructions. In at least one embodiment, SM2614implements a SIMT architecture wherein each thread in a group of threads is configured to process a different set of data based on same set of instructions, but where individual threads in group of threads are allowed to diverge during execution. In at least one embodiment, a program counter, a call stack, and an execution state is maintained for each warp, enabling concurrency between warps and serial execution within warps when threads within a warp diverge. In another embodiment, a program counter, a call stack, and an execution state is maintained for each individual thread, enabling equal concurrency between all threads, within and between warps. In at least one embodiment, an execution state is maintained for each individual thread and threads executing the same instructions may be converged and executed in parallel for better efficiency. At least one embodiment of SM2614is described in more detail in conjunction withFIG.27. In at least one embodiment, MMU2618provides an interface between GPC2600and a memory partition unit (e.g., partition unit2522ofFIG.25) and MMU2618provides translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests. In at least one embodiment, MMU2618provides one or more translation lookaside buffers (TLBs) for performing translation of virtual addresses into physical addresses in memory. FIG.27illustrates a streaming multiprocessor (“SM”)2700, in accordance with at least one embodiment. In at least one embodiment, SM2700is SM2614ofFIG.26. In at least one embodiment, SM2700includes, without limitation, an instruction cache2702; one or more scheduler units2704; a register file2708; one or more processing cores (“cores”)2710; one or more special function units (“SFUs”)2712; one or more LSUs2714; an interconnect network2716; a shared memory/L1 cache2718; and any suitable combination thereof. In at least one embodiment, a work distribution unit dispatches tasks for execution on GPCs of parallel processing units (PPUs) and each task is allocated to a particular Data Processing Cluster (DPC) within a GPC and, if a task is associated with a shader program, then the task is allocated to one of SMs2700. In at least one embodiment, scheduler unit2704receives tasks from a work distribution unit and manages instruction scheduling for one or more thread blocks assigned to SM2700. In at least one embodiment, scheduler unit2704schedules thread blocks for execution as warps of parallel threads, wherein each thread block is allocated at least one warp. In at least one embodiment, each warp executes threads. In at least one embodiment, scheduler unit2704manages a plurality of different thread blocks, allocating warps to different thread blocks and then dispatching instructions from a plurality of different cooperative groups to various functional units (e.g., processing cores2710, SFUs2712, and LSUs2714) during each clock cycle. In at least one embodiment, “cooperative groups” may refer to a programming model for organizing groups of communicating threads that allows developers to express granularity at which threads are communicating, enabling expression of richer, more efficient parallel decompositions. In at least one embodiment, cooperative launch APIs support synchronization amongst thread blocks for execution of parallel algorithms. 
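By way of a non-limiting illustration of the cooperative groups programming model introduced above, the following CUDA C++ sketch uses a thread block group and a warp-sized tile group to reduce values; the kernel name blockSum is a hypothetical placeholder, the sketch assumes the cooperative_groups header available in recent CUDA toolkits, and it assumes a block size that is a multiple of 32 (e.g., 256 threads).

// Hypothetical sketch: block-wide reduction using cooperative groups.
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__global__ void blockSum(const float* in, float* out, int n) {
    cg::thread_block block = cg::this_thread_block();
    cg::thread_block_tile<32> warp = cg::tiled_partition<32>(block);

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = (i < n) ? in[i] : 0.0f;

    // Stage 1: each warp (sub-block group) reduces its 32 values with shuffles.
    for (int offset = warp.size() / 2; offset > 0; offset /= 2)
        v += warp.shfl_down(v, offset);

    // Stage 2: warp leaders publish partial sums to shared memory, then the
    // whole thread block synchronizes before thread 0 accumulates them.
    __shared__ float partials[32];                   // up to 32 warps per block
    if (warp.thread_rank() == 0)
        partials[threadIdx.x / 32] = v;
    block.sync();                                    // block-wide barrier
    if (threadIdx.x == 0) {
        float sum = 0.0f;
        for (int w = 0; w < (blockDim.x + 31) / 32; ++w) sum += partials[w];
        atomicAdd(out, sum);                         // combine results across thread blocks
    }
}

In this sketch, block.sync() provides a barrier across the whole thread block, while the 32-thread tile illustrates synchronization and data exchange at a sub-block (warp) granularity of the kind described above and in the following paragraph.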
In at least one embodiment, APIs of conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., syncthreads( ) function). However, in at least one embodiment, programmers may define groups of threads at smaller than thread block granularities and synchronize within defined groups to enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces. In at least one embodiment, cooperative groups enable programmers to define groups of threads explicitly at sub-block and multi-block granularities, and to perform collective operations such as synchronization on threads in a cooperative group. In at least one embodiment, a sub-block granularity is as small as a single thread. In at least one embodiment, a programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence. In at least one embodiment, cooperative group primitives enable new patterns of cooperative parallelism, including, without limitation, producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks. In at least one embodiment, a dispatch unit2706is configured to transmit instructions to one or more of functional units and scheduler unit2704includes, without limitation, two dispatch units2706that enable two different instructions from same warp to be dispatched during each clock cycle. In at least one embodiment, each scheduler unit2704includes a single dispatch unit2706or additional dispatch units2706. In at least one embodiment, each SM2700includes, without limitation, register file2708that provides a set of registers for functional units of SM2700. In at least one embodiment, register file2708is divided between each of the functional units such that each functional unit is allocated a dedicated portion of register file2708. In at least one embodiment, register file2708is divided between different warps being executed by SM2700and register file2708provides temporary storage for operands connected to data paths of functional units. In at least one embodiment, each SM2700comprises, without limitation, a plurality of L processing cores2710. In at least one embodiment, SM2700includes, without limitation, a large number (e.g., 128 or more) of distinct processing cores2710. In at least one embodiment, each processing core2710includes, without limitation, a fully-pipelined, single-precision, double-precision, and/or mixed precision processing unit that includes, without limitation, a floating point arithmetic logic unit and an integer arithmetic logic unit. In at least one embodiment, floating point arithmetic logic units implement IEEE 754-2008 standard for floating point arithmetic. In at least one embodiment, processing cores2710include, without limitation, 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores. In at least one embodiment, tensor cores are configured to perform matrix operations. In at least one embodiment, one or more tensor cores are included in processing cores2710. 
In at least one embodiment, tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In at least one embodiment, each tensor core operates on a 4×4 matrix and performs a matrix multiply and accumulate operation D=A×B+C, where A, B, C, and D are 4×4 matrices. In at least one embodiment, matrix multiply inputs A and B are 16-bit floating point matrices and accumulation matrices C and D are 16-bit floating point or 32-bit floating point matrices. In at least one embodiment, tensor cores operate on 16-bit floating point input data with 32-bit floating point accumulation. In at least one embodiment, 16-bit floating point multiply uses 64 operations and results in a full precision product that is then accumulated using 32-bit floating point addition with other intermediate products for a 4×4×4 matrix multiply. Tensor cores are used to perform much larger two-dimensional or higher dimensional matrix operations, built up from these smaller elements, in at least one embodiment. In at least one embodiment, an API, such as a CUDA-C++ API, exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use tensor cores from a CUDA-C++ program. In at least one embodiment, at the CUDA level, a warp-level interface assumes 16×16 size matrices spanning all 32 threads of a warp. In at least one embodiment, each SM2700comprises, without limitation, M SFUs2712that perform special functions (e.g., attribute evaluation, reciprocal square root, and like). In at least one embodiment, SFUs2712include, without limitation, a tree traversal unit configured to traverse a hierarchical tree data structure. In at least one embodiment, SFUs2712include, without limitation, a texture unit configured to perform texture map filtering operations. In at least one embodiment, texture units are configured to load texture maps (e.g., a 2D array of texels) from memory and sample texture maps to produce sampled texture values for use in shader programs executed by SM2700. In at least one embodiment, texture maps are stored in shared memory/L1 cache2718. In at least one embodiment, texture units implement texture operations such as filtering operations using mip-maps (e.g., texture maps of varying levels of detail). In at least one embodiment, each SM2700includes, without limitation, two texture units. In at least one embodiment, each SM2700comprises, without limitation, N LSUs2714that implement load and store operations between shared memory/L1 cache2718and register file2708. In at least one embodiment, each SM2700includes, without limitation, interconnect network2716that connects each of the functional units to register file2708and LSU2714to register file2708and shared memory/L1 cache2718. In at least one embodiment, interconnect network2716is a crossbar that can be configured to connect any of the functional units to any of the registers in register file2708and connect LSUs2714to register file2708and memory locations in shared memory/L1 cache2718. In at least one embodiment, shared memory/L1 cache2718is an array of on-chip memory that allows for data storage and communication between SM2700and a primitive engine and between threads in SM2700. In at least one embodiment, shared memory/L1 cache2718comprises, without limitation, 128 KB of storage capacity and is in a path from SM2700to a partition unit. In at least one embodiment, shared memory/L1 cache2718is used to cache reads and writes. 
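By way of a non-limiting illustration of the tensor core matrix multiply and accumulate operation D=A×B+C and the warp-level CUDA-C++ matrix interface described above, the following sketch loads 16×16 tiles, performs a multiply-accumulate with half-precision inputs and single-precision accumulation, and stores the result; the kernel name wmmaTile is a hypothetical placeholder, and all 32 threads of a warp cooperate in each of the load, multiply-accumulate, and store operations.

// Hypothetical sketch: one warp computes a 16x16 tile of D = A x B + C on tensor cores.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmmaTile(const half* a, const half* b, const float* c, float* d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;

    wmma::load_matrix_sync(fa, a, 16);                    // all 32 threads of the warp cooperate
    wmma::load_matrix_sync(fb, b, 16);
    wmma::load_matrix_sync(fc, c, 16, wmma::mem_row_major);
    wmma::mma_sync(fc, fa, fb, fc);                       // tensor core multiply-accumulate
    wmma::store_matrix_sync(d, fc, 16, wmma::mem_row_major);
}
// Launch with at least one full warp, e.g., wmmaTile<<<1, 32>>>(a, b, c, d);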
In at least one embodiment, one or more of shared memory/L1 cache2718, L2 cache, and memory are backing stores. In at least one embodiment, combining data cache and shared memory functionality into a single memory block provides improved performance for both types of memory accesses. In at least one embodiment, capacity is used or is usable as a cache by programs that do not use shared memory, such as if shared memory is configured to use half of capacity, texture and load/store operations can use remaining capacity. In at least one embodiment, integration within shared memory/L1 cache2718enables shared memory/L1 cache2718to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth and low-latency access to frequently reused data. In at least one embodiment, when configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. In at least one embodiment, fixed function GPUs are bypassed, creating a much simpler programming model. In at least one embodiment and in a general purpose parallel computation configuration, a work distribution unit assigns and distributes blocks of threads directly to DPCs. In at least one embodiment, threads in a block execute the same program, using a unique thread ID in a calculation to ensure each thread generates unique results, using SM2700to execute a program and perform calculations, shared memory/L1 cache2718to communicate between threads, and LSU2714to read and write global memory through shared memory/L1 cache2718and a memory partition unit. In at least one embodiment, when configured for general purpose parallel computation, SM2700writes commands that scheduler unit2704can use to launch new work on DPCs. In at least one embodiment, PPU is included in or coupled to a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), a PDA, a digital camera, a vehicle, a head mounted display, a hand-held electronic device, and more. In at least one embodiment, PPU is embodied on a single semiconductor substrate. In at least one embodiment, PPU is included in an SoC along with one or more other devices such as additional PPUs, memory, a RISC CPU, an MMU, a digital-to-analog converter (“DAC”), and like. In at least one embodiment, PPU may be included on a graphics card that includes one or more memory devices. In at least one embodiment, a graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. In at least one embodiment, PPU may be an integrated GPU (“iGPU”) included in chipset of motherboard. Software Constructions for General-Purpose Computing The following figures set forth, without limitation, exemplary software constructs for implementing at least one embodiment. FIG.28illustrates a software stack of a programming platform, in accordance with at least one embodiment. In at least one embodiment, a programming platform is a platform for leveraging hardware on a computing system to accelerate computational tasks. A programming platform may be accessible to software developers through libraries, compiler directives, and/or extensions to programming languages, in at least one embodiment. In at least one embodiment, a programming platform may be, but is not limited to, CUDA, Radeon Open Compute Platform (“ROCm”), OpenCL (OpenCL™ is developed by Khronos group), SYCL, or Intel One API. 
In at least one embodiment, a software stack2800of a programming platform provides an execution environment for an application2801. In at least one embodiment, application2801may include any computer software capable of being launched on software stack2800. In at least one embodiment, application2801may include, but is not limited to, an artificial intelligence (“AI”)/machine learning (“ML”) application, a high performance computing (“HPC”) application, a virtual desktop infrastructure (“VDI”), or a data center workload. In at least one embodiment, application2801and software stack2800run on hardware2807. Hardware2807may include one or more GPUs, CPUs, FPGAs, AI engines, and/or other types of compute devices that support a programming platform, in at least one embodiment. In at least one embodiment, such as with CUDA, software stack2800may be vendor specific and compatible with only devices from particular vendor(s). In at least one embodiment, such as with OpenCL, software stack2800may be used with devices from different vendors. In at least one embodiment, hardware2807includes a host connected to one or more devices that can be accessed to perform computational tasks via application programming interface (“API”) calls. A device within hardware2807may include, but is not limited to, a GPU, FPGA, AI engine, or other compute device (but may also include a CPU) and its memory, as opposed to a host within hardware2807that may include, but is not limited to, a CPU (but may also include a compute device) and its memory, in at least one embodiment. In at least one embodiment, software stack2800of a programming platform includes, without limitation, a number of libraries2803, a runtime2805, and a device kernel driver2806. Each of libraries2803may include data and programming code that can be used by computer programs and leveraged during software development, in at least one embodiment. In at least one embodiment, libraries2803may include, but are not limited to, pre-written code and subroutines, classes, values, type specifications, configuration data, documentation, help data, and/or message templates. In at least one embodiment, libraries2803include functions that are optimized for execution on one or more types of devices. In at least one embodiment, libraries2803may include, but are not limited to, functions for performing mathematical, deep learning, and/or other types of operations on devices. In at least one embodiment, libraries2803are associated with corresponding APIs2802, which may include one or more APIs, that expose functions implemented in libraries2803. In at least one embodiment, application2801is written as source code that is compiled into executable code, as discussed in greater detail below in conjunction withFIGS.33-35. Executable code of application2801may run, at least in part, on an execution environment provided by software stack2800, in at least one embodiment. In at least one embodiment, during execution of application2801, code may be reached that needs to run on a device, as opposed to a host. In such a case, runtime2805may be called to load and launch requisite code on the device, in at least one embodiment. In at least one embodiment, runtime2805may include any technically feasible runtime system that is able to support execution of application2801. In at least one embodiment, runtime2805is implemented as one or more runtime libraries associated with corresponding APIs, which are shown as API(s)2804. 
One or more of such runtime libraries may include, without limitation, functions for memory management, execution control, device management, error handling, and/or synchronization, among other things, in at least one embodiment. In at least one embodiment, memory management functions may include, but are not limited to, functions to allocate, deallocate, and copy device memory, as well as transfer data between host memory and device memory. In at least one embodiment, execution control functions may include, but are not limited to, functions to launch a function (sometimes referred to as a “kernel” when a function is a global function callable from a host) on a device and set attribute values in a buffer maintained by a runtime library for a given function to be executed on a device. Runtime libraries and corresponding API(s)2804may be implemented in any technically feasible manner, in at least one embodiment. In at least one embodiment, one (or any number of) API may expose a low-level set of functions for fine-grained control of a device, while another (or any number of) API may expose a higher-level set of such functions. In at least one embodiment, a high-level runtime API may be built on top of a low-level API. In at least one embodiment, one or more of runtime APIs may be language-specific APIs that are layered on top of a language-independent runtime API. In at least one embodiment, device kernel driver2806is configured to facilitate communication with an underlying device. In at least one embodiment, device kernel driver2806may provide low-level functionalities upon which APIs, such as API(s)2804, and/or other software relies. In at least one embodiment, device kernel driver2806may be configured to compile intermediate representation (“IR”) code into binary code at runtime. For CUDA, device kernel driver2806may compile Parallel Thread Execution (“PTX”) IR code that is not hardware specific into binary code for a specific target device at runtime (with caching of compiled binary code), which is also sometimes referred to as “finalizing” code, in at least one embodiment. Doing so may permit finalized code to run on a target device, which may not have existed when source code was originally compiled into PTX code, in at least one embodiment. Alternatively, in at least one embodiment, device source code may be compiled into binary code offline, without requiring device kernel driver2806to compile IR code at runtime. FIG.29illustrates a CUDA implementation of software stack2800ofFIG.28, in accordance with at least one embodiment. In at least one embodiment, a CUDA software stack2900, on which an application2901may be launched, includes CUDA libraries2903, a CUDA runtime2905, a CUDA driver2907, and a device kernel driver2908. In at least one embodiment, CUDA software stack2900executes on hardware2909, which may include a GPU that supports CUDA and is developed by NVIDIA Corporation of Santa Clara, CA. In at least one embodiment, application2901, CUDA runtime2905, and device kernel driver2908may perform similar functionalities as application2801, runtime2805, and device kernel driver2806, respectively, which are described above in conjunction withFIG.28. In at least one embodiment, CUDA driver2907includes a library (libcuda.so) that implements a CUDA driver API2906. 
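By way of a non-limiting illustration of the memory management and execution control functions that such a runtime library may expose, the following CUDA C++ sketch allocates device memory, copies data between host memory and device memory, launches a kernel, and deallocates the device memory. The kernel name scale is a hypothetical placeholder, and error checking is omitted for brevity.

// Hypothetical sketch: runtime-API memory management and execution control.
#include <cuda_runtime.h>
#include <vector>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1024;
    std::vector<float> host(n, 1.0f);
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));                                      // memory management: allocate
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);  // host -> device copy
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);                            // execution control: kernel launch
    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);  // device -> host copy
    cudaFree(dev);                                                            // memory management: deallocate
    return 0;
}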
Similar to a CUDA runtime API2904implemented by a CUDA runtime library (cudart), CUDA driver API2906may, without limitation, expose functions for memory management, execution control, device management, error handling, synchronization, and/or graphics interoperability, among other things, in at least one embodiment. In at least one embodiment, CUDA driver API2906differs from CUDA runtime API2904in that CUDA runtime API2904simplifies device code management by providing implicit initialization, context (analogous to a process) management, and module (analogous to dynamically loaded libraries) management. In contrast to high-level CUDA runtime API2904, CUDA driver API2906is a low-level API providing more fine-grained control of the device, particularly with respect to contexts and module loading, in at least one embodiment. In at least one embodiment, CUDA driver API2906may expose functions for context management that are not exposed by CUDA runtime API2904. In at least one embodiment, CUDA driver API2906is also language-independent and supports, e.g., OpenCL in addition to CUDA runtime API2904. Further, in at least one embodiment, development libraries, including CUDA runtime2905, may be considered as separate from driver components, including user-mode CUDA driver2907and kernel-mode device driver2908(also sometimes referred to as a “display” driver). In at least one embodiment, CUDA libraries2903may include, but are not limited to, mathematical libraries, deep learning libraries, parallel algorithm libraries, and/or signal/image/video processing libraries, which parallel computing applications such as application2901may utilize. In at least one embodiment, CUDA libraries2903may include mathematical libraries such as a cuBLAS library that is an implementation of Basic Linear Algebra Subprograms (“BLAS”) for performing linear algebra operations, a cuFFT library for computing fast Fourier transforms (“FFTs”), and a cuRAND library for generating random numbers, among others. In at least one embodiment, CUDA libraries2903may include deep learning libraries such as a cuDNN library of primitives for deep neural networks and a TensorRT platform for high-performance deep learning inference, among others. FIG.30illustrates a ROCm implementation of software stack2800ofFIG.28, in accordance with at least one embodiment. In at least one embodiment, a ROCm software stack3000, on which an application3001may be launched, includes a language runtime3003, a system runtime3005, a thunk3007, and a ROCm kernel driver3008. In at least one embodiment, ROCm software stack3000executes on hardware3009, which may include a GPU that supports ROCm and is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, application3001may perform similar functionalities as application2801discussed above in conjunction withFIG.28. In addition, language runtime3003and system runtime3005may perform similar functionalities as runtime2805discussed above in conjunction withFIG.28, in at least one embodiment. In at least one embodiment, language runtime3003and system runtime3005differ in that system runtime3005is a language-independent runtime that implements a ROCr system runtime API3004and makes use of a Heterogeneous System Architecture (“HSA”) Runtime API. 
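By way of a non-limiting illustration of the finer-grained control that a driver-level API such as CUDA driver API2906may provide, the following CUDA C++ sketch performs explicit initialization, context creation, and module loading before launching a kernel. The module file name kernels.ptx and kernel name scale are hypothetical placeholders, and error checking is omitted for brevity.

// Hypothetical sketch: explicit initialization, context, and module management with the driver API.
#include <cuda.h>

int main() {
    cuInit(0);                                        // explicit initialization
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);                        // explicit context management
    CUmodule mod;
    cuModuleLoad(&mod, "kernels.ptx");                // explicit module (PTX) loading
    CUfunction fn;
    cuModuleGetFunction(&fn, mod, "scale");
    CUdeviceptr dptr;
    int n = 1024;
    float factor = 2.0f;
    cuMemAlloc(&dptr, n * sizeof(float));             // data transfer would use cuMemcpyHtoD/cuMemcpyDtoH
    void* args[] = { &dptr, &factor, &n };
    cuLaunchKernel(fn, (n + 255) / 256, 1, 1,         // grid dimensions
                   256, 1, 1,                         // block dimensions
                   0, nullptr, args, nullptr);        // shared memory, stream, kernel arguments
    cuCtxSynchronize();
    cuMemFree(dptr);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}

In contrast, the runtime-level sketch above performs initialization and context management implicitly on the first runtime API call, which reflects the distinction between CUDA runtime API2904and CUDA driver API2906described above.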
HSA runtime API is a thin, user-mode API that exposes interfaces to access and interact with an AMD GPU, including functions for memory management, execution control via architected dispatch of kernels, error handling, system and agent information, and runtime initialization and shutdown, among other things, in at least one embodiment. In contrast to system runtime3005, language runtime3003is an implementation of a language-specific runtime API3002layered on top of ROCr system runtime API3004, in at least one embodiment. In at least one embodiment, language runtime API may include, but is not limited to, a Heterogeneous compute Interface for Portability (“HIP”) language runtime API, a Heterogeneous Compute Compiler (“HCC”) language runtime API, or an OpenCL API, among others. HIP language in particular is an extension of the C++ programming language with functionally similar versions of CUDA mechanisms, and, in at least one embodiment, a HIP language runtime API includes functions that are similar to those of CUDA runtime API2904discussed above in conjunction withFIG.29, such as functions for memory management, execution control, device management, error handling, and synchronization, among other things. In at least one embodiment, thunk (ROCt)3007is an interface3006that can be used to interact with underlying ROCm driver3008. In at least one embodiment, ROCm driver3008is a ROCk driver, which is a combination of an AMDGPU driver and a HSA kernel driver (amdkfd). In at least one embodiment, AMDGPU driver is a device kernel driver for GPUs developed by AMD that performs similar functionalities as device kernel driver2806discussed above in conjunction withFIG.28. In at least one embodiment, HSA kernel driver is a driver permitting different types of processors to share system resources more effectively via hardware features. In at least one embodiment, various libraries (not shown) may be included in ROCm software stack3000above language runtime3003and provide functionality similar to that of CUDA libraries2903, discussed above in conjunction withFIG.29. In at least one embodiment, various libraries may include, but are not limited to, mathematical, deep learning, and/or other libraries such as a hipBLAS library that implements functions similar to those of CUDA cuBLAS, a rocFFT library for computing FFTs that is similar to CUDA cuFFT, among others. FIG.31illustrates an OpenCL implementation of software stack2800ofFIG.28, in accordance with at least one embodiment. In at least one embodiment, an OpenCL software stack3100, on which an application3101may be launched, includes an OpenCL framework3110, an OpenCL runtime3106, and a driver3107. In at least one embodiment, OpenCL software stack3100executes on hardware3108that is not vendor-specific. As OpenCL is supported by devices developed by different vendors, specific OpenCL drivers may be required to interoperate with hardware from such vendors, in at least one embodiment. In at least one embodiment, application3101, OpenCL runtime3106, device kernel driver3107, and hardware3108may perform similar functionalities as application2801, runtime2805, device kernel driver2806, and hardware2807, respectively, that are discussed above in conjunction withFIG.28. In at least one embodiment, application3101further includes an OpenCL kernel3102with code that is to be executed on a device. In at least one embodiment, OpenCL defines a “platform” that allows a host to control devices connected to the host. 
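For illustration only, a minimal sketch of OpenCL host code that uses such a platform, together with a context, a command queue, and online program compilation as described further below, may be written as or related to the following (the kernel name vadd and the buffer sizes are hypothetical, and error checking is omitted for brevity):

    #include <CL/cl.h>

    // Kernel source compiled online by the OpenCL runtime (offline or SPIR-V binaries are also possible).
    static const char* kSource =
        "__kernel void vadd(__global const float* a, __global const float* b, __global float* c) {"
        "  int i = get_global_id(0); c[i] = a[i] + b[i]; }";

    int main()
    {
        cl_platform_id platform;  cl_device_id device;  cl_int err;
        clGetPlatformIDs(1, &platform, nullptr);                               // platform layer API
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

        cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
        cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);   // per-device command queue

        cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
        clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);           // online compilation
        cl_kernel kernel = clCreateKernel(prog, "vadd", &err);

        const size_t n = 256;
        cl_mem a = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), nullptr, &err);
        cl_mem b = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), nullptr, &err);
        cl_mem c = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, &err);
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &a);
        clSetKernelArg(kernel, 1, sizeof(cl_mem), &b);
        clSetKernelArg(kernel, 2, sizeof(cl_mem), &c);
        clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
        clFinish(queue);
        return 0;
    }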
In at least one embodiment, an OpenCL framework provides a platform layer API and a runtime API, shown as platform API3103and runtime API3105. In at least one embodiment, runtime API3105uses contexts to manage execution of kernels on devices. In at least one embodiment, each identified device may be associated with a respective context, which runtime API3105may use to manage command queues, program objects, and kernel objects, share memory objects, among other things, for that device. In at least one embodiment, platform API3103exposes functions that permit device contexts to be used to select and initialize devices, submit work to devices via command queues, and enable data transfer to and from devices, among other things. In addition, OpenCL framework provides various built-in functions (not shown), including math functions, relational functions, and image processing functions, among others, in at least one embodiment. In at least one embodiment, a compiler3104is also included in OpenCL framework3110. Source code may be compiled offline prior to executing an application or online during execution of an application, in at least one embodiment. In contrast to CUDA and ROCm, OpenCL applications in at least one embodiment may be compiled online by compiler3104, which is included to be representative of any number of compilers that may be used to compile source code and/or IR code, such as Standard Portable Intermediate Representation (“SPIR-V”) code, into binary code. Alternatively, in at least one embodiment, OpenCL applications may be compiled offline, prior to execution of such applications. In at least one embodiment, the software stack2800provides an execution environment for the instructions230(seeFIG.2). In such embodiments, the application260(seeFIG.2) is an implementation of the application2801. In at least one embodiment, the CUDA software stack2900provides an execution environment for the instructions230(seeFIG.2). In such embodiments, the application260(seeFIG.2) is an implementation of the application2901and the GPU process driver (implemented by the GPU driver module233illustrated inFIG.2) is an implementation of CUDA driver2907. In at least one embodiment, the ROCm software stack3000provides an execution environment for the instructions230(seeFIG.2). In such embodiments, the application260(seeFIG.2) is an implementation of the application3001. In at least one embodiment, the OpenCL software stack3100provides an execution environment for the instructions230(seeFIG.2). In such embodiments, the application260(seeFIG.2) is an implementation of the application3101. FIG.32illustrates software that is supported by a programming platform, in accordance with at least one embodiment. In at least one embodiment, a programming platform3204is configured to support various programming models3203, middlewares and/or libraries3202, and frameworks3201that an application3200may rely upon. In at least one embodiment, application3200may be an AI/ML application implemented using, for example, a deep learning framework such as MXNet, PyTorch, or TensorFlow, which may rely on libraries such as cuDNN, NVIDIA Collective Communications Library (“NCCL”), and/or NVIDIA Developer Data Loading Library (“DALI”) CUDA libraries to provide accelerated computing on underlying hardware. In at least one embodiment, programming platform3204may be one of a CUDA, ROCm, or OpenCL platform described above in conjunction withFIG.29,FIG.30, andFIG.31, respectively. 
In at least one embodiment, programming platform3204supports multiple programming models3203, which are abstractions of an underlying computing system permitting expressions of algorithms and data structures. Programming models3203may expose features of underlying hardware in order to improve performance, in at least one embodiment. In at least one embodiment, programming models3203may include, but are not limited to, CUDA, HIP, OpenCL, C++ Accelerated Massive Parallelism (“C++ AMP”), Open Multi-Processing (“OpenMP”), Open Accelerators (“OpenACC”), and/or Vulkan Compute. In at least one embodiment, libraries and/or middlewares3202provide implementations of abstractions of programming models3203. In at least one embodiment, such libraries include data and programming code that may be used by computer programs and leveraged during software development. In at least one embodiment, such middlewares include software that provides services to applications beyond those available from programming platform3204. In at least one embodiment, libraries and/or middlewares3202may include, but are not limited to, cuBLAS, cuFFT, cuRAND, and other CUDA libraries, or rocBLAS, rocFFT, rocRAND, and other ROCm libraries. In addition, in at least one embodiment, libraries and/or middlewares3202may include NCCL and ROCm Communication Collectives Library (“RCCL”) libraries providing communication routines for GPUs, a MIOpen library for deep learning acceleration, and/or an Eigen library for linear algebra, matrix and vector operations, geometrical transformations, numerical solvers, and related algorithms. In at least one embodiment, application frameworks3201depend on libraries and/or middlewares3202. In at least one embodiment, each of application frameworks3201is a software framework used to implement a standard structure of application software. Returning to the AI/ML example discussed above, an AI/ML application may be implemented using a framework such as Caffe, Caffe2, TensorFlow, Keras, PyTorch, or MXNet deep learning frameworks, in at least one embodiment. In at least one embodiment, the application260(seeFIG.2) is an implementation of the application3200. FIG.33illustrates compiling code to execute on one of programming platforms ofFIGS.28-31, in accordance with at least one embodiment. In at least one embodiment, a compiler3301receives source code3300that includes both host code as well as device code. In at least one embodiment, compiler3301is configured to convert source code3300into host executable code3302for execution on a host and device executable code3303for execution on a device. In at least one embodiment, source code3300may either be compiled offline prior to execution of an application, or online during execution of an application. In at least one embodiment, source code3300may include code in any programming language supported by compiler3301, such as C++, C, Fortran, etc. In at least one embodiment, source code3300may be included in a single-source file having a mixture of host code and device code, with locations of device code being indicated therein. In at least one embodiment, a single-source file may be a .cu file that includes CUDA code or a .hip.cpp file that includes HIP code. Alternatively, in at least one embodiment, source code3300may include multiple source code files, rather than a single-source file, into which host code and device code are separated. 
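Returning to libraries and/or middlewares3202described above, and for illustration only, a minimal sketch of an application calling into a library such as cuBLAS may be written as or related to the following (the matrix size is hypothetical, and the device buffers are left uninitialized because the sketch only illustrates the calling pattern):

    #include <cublas_v2.h>
    #include <cuda_runtime.h>

    int main()
    {
        const int n = 4;                       // small square matrices, column-major as cuBLAS expects
        float *dA, *dB, *dC;
        cudaMalloc(&dA, n * n * sizeof(float));
        cudaMalloc(&dB, n * n * sizeof(float));
        cudaMalloc(&dC, n * n * sizeof(float));

        cublasHandle_t handle;
        cublasCreate(&handle);                 // library-managed state layered on the programming platform
        const float alpha = 1.0f, beta = 0.0f;
        // C = alpha * A * B + beta * C, performed on the device by the library's kernels
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                    &alpha, dA, n, dB, n, &beta, dC, n);
        cublasDestroy(handle);

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }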
In at least one embodiment, compiler3301is configured to compile source code3300into host executable code3302for execution on a host and device executable code3303for execution on a device. In at least one embodiment, compiler3301performs operations including parsing source code3300into an abstract syntax tree (AST), performing optimizations, and generating executable code. In at least one embodiment in which source code3300includes a single-source file, compiler3301may separate device code from host code in such a single-source file, compile device code and host code into device executable code3303and host executable code3302, respectively, and link device executable code3303and host executable code3302together in a single file, as discussed in greater detail below with respect toFIG.34. In at least one embodiment, host executable code3302and device executable code3303may be in any suitable format, such as binary code and/or IR code. In the case of CUDA, host executable code3302may include native object code and device executable code3303may include code in PTX intermediate representation, in at least one embodiment. In the case of ROCm, both host executable code3302and device executable code3303may include target binary code, in at least one embodiment. FIG.34is a more detailed illustration of compiling code to execute on one of programming platforms ofFIGS.28-31, in accordance with at least one embodiment. In at least one embodiment, a compiler3401is configured to receive source code3400, compile source code3400, and output an executable file3410. In at least one embodiment, source code3400is a single-source file, such as a .cu file, a .hip.cpp file, or a file in another format, that includes both host and device code. In at least one embodiment, compiler3401may be, but is not limited to, an NVIDIA CUDA compiler (“NVCC”) for compiling CUDA code in .cu files, or a HCC compiler for compiling HIP code in .hip.cpp files. In at least one embodiment, compiler3401includes a compiler front end3402, a host compiler3405, a device compiler3406, and a linker3409. In at least one embodiment, compiler front end3402is configured to separate device code3404from host code3403in source code3400. Device code3404is compiled by device compiler3406into device executable code3408, which as described may include binary code or IR code, in at least one embodiment. Separately, host code3403is compiled by host compiler3405into host executable code3407, in at least one embodiment. For NVCC, host compiler3405may be, but is not limited to, a general purpose C/C++ compiler that outputs native object code, while device compiler3406may be, but is not limited to, a Low Level Virtual Machine (“LLVM”)-based compiler that forks a LLVM compiler infrastructure and outputs PTX code or binary code, in at least one embodiment. For HCC, both host compiler3405and device compiler3406may be, but are not limited to, LLVM-based compilers that output target binary code, in at least one embodiment. Subsequent to compiling source code3400into host executable code3407and device executable code3408, linker3409links host and device executable code3407and3408together in executable file3410, in at least one embodiment. In at least one embodiment, native object code for a host and PTX or binary code for a device may be linked together in an Executable and Linkable Format (“ELF”) file, which is a container format used to store object code. 
FIG.35illustrates translating source code prior to compiling source code, in accordance with at least one embodiment. In at least one embodiment, source code3500is passed through a translation tool3501, which translates source code3500into translated source code3502. In at least one embodiment, a compiler3503is used to compile translated source code3502into host executable code3504and device executable code3505in a process that is similar to compilation of source code3300by compiler3301into host executable code3302and device executable code3303, as discussed above in conjunction withFIG.33. In at least one embodiment, a translation performed by translation tool3501is used to port source code3500for execution in a different environment than that in which it was originally intended to run. In at least one embodiment, translation tool3501may include, but is not limited to, a HIP translator that is used to “hipify” CUDA code intended for a CUDA platform into HIP code that can be compiled and executed on a ROCm platform. In at least one embodiment, translation of source code3500may include parsing source code3500and converting calls to API(s) provided by one programming model (e.g., CUDA) into corresponding calls to API(s) provided by another programming model (e.g., HIP), as discussed in greater detail below in conjunction withFIGS.36A-37. Returning to the example of hipifying CUDA code, calls to CUDA runtime API, CUDA driver API, and/or CUDA libraries may be converted to corresponding HIP API calls, in at least one embodiment. In at least one embodiment, automated translations performed by translation tool3501may sometimes be incomplete, requiring additional, manual effort to fully port source code3500. Configuring GPUs for General-Purpose Computing The following figures set forth, without limitation, exemplary architectures for compiling and executing compute source code, in accordance with at least one embodiment. FIG.36Aillustrates a system36A00configured to compile and execute CUDA source code3610using different types of processing units, in accordance with at least one embodiment. In at least one embodiment, system36A00includes, without limitation, CUDA source code3610, a CUDA compiler3650, host executable code3670(1), host executable code3670(2), CUDA device executable code3684, a CPU3690, a CUDA-enabled GPU3694, a GPU3692, a CUDA to HIP translation tool3620, HIP source code3630, a HIP compiler driver3640, an HCC3660, and HCC device executable code3682. In at least one embodiment, CUDA source code3610is a collection of human-readable code in a CUDA programming language. In at least one embodiment, CUDA code is human-readable code in a CUDA programming language. In at least one embodiment, a CUDA programming language is an extension of the C++ programming language that includes, without limitation, mechanisms to define device code and distinguish between device code and host code. In at least one embodiment, device code is source code that, after compilation, is executable in parallel on a device. In at least one embodiment, a device may be a processor that is optimized for parallel instruction processing, such as CUDA-enabled GPU3694, GPU3692, or another GPGPU, etc. In at least one embodiment, host code is source code that, after compilation, is executable on a host. In at least one embodiment, a host is a processor that is optimized for sequential instruction processing, such as CPU3690. 
In at least one embodiment, CUDA source code3610includes, without limitation, any number (including zero) of global functions3612, any number (including zero) of device functions3614, any number (including zero) of host functions3616, and any number (including zero) of host/device functions3618. In at least one embodiment, global functions3612, device functions3614, host functions3616, and host/device functions3618may be mixed in CUDA source code3610. In at least one embodiment, each of global functions3612is executable on a device and callable from a host. In at least one embodiment, one or more of global functions3612may therefore act as entry points to a device. In at least one embodiment, each of global functions3612is a kernel. In at least one embodiment and in a technique known as dynamic parallelism, one or more of global functions3612defines a kernel that is executable on a device and callable from such a device. In at least one embodiment, a kernel is executed N (where N is any positive integer) times in parallel by N different threads on a device during execution. In at least one embodiment, each of device functions3614is executed on a device and callable from such a device only. In at least one embodiment, each of host functions3616is executed on a host and callable from such a host only. In at least one embodiment, each of host/device functions3618defines both a host version of a function that is executable on a host and callable from such a host only and a device version of the function that is executable on a device and callable from such a device only. In at least one embodiment, CUDA source code3610may also include, without limitation, any number of calls to any number of functions that are defined via a CUDA runtime API3602. In at least one embodiment, CUDA runtime API3602may include, without limitation, any number of functions that execute on a host to allocate and deallocate device memory, transfer data between host memory and device memory, manage systems with multiple devices, etc. In at least one embodiment, CUDA source code3610may also include any number of calls to any number of functions that are specified in any number of other CUDA APIs. In at least one embodiment, a CUDA API may be any API that is designed for use by CUDA code. In at least one embodiment, CUDA APIs include, without limitation, CUDA runtime API3602, a CUDA driver API, APIs for any number of CUDA libraries, etc. In at least one embodiment and relative to CUDA runtime API3602, a CUDA driver API is a lower-level API but provides finer-grained control of a device. In at least one embodiment, examples of CUDA libraries include, without limitation, cuBLAS, cuFFT, cuRAND, cuDNN, etc. In at least one embodiment, CUDA compiler3650compiles input CUDA code (e.g., CUDA source code3610) to generate host executable code3670(1) and CUDA device executable code3684. In at least one embodiment, CUDA compiler3650is NVCC. In at least one embodiment, host executable code3670(1) is a compiled version of host code included in input source code that is executable on CPU3690. In at least one embodiment, CPU3690may be any processor that is optimized for sequential instruction processing. In at least one embodiment, CUDA device executable code3684is a compiled version of device code included in input source code that is executable on CUDA-enabled GPU3694. In at least one embodiment, CUDA device executable code3684includes, without limitation, binary code. 
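Returning to global functions3612, device functions3614, host functions3616, and host/device functions3618described above, and for illustration only, a minimal sketch of how these four kinds of functions may be marked in CUDA code may be written as or related to the following (the function names are hypothetical):

    __device__ float square(float x)            // device function3614: executed on and callable from a device only
    {
        return x * x;
    }

    __host__ float square_on_host(float x)      // host function3616: executed on and callable from a host only
    {
        return x * x;
    }

    __host__ __device__ float twice(float x)    // host/device function3618: compiled into a host and a device version
    {
        return 2.0f * x;
    }

    __global__ void Kernel(float* data)         // global function3612: executable on a device, callable from a host
    {
        data[threadIdx.x] = twice(square(data[threadIdx.x]));
    }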
In at least one embodiment, CUDA device executable code3684includes, without limitation, IR code, such as PTX code, that is further compiled at runtime into binary code for a specific target device (e.g., CUDA-enabled GPU3694) by a device driver. In at least one embodiment, CUDA-enabled GPU3694may be any processor that is optimized for parallel instruction processing and that supports CUDA. In at least one embodiment, CUDA-enabled GPU3694is developed by NVIDIA Corporation of Santa Clara, CA. In at least one embodiment, CUDA to HIP translation tool3620is configured to translate CUDA source code3610to functionally similar HIP source code3630. In at least one embodiment, HIP source code3630is a collection of human-readable code in a HIP programming language. In at least one embodiment, HIP code is human-readable code in a HIP programming language. In at least one embodiment, a HIP programming language is an extension of the C++ programming language that includes, without limitation, functionally similar versions of CUDA mechanisms to define device code and distinguish between device code and host code. In at least one embodiment, a HIP programming language may include a subset of functionality of a CUDA programming language. In at least one embodiment, for example, a HIP programming language includes, without limitation, mechanism(s) to define global functions3612, but such a HIP programming language may lack support for dynamic parallelism and therefore global functions3612defined in HIP code may be callable from a host only. In at least one embodiment, HIP source code3630includes, without limitation, any number (including zero) of global functions3612, any number (including zero) of device functions3614, any number (including zero) of host functions3616, and any number (including zero) of host/device functions3618. In at least one embodiment, HIP source code3630may also include any number of calls to any number of functions that are specified in a HIP runtime API3632. In at least one embodiment, HIP runtime API3632includes, without limitation, functionally similar versions of a subset of functions included in CUDA runtime API3602. In at least one embodiment, HIP source code3630may also include any number of calls to any number of functions that are specified in any number of other HIP APIs. In at least one embodiment, a HIP API may be any API that is designed for use by HIP code and/or ROCm. In at least one embodiment, HIP APIs include, without limitation, HIP runtime API3632, a HIP driver API, APIs for any number of HIP libraries, APIs for any number of ROCm libraries, etc. In at least one embodiment, CUDA to HIP translation tool3620converts each kernel call in CUDA code from a CUDA syntax to a HIP syntax and converts any number of other CUDA calls in CUDA code to any number of other functionally similar HIP calls. In at least one embodiment, a CUDA call is a call to a function specified in a CUDA API, and a HIP call is a call to a function specified in a HIP API. In at least one embodiment, CUDA to HIP translation tool3620converts any number of calls to functions specified in CUDA runtime API3602to any number of calls to functions specified in HIP runtime API3632. In at least one embodiment, CUDA to HIP translation tool3620is a tool known as hipify-perl that executes a text-based translation process. 
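For illustration only, a minimal sketch of the kind of source-to-source rewriting that such a translation performs may be written as or related to the following (the kernel name Scale is hypothetical, and the output shown is representative of hand-written HIP rather than the verbatim output of any particular tool; the CUDA forms that would be replaced are shown as comments):

    #include <hip/hip_runtime.h>

    __global__ void Scale(float* data, float factor) { data[threadIdx.x] *= factor; }

    void run_scale(float* host_buf, int n)
    {
        float* dev_buf = nullptr;
        hipMalloc(&dev_buf, n * sizeof(float));                 // was: cudaMalloc(&dev_buf, n * sizeof(float));
        hipMemcpy(dev_buf, host_buf, n * sizeof(float),
                  hipMemcpyHostToDevice);                       // was: cudaMemcpy(..., cudaMemcpyHostToDevice);
        hipLaunchKernelGGL(Scale, dim3(1), dim3(n),
                           0, 0, dev_buf, 2.0f);                // was: Scale<<<1, n>>>(dev_buf, 2.0f);
        hipMemcpy(host_buf, dev_buf, n * sizeof(float),
                  hipMemcpyDeviceToHost);                       // was: cudaMemcpy(..., cudaMemcpyDeviceToHost);
        hipFree(dev_buf);                                       // was: cudaFree(dev_buf);
    }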
In at least one embodiment, CUDA to HIP translation tool3620is a tool known as hipify-clang that, relative to hipify-perl, executes a more complex and more robust translation process that involves parsing CUDA code using clang (a compiler front-end) and then translating resulting symbols. In at least one embodiment, properly converting CUDA code to HIP code may require modifications (e.g., manual edits) in addition to those performed by CUDA to HIP translation tool3620. In at least one embodiment, HIP compiler driver3640is a front end that determines a target device3646and then configures a compiler that is compatible with target device3646to compile HIP source code3630. In at least one embodiment, target device3646is a processor that is optimized for parallel instruction processing. In at least one embodiment, HIP compiler driver3640may determine target device3646in any technically feasible fashion. In at least one embodiment, if target device3646is compatible with CUDA (e.g., CUDA-enabled GPU3694), then HIP compiler driver3640generates a HIP/NVCC compilation command3642. In at least one embodiment and as described in greater detail in conjunction withFIG.36B, HIP/NVCC compilation command3642configures CUDA compiler3650to compile HIP source code3630using, without limitation, a HIP to CUDA translation header and a CUDA runtime library. In at least one embodiment and in response to HIP/NVCC compilation command3642, CUDA compiler3650generates host executable code3670(1) and CUDA device executable code3684. In at least one embodiment, if target device3646is not compatible with CUDA, then HIP compiler driver3640generates a HIP/HCC compilation command3644. In at least one embodiment and as described in greater detail in conjunction withFIG.36C, HIP/HCC compilation command3644configures HCC3660to compile HIP source code3630using, without limitation, an HCC header and a HIP/HCC runtime library. In at least one embodiment and in response to HIP/HCC compilation command3644, HCC3660generates host executable code3670(2) and HCC device executable code3682. In at least one embodiment, HCC device executable code3682is a compiled version of device code included in HIP source code3630that is executable on GPU3692. In at least one embodiment, GPU3692may be any processor that is optimized for parallel instruction processing, is not compatible with CUDA, and is compatible with HCC. In at least one embodiment, GPU3692is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, GPU3692is a non-CUDA-enabled GPU. For explanatory purposes only, three different flows that may be implemented in at least one embodiment to compile CUDA source code3610for execution on CPU3690and different devices are depicted inFIG.36A. In at least one embodiment, a direct CUDA flow compiles CUDA source code3610for execution on CPU3690and CUDA-enabled GPU3694without translating CUDA source code3610to HIP source code3630. In at least one embodiment, an indirect CUDA flow translates CUDA source code3610to HIP source code3630and then compiles HIP source code3630for execution on CPU3690and CUDA-enabled GPU3694. In at least one embodiment, a CUDA/HCC flow translates CUDA source code3610to HIP source code3630and then compiles HIP source code3630for execution on CPU3690and GPU3692. A direct CUDA flow that may be implemented in at least one embodiment is depicted via dashed lines and a series of bubbles annotated A1-A3. 
In at least one embodiment and as depicted with bubble annotated A1, CUDA compiler3650receives CUDA source code3610and a CUDA compile command3648that configures CUDA compiler3650to compile CUDA source code3610. In at least one embodiment, CUDA source code3610used in a direct CUDA flow is written in a CUDA programming language that is based on a programming language other than C++ (e.g., C, Fortran, Python, Java, etc.). In at least one embodiment and in response to CUDA compile command3648, CUDA compiler3650generates host executable code3670(1) and CUDA device executable code3684(depicted with bubble annotated A2). In at least one embodiment and as depicted with bubble annotated A3, host executable code3670(1) and CUDA device executable code3684may be executed on, respectively, CPU3690and CUDA-enabled GPU3694. In at least one embodiment, CUDA device executable code3684includes, without limitation, binary code. In at least one embodiment, CUDA device executable code3684includes, without limitation, PTX code and is further compiled into binary code for a specific target device at runtime. An indirect CUDA flow that may be implemented in at least one embodiment is depicted via dotted lines and a series of bubbles annotated B1-B6. In at least one embodiment and as depicted with bubble annotated B1, CUDA to HIP translation tool3620receives CUDA source code3610. In at least one embodiment and as depicted with bubble annotated B2, CUDA to HIP translation tool3620translates CUDA source code3610to HIP source code3630. In at least one embodiment and as depicted with bubble annotated B3, HIP compiler driver3640receives HIP source code3630and determines that target device3646is CUDA-enabled. In at least one embodiment and as depicted with bubble annotated B4, HIP compiler driver3640generates HIP/NVCC compilation command3642and transmits both HIP/NVCC compilation command3642and HIP source code3630to CUDA compiler3650. In at least one embodiment and as described in greater detail in conjunction withFIG.36B, HIP/NVCC compilation command3642configures CUDA compiler3650to compile HIP source code3630using, without limitation, a HIP to CUDA translation header and a CUDA runtime library. In at least one embodiment and in response to HIP/NVCC compilation command3642, CUDA compiler3650generates host executable code3670(1) and CUDA device executable code3684(depicted with bubble annotated B5). In at least one embodiment and as depicted with bubble annotated B6, host executable code3670(1) and CUDA device executable code3684may be executed on, respectively, CPU3690and CUDA-enabled GPU3694. In at least one embodiment, CUDA device executable code3684includes, without limitation, binary code. In at least one embodiment, CUDA device executable code3684includes, without limitation, PTX code and is further compiled into binary code for a specific target device at runtime. A CUDA/HCC flow that may be implemented in at least one embodiment is depicted via solid lines and a series of bubbles annotated C1-C6. In at least one embodiment and as depicted with bubble annotated C1, CUDA to HIP translation tool3620receives CUDA source code3610. In at least one embodiment and as depicted with bubble annotated C2, CUDA to HIP translation tool3620translates CUDA source code3610to HIP source code3630. In at least one embodiment and as depicted with bubble annotated C3, HIP compiler driver3640receives HIP source code3630and determines that target device3646is not CUDA-enabled. 
In at least one embodiment, HIP compiler driver3640generates HIP/HCC compilation command3644and transmits both HIP/HCC compilation command3644and HIP source code3630to HCC3660(depicted with bubble annotated C4). In at least one embodiment and as described in greater detail in conjunction withFIG.36C, HIP/HCC compilation command3644configures HCC3660to compile HIP source code3630using, without limitation, an HCC header and a HIP/HCC runtime library. In at least one embodiment and in response to HIP/HCC compilation command3644, HCC3660generates host executable code3670(2) and HCC device executable code3682(depicted with bubble annotated C5). In at least one embodiment and as depicted with bubble annotated C6, host executable code3670(2) and HCC device executable code3682may be executed on, respectively, CPU3690and GPU3692. In at least one embodiment, after CUDA source code3610is translated to HIP source code3630, HIP compiler driver3640may subsequently be used to generate executable code for either CUDA-enabled GPU3694or GPU3692without re-executing CUDA to HIP translation tool3620. In at least one embodiment, CUDA to HIP translation tool3620translates CUDA source code3610to HIP source code3630that is then stored in memory. In at least one embodiment, HIP compiler driver3640then configures HCC3660to generate host executable code3670(2) and HCC device executable code3682based on HIP source code3630. In at least one embodiment, HIP compiler driver3640subsequently configures CUDA compiler3650to generate host executable code3670(1) and CUDA device executable code3684based on stored HIP source code3630. In at least one embodiment, the system36A00may be used to create one or more portions of the application260(seeFIG.2). FIG.36Billustrates a system3604configured to compile and execute CUDA source code3610ofFIG.36Ausing CPU3690and CUDA-enabled GPU3694, in accordance with at least one embodiment. In at least one embodiment, system3604includes, without limitation, CUDA source code3610, CUDA to HIP translation tool3620, HIP source code3630, HIP compiler driver3640, CUDA compiler3650, host executable code3670(1), CUDA device executable code3684, CPU3690, and CUDA-enabled GPU3694. In at least one embodiment and as described previously herein in conjunction withFIG.36A, CUDA source code3610includes, without limitation, any number (including zero) of global functions3612, any number (including zero) of device functions3614, any number (including zero) of host functions3616, and any number (including zero) of host/device functions3618. In at least one embodiment, CUDA source code3610also includes, without limitation, any number of calls to any number of functions that are specified in any number of CUDA APIs. In at least one embodiment, CUDA to HIP translation tool3620translates CUDA source code3610to HIP source code3630. In at least one embodiment, CUDA to HIP translation tool3620converts each kernel call in CUDA source code3610from a CUDA syntax to a HIP syntax and converts any number of other CUDA calls in CUDA source code3610to any number of other functionally similar HIP calls. In at least one embodiment, HIP compiler driver3640determines that target device3646is CUDA-enabled and generates HIP/NVCC compilation command3642. In at least one embodiment, HIP compiler driver3640then configures CUDA compiler3650via HIP/NVCC compilation command3642to compile HIP source code3630. 
In at least one embodiment, HIP compiler driver3640provides access to a HIP to CUDA translation header3652as part of configuring CUDA compiler3650. In at least one embodiment, HIP to CUDA translation header3652translates any number of mechanisms (e.g., functions) specified in any number of HIP APIs to any number of mechanisms specified in any number of CUDA APIs. In at least one embodiment, CUDA compiler3650uses HIP to CUDA translation header3652in conjunction with a CUDA runtime library3654corresponding to CUDA runtime API3602to generate host executable code3670(1) and CUDA device executable code3684. In at least one embodiment, host executable code3670(1) and CUDA device executable code3684may then be executed on, respectively, CPU3690and CUDA-enabled GPU3694. In at least one embodiment, CUDA device executable code3684includes, without limitation, binary code. In at least one embodiment, CUDA device executable code3684includes, without limitation, PTX code and is further compiled into binary code for a specific target device at runtime. FIG.36Cillustrates a system3606configured to compile and execute CUDA source code3610ofFIG.36Ausing CPU3690and non-CUDA-enabled GPU3692, in accordance with at least one embodiment. In at least one embodiment, system3606includes, without limitation, CUDA source code3610, CUDA to HIP translation tool3620, HIP source code3630, HIP compiler driver3640, HCC3660, host executable code3670(2), HCC device executable code3682, CPU3690, and GPU3692. In at least one embodiment and as described previously herein in conjunction withFIG.36A, CUDA source code3610includes, without limitation, any number (including zero) of global functions3612, any number (including zero) of device functions3614, any number (including zero) of host functions3616, and any number (including zero) of host/device functions3618. In at least one embodiment, CUDA source code3610also includes, without limitation, any number of calls to any number of functions that are specified in any number of CUDA APIs. In at least one embodiment, CUDA to HIP translation tool3620translates CUDA source code3610to HIP source code3630. In at least one embodiment, CUDA to HIP translation tool3620converts each kernel call in CUDA source code3610from a CUDA syntax to a HIP syntax and converts any number of other CUDA calls in source code3610to any number of other functionally similar HIP calls. In at least one embodiment, HIP compiler driver3640subsequently determines that target device3646is not CUDA-enabled and generates HIP/HCC compilation command3644. In at least one embodiment, HIP compiler driver3640then configures HCC3660to execute HIP/HCC compilation command3644to compile HIP source code3630. In at least one embodiment, HIP/HCC compilation command3644configures HCC3660to use, without limitation, a HIP/HCC runtime library3658and an HCC header3656to generate host executable code3670(2) and HCC device executable code3682. In at least one embodiment, HIP/HCC runtime library3658corresponds to HIP runtime API3632. In at least one embodiment, HCC header3656includes, without limitation, any number and type of interoperability mechanisms for HIP and HCC. In at least one embodiment, host executable code3670(2) and HCC device executable code3682may be executed on, respectively, CPU3690and GPU3692. In at least one embodiment, the system3606may be used to create one or more portions of the application260(seeFIG.2). 
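For illustration only, a hypothetical, simplified sketch of the kind of mapping that a HIP to CUDA translation header3652described above in conjunction withFIG.36B may perform on a CUDA-compatible target device may be written as or related to the following (these inline wrappers and the macro are illustrative assumptions and are not the actual contents of any shipped header):

    // Hypothetical, simplified illustration of HIP-to-CUDA mapping for a CUDA-compatible target device.
    #include <cuda_runtime.h>

    typedef cudaError_t  hipError_t;                 // HIP types forwarded to CUDA types
    typedef cudaStream_t hipStream_t;

    inline hipError_t hipMalloc(void** ptr, size_t size) { return cudaMalloc(ptr, size); }
    inline hipError_t hipFree(void* ptr)                 { return cudaFree(ptr); }
    inline hipError_t hipDeviceSynchronize()             { return cudaDeviceSynchronize(); }

    // HIP launch syntax forwarded to CUDA execution configuration syntax.
    #define hipLaunchKernelGGL(kernel, grid, block, sharedMem, stream, ...) \
        kernel<<<(grid), (block), (sharedMem), (stream)>>>(__VA_ARGS__)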
FIG.37illustrates an exemplary kernel translated by CUDA to HIP translation tool3620ofFIG.36C, in accordance with at least one embodiment. In at least one embodiment, CUDA source code3610partitions an overall problem that a given kernel is designed to solve into relatively coarse sub-problems that can independently be solved using thread blocks. In at least one embodiment, each thread block includes, without limitation, any number of threads. In at least one embodiment, each sub-problem is partitioned into relatively fine pieces that can be solved cooperatively in parallel by threads within a thread block. In at least one embodiment, threads within a thread block can cooperate by sharing data through shared memory and by synchronizing execution to coordinate memory accesses. In at least one embodiment, CUDA source code3610organizes thread blocks associated with a given kernel into a one-dimensional, a two-dimensional, or a three-dimensional grid of thread blocks. In at least one embodiment, each thread block includes, without limitation, any number of threads, and a grid includes, without limitation, any number of thread blocks. In at least one embodiment, a kernel is a function in device code that is defined using a “__global__” declaration specifier. In at least one embodiment, the dimension of a grid that executes a kernel for a given kernel call and associated streams are specified using a CUDA kernel launch syntax3710. In at least one embodiment, CUDA kernel launch syntax3710is specified as “KernelName<<<GridSize, BlockSize, SharedMemorySize, Stream>>>(KernelArguments);”. In at least one embodiment, an execution configuration syntax is a “<<< . . . >>>” construct that is inserted between a kernel name (“KernelName”) and a parenthesized list of kernel arguments (“KernelArguments”). In at least one embodiment, CUDA kernel launch syntax3710includes, without limitation, a CUDA launch function syntax instead of an execution configuration syntax. In at least one embodiment, “GridSize” is of a type dim3 and specifies the dimension and size of a grid. In at least one embodiment, type dim3 is a CUDA-defined structure that includes, without limitation, unsigned integers x, y, and z. In at least one embodiment, if z is not specified, then z defaults to one. In at least one embodiment, if y is not specified, then y defaults to one. In at least one embodiment, the number of thread blocks in a grid is equal to the product of GridSize.x, GridSize.y, and GridSize.z. In at least one embodiment, “BlockSize” is of type dim3 and specifies the dimension and size of each thread block. In at least one embodiment, the number of threads per thread block is equal to the product of BlockSize.x, BlockSize.y, and BlockSize.z. In at least one embodiment, each thread that executes a kernel is given a unique thread ID that is accessible within the kernel through a built-in variable (e.g., “threadIdx”). In at least one embodiment and with respect to CUDA kernel launch syntax3710, “SharedMemorySize” is an optional argument that specifies a number of bytes in a shared memory that is dynamically allocated per thread block for a given kernel call in addition to statically allocated memory. In at least one embodiment and with respect to CUDA kernel launch syntax3710, SharedMemorySize defaults to zero. In at least one embodiment and with respect to CUDA kernel launch syntax3710, “Stream” is an optional argument that specifies an associated stream and defaults to zero to specify a default stream. 
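For illustration only, a minimal sketch of an execution configuration with explicit GridSize, BlockSize, SharedMemorySize, and Stream arguments may be written as or related to the following (the kernel name ScaleRows and the sizes are hypothetical and are not taken from FIG.37):

    #include <cuda_runtime.h>

    __global__ void ScaleRows(float* data, float factor)
    {
        extern __shared__ float tile[];                       // dynamically allocated SharedMemorySize bytes
        int i = blockIdx.x * blockDim.x + threadIdx.x;        // unique thread ID built from threadIdx and blockIdx
        tile[threadIdx.x] = data[i] * factor;
        __syncthreads();
        data[i] = tile[threadIdx.x];
    }

    int main()
    {
        const int n = 1024;
        float* dev_buf = nullptr;
        cudaMalloc(&dev_buf, n * sizeof(float));

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        dim3 gridSize(n / 256);                               // GridSize: unspecified y and z default to one
        dim3 blockSize(256);                                  // BlockSize: 256 threads per thread block
        size_t sharedMemorySize = 256 * sizeof(float);        // SharedMemorySize in bytes, per thread block
        ScaleRows<<<gridSize, blockSize, sharedMemorySize, stream>>>(dev_buf, 2.0f);

        cudaStreamSynchronize(stream);
        cudaStreamDestroy(stream);
        cudaFree(dev_buf);
        return 0;
    }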
In at least one embodiment, a stream is a sequence of commands (possibly issued by different host threads) that execute in order. In at least one embodiment, different streams may execute commands out of order with respect to one another or concurrently. In at least one embodiment, CUDA source code3610includes, without limitation, a kernel definition for an exemplary kernel “MatAdd” and a main function. In at least one embodiment, main function is host code that executes on a host and includes, without limitation, a kernel call that causes kernel MatAdd to execute on a device. In at least one embodiment and as shown, kernel MatAdd adds two matrices A and B of size N×N, where N is a positive integer, and stores the result in a matrix C. In at least one embodiment, main function defines a threadsPerBlock variable as 16 by 16 and a numBlocks variable as N/16 by N/16. In at least one embodiment, main function then specifies kernel call “MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);”. In at least one embodiment and as per CUDA kernel launch syntax3710, kernel MatAdd is executed using a grid of thread blocks having a dimension N/16 by N/16, where each thread block has a dimension of 16 by 16. In at least one embodiment, each thread block includes 256 threads, a grid is created with enough blocks to have one thread per matrix element, and each thread in such a grid executes kernel MatAdd to perform one pair-wise addition. In at least one embodiment, while translating CUDA source code3610to HIP source code3630, CUDA to HIP translation tool3620translates each kernel call in CUDA source code3610from CUDA kernel launch syntax3710to a HIP kernel launch syntax3720and converts any number of other CUDA calls in source code3610to any number of other functionally similar HIP calls. In at least one embodiment, HIP kernel launch syntax3720is specified as “hipLaunchKernelGGL(KernelName, GridSize, BlockSize, SharedMemorySize, Stream, KernelArguments);”. In at least one embodiment, each of KernelName, GridSize, BlockSize, SharedMemorySize, Stream, and KernelArguments has the same meaning in HIP kernel launch syntax3720as in CUDA kernel launch syntax3710(described previously herein). In at least one embodiment, arguments SharedMemorySize and Stream are required in HIP kernel launch syntax3720and are optional in CUDA kernel launch syntax3710. In at least one embodiment, a portion of HIP source code3630depicted inFIG.37is identical to a portion of CUDA source code3610depicted inFIG.37except for a kernel call that causes kernel MatAdd to execute on a device. In at least one embodiment, kernel MatAdd is defined in HIP source code3630with the same “__global__” declaration specifier with which kernel MatAdd is defined in CUDA source code3610. In at least one embodiment, a kernel call in HIP source code3630is “hipLaunchKernelGGL(MatAdd, numBlocks, threadsPerBlock, 0, 0, A, B, C);”, while a corresponding kernel call in CUDA source code3610is “MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);”. FIG.38illustrates non-CUDA-enabled GPU3692ofFIG.36Cin greater detail, in accordance with at least one embodiment. In at least one embodiment, GPU3692is developed by AMD Corporation of Santa Clara, CA. In at least one embodiment, GPU3692can be configured to perform compute operations in a highly-parallel fashion. In at least one embodiment, GPU3692is configured to execute graphics pipeline operations such as draw commands, pixel operations, geometric computations, and other operations associated with rendering an image to a display. 
In at least one embodiment, GPU3692is configured to execute operations unrelated to graphics. In at least one embodiment, GPU3692is configured to execute both operations related to graphics and operations unrelated to graphics. In at least one embodiment, GPU3692can be configured to execute device code included in HIP source code3630. In at least one embodiment, GPU3692includes, without limitation, any number of programmable processing units3820, a command processor3810, an L2 cache3822, memory controllers3870, DMA engines3880(1), system memory controllers3882, DMA engines3880(2), and GPU controllers3884. In at least one embodiment, each programmable processing unit3820includes, without limitation, a workload manager3830and any number of compute units3840. In at least one embodiment, command processor3810reads commands from one or more command queues (not shown) and distributes commands to workload managers3830. In at least one embodiment, for each programmable processing unit3820, associated workload manager3830distributes work to compute units3840included in programmable processing unit3820. In at least one embodiment, each compute unit3840may execute any number of thread blocks, but each thread block executes on a single compute unit3840. In at least one embodiment, a workgroup is a thread block. In at least one embodiment, each compute unit3840includes, without limitation, any number of SIMD units3850and a shared memory3860. In at least one embodiment, each SIMD unit3850implements a SIMD architecture and is configured to perform operations in parallel. In at least one embodiment, each SIMD unit3850includes, without limitation, a vector ALU3852and a vector register file3854. In at least one embodiment, each SIMD unit3850executes a different warp. In at least one embodiment, a warp is a group of threads (e.g., 16 threads), where each thread in the warp belongs to a single thread block and is configured to process a different set of data based on a single set of instructions. In at least one embodiment, predication can be used to disable one or more threads in a warp. In at least one embodiment, a lane is a thread. In at least one embodiment, a work item is a thread. In at least one embodiment, a wavefront is a warp. In at least one embodiment, different wavefronts in a thread block may synchronize together and communicate via shared memory3860. In at least one embodiment, programmable processing units3820are referred to as “shader engines.” In at least one embodiment, each programmable processing unit3820includes, without limitation, any amount of dedicated graphics hardware in addition to compute units3840. In at least one embodiment, each programmable processing unit3820includes, without limitation, any number (including zero) of geometry processors, any number (including zero) of rasterizers, any number (including zero) of render back ends, workload manager3830, and any number of compute units3840. In at least one embodiment, compute units3840share L2 cache3822. In at least one embodiment, L2 cache3822is partitioned. In at least one embodiment, a GPU memory3890is accessible by all compute units3840in GPU3692. In at least one embodiment, memory controllers3870and system memory controllers3882facilitate data transfers between GPU3692and a host, and DMA engines3880(1) enable asynchronous memory transfers between GPU3692and such a host. 
In at least one embodiment, memory controllers3870and GPU controllers3884facilitate data transfers between GPU3692and other GPUs3692, and DMA engines3880(2) enable asynchronous memory transfers between GPU3692and other GPUs3692. In at least one embodiment, GPU3692includes, without limitation, any amount and type of system interconnect that facilitates data and control transmissions across any number and type of directly or indirectly linked components that may be internal or external to GPU3692. In at least one embodiment, GPU3692includes, without limitation, any number and type of I/O interfaces (e.g., PCIe) that are coupled to any number and type of peripheral devices. In at least one embodiment, GPU3692may include, without limitation, any number (including zero) of display engines and any number (including zero) of multimedia engines. In at least one embodiment, GPU3692implements a memory subsystem that includes, without limitation, any amount and type of memory controllers (e.g., memory controllers3870and system memory controllers3882) and memory devices (e.g., shared memories3860) that may be dedicated to one component or shared among multiple components. In at least one embodiment, GPU3692implements a cache subsystem that includes, without limitation, one or more cache memories (e.g., L2 cache3822) that may each be private to or shared between any number of components (e.g., SIMD units3850, compute units3840, and programmable processing units3820). FIG.39illustrates how threads of an exemplary CUDA grid3920are mapped to different compute units3840ofFIG.38, in accordance with at least one embodiment. In at least one embodiment and for explanatory purposes only, grid3920has a GridSize of BX by BY by 1 and a BlockSize of TX by TY by 1. In at least one embodiment, grid3920therefore includes, without limitation, (BX*BY) thread blocks3930and each thread block3930includes, without limitation, (TX*TY) threads3940. Threads3940are depicted inFIG.39as squiggly arrows. In at least one embodiment, grid3920is mapped to programmable processing unit3820(1) that includes, without limitation, compute units3840(1)-3840(C). In at least one embodiment and as shown, (BJ*BY) thread blocks3930are mapped to compute unit3840(1), and the remaining thread blocks3930are mapped to compute unit3840(2). In at least one embodiment, each thread block3930may include, without limitation, any number of warps, and each warp is mapped to a different SIMD unit3850ofFIG.38. In at least one embodiment, warps in a given thread block3930may synchronize together and communicate through shared memory3860included in associated compute unit3840. For example and in at least one embodiment, warps in thread block3930(BJ,1) can synchronize together and communicate through shared memory3860(1). For example and in at least one embodiment, warps in thread block3930(BJ+1,1) can synchronize together and communicate through shared memory3860(2). FIG.40illustrates how to migrate existing CUDA code to Data Parallel C++ code, in accordance with at least one embodiment. Data Parallel C++ (DPC++) may refer to an open, standards-based alternative to single-architecture proprietary languages that allows developers to reuse code across hardware targets (CPUs and accelerators such as GPUs and FPGAs) and also perform custom tuning for a specific accelerator. DPC++ uses similar and/or identical C and C++ constructs in accordance with ISO C++ with which developers may be familiar. 
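Returning to the mapping of grid3920described above in conjunction withFIG.39, and for illustration only, a minimal sketch of a two-dimensional grid of two-dimensional thread blocks may be written as or related to the following (the values chosen for BX, BY, TX, and TY are hypothetical and are not taken from the figure):

    #include <cuda_runtime.h>

    __global__ void FillCoordinates(int* out, int width)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;   // column index within the grid
        int y = blockIdx.y * blockDim.y + threadIdx.y;   // row index within the grid
        out[y * width + x] = x + y;                      // each thread handles one element
    }

    int main()
    {
        const int TX = 16, TY = 16;                      // BlockSize: TX by TY by 1
        const int BX = 8,  BY = 8;                       // GridSize:  BX by BY by 1
        const int width = BX * TX, height = BY * TY;

        int* dev_out = nullptr;
        cudaMalloc(&dev_out, width * height * sizeof(int));

        dim3 blockSize(TX, TY);                          // (TX*TY) threads per thread block
        dim3 gridSize(BX, BY);                           // (BX*BY) thread blocks in the grid
        FillCoordinates<<<gridSize, blockSize>>>(dev_out, width);
        cudaDeviceSynchronize();

        cudaFree(dev_out);
        return 0;
    }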
DPC++ incorporates standard SYCL from The Khronos Group to support data parallelism and heterogeneous programming. SYCL refers to a cross-platform abstraction layer that builds on the underlying concepts, portability, and efficiency of OpenCL and enables code for heterogeneous processors to be written in a “single-source” style using standard C++. SYCL may enable single-source development where C++ template functions can contain both host and device code to construct complex algorithms that use OpenCL acceleration, and then reuse them throughout their source code on different types of data. In at least one embodiment, a DPC++ compiler is used to compile DPC++ source code which can be deployed across diverse hardware targets. In at least one embodiment, a DPC++ compiler is used to generate DPC++ applications that can be deployed across diverse hardware targets and a DPC++ compatibility tool can be used to migrate CUDA applications to a multiplatform program in DPC++. In at least one embodiment, a DPC++ base tool kit includes a DPC++ compiler to deploy applications across diverse hardware targets; a DPC++ library to increase productivity and performance across CPUs, GPUs, and FPGAs; a DPC++ compatibility tool to migrate CUDA applications to multi-platform applications; and any suitable combination thereof. In at least one embodiment, a DPC++ programming model is utilized to simplify one or more aspects relating to programming CPUs and accelerators by using modern C++ features to express parallelism with a programming language called Data Parallel C++. The DPC++ programming language may be utilized for code reuse across hosts (e.g., a CPU) and accelerators (e.g., a GPU or FPGA) using a single source language, with execution and memory dependencies being clearly communicated. Mappings within DPC++ code can be used to transition an application to run on a hardware device or set of hardware devices that best accelerates a workload. A host may be available to simplify development and debugging of device code, even on platforms that do not have an accelerator available. In at least one embodiment, CUDA source code4000is provided as an input to a DPC++ compatibility tool4002to generate human readable DPC++4004. In at least one embodiment, human readable DPC++4004includes inline comments generated by DPC++ compatibility tool4002that guide a developer on how and/or where to modify DPC++ code to complete coding and tuning to desired performance4006, thereby generating DPC++ source code4008. In at least one embodiment, CUDA source code4000is or includes a collection of human-readable source code in a CUDA programming language. In at least one embodiment, CUDA source code4000is human-readable source code in a CUDA programming language. In at least one embodiment, a CUDA programming language is an extension of the C++ programming language that includes, without limitation, mechanisms to define device code and distinguish between device code and host code. In at least one embodiment, device code is source code that, after compilation, is executable on a device (e.g., GPU or FPGA) and may include one or more parallelizable workflows that can be executed on one or more processor cores of a device. In at least one embodiment, a device may be a processor that is optimized for parallel instruction processing, such as CUDA-enabled GPU, GPU, or another GPGPU, etc. In at least one embodiment, host code is source code that, after compilation, is executable on a host. 
In at least one embodiment, some or all of host code and device code can be executed in parallel across a CPU and GPU/FPGA. In at least one embodiment, a host is a processor that is optimized for sequential instruction processing, such as a CPU. CUDA source code4000described in connection withFIG.40may be in accordance with that discussed elsewhere in this document. In at least one embodiment, DPC++ compatibility tool4002refers to an executable tool, program, application, or any other suitable type of tool that is used to facilitate migration of CUDA source code4000to DPC++ source code4008. In at least one embodiment, DPC++ compatibility tool4002is a command-line-based code migration tool available as part of a DPC++ tool kit that is used to port existing CUDA sources to DPC++. In at least one embodiment, DPC++ compatibility tool4002converts some or all source code of a CUDA application from CUDA to DPC++ and generates a resulting file that is written at least partially in DPC++, referred to as human readable DPC++4004. In at least one embodiment, human readable DPC++4004includes comments that are generated by DPC++ compatibility tool4002to indicate where user intervention may be necessary. In at least one embodiment, user intervention is necessary when CUDA source code4000calls a CUDA API that has no analogous DPC++ API; other examples where user intervention is required are discussed later in greater detail. In at least one embodiment, a workflow for migrating CUDA source code4000(e.g., application or portion thereof) includes creating one or more compilation database files; migrating CUDA to DPC++ using a DPC++ compatibility tool4002; completing migration and verifying correctness, thereby generating DPC++ source code4008; and compiling DPC++ source code4008with a DPC++ compiler to generate a DPC++ application. In at least one embodiment, a compatibility tool provides a utility that intercepts commands used when Makefile executes and stores them in a compilation database file. In at least one embodiment, a file is stored in JSON format. In at least one embodiment, an intercept-build command converts a Makefile command to a DPC++ compatibility command. In at least one embodiment, intercept-build is a utility script that intercepts a build process to capture compilation options, macro defs, and include paths, and writes this data to a compilation database file. In at least one embodiment, a compilation database file is a JSON file. In at least one embodiment, DPC++ compatibility tool4002parses a compilation database and applies options when migrating input sources. In at least one embodiment, use of intercept-build is optional, but highly recommended for Make or CMake based environments. In at least one embodiment, a migration database includes commands, directories, and files: command may include necessary compilation flags; directory may include paths to header files; file may include paths to CUDA files. In at least one embodiment, DPC++ compatibility tool4002migrates CUDA code (e.g., applications) written in CUDA to DPC++ by generating DPC++ wherever possible. In at least one embodiment, DPC++ compatibility tool4002is available as part of a tool kit. In at least one embodiment, a DPC++ tool kit includes an intercept-build tool. In at least one embodiment, an intercept-build tool creates a compilation database that captures compilation commands to migrate CUDA files. 
In at least one embodiment, a compilation database generated by an intercept-build tool is used by DPC++ compatibility tool4002to migrate CUDA code to DPC++. In at least one embodiment, non-CUDA C++ code and files are migrated as is. In at least one embodiment, DPC++ compatibility tool4002generates human readable DPC++4004which may be DPC++ code that, as generated by DPC++ compatibility tool4002, cannot be compiled by a DPC++ compiler and requires additional plumbing for verifying portions of code that were not migrated correctly, which may involve manual intervention, such as by a developer. In at least one embodiment, DPC++ compatibility tool4002provides hints or tools embedded in code to help developers manually migrate additional code that could not be migrated automatically. In at least one embodiment, migration is a one-time activity for a source file, project, or application. In at least one embodiment, DPC++ compatibility tool4002is able to successfully migrate all portions of CUDA code to DPC++ and there may simply be an optional step for manually verifying and tuning performance of DPC++ source code that was generated. In at least one embodiment, DPC++ compatibility tool4002directly generates DPC++ source code4008which is compiled by a DPC++ compiler without requiring or utilizing human intervention to modify DPC++ code generated by DPC++ compatibility tool4002. In at least one embodiment, DPC++ compatibility tool generates compilable DPC++ code which can be optionally tuned by a developer for performance, readability, maintainability, other various considerations, or any combination thereof. In at least one embodiment, one or more CUDA source files are migrated to DPC++ source files at least partially using DPC++ compatibility tool4002. In at least one embodiment, CUDA source code includes one or more header files which may include CUDA header files. In at least one embodiment, a CUDA source file includes a <cuda.h> header file and a <stdio.h> header file which can be used to print text. In at least one embodiment, a portion of a vector addition kernel CUDA source file may be written as or related to:

#include <cuda.h>
#include <stdio.h>
#define VECTOR_SIZE 256

__global__ void VectorAddKernel(float* A, float* B, float* C)
{
    A[threadIdx.x] = threadIdx.x + 1.0f;
    B[threadIdx.x] = threadIdx.x + 1.0f;
    C[threadIdx.x] = A[threadIdx.x] + B[threadIdx.x];
}

int main( )
{
    float *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, VECTOR_SIZE*sizeof(float));
    cudaMalloc(&d_B, VECTOR_SIZE*sizeof(float));
    cudaMalloc(&d_C, VECTOR_SIZE*sizeof(float));

    VectorAddKernel<<<1, VECTOR_SIZE>>>(d_A, d_B, d_C);

    float Result[VECTOR_SIZE] = { };
    cudaMemcpy(Result, d_C, VECTOR_SIZE*sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(d_A);
    cudaFree(d_B);
    cudaFree(d_C);

    for (int i=0; i<VECTOR_SIZE; i++) {
        if (i % 16 == 0) {
            printf("\n");
        }
        printf("%f ", Result[i]);
    }
    return 0;
}

In at least one embodiment and in connection with CUDA source file presented above, DPC++ compatibility tool4002parses a CUDA source code and replaces header files with appropriate DPC++ and SYCL header files. In at least one embodiment, DPC++ header files include helper declarations. In CUDA, there is a concept of a thread ID and correspondingly, in DPC++ or SYCL, for each element there is a local identifier. In at least one embodiment and in connection with CUDA source file presented above, there are two vectors A and B which are initialized and a vector addition result is put into vector C as part of VectorAddKernel( ).
In at least one embodiment, DPC++ compatibility tool4002converts CUDA thread IDs used to index work elements to SYCL standard addressing for work elements via a local ID as part of migrating CUDA code to DPC++ code. In at least one embodiment, DPC++ code generated by DPC++ compatibility tool4002can be optimized, for example, by reducing dimensionality of an nd_item, thereby increasing memory and/or processor utilization. In at least one embodiment and in connection with CUDA source file presented above, memory allocation is migrated. In at least one embodiment, cudaMalloc( ) is migrated to a unified shared memory SYCL call malloc_device( ) to which a device and context are passed, relying on SYCL concepts such as platform, device, context, and queue. In at least one embodiment, a SYCL platform can have multiple devices (e.g., host and GPU devices); a device may have multiple queues to which jobs can be submitted; each device may have a context; and a context may have multiple devices and manage shared memory objects. In at least one embodiment and in connection with CUDA source file presented above, a main( ) function invokes or calls VectorAddKernel( ) to add two vectors A and B together and store the result in vector C. In at least one embodiment, CUDA code to invoke VectorAddKernel( ) is replaced by DPC++ code to submit a kernel to a command queue for execution. In at least one embodiment, a command group handler cgh passes data, synchronization, and computation that is submitted to the queue, and parallel_for is called for a number of global elements and a number of work items in that work group where VectorAddKernel( ) is called. In at least one embodiment and in connection with CUDA source file presented above, CUDA calls to copy device memory and then free memory for vectors A, B, and C are migrated to corresponding DPC++ calls. In at least one embodiment, C++ code (e.g., standard ISO C++ code for printing a vector of floating point variables) is migrated as is, without being modified by DPC++ compatibility tool4002. In at least one embodiment, DPC++ compatibility tool4002modifies CUDA APIs for memory setup and/or host calls to execute a kernel on the acceleration device.
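The platform, device, context, and queue relationships described above can be illustrated with a short SYCL 2020 sketch. The following is not output of DPC++ compatibility tool4002; it is an illustrative fragment that enumerates platforms and their devices and then builds a context and queue for one device, which is how malloc_device( ) obtains its device and context arguments.

#include <sycl/sycl.hpp>
#include <iostream>

int main()
{
    // A platform groups the devices exposed by one backend; each device can back one or more queues.
    for (const auto& platform : sycl::platform::get_platforms()) {
        std::cout << "Platform: " << platform.get_info<sycl::info::platform::name>() << "\n";
        for (const auto& device : platform.get_devices()) {
            std::cout << "  Device: " << device.get_info<sycl::info::device::name>() << "\n";
        }
    }

    // A context owns memory objects shared among its devices; a queue submits work to one device.
    sycl::device dev{sycl::default_selector_v};
    sycl::context ctx{dev};
    sycl::queue q{ctx, dev};

    // Device allocation tied to this device and context, analogous to the migrated malloc_device( ) calls.
    float* buf = sycl::malloc_device<float>(256, dev, ctx);
    sycl::free(buf, ctx);
    return 0;
}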
In at least one embodiment and in connection with CUDA source file presented above, a corresponding human readable DPC++4004(e.g., which can be compiled) is written as or related to:

#include <CL/sycl.hpp>
#include <dpct/dpct.hpp>
#define VECTOR_SIZE 256

void VectorAddKernel(float* A, float* B, float* C, sycl::nd_item<3> item_ct1)
{
    A[item_ct1.get_local_id(2)] = item_ct1.get_local_id(2) + 1.0f;
    B[item_ct1.get_local_id(2)] = item_ct1.get_local_id(2) + 1.0f;
    C[item_ct1.get_local_id(2)] =
        A[item_ct1.get_local_id(2)] + B[item_ct1.get_local_id(2)];
}

int main( )
{
    float *d_A, *d_B, *d_C;
    d_A = (float *)sycl::malloc_device(VECTOR_SIZE * sizeof(float),
                                       dpct::get_current_device( ),
                                       dpct::get_default_context( ));
    d_B = (float *)sycl::malloc_device(VECTOR_SIZE * sizeof(float),
                                       dpct::get_current_device( ),
                                       dpct::get_default_context( ));
    d_C = (float *)sycl::malloc_device(VECTOR_SIZE * sizeof(float),
                                       dpct::get_current_device( ),
                                       dpct::get_default_context( ));

    dpct::get_default_queue_wait( ).submit([&](sycl::handler &cgh) {
        cgh.parallel_for(
            sycl::nd_range<3>(sycl::range<3>(1, 1, 1) *
                                  sycl::range<3>(1, 1, VECTOR_SIZE),
                              sycl::range<3>(1, 1, VECTOR_SIZE)),
            [=](sycl::nd_item<3> item_ct1) {
                VectorAddKernel(d_A, d_B, d_C, item_ct1);
            });
    });

    float Result[VECTOR_SIZE] = { };
    dpct::get_default_queue_wait( )
        .memcpy(Result, d_C, VECTOR_SIZE * sizeof(float))
        .wait( );

    sycl::free(d_A, dpct::get_default_context( ));
    sycl::free(d_B, dpct::get_default_context( ));
    sycl::free(d_C, dpct::get_default_context( ));

    for (int i=0; i<VECTOR_SIZE; i++) {
        if (i % 16 == 0) {
            printf("\n");
        }
        printf("%f ", Result[i]);
    }
    return 0;
}

In at least one embodiment, human readable DPC++4004refers to output generated by DPC++ compatibility tool4002and may be optimized in one manner or another. In at least one embodiment, human readable DPC++4004generated by DPC++ compatibility tool4002can be manually edited by a developer after migration to make it more maintainable, improve performance, or address other considerations. In at least one embodiment, DPC++ code generated by DPC++ compatibility tool4002, such as the DPC++ disclosed above, can be optimized by removing repeat calls to get_current_device( ) and/or get_default_context( ) for each malloc_device( ) call. In at least one embodiment, DPC++ code generated above uses a 3-dimensional nd_range which can be refactored to use only a single dimension, thereby reducing memory usage. In at least one embodiment, a developer can manually edit DPC++ code generated by DPC++ compatibility tool4002to replace uses of unified shared memory with accessors. In at least one embodiment, DPC++ compatibility tool4002has an option to change how it migrates CUDA code to DPC++ code. In at least one embodiment, DPC++ compatibility tool4002is verbose because it is using a general template to migrate CUDA code to DPC++ code that works for a large number of cases. In at least one embodiment, a CUDA to DPC++ migration workflow includes steps to: prepare for migration using intercept-build script; perform migration of CUDA projects to DPC++ using DPC++ compatibility tool4002; review and edit migrated source files manually for completion and correctness; and compile final DPC++ code to generate a DPC++ application.
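As a non-limiting illustration of the manual tuning described above, the following is a hypothetical hand-edited version of the migrated code, not literal output of DPC++ compatibility tool4002. It hoists a single in-order queue, uses a one-dimensional range instead of a three-dimensional nd_range, and passes the queue directly to allocation calls so that repeated device and context lookups are avoided.

#include <sycl/sycl.hpp>
#include <stdio.h>
#define VECTOR_SIZE 256

// Hypothetical hand-tuned vector addition; names mirror the example above.
void VectorAddKernel(float* A, float* B, float* C, size_t i)
{
    A[i] = i + 1.0f;
    B[i] = i + 1.0f;
    C[i] = A[i] + B[i];
}

int main()
{
    // One in-order queue is reused, so the memcpy below sees the kernel's results.
    sycl::queue q{sycl::property::queue::in_order()};

    float* d_A = sycl::malloc_device<float>(VECTOR_SIZE, q);
    float* d_B = sycl::malloc_device<float>(VECTOR_SIZE, q);
    float* d_C = sycl::malloc_device<float>(VECTOR_SIZE, q);

    // One-dimensional range replaces the three-dimensional nd_range of the generated code.
    q.parallel_for(sycl::range<1>(VECTOR_SIZE), [=](sycl::id<1> i) {
        VectorAddKernel(d_A, d_B, d_C, i);
    });

    float Result[VECTOR_SIZE] = { };
    q.memcpy(Result, d_C, VECTOR_SIZE * sizeof(float)).wait();

    sycl::free(d_A, q);
    sycl::free(d_B, q);
    sycl::free(d_C, q);

    for (int i = 0; i < VECTOR_SIZE; i++) {
        if (i % 16 == 0) printf("\n");
        printf("%f ", Result[i]);
    }
    return 0;
}

Whether such refactoring is worthwhile depends on the workload; as noted above, a developer may alternatively replace unified shared memory with accessors.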
In at least one embodiment, manual review of DPC++ source code may be required in one or more scenarios including but not limited to: migrated API does not return error code (CUDA code can return an error code which can then be consumed by the application but SYCL uses exceptions to report errors, and therefore does not use error codes to surface errors); CUDA compute capability dependent logic is not supported by DPC++; statement could not be removed. In at least one embodiment, scenarios in which DPC++ code requires manual intervention may include, without limitation: error code logic replaced with (*,0) code or commented out; equivalent DPC++ API not available; CUDA compute capability-dependent logic; hardware-dependent API (clock( )); missing features unsupported API; execution time measurement logic; handling built-in vector type conflicts; migration of cuBLAS API; and more. At least one embodiment of the disclosure can be described in view of the following clauses: 1. A computer-implemented method for use with a network interface, the computer-implemented method comprising: (a) detecting, by a first processor, receipt of a set of packets by the network interface; (b) writing, by the first processor, an indication to memory associated with a second processor that packet data transmitted by the set of packets is available for processing by the second processor; (c) detecting, by the second processor, the indication; and (d) as a result of detecting the indication, the second processor processing the packet data to obtain processed packet data. 2. The computer-implemented method of clause 1, wherein a second signal transmission distance between the second processor and the memory is shorter than a first signal transmission distance between the first processor and the memory. 3. The computer-implemented method of clause 1 or 2, wherein writing the indication comprises the first processor setting a ready flag to indicate the packet data is available for processing; and detecting the indication comprises the second processor polling the ready flag to determine when the ready flag has been set to indicate the packet data is available for processing. 4. The computer-implemented method of any one of the clauses 1-3, for use with the network interface storing the packet data in the memory associated with the second processor, the computer-implemented method further comprising flushing, by the first processor, at least a portion of the memory before writing the indication. 5. The computer-implemented method of any one of the clauses 1-4, wherein the second processor is a parallel processing unit. 6. The computer-implemented method of any one of clauses 1-5 for use with the network interface storing the packet data in the memory associated with the second processor, wherein the memory is a second memory, and the computer-implemented method further comprises: obtaining first packet data from the set of packets, the packet data being the first packet data; obtaining second packet data from the set of packets; and storing the second packet data in a first memory associated with the first processor. 7. 
The computer-implemented method of any one of clauses 1-6 for use with the network interface, wherein the network interface is to receive the set of packets from a computing device and store the packet data in the memory associated with the second processor, and wherein the computer-implemented method further comprises: retrieving the processed packet data from the memory; formulating a set of transmitted packets based at least in part on the processed packet data; and transmitting the set of transmitted packets to the computing device. 8. The computer-implemented method of any one of clauses 1-7, further comprising: launching, by the first processor, a first thread that causes the network interface to store the packet data in the memory associated with the second processor, and writes the indication after the packet data is stored in the memory; and launching, by the first processor, a second thread that causes the network interface to retrieve the processed packet data from the memory. 9. The computer-implemented method of any one of clauses 1-8, further comprising: creating a memory pool in the memory associated with the second processor, the memory pool comprising a set of memory buffers comprising a different memory buffer for each packet in the set of packets; and storing the set of packets in the set of memory buffers, wherein the second processor obtains the packet data from the set of memory buffers and processes the packet data to obtain the processed packet data. 10. The computer-implemented method of any one of clauses 1-9, wherein the memory is a second memory, and the computer-implemented method further comprises: (i) creating a set of first memory buffers in a first memory associated with the first processor, the set of first memory buffers comprising a corresponding first memory buffer for each of the set of packets; (ii) storing a first portion of each of the set of packets in the corresponding first memory buffer; (iii) creating a set of second memory buffers in the second memory associated with the second processor, the set of second memory buffers comprising a corresponding second memory buffer for each of the set of packets; and (iv) storing a second portion of each of the set of packets in the corresponding second memory buffer, wherein the second processor obtains the packet data from the set of second memory buffers and processes the packet data to obtain the processed packet data. 11. The computer-implemented method of clause 10, wherein the first portion of each of the set of packets comprises a packet header; and the second portion of each of the set of packets comprises a payload transmitted by the packet. 12. A system comprising: (a) a Parallel Processing Unit (“PPU”) executing a process; (b) a PPU memory associated with the PPU; (c) a network interface; (d) one or more processors; and (e) one or more memories to store instructions executable by the one or more processors and when executed by the one or more processors cause the one or more processors to at least: detect receipt of a set of packets by the network interface; and write, to a shared memory portion of the PPU memory that is shared between the one or more processors and the PPU, an indication that packet data transmitted by the set of packets is available for processing, the process being executed by the PPU before the indication is written to the shared memory portion, the process processing the packet data to obtain processed packet data after the PPU detects the indication. 13. 
The system of clause 12, further comprising: a computing device to send the set of packets to the network interface. 14. The system of clause 13, wherein the computing device is connected to the network interface over a network. 15. The system of any one of the clauses 12-14, further comprising: a connection connecting the PPU to the one or more processors and the network interface. 16. The system of clause 15, wherein the connection is a Peripheral Component Interconnect Express (“PCIe”) bus. 17. The system of any one of the clauses 12-16, wherein the network interface stores the packet data in the PPU memory, the one or more processors write the indication after the packet data is stored in the PPU memory, and the PPU obtains the packet data from the PPU memory and processes the packet data to obtain the processed packet data after detecting the indication. 18. The system of clause 17, wherein the instructions, when executed by the one or more processors, cause the one or more processors to instruct the network interface to: obtain the packet data from the set of packets; obtain host data from the set of packets; and store the host data in the one or more memories. 19. The system of clause 18, wherein the packet data comprises a payload transmitted by each of the set of packets; and the host data comprises a packet header transmitted by each of the set of packets. 20. The system of any one of the clauses 17-19, wherein the instructions, when executed by the one or more processors, cause the one or more processors to: flush at least a portion of the PPU memory before writing the indication. 21. The system of any one of the clauses 12-20, wherein the indication comprises a ready flag stored in the shared memory portion, writing the indication comprises setting the ready flag to indicate the packet data is available for processing, and the PPU detects the indication by polling the ready flag to determine when the ready flag has been set to indicate the packet data is available for processing. 22. The system of any one of the clauses 12-21, wherein the PPU provides a notification indicating the process has obtained processed packet data, and the instructions, when executed by the one or more processors, cause the one or more processors to: detect the notification provided by the PPU; and instruct the network interface to retrieve the processed packet data. 23. The system of clause 22, wherein the shared memory portion is a first shared memory portion, the notification comprises a done flag stored in a second shared memory portion of the one or more memories, the second shared memory portion is shared between the one or more processors and the PPU, the PPU provides the indication by setting the done flag to indicate that the processed packet data has been obtained, and the one or more processors detect the notification by polling the done flag to determine when the done flag has been set to indicate the processed packet data has been obtained. 24. The system of clause 23, wherein a first signal travel time between the PPU and the first shared memory portion is shorter than a second signal travel time between the one or more processors and the first shared memory portion, and a third signal travel time between the one or more processors and the second shared memory portion is shorter than a fourth signal travel time between the PPU and the second shared memory portion. 25. 
The system of any one of the clauses 12-22, wherein a first signal travel time between the PPU and the shared memory portion is shorter than a second signal travel time between the one or more processors and the shared memory portion. 26. A machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least: detect receipt of a set of packets by a network interface; and write, to a shared memory that is shared between the one or more processors and a Parallel Processing Unit (“PPU”) executing a process, an indication that packet data transmitted by the set of packets is available for processing, a PPU memory associated with the PPU comprising the shared memory, the process being executed by the PPU before the indication is written to the shared memory, the process processing the packet data to obtain processed packet data after the PPU detects the indication. 27. The machine-readable medium of clause 26, wherein the network interface comprises a data processing unit (“DPU”), and wherein the set of instructions, when performed by the one or more processors, cause the one or more processors to at least: cause the network interface to store the packet data in the PPU memory associated with the PPU. 28. The machine-readable medium of clause 27, wherein the set of instructions, when performed by the one or more processors, cause the one or more processors to at least: create a set of PPU memory buffers in the PPU memory, the set of PPU memory buffers comprising a corresponding PPU memory buffer for each of the set of packets, wherein causing the network interface to store the packet data in the PPU memory comprises storing each of the set of packets in the corresponding PPU memory buffer. 29. The machine-readable medium of clause 27 or 28, wherein the packet data comprises a first portion of each of the set of packets, and the set of instructions, when performed by the one or more processors, cause the one or more processors to at least: (A) create a set of PPU memory buffers in the PPU memory, the set of PPU memory buffers comprising a corresponding PPU memory buffer for each of the set of packets, wherein causing the network interface to store the packet data in the PPU memory comprises storing the first portion of each of the set of packets in the corresponding PPU memory buffer; (B) creating a set of host memory buffers in a system memory, the set of host memory buffers comprising a corresponding host memory buffer for each of the set of packets; and (C) storing a second portion of each of the set of packets in the corresponding host memory buffer. 30. The machine-readable medium of any one of the clauses 26-29, wherein the set of instructions, when performed by the one or more processors, cause the one or more processors to at least: flush at least a portion of the PPU memory; and write the indication after flushing the portion of the PPU memory. 31. The machine-readable medium of clause 30, wherein flushing the portion of the PPU memory comprises the one or more processors performing a read operation on the PPU memory. 32. The machine-readable medium of clause 30, wherein flushing the portion of the PPU memory comprises instructing the network interface to perform a read operation on the PPU memory. 33. 
The machine-readable medium of any one of the clauses 26-32, wherein the PPU provides a notification indicating that the process has obtained processed packet data, and the instructions, when performed by the one or more processors, cause the one or more processors to: detect the notification provided by the PPU; and instruct the network interface to retrieve the processed packet data. 34. The machine-readable medium of clause 33, wherein the notification comprises a done flag stored in a shared memory portion of a system memory associated with the one or more processors, the PPU provides the indication by setting the done flag to indicate that the processed packet data has been obtained, and the one or more processors detect the notification by polling the done flag to determine when the done flag has been set to indicate the processed packet data has been obtained. 35. The machine-readable medium of any one of the clauses 26-34, wherein the set of packets are a set of received packets received from a computing device, and the set of instructions, when performed by the one or more processors, cause the one or more processors to at least: instruct the network interface to obtain a set of transmitted packets based at least in part on the processed packet data; and transmit the set of transmitted packets to the computing device. 36. A computer-implemented method comprising: (a) initiating, by a second processor, execution of at least one process; (b) after initiating execution of at least one process, the second processor polling a memory location in memory associated with the second processor for an indication, written by a first processor, that a set of packets has been received by a network interface, the first processor being different from the second processor, the memory location being accessible to both the first processor and the second processor; and (c) processing, by the second processor, packet data transmitted by the set of packets after the second processor detects the indication that the set of packets has been received by the network interface. 37. The computer-implemented method of clause 36, wherein the second processor is a Parallel Processing Unit (“PPU”). 38. The computer-implemented method of clause 36 or 37, further comprising: modifying, by the second processor, the memory location to no longer indicate that the set of packets has been received by the network interface after the second processor detects the indication. 39. The computer-implemented method of any one of the clauses 36-38, further comprising: providing, by the second processor, a notification to the first processor indicating that the second processor has finished processing the packet data. 40. The computer-implemented method of clause 39, wherein the memory is first memory, the notification comprises a done flag stored in a shared memory that is shared between the first processor and the second processor, the shared memory is a portion of second memory associated with the first processor, and the second processor provides the notification by setting the done flag to indicate that the second processor has finished processing the packet data. 41. 
The computer-implemented method of clause 40, wherein a first signal travel time between the second processor and the memory location is shorter than a second signal travel time between the first processor and the memory location, and a third signal travel time between the first processor and the shared memory is shorter than a fourth signal travel time between the second processor and the shared memory. 42. The computer-implemented method of any one of the clauses 36-40, wherein a first signal travel time between the second processor and the memory location is shorter than a second signal travel time between the first processor and the memory location. Other variations are within spirit of present disclosure. Thus, while disclosed techniques are susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in drawings and have been described above in detail. It should be understood, however, that there is no intention to limit disclosure to specific form or forms disclosed, but on contrary, intention is to cover all modifications, alternative constructions, and equivalents falling within spirit and scope of disclosure, as defined in appended claims. Use of terms “a” and “an” and “the” and similar referents in context of describing disclosed embodiments (especially in context of following claims) are to be construed to cover both singular and plural, unless otherwise indicated herein or clearly contradicted by context, and not as a definition of a term. Terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (meaning “including, but not limited to,”) unless otherwise noted. term “connected,” when unmodified and referring to physical connections, is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within range, unless otherwise indicated herein and each separate value is incorporated into specification as if it were individually recited herein. Use of term “set” (e.g., “a set of items”) or “subset” unless otherwise noted or contradicted by context, is to be construed as a nonempty collection comprising one or more members. Further, unless otherwise noted or contradicted by context, term “subset” of a corresponding set does not necessarily denote a proper subset of corresponding set, but subset and corresponding set may be equal. Conjunctive language, such as phrases of form “at least one of A, B, and C,” or “at least one of A, B and C,” unless specifically stated otherwise or otherwise clearly contradicted by context, is otherwise understood with context as used in general to present that an item, term, etc., may be either A or B or C, or any nonempty subset of set of A and B and C. For instance, in illustrative example of a set having three members, conjunctive phrases “at least one of A, B, and C” and “at least one of A, B and C” refer to any of following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of A, at least one of B and at least one of C each to be present. In addition, unless otherwise noted or contradicted by context, term “plurality” indicates a state of being plural (e.g., “a plurality of items” indicates multiple items). 
A number of items in a plurality is at least two, but can be more when so indicated either explicitly or by context. Further, unless stated otherwise or otherwise clear from context, phrase “based on” means “based at least in part on” and not “based solely on.” Operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, a process such as those processes described herein (or variations and/or combinations thereof) is performed under control of one or more computer systems configured with executable instructions and is implemented as code (e.g., executable instructions, one or more computer programs or one or more applications) executing collectively on one or more processors, by hardware or combinations thereof. In at least one embodiment, code is stored on a computer-readable storage medium, for example, in form of a computer program comprising a plurality of instructions executable by one or more processors. In at least one embodiment, a computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., a propagating transient electric or electromagnetic transmission) but includes non-transitory data storage circuitry (e.g., buffers, cache, and queues) within transceivers of transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media having stored thereon executable instructions (or other memory to store executable instructions) that, when executed (e.g., as a result of being executed) by one or more processors of a computer system, cause computer system to perform operations described herein. A set of non-transitory computer-readable storage media, in at least one embodiment, comprises multiple non-transitory computer-readable storage media and one or more of individual non-transitory storage media of multiple non-transitory computer-readable storage media lack all of code while multiple non-transitory computer-readable storage media collectively store all of code. In at least one embodiment, executable instructions are executed such that different instructions are executed by different processors—for example, a non-transitory computer-readable storage medium store instructions and a main central processing unit (“CPU”) executes some of instructions while a graphics processing unit (“GPU”) executes other instructions. In at least one embodiment, different components of a computer system have separate processors and different processors execute different subsets of instructions. Accordingly, in at least one embodiment, computer systems are configured to implement one or more services that singly or collectively perform operations of processes described herein and such computer systems are configured with applicable hardware and/or software that enable performance of operations. Further, a computer system that implements at least one embodiment of present disclosure is a single device and, in another embodiment, is a distributed computer system comprising multiple devices that operate differently such that distributed computer system performs operations described herein and such that a single device does not perform all operations. 
Use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments of disclosure and does not pose a limitation on scope of disclosure unless otherwise claimed. No language in specification should be construed as indicating any non-claimed element as essential to practice of disclosure. All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein. In description and claims, terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms may be not intended as synonyms for each other. Rather, in particular examples, “connected” or “coupled” may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. “Coupled” may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. Unless specifically stated otherwise, it may be appreciated that throughout specification terms such as “processing,” “computing,” “calculating,” “determining,” or like, refer to action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within computing system's registers and/or memories into other data similarly represented as physical quantities within computing system's memories, registers or other such information storage, transmission or display devices. In a similar manner, term “processor” may refer to any device or portion of a device that processes electronic data from registers and/or memory and transform that electronic data into other electronic data that may be stored in registers and/or memory. As non-limiting examples, “processor” may be a CPU or a GPU. A “computing platform” may comprise one or more processors. As used herein, “software” processes may include, for example, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process may refer to multiple processes, for carrying out instructions in sequence or in parallel, continuously or intermittently. Terms “system” and “method” are used herein interchangeably insofar as system may embody one or more methods and methods may be considered a system. In present document, references may be made to obtaining, acquiring, receiving, or inputting analog or digital data into a subsystem, computer system, or computer-implemented machine. Process of obtaining, acquiring, receiving, or inputting analog and digital data can be accomplished in a variety of ways such as by receiving data as a parameter of a function call or a call to an application programming interface. In some implementations, process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a serial or parallel interface. In another implementation, process of obtaining, acquiring, receiving, or inputting analog or digital data can be accomplished by transferring data via a computer network from providing entity to acquiring entity. References may also be made to providing, outputting, transmitting, sending, or presenting analog or digital data. 
In various examples, process of providing, outputting, transmitting, sending, or presenting analog or digital data can be accomplished by transferring data as an input or output parameter of a function call, a parameter of an application programming interface or interprocess communication mechanism. Although discussion above sets forth example implementations of described techniques, other architectures may be used to implement described functionality, and are intended to be within scope of this disclosure. Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, various functions and responsibilities might be distributed and divided in different ways, depending on circumstances. Furthermore, although subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that subject matter claimed in appended claims is not necessarily limited to specific features or acts described. Rather, specific features and acts are disclosed as exemplary forms of implementing the claims.
302,949
11861759
DETAILED DESCRIPTION Embodiments described herein are generally directed to memory prefetching in a multiple GPU environment. In some embodiments, an apparatus, system, or process provides for improvements in memory prefetching for a multiple graphic processing unit (GPU) environment. In some embodiments, rules are applied for use by a prefetcher when a multiple GPU workload is executing across a cluster of GPUs having unified virtual memory and non-unified physical memory. In some embodiments, an apparatus, system, or process includes one or more of:(1) Protected prefetch optimizations for cross-GPU coherency;(2) Gather/scatter prefetch instructions; and(3) Prefetch operation with status notification. In some embodiments, a graphics processing unit (GPU) is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or another interconnect (e.g., a high-speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions. In the following description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of skill in the art that the embodiments described herein may be practiced without one or more of these specific details. In other instances, well-known features have not been described to avoid obscuring the details of the present embodiments. System Overview FIG.1is a block diagram illustrating a computing system100configured to implement one or more aspects of the embodiments described herein. The computing system100includes a processing subsystem101having one or more processor(s)102and a system memory104communicating via an interconnection path that may include a memory hub105. The memory hub105may be a separate component within a chipset component or may be integrated within the one or more processor(s)102. The memory hub105couples with an I/O subsystem111via a communication link106. The I/O subsystem111includes an I/O hub107that can enable the computing system100to receive input from one or more input device(s)108. Additionally, the I/O hub107can enable a display controller, which may be included in the one or more processor(s)102, to provide outputs to one or more display device(s)110A. In one embodiment the one or more display device(s)110A coupled with the I/O hub107can include a local, internal, or embedded display device. In one embodiment the processing subsystem101includes one or more parallel processor(s)112coupled to memory hub105via a bus or other communication link113. The communication link113may be one of any number of standards based communication link technologies or protocols, such as, but not limited to PCI Express, or may be a vendor specific communications interface or communications fabric. 
In one embodiment the one or more parallel processor(s)112form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor. In one embodiment the one or more parallel processor(s)112form a graphics processing subsystem that can output pixels to one of the one or more display device(s)110A coupled via the I/O Hub107. The one or more parallel processor(s)112can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s)110B. Within the I/O subsystem111, a system storage unit114can connect to the I/O hub107to provide a storage mechanism for the computing system100. An I/O switch116can be used to provide an interface mechanism to enable connections between the I/O hub107and other components, such as a network adapter118and/or wireless network adapter119that may be integrated into the platform, and various other devices that can be added via one or more add-in device(s)120. The network adapter118can be an Ethernet adapter or another wired network adapter. The wireless network adapter119can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios. The computing system100can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, may also be connected to the I/O hub107. Communication paths interconnecting the various components inFIG.1may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or any other bus or point-to-point communication interfaces and/or protocol(s), such as the NV-Link high-speed interconnect, or interconnect protocols known in the art. In one embodiment, the one or more parallel processor(s)112incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). In another embodiment, the one or more parallel processor(s)112incorporate circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. In yet another embodiment, components of the computing system100may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processor(s)112, memory hub105, processor(s)102, and I/O hub107can be integrated into a system on chip (SoC) integrated circuit. Alternatively, the components of the computing system100can be integrated into a single package to form a system in package (SIP) configuration. In one embodiment at least a portion of the components of the computing system100can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system. It will be appreciated that the computing system100shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of processor(s)102, and the number of parallel processor(s)112, may be modified as desired. 
For instance, in some embodiments, system memory104is connected to the processor(s)102directly rather than through a bridge, while other devices communicate with system memory104via the memory hub105and the processor(s)102. In other alternative topologies, the parallel processor(s)112are connected to the I/O hub107or directly to one of the one or more processor(s)102, rather than to the memory hub105. In other embodiments, the I/O hub107and memory hub105may be integrated into a single chip. Some embodiments may include two or more sets of processor(s)102attached via multiple sockets, which can couple with two or more instances of the parallel processor(s)112. Some of the particular components shown herein are optional and may not be included in all implementations of the computing system100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. Furthermore, some architectures may use different terminology for components similar to those illustrated inFIG.1. For example, the memory hub105may be referred to as a Northbridge in some architectures, while the I/O hub107may be referred to as a Southbridge. FIG.2Aillustrates a parallel processor200, according to an embodiment. The various components of the parallel processor200may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). The illustrated parallel processor200is a variant of the one or more parallel processor(s)112shown inFIG.1, according to an embodiment. In one embodiment the parallel processor200includes a parallel processing unit202. The parallel processing unit includes an I/O unit204that enables communication with other devices, including other instances of the parallel processing unit202. The I/O unit204may be directly connected to other devices. In one embodiment the I/O unit204connects with other devices via the use of a hub or switch interface, such as memory hub105. The connections between the memory hub105and the I/O unit204form a communication link113. Within the parallel processing unit202, the I/O unit204connects with a host interface206and a memory crossbar216, where the host interface206receives commands directed to performing processing operations and the memory crossbar216receives commands directed to performing memory operations. When the host interface206receives a command buffer via the I/O unit204, the host interface206can direct work operations to perform those commands to a front end208. In one embodiment the front end208couples with a scheduler210, which is configured to distribute commands or other work items to a processing cluster array212. In one embodiment the scheduler210ensures that the processing cluster array212is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array212. In one embodiment the scheduler210is implemented via firmware logic executing on a microcontroller. The microcontroller implemented scheduler210is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on the processing array212. In one embodiment, the host software can prove workloads for scheduling on the processing array212via one of multiple graphics processing doorbells. 
The workloads can then be automatically distributed across the processing array212by the scheduler210logic within the scheduler microcontroller. The processing cluster array212can include up to “N” processing clusters (e.g., cluster214A, cluster214B, through cluster214N). Each cluster214A-214N of the processing cluster array212can execute a large number of concurrent threads. The scheduler210can allocate work to the clusters214A-214N of the processing cluster array212using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. The scheduling can be handled dynamically by the scheduler210, or can be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array212. In one embodiment, different clusters214A-214N of the processing cluster array212can be allocated for processing different types of programs or for performing different types of computations. The processing cluster array212can be configured to perform various types of parallel processing operations. In one embodiment the processing cluster array212is configured to perform general-purpose parallel compute operations. For example, the processing cluster array212can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations. In one embodiment the processing cluster array212is configured to perform parallel graphics processing operations. In embodiments in which the parallel processor200is configured to perform graphics processing operations, the processing cluster array212can include additional logic to support the execution of such graphics processing operations, including, but not limited to texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. Additionally, the processing cluster array212can be configured to execute graphics processing related shader programs such as, but not limited to vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit202can transfer data from system memory via the I/O unit204for processing. During processing the transferred data can be stored to on-chip memory (e.g., parallel processor memory222) during processing, then written back to system memory. In one embodiment, when the parallel processing unit202is used to perform graphics processing, the scheduler210can be configured to divide the processing workload into approximately equal sized tasks, to better enable distribution of the graphics processing operations to multiple clusters214A-214N of the processing cluster array212. In some embodiments, portions of the processing cluster array212can be configured to perform different types of processing. For example a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters214A-214N for further processing. 
During operation, the processing cluster array212can receive processing tasks to be executed via the scheduler210, which receives commands defining processing tasks from front end208. For graphics processing operations, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The scheduler210may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end208. The front end208can be configured to ensure the processing cluster array212is configured to a valid state before the workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated. Each of the one or more instances of the parallel processing unit202can couple with parallel processor memory222. The parallel processor memory222can be accessed via the memory crossbar216, which can receive memory requests from the processing cluster array212as well as the I/O unit204. The memory crossbar216can access the parallel processor memory222via a memory interface218. The memory interface218can include multiple partition units (e.g., partition unit220A, partition unit220B, through partition unit220N) that can each couple to a portion (e.g., memory unit) of parallel processor memory222. In one implementation the number of partition units220A-220N is configured to be equal to the number of memory units, such that a first partition unit220A has a corresponding first memory unit224A, a second partition unit220B has a corresponding memory unit224B, and an Nth partition unit220N has a corresponding Nth memory unit224N. In other embodiments, the number of partition units220A-220N may not be equal to the number of memory devices. In various embodiments, the memory units224A-224N can include various types of memory devices, including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In one embodiment, the memory units224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Persons skilled in the art will appreciate that the specific implementation of the memory units224A-224N can vary, and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps may be stored across the memory units224A-224N, allowing partition units220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory222. In some embodiments, a local instance of the parallel processor memory222may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory. In one embodiment, any one of the clusters214A-214N of the processing cluster array212can process data that will be written to any of the memory units224A-224N within parallel processor memory222. The memory crossbar216can be configured to transfer the output of each cluster214A-214N to any partition unit220A-220N or to another cluster214A-214N, which can perform additional processing operations on the output. Each cluster214A-214N can communicate with the memory interface218through the memory crossbar216to read from or write to various external memory devices. 
In one embodiment the memory crossbar216has a connection to the memory interface218to communicate with the I/O unit204, as well as a connection to a local instance of the parallel processor memory222, enabling the processing units within the different processing clusters214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit202. In one embodiment the memory crossbar216can use virtual channels to separate traffic streams between the clusters214A-214N and the partition units220A-220N. While a single instance of the parallel processing unit202is illustrated within the parallel processor200, any number of instances of the parallel processing unit202can be included. For example, multiple instances of the parallel processing unit202can be provided on a single add-in card, or multiple add-in cards can be interconnected. The different instances of the parallel processing unit202can be configured to inter-operate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, in one embodiment some instances of the parallel processing unit202can include higher precision floating point units relative to other instances. Systems incorporating one or more instances of the parallel processing unit202or the parallel processor200can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems. FIG.2Bis a block diagram of a partition unit220, according to an embodiment. In one embodiment the partition unit220is an instance of one of the partition units220A-220N ofFIG.2A. As illustrated, the partition unit220includes an L2 cache221, a frame buffer interface225, and a ROP226(raster operations unit). The L2 cache221is a read/write cache that is configured to perform load and store operations received from the memory crossbar216and ROP226. Read misses and urgent write-back requests are output by L2 cache221to frame buffer interface225for processing. Updates can also be sent to the frame buffer via the frame buffer interface225for processing. In one embodiment the frame buffer interface225interfaces with one of the memory units in parallel processor memory, such as the memory units224A-224N ofFIG.2A(e.g., within parallel processor memory222). In graphics applications, the ROP226is a processing unit that performs raster operations such as stencil, z test, blending, and the like. The ROP226then outputs processed graphics data that is stored in graphics memory. In some embodiments the ROP226includes compression logic to compress depth or color data that is written to memory and decompress depth or color data that is read from memory. The compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. The type of compression that is performed by the ROP226can vary based on the statistical characteristics of the data to be compressed. For example, in one embodiment, delta color compression is performed on depth and color data on a per-tile basis. In some embodiments, the ROP226is included within each processing cluster (e.g., cluster214A-214N ofFIG.2A) instead of within the partition unit220. In such embodiment, read and write requests for pixel data are transmitted over the memory crossbar216instead of pixel fragment data. 
The processed graphics data may be displayed on a display device, such as one of the one or more display device(s)110ofFIG.1, routed for further processing by the processor(s)102, or routed for further processing by one of the processing entities within the parallel processor200ofFIG.2A. FIG.2Cis a block diagram of a processing cluster214within a parallel processing unit, according to an embodiment. In one embodiment the processing cluster is an instance of one of the processing clusters214A-214N ofFIG.2A. The processing cluster214can be configured to execute many threads in parallel, where the term “thread” refers to an instance of a particular program executing on a particular set of input data. In some embodiments, single-instruction, multiple-data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In other embodiments, single-instruction, multiple-thread (SIMT) techniques are used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the processing clusters. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime. Operation of the processing cluster214can be controlled via a pipeline manager232that distributes processing tasks to SIMT parallel processors. The pipeline manager232receives instructions from the scheduler210ofFIG.2Aand manages execution of those instructions via a graphics multiprocessor234and/or a texture unit236. The illustrated graphics multiprocessor234is an exemplary instance of a SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster214. One or more instances of the graphics multiprocessor234can be included within a processing cluster214. The graphics multiprocessor234can process data and a data crossbar240can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager232can facilitate the distribution of processed data by specifying destinations for processed data to be distributed via the data crossbar240. Each graphics multiprocessor234within the processing cluster214can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). The functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. The functional execution logic supports a variety of operations including integer and floating point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. In one embodiment the same functional-unit hardware can be leveraged to perform different operations and any combination of functional units may be present. The instructions transmitted to the processing cluster214constitute a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data.
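By way of illustration and not limitation, the following CUDA C++ sketch shows the SIMT behavior described above: all threads of a thread group execute a common instruction stream, yet individual threads may take divergent branches. The kernel and variable names are hypothetical and are not elements of the figures.

// Illustrative CUDA sketch of SIMT execution: threads of the same thread group
// execute a common instruction stream but may follow divergent branches.
// Kernel and variable names are hypothetical.
#include <cstdio>

__global__ void simt_divergence(const int* input, int* output, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (tid >= n) return;
    // Threads taking different sides of this branch diverge; the hardware
    // serializes the two paths and reconverges afterwards.
    if (input[tid] % 2 == 0) {
        output[tid] = input[tid] * 2;
    } else {
        output[tid] = input[tid] + 1;
    }
}

int main() {
    const int n = 8;
    int h_in[n] = {0, 1, 2, 3, 4, 5, 6, 7}, h_out[n];
    int *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_out, n * sizeof(int));
    cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);
    simt_divergence<<<1, n>>>(d_in, d_out, n);     // one small thread group
    cudaMemcpy(h_out, d_out, n * sizeof(int), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) std::printf("%d ", h_out[i]);
    std::printf("\n");
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}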
Each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor234. A thread group may include fewer threads than the number of processing engines within the graphics multiprocessor234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor234. When the thread group includes more threads than the number of processing engines within the graphics multiprocessor234, processing can be performed over consecutive clock cycles. In one embodiment multiple thread groups can be executed concurrently on a graphics multiprocessor234. In one embodiment the graphics multiprocessor234includes an internal cache memory to perform load and store operations. In one embodiment, the graphics multiprocessor234can forego an internal cache and use a cache memory (e.g., L1 cache248) within the processing cluster214. Each graphics multiprocessor234also has access to L2 caches within the partition units (e.g., partition units220A-220N ofFIG.2A) that are shared among all processing clusters214and may be used to transfer data between threads. The graphics multiprocessor234may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. Any memory external to the parallel processing unit202may be used as global memory. Embodiments in which the processing cluster214includes multiple instances of the graphics multiprocessor234can share common instructions and data, which may be stored in the L1 cache248. Each processing cluster214may include an MMU245(memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU245may reside within the memory interface218ofFIG.2A. The MMU245includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. The MMU245may include address translation lookaside buffers (TLB) or caches that may reside within the graphics multiprocessor234or the L1 cache or processing cluster214. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether a request for a cache line is a hit or miss. In graphics and computing applications, a processing cluster214may be configured such that each graphics multiprocessor234is coupled to a texture unit236for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within graphics multiprocessor234and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. Each graphics multiprocessor234outputs processed tasks to the data crossbar240to provide the processed task to another processing cluster214for further processing or to store the processed task in an L2 cache, local parallel processor memory, or system memory via the memory crossbar216. 
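By way of illustration and not limitation, the following CUDA C++ sketch models the MMU lookup described above, in which a page table entry maps a virtual page to the physical address of a tile and a cache line index can be derived for hit/miss determination. The page size, cache line size, table layout, and all names are hypothetical assumptions, not elements of the figures.

// Illustrative sketch of a virtual-to-physical translation with a cache line
// index, loosely following the PTE description above. Field widths, sizes,
// and names are hypothetical.
#include <cstdint>
#include <cstdio>
#include <unordered_map>

constexpr uint64_t kTileSize      = 4096;  // assumed tile footprint
constexpr uint64_t kCacheLineSize = 64;    // assumed cache line size

struct Translation {
    uint64_t physical_addr;    // physical address of the accessed byte
    uint64_t cache_line_index; // index used for hit/miss determination
};

// Page table: virtual tile number -> physical tile base address.
using PageTable = std::unordered_map<uint64_t, uint64_t>;

bool translate_address(const PageTable& ptes, uint64_t virtual_addr, Translation* out) {
    uint64_t vpage  = virtual_addr / kTileSize;
    uint64_t offset = virtual_addr % kTileSize;
    auto it = ptes.find(vpage);
    if (it == ptes.end()) return false;            // miss: would trigger a walk or fault
    out->physical_addr    = it->second + offset;
    out->cache_line_index = out->physical_addr / kCacheLineSize;
    return true;
}

int main() {
    PageTable ptes;
    ptes[3] = 0x40000;                             // virtual tile 3 -> physical base 0x40000
    Translation t;
    if (translate_address(ptes, 3 * kTileSize + 200, &t))
        std::printf("phys=0x%llx line=%llu\n",
                    static_cast<unsigned long long>(t.physical_addr),
                    static_cast<unsigned long long>(t.cache_line_index));
    return 0;
}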
A preROP242(pre-raster operations unit) is configured to receive data from graphics multiprocessor234and direct data to ROP units, which may be located with partition units as described herein (e.g., partition units220A-220N ofFIG.2A). The preROP242unit can perform optimizations for color blending, organize pixel color data, and perform address translations. It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor234, texture units236, preROPs242, etc., may be included within a processing cluster214. Further, while only one processing cluster214is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster214. In one embodiment, each processing cluster214can be configured to operate independently of other processing clusters214using separate and distinct processing units, L1 caches, etc. FIG.2Dshows a graphics multiprocessor234, according to one embodiment. In such embodiment the graphics multiprocessor234couples with the pipeline manager232of the processing cluster214. The graphics multiprocessor234has an execution pipeline including but not limited to an instruction cache252, an instruction unit254, an address mapping unit256, a register file258, one or more general purpose graphics processing unit (GPGPU) cores262, and one or more load/store units266. The GPGPU cores262and load/store units266are coupled with cache memory272and shared memory270via a memory and cache interconnect268. In one embodiment the graphics multiprocessor234additionally includes tensor and/or ray-tracing cores263that include hardware logic to accelerate matrix and/or ray-tracing operations. In one embodiment, the instruction cache252receives a stream of instructions to execute from the pipeline manager232. The instructions are cached in the instruction cache252and dispatched for execution by the instruction unit254. The instruction unit254can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within GPGPU core262. An instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. The address mapping unit256can be used to translate addresses in the unified address space into a distinct memory address that can be accessed by the load/store units266. The register file258provides a set of registers for the functional units of the graphics multiprocessor234. The register file258provides temporary storage for operands connected to the data paths of the functional units (e.g., GPGPU cores262, load/store units266) of the graphics multiprocessor234. In one embodiment, the register file258is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file258. In one embodiment, the register file258is divided between the different warps being executed by the graphics multiprocessor234. The GPGPU cores262can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor234. The GPGPU cores262can be similar in architecture or can differ in architecture, according to embodiments.
For example and in one embodiment, a first portion of the GPGPU cores262include a single precision FPU and an integer ALU while a second portion of the GPGPU cores include a double precision FPU. In one embodiment the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. The graphics multiprocessor234can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. In one embodiment one or more of the GPGPU cores can also include fixed or special function logic. In one embodiment the GPGPU cores262include SIMD logic capable of performing a single instruction on multiple sets of data. In one embodiment GPGPU cores262can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. The SIMD instructions for the GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. Multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example and in one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit. The memory and cache interconnect268is an interconnect network that connects each of the functional units of the graphics multiprocessor234to the register file258and to the shared memory270. In one embodiment, the memory and cache interconnect268is a crossbar interconnect that allows the load/store unit266to implement load and store operations between the shared memory270and the register file258. The register file258can operate at the same frequency as the GPGPU cores262, thus data transfer between the GPGPU cores262and the register file258is very low latency. The shared memory270can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor234. The cache memory272can be used as a data cache, for example, to cache texture data communicated between the functional units and the texture unit236. The shared memory270can also be used as a program managed cache. Threads executing on the GPGPU cores262can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory272. FIG.3A-3Cillustrate additional graphics multiprocessors, according to embodiments.FIG.3A-3Billustrate graphics multiprocessors325,350, which are variants of the graphics multiprocessor234ofFIG.2C.FIG.3Cillustrates a graphics processing unit (GPU)380which includes dedicated sets of graphics processing resources arranged into multi-core groups365A-365N. The illustrated graphics multiprocessors325,350and the multi-core groups365A-365N can be streaming multiprocessors (SMs) capable of simultaneous execution of a large number of execution threads. FIG.3Ashows a graphics multiprocessor325according to an additional embodiment. The graphics multiprocessor325includes multiple additional instances of execution resource units relative to the graphics multiprocessor234ofFIG.2D. For example, the graphics multiprocessor325can include multiple instances of the instruction unit332A-332B, register file334A-334B, and texture unit(s)344A-344B.
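By way of illustration and not limitation, the following CUDA C++ sketch shows shared memory used as a program-managed cache in the sense described above: each thread group explicitly stages a tile of the input in on-chip shared memory before computing from it. The kernel name, tile size, and buffer names are hypothetical.

// Illustrative CUDA sketch of using shared memory as a program-managed cache:
// each thread group stages a tile of the input in shared memory before use.
// Names and sizes are hypothetical.
#include <cstdio>

#define TILE 64

__global__ void staged_scale(const float* in, float* out, int n, float s) {
    __shared__ float tile[TILE];                   // program-managed on-chip storage
    int gid = blockIdx.x * TILE + threadIdx.x;
    if (gid < n) tile[threadIdx.x] = in[gid];      // explicit staging load
    __syncthreads();                               // make the tile visible to all threads
    if (gid < n) out[gid] = tile[threadIdx.x] * s; // compute from shared memory
}

int main() {
    const int n = 256;
    float h_in[n], h_out[n];
    for (int i = 0; i < n; ++i) h_in[i] = static_cast<float>(i);
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);
    staged_scale<<<n / TILE, TILE>>>(d_in, d_out, n, 0.5f);
    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("out[10] = %f\n", h_out[10]);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}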
The graphics multiprocessor325also includes multiple sets of graphics or compute execution units (e.g., GPGPU core336A-336B, tensor core337A-337B, ray-tracing core338A-338B) and multiple sets of load/store units340A-340B. In one embodiment the execution resource units have a common instruction cache330, texture and/or data cache memory342, and shared memory346. The various components can communicate via an interconnect fabric327. In one embodiment the interconnect fabric327includes one or more crossbar switches to enable communication between the various components of the graphics multiprocessor325. In one embodiment the interconnect fabric327is a separate, high-speed network fabric layer upon which each component of the graphics multiprocessor325is stacked. The components of the graphics multiprocessor325communicate with remote components via the interconnect fabric327. For example, the GPGPU cores336A-336B,337A-337B, and338A-338B can each communicate with shared memory346via the interconnect fabric327. The interconnect fabric327can arbitrate communication within the graphics multiprocessor325to ensure a fair bandwidth allocation between components. FIG.3Bshows a graphics multiprocessor350according to an additional embodiment. The graphics processor includes multiple sets of execution resources356A-356D, where each set of execution resources includes multiple instruction units, register files, GPGPU cores, and load store units, as illustrated inFIG.2DandFIG.3A. The execution resources356A-356D can work in concert with texture unit(s)360A-360D for texture operations, while sharing an instruction cache354, and shared memory353. In one embodiment the execution resources356A-356D can share an instruction cache354and shared memory353, as well as multiple instances of a texture and/or data cache memory358A-358B. The various components can communicate via an interconnect fabric352similar to the interconnect fabric327ofFIG.3A. Persons skilled in the art will understand that the architecture described inFIGS.1,2A-2D, and3A-3Bis descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit202ofFIG.2A, as well as one or more graphics processors or special purpose processing units, without departure from the scope of the embodiments described herein. In some embodiments a parallel processor or GPGPU as described herein is communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor.
The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions. FIG.3Cillustrates a graphics processing unit (GPU)380which includes dedicated sets of graphics processing resources arranged into multi-core groups365A-N. While the details of only a single multi-core group365A are provided, it will be appreciated that the other multi-core groups365B-365N may be equipped with the same or similar sets of graphics processing resources. As illustrated, a multi-core group365A may include a set of graphics cores370, a set of tensor cores371, and a set of ray tracing cores372. A scheduler/dispatcher368schedules and dispatches the graphics threads for execution on the various cores370,371,372. A set of register files369stores operand values used by the cores370,371,372when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating point registers for storing floating point values, vector registers for storing packed data elements (integer and/or floating point data elements) and tile registers for storing tensor/matrix values. In one embodiment, the tile registers are implemented as combined sets of vector registers. One or more combined level 1 (L1) caches and shared memory units373store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group365A. One or more texture units374can also be used to perform texturing operations, such as texture mapping and sampling. A Level 2 (L2) cache375shared by all or a subset of the multi-core groups365A-365N stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache375may be shared across a plurality of multi-core groups365A-365N. One or more memory controllers367couple the GPU380to a memory366which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory). Input/output (I/O) circuitry363couples the GPU380to one or more I/O devices362such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices362to the GPU380and memory366. One or more I/O memory management units (IOMMUs)364of the I/O circuitry363couple the I/O devices362directly to the system memory366. In one embodiment, the IOMMU364manages multiple sets of page tables to map virtual addresses to physical addresses in system memory366. In this embodiment, the I/O devices362, CPU(s)361, and GPU(s)380may share the same virtual address space. In one implementation, the IOMMU364supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory366). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated inFIG.3C, each of the cores370,371,372and/or multi-core groups365A-365N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations.
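By way of illustration and not limitation, the following CUDA C++ sketch models the two-level translation described above for a virtualized configuration: a first table maps guest virtual addresses to guest physical addresses, and a second table maps guest physical addresses to host physical addresses. The page size, table representation, and names are hypothetical assumptions.

// Illustrative sketch of a two-stage address translation under virtualization:
// guest virtual -> guest physical via a first table, then guest physical ->
// host physical via a second table. Names and layouts are hypothetical.
#include <cstdint>
#include <cstdio>
#include <unordered_map>

constexpr uint64_t kPageSize = 4096;

struct TwoStageTables {
    std::unordered_map<uint64_t, uint64_t> guest_va_to_guest_pa;  // first set of page tables
    std::unordered_map<uint64_t, uint64_t> guest_pa_to_host_pa;   // second set of page tables
};

// Returns true and writes host_pa on success; false models a translation fault.
bool translate(const TwoStageTables& t, uint64_t guest_va, uint64_t* host_pa) {
    uint64_t page = guest_va / kPageSize, offset = guest_va % kPageSize;
    auto s1 = t.guest_va_to_guest_pa.find(page);
    if (s1 == t.guest_va_to_guest_pa.end()) return false;
    auto s2 = t.guest_pa_to_host_pa.find(s1->second);
    if (s2 == t.guest_pa_to_host_pa.end()) return false;
    *host_pa = s2->second * kPageSize + offset;
    return true;
}

int main() {
    TwoStageTables t;
    t.guest_va_to_guest_pa[0x10] = 0x80;   // guest virtual page 0x10 -> guest physical page 0x80
    t.guest_pa_to_host_pa[0x80]  = 0x400;  // guest physical page 0x80 -> host physical page 0x400
    uint64_t host_pa = 0;
    if (translate(t, 0x10 * kPageSize + 0x2A, &host_pa))
        std::printf("host physical address: 0x%llx\n",
                    static_cast<unsigned long long>(host_pa));
    return 0;
}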
In one embodiment, the CPUs361, GPUs380, and I/O devices362are integrated on a single semiconductor chip and/or chip package. The illustrated memory366may be integrated on the same chip or may be coupled to the memory controllers367via an off-chip interface. In one implementation, the memory366comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles of the invention are not limited to this specific implementation. In one embodiment, the tensor cores371include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operations used to perform deep learning operations. For example, simultaneous matrix multiplication operations may be used for neural network training and inferencing. The tensor cores371may perform matrix processing using a variety of operand precisions including single precision floating-point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes (4 bits). In one embodiment, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image. In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores371. The training of neural networks, in particular, requires a significant number of matrix dot product operations. In order to process an inner-product formulation of an N×N×N matrix multiply, the tensor cores371may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers and at least one column of a second matrix is loaded each cycle for N cycles. Each cycle, there are N dot products that are processed. Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8) and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores371to ensure that the most efficient precision is used for different workloads (e.g., such as inferencing workloads which can tolerate quantization to bytes and half-bytes). In one embodiment, the ray tracing cores372accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores372include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores372may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores372perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores371. For example, in one embodiment, the tensor cores371implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores372. However, the CPU(s)361, graphics cores370, and/or ray tracing cores372may also implement all or a portion of the denoising and/or deep learning algorithms.
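By way of illustration and not limitation, the following CUDA C++ sketch shows the inner-product formulation of the N×N matrix multiply discussed above for the tensor cores371: each output element is an N-element dot product, here computed with half-precision inputs accumulated in single precision. This is a scalar software stand-in for what dedicated matrix hardware performs in parallel; the kernel name and sizes are hypothetical.

// Illustrative CUDA sketch: each thread computes one output element of an
// N x N matrix multiply as an N-element dot product, with FP16 inputs and
// FP32 accumulation. Names are hypothetical.
#include <cuda_fp16.h>
#include <cstdio>

#define N 16

__global__ void matmul_fp16_acc_fp32(const __half* A, const __half* B, float* C) {
    int row = threadIdx.y, col = threadIdx.x;   // one thread per output element
    float acc = 0.0f;
    for (int k = 0; k < N; ++k) {               // N-element dot product
        acc += __half2float(A[row * N + k]) * __half2float(B[k * N + col]);
    }
    C[row * N + col] = acc;                     // single-precision accumulation result
}

int main() {
    __half hA[N * N], hB[N * N];
    float hC[N * N];
    for (int i = 0; i < N * N; ++i) { hA[i] = __float2half(1.0f); hB[i] = __float2half(2.0f); }
    __half *dA, *dB; float* dC;
    cudaMalloc(&dA, sizeof(hA)); cudaMalloc(&dB, sizeof(hB)); cudaMalloc(&dC, sizeof(hC));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);
    dim3 block(N, N);
    matmul_fp16_acc_fp32<<<1, block>>>(dA, dB, dC);
    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    std::printf("C[0][0] = %f (expected %f)\n", hC[0], 2.0f * N);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}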
In addition, as described above, a distributed approach to denoising may be employed in which the GPU380is in a computing device coupled to other computing devices over a network or high speed interconnect. In this embodiment, the interconnected computing devices share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications. In one embodiment, the ray tracing cores372process all BVH traversal and ray-primitive intersections, saving the graphics cores370from being overloaded with thousands of instructions per ray. In one embodiment, each ray tracing core372includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and a second set of specialized circuitry for performing the ray-triangle intersection tests (e.g., intersecting rays which have been traversed). Thus, in one embodiment, the multi-core group365A can simply launch a ray probe, and the ray tracing cores372independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores370,371are freed to perform other graphics or compute work while the ray tracing cores372perform the traversal and intersection operations. In one embodiment, each ray tracing core372includes a traversal unit to perform BVH testing operations and an intersection unit which performs ray-primitive intersection tests. The intersection unit generates a “hit”, “no hit”, or “multiple hit” response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores370and tensor cores371) are freed to perform other forms of graphics work. In one particular embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores370and ray tracing cores372. In one embodiment, the ray tracing cores372(and/or other cores370,371) include hardware support for a ray tracing instruction set such as Microsoft's DirectX Ray Tracing (DXR) which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform which may be supported by the ray tracing cores372, graphics cores370and tensor cores371is Vulkan 1.1.85. Note, however, that the underlying principles of the invention are not limited to any particular ray tracing ISA. In general, the various cores372,371,370may support a ray tracing instruction set that includes instructions/functions for ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions. More specifically, one embodiment includes ray tracing instructions to perform the following functions: Ray Generation—Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment. Closest Hit—A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene. Any Hit—An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point. 
Intersection—An intersection instruction performs a ray-primitive intersection test and outputs a result. Per-primitive Bounding box Construction—This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure). Miss—Indicates that a ray misses all geometry within a scene, or specified region of a scene. Visit—Indicates the children volumes a ray will traverse. Exceptions—Includes various types of exception handlers (e.g., invoked for various error conditions). Techniques for GPU to Host Processor Interconnection FIG.4Aillustrates an exemplary architecture in which a plurality of GPUs410-413are communicatively coupled to a plurality of multi-core processors405-406over high-speed links440A-440D (e.g., buses, point-to-point interconnects, etc.). In one embodiment, the high-speed links440A-440D support a communication throughput of 4 GB/s, 30 GB/s, 80 GB/s or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles of the invention are not limited to any particular communication protocol or throughput. In addition, in one embodiment, two or more of the GPUs410-413are interconnected over high-speed links442A-442B, which may be implemented using the same or different protocols/links than those used for high-speed links440A-440D. Similarly, two or more of the multi-core processors405-406may be connected over high speed link443which may be symmetric multi-processor (SMP) buses operating at 20 GB/s, 30 GB/s, 120 GB/s or higher. Alternatively, all communication between the various system components shown inFIG.4Amay be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles of the invention are not limited to any particular type of interconnect technology. In one embodiment, each multi-core processor405-406is communicatively coupled to a processor memory401-402, via memory interconnects430A-430B, respectively, and each GPU410-413is communicatively coupled to GPU memory420-423over GPU memory interconnects450A-450D, respectively. The memory interconnects430A-430B and450A-450D may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories401-402and GPU memories420-423may be volatile memories such as dynamic random access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In one embodiment, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy). As described below, although the various processors405-406and GPUs410-413may be physically coupled to a particular memory401-402,420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the “effective address” space) is distributed among all of the various physical memories. For example, processor memories401-402may each comprise 64 GB of the system memory address space and GPU memories420-423may each comprise 32 GB of the system memory address space (resulting in a total of 256 GB addressable memory in this example). 
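By way of illustration and not limitation, the following CUDA C++ sketch lays out the unified address-space example given above: two 64 GB processor memories followed by four 32 GB GPU memories carved out of a single 256 GB effective address space, with a helper that returns the physical memory backing a given effective address. The region table and helper name are hypothetical.

// Illustrative sketch of the example address-space layout above: 2 x 64 GB
// processor memories plus 4 x 32 GB GPU memories in one 256 GB effective
// address space. Names are hypothetical.
#include <cstdint>
#include <cstdio>

constexpr uint64_t GiB = 1ull << 30;

struct Region { const char* owner; uint64_t base; uint64_t size; };

const Region kRegions[] = {
    {"processor memory 401", 0 * GiB,   64 * GiB},
    {"processor memory 402", 64 * GiB,  64 * GiB},
    {"GPU memory 420",       128 * GiB, 32 * GiB},
    {"GPU memory 421",       160 * GiB, 32 * GiB},
    {"GPU memory 422",       192 * GiB, 32 * GiB},
    {"GPU memory 423",       224 * GiB, 32 * GiB},
};

// Returns the physical memory that backs a given effective (virtual) address.
const Region* find_region(uint64_t effective_addr) {
    for (const Region& r : kRegions)
        if (effective_addr >= r.base && effective_addr < r.base + r.size) return &r;
    return nullptr;  // outside the 256 GB example space
}

int main() {
    uint64_t addr = 130 * GiB;  // falls in the first GPU memory in this layout
    const Region* r = find_region(addr);
    if (r) std::printf("0x%llx -> %s\n", static_cast<unsigned long long>(addr), r->owner);
    return 0;
}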
FIG.4Billustrates additional details for an interconnection between a multi-core processor407and a graphics acceleration module446in accordance with one embodiment. The graphics acceleration module446may include one or more GPU chips integrated on a line card which is coupled to the processor407via the high-speed link440. Alternatively, the graphics acceleration module446may be integrated on the same package or chip as the processor407. The illustrated processor407includes a plurality of cores460A-460D, each with a translation lookaside buffer461A-461D and one or more caches462A-462D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the invention (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches462A-462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches456may be included in the caching hierarchy and shared by sets of the cores460A-460D. For example, one embodiment of the processor407includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one of the L2 and L3 caches is shared by two adjacent cores. The processor407and the graphics accelerator integration module446connect with system memory441, which may include processor memories401-402. Coherency is maintained for data and instructions stored in the various caches462A-462D,456and system memory441via inter-core communication over a coherence bus464. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate over the coherence bus464in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus464to snoop cache accesses. Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles of the invention. In one embodiment, a proxy circuit425communicatively couples the graphics acceleration module446to the coherence bus464, allowing the graphics acceleration module446to participate in the cache coherence protocol as a peer of the cores. In particular, an interface435provides connectivity to the proxy circuit425over high-speed link440(e.g., a PCIe bus, NVLink, etc.) and an interface437connects the graphics acceleration module446to the high-speed link440. In one implementation, an accelerator integration circuit436provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines431,432, N of the graphics acceleration module446. The graphics processing engines431,432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines431,432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines. In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines431-432, N or the graphics processing engines431-432, N may be individual GPUs integrated on a common package, line card, or chip.
In one embodiment, the accelerator integration circuit436includes a memory management unit (MMU)439for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory441. The MMU439may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one implementation, a cache438stores commands and data for efficient access by the graphics processing engines431-432, N. In one embodiment, the data stored in cache438and graphics memories433-434, M is kept coherent with the core caches462A-462D,456and system memory441. As mentioned, this may be accomplished via proxy circuit425which takes part in the cache coherency mechanism on behalf of cache438and memories433-434, M (e.g., sending updates to the cache438related to modifications/accesses of cache lines on processor caches462A-462D,456and receiving updates from the cache438). A set of registers445stores context data for threads executed by the graphics processing engines431-432, N and a context management circuit448manages the thread contexts. For example, the context management circuit448may perform save and restore operations to save and restore contexts of the various threads during context switches (e.g., where a first thread is saved, and a second thread is stored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, the context management circuit448may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore the register values when returning to the context. In one embodiment, an interrupt management circuit447receives and processes interrupts received from system devices. In one implementation, virtual/effective addresses from a graphics processing engine431are translated to real/physical addresses in system memory441by the MMU439. One embodiment of the accelerator integration circuit436supports multiple (e.g., 4, 8, 16) graphics accelerator modules446and/or other accelerator devices. The graphics accelerator module446may be dedicated to a single application executed on the processor407or may be shared between multiple applications. In one embodiment, a virtualized graphics execution environment is presented in which the resources of the graphics processing engines431-432, N are shared with multiple applications or virtual machines (VMs). The resources may be subdivided into “slices” which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications. Thus, the accelerator integration circuit acts as a bridge to the system for the graphics acceleration module446and provides address translation and system memory cache services. In addition, the accelerator integration circuit436may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management. Because hardware resources of the graphics processing engines431-432, N are mapped explicitly to the real address space seen by the host processor407, any host processor can address these resources directly using an effective address value.
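By way of illustration and not limitation, the following CUDA C++ sketch models the context save/restore behavior described above for the context management circuit448: on a context switch, the current register values are copied to a memory region identified by a context pointer and restored when returning to that context. The structure layout, register count, and function names are hypothetical assumptions.

// Illustrative sketch of context save/restore on a context switch.
// kNumContextRegs, ThreadContext, and the helper names are hypothetical.
#include <cstdint>
#include <cstring>
#include <cstdio>

constexpr int kNumContextRegs = 16;

struct ThreadContext {
    uint64_t regs[kNumContextRegs];  // saved register values for one engine thread
};

// Save live register values into the region addressed by context_ptr.
void save_context(const uint64_t* live_regs, ThreadContext* context_ptr) {
    std::memcpy(context_ptr->regs, live_regs, sizeof(context_ptr->regs));
}

// Restore register values when switching back to the saved context.
void restore_context(uint64_t* live_regs, const ThreadContext* context_ptr) {
    std::memcpy(live_regs, context_ptr->regs, sizeof(context_ptr->regs));
}

int main() {
    uint64_t live_regs[kNumContextRegs] = {1, 2, 3};
    ThreadContext saved;                 // designated save area in memory
    save_context(live_regs, &saved);     // first thread is saved
    live_regs[0] = 99;                   // a second thread runs and clobbers state
    restore_context(live_regs, &saved);  // return to the first context
    std::printf("reg0 after restore = %llu\n",
                static_cast<unsigned long long>(live_regs[0]));
    return 0;
}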
One function of the accelerator integration circuit436, in one embodiment, is the physical separation of the graphics processing engines431-432, N so that they appear to the system as independent units. As mentioned, in the illustrated embodiment, one or more graphics memories433-434, M are coupled to each of the graphics processing engines431-432, N, respectively. The graphics memories433-434, M store instructions and data being processed by each of the graphics processing engines431-432, N. The graphics memories433-434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint or Nano-Ram. In one embodiment, to reduce data traffic over the high-speed link440, biasing techniques are used to ensure that the data stored in graphics memories433-434, M is data which will be used most frequently by the graphics processing engines431-432, N and preferably not used by the cores460A-460D (at least not frequently). Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not the graphics processing engines431-432, N) within the caches462A-462D,456of the cores and system memory441. FIG.4Cillustrates another embodiment in which the accelerator integration circuit436is integrated within the processor407. In this embodiment, the graphics processing engines431-432, N communicate directly over the high-speed link440to the accelerator integration circuit436via interface437and interface435(which, again, may utilize any form of bus or interface protocol). The accelerator integration circuit436may perform the same operations as those described with respect toFIG.4B, but potentially at a higher throughput given its close proximity to the coherency bus464and caches462A-462D,456. One embodiment supports different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The latter may include programming models which are controlled by the accelerator integration circuit436and programming models which are controlled by the graphics acceleration module446. In one embodiment of the dedicated process model, graphics processing engines431-432, N are dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines431-432, N, providing virtualization within a VM/partition. In the shared programming models, the graphics processing engines431-432, N, may be shared by multiple VM/application partitions. The shared models require a system hypervisor to virtualize the graphics processing engines431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines431-432, N to provide access to each process or application. For the shared programming model, the graphics acceleration module446or an individual graphics processing engine431-432, N selects a process element using a process handle. In one embodiment, process elements are stored in system memory441and are addressable using the effective address to real address translation techniques described herein.
The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine431-432, N (that is, calling system software to add the process element to the process element linked list). The lower 16-bits of the process handle may be the offset of the process element within the process element linked list. FIG.4Dillustrates an exemplary accelerator integration slice490. As used herein, a “slice” comprises a specified portion of the processing resources of the accelerator integration circuit436. Application effective address space482within system memory441stores process elements483. In one embodiment, the process elements483are stored in response to GPU invocations481from applications480executed on the processor407. A process element483contains the process state for the corresponding application480. A work descriptor (WD)484contained in the process element483can be a single job requested by an application or may contain a pointer to a queue of jobs. In the latter case, the WD484is a pointer to the job request queue in the application's address space482. The graphics acceleration module446and/or the individual graphics processing engines431-432, N can be shared by all or a subset of the processes in the system. Embodiments of the invention include an infrastructure for setting up the process state and sending a WD484to a graphics acceleration module446to start a job in a virtualized environment. In one implementation, the dedicated-process programming model is implementation-specific. In this model, a single process owns the graphics acceleration module446or an individual graphics processing engine431. Because the graphics acceleration module446is owned by a single process, the hypervisor initializes the accelerator integration circuit436for the owning partition and the operating system initializes the accelerator integration circuit436for the owning process at the time when the graphics acceleration module446is assigned. In operation, a WD fetch unit491in the accelerator integration slice490fetches the next WD484which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module446. Data from the WD484may be stored in registers445and used by the MMU439, interrupt management circuit447and/or context management circuit448as illustrated. For example, one embodiment of the MMU439includes segment/page walk circuitry for accessing segment/page tables486within the OS virtual address space485. The interrupt management circuit447may process interrupt events492received from the graphics acceleration module446. When performing graphics operations, an effective address493generated by a graphics processing engine431-432, N is translated to a real address by the MMU439. In one embodiment, the same set of registers445are duplicated for each graphics processing engine431-432, N and/or graphics acceleration module446and may be initialized by the hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice490. Exemplary registers that may be initialized by the hypervisor are shown in Table 1. 
TABLE 1
Hypervisor Initialized Registers
1 Slice Control Register
2 Real Address (RA) Scheduled Processes Area Pointer
3 Authority Mask Override Register
4 Interrupt Vector Table Entry Offset
5 Interrupt Vector Table Entry Limit
6 State Register
7 Logical Partition ID
8 Real address (RA) Hypervisor Accelerator Utilization Record Pointer
9 Storage Description Register
Exemplary registers that may be initialized by the operating system are shown in Table 2.
TABLE 2
Operating System Initialized Registers
1 Process and Thread Identification
2 Effective Address (EA) Context Save/Restore Pointer
3 Virtual Address (VA) Accelerator Utilization Record Pointer
4 Virtual Address (VA) Storage Segment Table Pointer
5 Authority Mask
6 Work descriptor
In one embodiment, each WD484is specific to a particular graphics acceleration module446and/or graphics processing engine431-432, N. It contains all the information a graphics processing engine431-432, N requires to do its work or it can be a pointer to a memory location where the application has set up a command queue of work to be completed. FIG.4Eillustrates additional details for one embodiment of a shared model. This embodiment includes a hypervisor real address space498in which a process element list499is stored. The hypervisor real address space498is accessible via a hypervisor496which virtualizes the graphics acceleration module engines for the operating system495. The shared programming models allow for all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module446. There are two programming models where the graphics acceleration module446is shared by multiple processes and partitions: time-sliced shared and graphics directed shared. In this model, the system hypervisor496owns the graphics acceleration module446and makes its function available to all operating systems495. For a graphics acceleration module446to support virtualization by the system hypervisor496, the graphics acceleration module446may adhere to the following requirements:
1) An application's job request must be autonomous (that is, the state does not need to be maintained between jobs), or the graphics acceleration module446must provide a context save and restore mechanism.
2) An application's job request is guaranteed by the graphics acceleration module446to complete in a specified amount of time, including any translation faults, or the graphics acceleration module446provides the ability to preempt the processing of the job.
3) The graphics acceleration module446must be guaranteed fairness between processes when operating in the directed shared programming model.
In one embodiment, for the shared model, the application480is required to make an operating system495system call with a graphics acceleration module446type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module446type describes the targeted acceleration function for the system call. The graphics acceleration module446type may be a system-specific value. The WD is formatted specifically for the graphics acceleration module446and can be in the form of a graphics acceleration module446command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe the work to be done by the graphics acceleration module446. In one embodiment, the AMR value is the AMR state to use for the current process.
The value passed to the operating system is similar to an application setting the AMR. If the accelerator integration circuit436and graphics acceleration module446implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor496may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element483. In one embodiment, the CSRP is one of the registers445containing the effective address of an area in the application's address space482for the graphics acceleration module446to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory. Upon receiving the system call, the operating system495may verify that the application480has registered and been given the authority to use the graphics acceleration module446. The operating system495then calls the hypervisor496with the information shown in Table 3.
TABLE 3
OS Hypervisor Call Parameters
1 A work descriptor (WD)
2 An Authority Mask Register (AMR) value (potentially masked).
3 An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4 A process ID (PID) and optional thread ID (TID)
5 A virtual address (VA) accelerator utilization record pointer (AURP)
6 The virtual address of the storage segment table pointer (SSTP)
7 A logical interrupt service number (LISN)
Upon receiving the hypervisor call, the hypervisor496verifies that the operating system495has registered and been given the authority to use the graphics acceleration module446. The hypervisor496then puts the process element483into the process element linked list for the corresponding graphics acceleration module446type. The process element may include the information shown in Table 4.
TABLE 4
Process Element Information
1 A work descriptor (WD)
2 An Authority Mask Register (AMR) value (potentially masked).
3 An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4 A process ID (PID) and optional thread ID (TID)
5 A virtual address (VA) accelerator utilization record pointer (AURP)
6 The virtual address of the storage segment table pointer (SSTP)
7 A logical interrupt service number (LISN)
8 Interrupt vector table, derived from the hypervisor call parameters.
9 A state register (SR) value
10 A logical partition ID (LPID)
11 A real address (RA) hypervisor accelerator utilization record pointer
12 The Storage Descriptor Register (SDR)
In one embodiment, the hypervisor initializes a plurality of accelerator integration slice490registers445. As illustrated inFIG.4F, one embodiment of the invention employs a unified memory addressable via a common virtual memory address space used to access the physical processor memories401-402and GPU memories420-423. In this implementation, operations executed on the GPUs410-413utilize the same virtual/effective memory address space to access the processor memories401-402and vice versa, thereby simplifying programmability. In one embodiment, a first portion of the virtual/effective address space is allocated to the processor memory401, a second portion to the second processor memory402, a third portion to the GPU memory420, and so on.
The entire virtual/effective memory space (sometimes referred to as the effective address space) is thereby distributed across each of the processor memories401-402and GPU memories420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory. In one embodiment, bias/coherence management circuitry494A-494E within one or more of the MMUs439A-439E ensures cache coherence between the caches of the host processors (e.g.,405) and the GPUs410-413and implements biasing techniques indicating the physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry494A-494E are illustrated inFIG.4F, the bias/coherence circuitry may be implemented within the MMU of one or more host processors405and/or within the accelerator integration circuit436. One embodiment allows GPU-attached memory420-423to be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence. The ability for GPU-attached memory420-423to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor405software to set up operands and access computation results, without the overhead of traditional I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. At the same time, the ability to access GPU attached memory420-423without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by a GPU410-413. The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload. In one implementation, the selection between GPU bias and host processor bias is driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories420-423, with or without a bias cache in the GPU410-413(e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU. In one implementation, the bias table entry associated with each access to the GPU-attached memory420-423is accessed prior to the actual access to the GPU memory, causing the following operations. First, local requests from the GPU410-413that find their page in GPU bias are forwarded directly to a corresponding GPU memory420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor405(e.g., over a high-speed link as discussed above). In one embodiment, requests from the processor405that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU410-413. The GPU may then transition the page to a host processor bias if it is not currently using the page.
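By way of illustration and not limitation, the following CUDA C++ sketch models the page-granular bias table consulted before an access, as described above: a request whose page is in GPU bias is routed to local GPU memory, while a host-biased page is forwarded toward the host path. The one-bit-per-page encoding, table layout, and function names are hypothetical assumptions.

// Illustrative sketch of a page-granular bias table lookup that routes a GPU
// request either to local GPU memory or to the host path. Names are hypothetical.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr uint64_t kPageSize = 4096;

enum class Bias : uint8_t { Host = 0, Gpu = 1 };

struct BiasTable {
    std::vector<uint8_t> bits;  // one entry per GPU-attached memory page
    Bias get(uint64_t page) const { return static_cast<Bias>(bits[page]); }
    void set(uint64_t page, Bias b) { bits[page] = static_cast<uint8_t>(b); }
};

// Consult the bias table before the actual access and pick the routing target.
const char* route_gpu_request(const BiasTable& table, uint64_t addr) {
    uint64_t page = addr / kPageSize;
    return table.get(page) == Bias::Gpu ? "local GPU memory"
                                        : "forward to host processor";
}

int main() {
    BiasTable table{std::vector<uint8_t>(1024, 0)};  // all pages start host-biased
    table.set(5, Bias::Gpu);                         // page 5 transitioned to GPU bias
    std::printf("page 5: %s\n", route_gpu_request(table, 5 * kPageSize + 64));
    std::printf("page 6: %s\n", route_gpu_request(table, 6 * kPageSize));
    return 0;
}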
The bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism. One mechanism for changing the bias state employs an API call (e.g. OpenCL), which, in turn, calls the GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor405bias to GPU bias, but is not required for the opposite transition. In one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by the host processor405. To access these pages, the processor405may request access from the GPU410which may or may not grant access right away, depending on the implementation. Thus, to reduce communication between the host processor405and GPU410it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor405and vice versa. Graphics Processing Pipeline FIG.5illustrates a graphics processing pipeline500, according to an embodiment. In one embodiment a graphics processor can implement the illustrated graphics processing pipeline500. The graphics processor can be included within the parallel processing subsystems as described herein, such as the parallel processor200ofFIG.2A, which, in one embodiment, is a variant of the parallel processor(s)112ofFIG.1. The various parallel processing systems can implement the graphics processing pipeline500via one or more instances of the parallel processing unit (e.g., parallel processing unit202ofFIG.2A) as described herein. For example, a shader unit (e.g., graphics multiprocessor234ofFIG.2C) may be configured to perform the functions of one or more of a vertex processing unit504, a tessellation control processing unit508, a tessellation evaluation processing unit512, a geometry processing unit516, and a fragment/pixel processing unit524. The functions of data assembler502, primitive assemblers506,514,518, tessellation unit510, rasterizer522, and raster operations unit526may also be performed by other processing engines within a processing cluster (e.g., processing cluster214ofFIG.2A) and a corresponding partition unit (e.g., partition unit220A-220N ofFIG.2A). The graphics processing pipeline500may also be implemented using dedicated processing units for one or more functions. In one embodiment, one or more portions of the graphics processing pipeline500can be performed by parallel processing logic within a general purpose processor (e.g., CPU). In one embodiment, one or more portions of the graphics processing pipeline500can access on-chip memory (e.g., parallel processor memory222as inFIG.2A) via a memory interface528, which may be an instance of the memory interface218ofFIG.2A. In one embodiment the data assembler502is a processing unit that collects vertex data for surfaces and primitives. The data assembler502then outputs the vertex data, including the vertex attributes, to the vertex processing unit504. The vertex processing unit504is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. 
The vertex processing unit504reads data that is stored in cache, local or system memory for use in processing the vertex data and may be programmed to transform the vertex data from an object-based coordinate representation to a world space coordinate space or a normalized device coordinate space. A first instance of a primitive assembler506receives vertex attributes from the vertex processing unit504. The primitive assembler506reads stored vertex attributes as needed and constructs graphics primitives for processing by tessellation control processing unit508. The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs). The tessellation control processing unit508treats the input vertices as control points for a geometric patch. The control points are transformed from an input representation of the patch (e.g., the patch's bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit512. The tessellation control processing unit508can also compute tessellation factors for edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. A tessellation unit510is configured to receive the tessellation factors for edges of a patch and to tessellate the patch into multiple geometric primitives such as line, triangle, or quadrilateral primitives, which are transmitted to a tessellation evaluation processing unit512. The tessellation evaluation processing unit512operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives. A second instance of a primitive assembler514receives vertex attributes from the tessellation evaluation processing unit512, reading stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit516. The geometry processing unit516is a programmable execution unit that executes geometry shader programs to transform graphics primitives received from primitive assembler514as specified by the geometry shader programs. In one embodiment the geometry processing unit516is programmed to subdivide the graphics primitives into one or more new graphics primitives and calculate parameters used to rasterize the new graphics primitives. In some embodiments the geometry processing unit516can add or delete elements in the geometry stream. The geometry processing unit516outputs the parameters and vertices specifying new graphics primitives to primitive assembler518. The primitive assembler518receives the parameters and vertices from the geometry processing unit516and constructs graphics primitives for processing by a viewport scale, cull, and clip unit520. The geometry processing unit516reads data that is stored in parallel processor memory or system memory for use in processing the geometry data. The viewport scale, cull, and clip unit520performs clipping, culling, and viewport scaling and outputs processed graphics primitives to a rasterizer522. The rasterizer522can perform depth culling and other depth-based optimizations. The rasterizer522also performs scan conversion on the new graphics primitives to generate fragments and output those fragments and associated coverage data to the fragment/pixel processing unit524.
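Before turning to the fragment-processing stage, the flow through the stages described so far can be summarized by the following simplified Python sketch; the function names are hypothetical stand-ins for the hardware units, and each stage is reduced to a trivial transformation, so the listing illustrates only the ordering of the pipeline, not its actual behavior:

# Purely illustrative sketch of the pipeline ordering (data assembly through
# rasterization). Function names are hypothetical; no real unit behaves this simply.
def assemble_data(vertices):            # data assembler 502: collect vertex data
    return list(vertices)

def vertex_shade(vertices):             # vertex processing unit 504: object -> world/NDC space
    return [(x * 0.5, y * 0.5, z) for (x, y, z) in vertices]

def assemble_primitives(vertices):      # primitive assemblers 506/514/518: group into triangles
    return [tuple(vertices[i:i + 3]) for i in range(0, len(vertices) - 2, 3)]

def tessellate(primitives, level=1):    # tessellation units 508/510/512 (level = tessellation factor)
    return primitives * max(1, level)

def geometry_shade(primitives):         # geometry processing unit 516: may add/remove primitives
    return primitives

def rasterize(primitives):              # rasterizer 522: scan-convert primitives into fragments
    return [{"prim": p, "coverage": 1.0} for p in primitives]

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
fragments = rasterize(geometry_shade(tessellate(assemble_primitives(vertex_shade(assemble_data(verts))))))
print(len(fragments), "fragment(s) produced for fragment/pixel processing")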
The fragment/pixel processing unit524is a programmable execution unit that is configured to execute fragment shader programs or pixel shader programs. The fragment/pixel processing unit524transforms fragments or pixels received from rasterizer522, as specified by the fragment or pixel shader programs. For example, the fragment/pixel processing unit524may be programmed to perform operations including but not limited to texture mapping, shading, blending, texture correction and perspective correction to produce shaded fragments or pixels that are output to a raster operations unit526. The fragment/pixel processing unit524can read data that is stored in either the parallel processor memory or the system memory for use when processing the fragment data. Fragment or pixel shader programs may be configured to shade at sample, pixel, tile, or other granularities depending on the sampling rate configured for the processing units. The raster operations unit526is a processing unit that performs raster operations including, but not limited to stencil, z-test, blending, and the like, and outputs pixel data as processed graphics data to be stored in graphics memory (e.g., parallel processor memory222as inFIG.2A, and/or system memory104as inFIG.1), to be displayed on the one or more display device(s)110or for further processing by one of the one or more processor(s)102or parallel processor(s)112. In some embodiments the raster operations unit526is configured to compress z or color data that is written to memory and decompress z or color data that is read from memory. Machine Learning Overview The architecture described above can be applied to perform training and inference operations using machine learning models. Machine learning has been successful at solving many kinds of tasks. The computations that arise when training and using machine learning algorithms (e.g., neural networks) lend themselves naturally to efficient parallel implementations. Accordingly, parallel processors such as general-purpose graphic processing units (GPGPUs) have played a significant role in the practical implementation of deep neural networks. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. The efficiency provided by parallel machine learning algorithm implementations allows the use of high capacity networks and enables those networks to be trained on larger datasets. A machine learning algorithm is an algorithm that can learn based on a set of data. Embodiments of machine learning algorithms can be designed to model high-level abstractions within a data set. For example, image recognition algorithms can be used to determine to which of several categories a given input belongs; regression algorithms can output a numerical value given an input; and pattern recognition algorithms can be used to generate translated text or perform text to speech and/or speech recognition. An exemplary type of machine learning algorithm is a neural network. There are many types of neural networks; a simple type of neural network is a feedforward network. A feedforward network may be implemented as an acyclic graph in which the nodes are arranged in layers.
Typically, a feedforward network topology includes an input layer and an output layer that are separated by at least one hidden layer. The hidden layer transforms input received by the input layer into a representation that is useful for generating output in the output layer. The network nodes are fully connected via edges to the nodes in adjacent layers, but there are no edges between nodes within each layer. Data received at the nodes of an input layer of a feedforward network are propagated (i.e., “fed forward”) to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients (“weights”) respectively associated with each of the edges connecting the layers. Depending on the specific model being represented by the algorithm being executed, the output from the neural network algorithm can take various forms. Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output produced by the network in response to the input representing an instance in a training data set is compared to the “correct” labeled output for that instance, an error signal representing the difference between the output and the labeled output is calculated, and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered “trained” when the errors for each of the outputs generated from the instances of the training data set are minimized. The accuracy of a machine learning algorithm can be affected significantly by the quality of the data set used to train the algorithm. The training process can be computationally intensive and may require a significant amount of time on a conventional general-purpose processor. Accordingly, parallel processing hardware is used to train many types of machine learning algorithms. This is particularly useful for optimizing the training of neural networks, as the computations performed in adjusting the coefficients in neural networks lend themselves naturally to parallel implementations. Specifically, many machine learning algorithms and software applications have been adapted to make use of the parallel processing hardware within general-purpose graphics processing devices. FIG.6is a generalized diagram of a machine learning software stack600. A machine learning application602can be configured to train a neural network using a training dataset or to use a trained deep neural network to implement machine intelligence. The machine learning application602can include training and inference functionality for a neural network and/or specialized software that can be used to train a neural network before deployment. The machine learning application602can implement any type of machine intelligence including but not limited to image recognition, mapping and localization, autonomous navigation, speech synthesis, medical imaging, or language translation. 
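As a concrete illustration of the supervised training procedure described above (forward propagation, calculation of an error signal against the labeled output, and gradient-based weight adjustment), the following minimal Python sketch trains a single linear unit on a toy data set; the data, learning rate, and single-weight-layer structure are assumptions made purely for brevity and do not correspond to any framework or embodiment described herein:

# Minimal sketch of a supervised training loop: forward pass, error signal,
# and a stochastic-gradient-descent weight update. Illustrative values only.
training_data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]   # (input vector, labeled output)
weights = [0.1, 0.1]
learning_rate = 0.5

def forward(x):
    return sum(w * xi for w, xi in zip(weights, x))        # "fed forward" activation

for epoch in range(100):
    for x, label in training_data:
        output = forward(x)
        error = output - label                              # error signal vs. labeled output
        for i, xi in enumerate(x):                          # backward-propagated adjustment
            weights[i] -= learning_rate * error * xi        # stochastic gradient descent step

print([round(forward(x), 3) for x, _ in training_data])    # approaches [1.0, 0.0]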
Hardware acceleration for the machine learning application602can be enabled via a machine learning framework604. The machine learning framework604can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework604, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework604. Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN). The machine learning framework604can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations. The machine learning framework604can process input data received from the machine learning application602and generate the appropriate input to a compute framework606. The compute framework606can abstract the underlying instructions provided to the GPGPU driver608to enable the machine learning framework604to take advantage of hardware acceleration via the GPGPU hardware610without requiring the machine learning framework604to have intimate knowledge of the architecture of the GPGPU hardware610. Additionally, the compute framework606can enable hardware acceleration for the machine learning framework604across a variety of types and generations of the GPGPU hardware610. GPGPU Machine Learning Acceleration FIG.7illustrates a general-purpose graphics processing unit700, according to an embodiment. In one embodiment, the general-purpose graphics processing unit (GPGPU)700can be configured to be particularly efficient in processing the type of computational workloads associated with training deep neural networks. Additionally, the GPGPU700can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks. The GPGPU700includes a host interface702to enable a connection with a host processor. In one embodiment the host interface702is a PCI Express interface. However, the host interface can also be a vendor specific communications interface or communications fabric. The GPGPU700receives commands from the host processor and uses a global scheduler704to distribute execution threads associated with those commands to a set of compute clusters706A-706H. The compute clusters706A-706H share a cache memory708. The cache memory708can serve as a higher-level cache for cache memories within the compute clusters706A-706H. The GPGPU700includes memory714A-B coupled with the compute clusters706A-H via a set of memory controllers712A-712B. In various embodiments, the memory714A-714B can include various types of memory devices including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In one embodiment, the memory714A-714B may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM).
In one embodiment, each of the compute clusters706A-706H includes a set of graphics multiprocessors, such as the graphics multiprocessor400ofFIG.4A. The graphics multiprocessors of the compute cluster include multiple types of integer and floating-point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, and in one embodiment, at least a subset of the floating-point units in each of the compute clusters706A-H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating-point units can be configured to perform 64-bit floating point operations. Multiple instances of the GPGPU700can be configured to operate as a compute cluster. The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. In one embodiment, the multiple instances of the GPGPU700communicate over the host interface702. In one embodiment the GPGPU700includes an I/O hub709that couples the GPGPU700with a GPU link710that enables a direct connection to other instances of the GPGPU. In one embodiment the GPU link710is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU700. In one embodiment the GPU link710couples with a high-speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In one embodiment the multiple instances of the GPGPU700are located in separate data processing systems and communicate via a network device that is accessible via the host interface702. In one embodiment the GPU link710can be configured to enable a connection to a host processor in addition to or as an alternative to the host interface702. While the illustrated configuration of the GPGPU700can be configured to train neural networks, one embodiment provides an alternate configuration of the GPGPU700that can be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration, the GPGPU700includes fewer of the compute clusters706A-706H relative to the training configuration. Additionally, memory technology associated with the memory714A-714B may differ between inferencing and training configurations. In one embodiment, the inferencing configuration of the GPGPU700can support inferencing specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks. FIG.8illustrates a multi-GPU computing system800, according to an embodiment. The multi-GPU computing system800can include a processor802coupled to multiple GPGPUs806A-806D via a host interface switch804. The host interface switch804, in one embodiment, is a PCI express switch device that couples the processor802to a PCI express bus over which the processor802can communicate with the set of GPGPUs806A-806D. Each of the multiple GPGPUs806A-806D can be an instance of the GPGPU700ofFIG.7. The GPGPUs806A-806D can interconnect via a set of high-speed point to point GPU to GPU links816. The high-speed GPU to GPU links can connect to each of the GPGPUs806A-806D via a dedicated GPU link, such as the GPU link710as inFIG.7. The P2P GPU links816enable direct communication between each of the GPGPUs806A-806D without requiring communication over the host interface bus to which the processor802is connected.
With GPU-to-GPU traffic directed to the P2P GPU links, the host interface bus remains available for system memory access or to communicate with other instances of the multi-GPU computing system800, for example, via one or more network devices. While in the illustrated embodiment the GPGPUs806A-D connect to the processor802via the host interface switch804, in one embodiment the processor802includes direct support for the P2P GPU links816and can connect directly to the GPGPUs806A-806D. Machine Learning Neural Network Implementations The computing architecture provided by embodiments described herein can be configured to perform the types of parallel processing that are particularly suited for training and deploying neural networks for machine learning. A neural network can be generalized as a network of functions having a graph relationship. As is well-known in the art, there are a variety of types of neural network implementations used in machine learning. One exemplary type of neural network is the feedforward network, as previously described. A second exemplary type of neural network is the Convolutional Neural Network (CNN). A CNN is a specialized feedforward neural network for processing data having a known, grid-like topology, such as image data. Accordingly, CNNs are commonly used for computer vision and image recognition applications, but they also may be used for other types of pattern recognition such as speech and language processing. The nodes in the CNN input layer are organized into a set of “filters” (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed by two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first function to the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various color components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network. Recurrent neural networks (RNNs) are a family of feedforward neural networks that include feedback connections between layers. RNNs enable modeling of sequential data by sharing parameter data across different parts of the neural network. The architecture for an RNN includes cycles. The cycles represent the influence of a present value of a variable on its own value at a future time, as at least a portion of the output data from the RNN is used as feedback for processing subsequent input in a sequence. This feature makes RNNs particularly useful for language processing due to the variable nature in which language data can be composed. The figures described below present exemplary feedforward, CNN, and RNN networks, as well as describe a general process for respectively training and deploying each of those types of networks.
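The convolution terminology introduced above (input, convolution kernel, and feature map) can be made concrete with the following small Python sketch; the input values, kernel values, stride of one, and absence of padding are illustrative assumptions only:

# Illustrative 2D convolution: an input array (standing in for one color
# component of an image), a kernel of trainable parameters, and the feature map.
input_image = [
    [1, 2, 0, 1],
    [0, 1, 3, 1],
    [2, 1, 0, 0],
    [1, 0, 1, 2],
]
kernel = [
    [1, 0],
    [0, -1],
]

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    feature_map = [[0] * out_w for _ in range(out_h)]
    for r in range(out_h):
        for c in range(out_w):
            feature_map[r][c] = sum(
                image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw)
            )
    return feature_map

print(convolve2d(input_image, kernel))   # 3x3 feature map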
It will be understood that these descriptions are exemplary and non-limiting as to any specific embodiment described herein and the concepts illustrated can be applied generally to deep neural networks and machine learning techniques in general. The exemplary neural networks described above can be used to perform deep learning. Deep learning is machine learning using deep neural networks. The deep neural networks used in deep learning are artificial neural networks composed of multiple hidden layers, as opposed to shallow neural networks that include only a single hidden layer. Deeper neural networks are generally more computationally intensive to train. However, the additional hidden layers of the network enable multistep pattern recognition that results in reduced output error relative to shallow machine learning techniques. Deep neural networks used in deep learning typically include a front-end network to perform feature recognition coupled to a back-end network which represents a mathematical model that can perform operations (e.g., object classification, speech recognition, etc.) based on the feature representation provided to the model. Deep learning enables machine learning to be performed without requiring hand crafted feature engineering to be performed for the model. Instead, deep neural networks can learn features based on statistical structure or correlation within the input data. The learned features can be provided to a mathematical model that can map detected features to an output. The mathematical model used by the network is generally specialized for the specific task to be performed, and different models will be used to perform different tasks. Once the neural network is structured, a learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights within the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. An input vector is presented to the network for processing. The output of the network is compared to the desired output using a loss function and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards until each neuron has an associated error value which roughly represents its contribution to the original output. The network can then learn from those errors using an algorithm, such as the stochastic gradient descent algorithm, to update the weights of the neural network. FIG.9A-9Billustrate an exemplary convolutional neural network.FIG.9Aillustrates various layers within a CNN. As shown inFIG.9A, an exemplary CNN used to model image processing can receive input902describing the red, green, and blue (RGB) components of an input image. The input902can be processed by multiple convolutional layers (e.g., convolutional layer904, convolutional layer906). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers908. Neurons in a fully connected layer have full connections to all activations in the previous layer, as previously described for a feedforward network. The output from the fully connected layers908can be used to generate an output result from the network. The activations within the fully connected layers908can be computed using matrix multiplication instead of convolution. Not all CNN implementations make use of fully connected layers908.
For example, in some implementations the convolutional layer906can generate output for the CNN. The convolutional layers are sparsely connected, which differs from traditional neural network configurations found in the fully connected layers908. Traditional neural network layers are fully connected, such that every output unit interacts with every input unit. However, the convolutional layers are sparsely connected because the output of the convolution of a field is input (instead of the respective state value of each of the nodes in the field) to the nodes of the subsequent layer, as illustrated. The kernels associated with the convolutional layers perform convolution operations, the output of which is sent to the next layer. The dimensionality reduction performed within the convolutional layers is one aspect that enables the CNN to scale to process large images. FIG.9Billustrates exemplary computation stages within a convolutional layer of a CNN. Input to a convolutional layer912of a CNN can be processed in three stages of a convolutional layer914. The three stages can include a convolution stage916, a detector stage918, and a pooling stage920. The convolutional layer914can then output data to a successive convolutional layer. The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate a classification value for the input to the CNN. The convolution stage916performs several convolutions in parallel to produce a set of linear activations. The convolution stage916can include an affine transformation, which is any transformation that can be specified as a linear transformation plus a translation. Affine transformations include rotations, translations, scaling, and combinations of these transformations. The convolution stage computes the output of functions (e.g., neurons) that are connected to specific regions in the input, which can be determined as the local region associated with the neuron. The neurons compute a dot product between the weights of the neurons and the region in the local input to which the neurons are connected. The output from the convolution stage916defines a set of linear activations that are processed by successive stages of the convolutional layer914. The linear activations can be processed by a detector stage918. In the detector stage918, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the nonlinear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. One particular type is the rectified linear unit (ReLU), which uses an activation function defined as ƒ(x)=max(0, x), such that the activation is thresholded at zero. The pooling stage920uses a pooling function that replaces the output of the convolutional layer906with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage920, including max pooling, average pooling, and l2-norm pooling.
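A minimal Python sketch of the detector and pooling stages described above is shown below; the activation values and the 2x2 pooling window are illustrative assumptions, and real implementations operate on full feature-map tensors rather than small nested lists:

# ReLU detector stage f(x) = max(0, x) followed by 2x2 max pooling.
# Shapes are assumed to be even multiples of the pool size (illustrative only).
activations = [
    [-1.0,  2.0,  0.5, -0.2],
    [ 3.0, -0.5,  1.5,  0.0],
    [ 0.2,  0.1, -2.0,  4.0],
    [-1.5,  2.5,  0.3,  0.7],
]

def relu(feature_map):
    return [[max(0.0, v) for v in row] for row in feature_map]

def max_pool_2x2(feature_map):
    pooled = []
    for r in range(0, len(feature_map), 2):
        pooled.append([
            max(feature_map[r][c], feature_map[r][c + 1],
                feature_map[r + 1][c], feature_map[r + 1][c + 1])
            for c in range(0, len(feature_map[0]), 2)
        ])
    return pooled

print(max_pool_2x2(relu(activations)))   # [[3.0, 1.5], [2.5, 4.0]]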
Additionally, some CNN implementations do not include a pooling stage. Instead, such implementations substitute an additional convolution stage having an increased stride relative to previous convolution stages. The output from the convolutional layer914can then be processed by the next layer922. The next layer922can be an additional convolutional layer or one of the fully connected layers908. For example, the first convolutional layer904ofFIG.9Acan output to the second convolutional layer906, while the second convolutional layer can output to a first layer of the fully connected layers908. FIG.10illustrates an exemplary recurrent neural network1000. In a recurrent neural network (RNN), the previous state of the network influences the output of the current state of the network. RNNs can be built in a variety of ways using a variety of functions. The use of RNNs generally revolves around using mathematical models to predict the future based on a prior sequence of inputs. For example, an RNN may be used to perform statistical language modeling to predict an upcoming word given a previous sequence of words. The illustrated RNN1000can be described as having an input layer1002that receives an input vector, hidden layers1004to implement a recurrent function, a feedback mechanism1005to enable a ‘memory’ of previous states, and an output layer1006to output a result. The RNN1000operates based on time-steps. The state of the RNN at a given time step is influenced based on the previous time step via the feedback mechanism1005. For a given time step, the state of the hidden layers1004is defined by the previous state and the input at the current time step. An initial input (x1) at a first time step can be processed by the hidden layer1004. A second input (x2) can be processed by the hidden layer1004using state information that is determined during the processing of the initial input (x1). A given state can be computed as st=ƒ(Uxt+Wst−1), where U and W are parameter matrices. The function ƒ is generally a nonlinearity, such as the hyperbolic tangent function (Tanh) or a variant of the rectifier function ƒ(x)=max(0, x). However, the specific mathematical function used in the hidden layers1004can vary depending on the specific implementation details of the RNN1000. In addition to the basic CNN and RNN networks described, variations on those networks may be enabled. One example RNN variant is the long short term memory (LSTM) RNN. LSTM RNNs are capable of learning long-term dependencies that may be necessary for processing longer sequences of language. A variant on the CNN is a convolutional deep belief network, which has a structure similar to a CNN and is trained in a manner similar to a deep belief network. A deep belief network (DBN) is a generative neural network that is composed of multiple layers of stochastic (random) variables. DBNs can be trained layer-by-layer using greedy unsupervised learning. The learned weights of the DBN can then be used to pre-train neural networks by determining an optimal initial set of weights for the neural network.
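The recurrent state update st=ƒ(Uxt+Wst−1) described above can be illustrated with the following Python sketch, which uses scalar parameters and the hyperbolic tangent nonlinearity purely for readability; in practice U and W are parameter matrices learned during training:

# Scalar illustration of s_t = tanh(U * x_t + W * s_(t-1)).
# U and W values below are arbitrary assumptions, not learned parameters.
import math

U, W = 0.8, 0.5

def rnn_step(x_t, s_prev):
    return math.tanh(U * x_t + W * s_prev)

state = 0.0                                # initial hidden state
for x_t in [1.0, 0.0, -1.0, 2.0]:          # an input sequence, one value per time step
    state = rnn_step(x_t, state)
    print(round(state, 4))                 # each state feeds back into the next time step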
FIG.11illustrates training and deployment of a deep neural network. Once a given network has been structured for a task, the neural network is trained using a training dataset1102. Various training frameworks1104have been developed to enable hardware acceleration of the training process. For example, the machine learning framework604ofFIG.6may be configured as a training framework604. The training framework604can hook into an untrained neural network1106and enable the untrained neural net to be trained using the parallel processing resources described herein to generate a trained neural net1108. To start the training process the initial weights may be chosen randomly or by pre-training using a deep belief network. The training cycle can then be performed in either a supervised or unsupervised manner. Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset1102includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework1104can adjust the weights that control the untrained neural network1106. The training framework1104can provide tools to monitor how well the untrained neural network1106is converging towards a model suitable for generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy associated with a trained neural net1108. The trained neural network1108can then be deployed to implement any number of machine learning operations to generate an inference result1114based on input of new data1112. Unsupervised learning is a learning method in which the network attempts to train itself using unlabeled data. Thus, for unsupervised learning the training dataset1102will include input data without any associated output data. The untrained neural network1106can learn groupings within the unlabeled input and can determine how individual inputs are related to the overall dataset. Unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network1108capable of performing operations useful in reducing the dimensionality of data. Unsupervised training can also be used to perform anomaly detection, which allows the identification of data points in an input dataset that deviate from the normal patterns of the data. Variations on supervised and unsupervised training may also be employed. Semi-supervised learning is a technique in which the training dataset1102includes a mix of labeled and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is continuously used to further train the model. Incremental learning enables the trained neural network1108to adapt to the new data1112without forgetting the knowledge instilled within the network during initial training. Whether supervised or unsupervised, the training process for particularly deep neural networks may be too computationally intensive for a single compute node. Instead of using a single compute node, a distributed network of computational nodes can be used to accelerate the training process. FIG.12is a block diagram illustrating distributed learning. Distributed learning is a training model that uses multiple distributed computing nodes to perform supervised or unsupervised training of a neural network.
The distributed computational nodes can each include one or more host processors and one or more of the general-purpose processing nodes, such as the highly-parallel general-purpose graphics processing unit700as inFIG.7. As illustrated, distributed learning can be performed with model parallelism1202, data parallelism1204, or a combination of model and data parallelism1206. In model parallelism1202, different computational nodes in a distributed system can perform training computations for different parts of a single network. For example, each layer of a neural network can be trained by a different processing node of the distributed system. The benefits of model parallelism include the ability to scale to particularly large models. Splitting the computations associated with different layers of the neural network enables the training of very large neural networks in which the weights of all layers would not fit into the memory of a single computational node. In some instances, model parallelism can be particularly useful in performing unsupervised training of large neural networks. In data parallelism1204, the different nodes of the distributed network have a complete instance of the model and each node receives a different portion of the data. The results from the different nodes are then combined. While different approaches to data parallelism are possible, data parallel training approaches all require a technique of combining results and synchronizing the model parameters between each node. Exemplary approaches to combining data include parameter averaging and update based data parallelism. Parameter averaging trains each node on a subset of the training data and sets the global parameters (e.g., weights, biases) to the average of the parameters from each node. Parameter averaging uses a central parameter server that maintains the parameter data. Update based data parallelism is similar to parameter averaging except that instead of transferring parameters from the nodes to the parameter server, the updates to the model are transferred. Additionally, update based data parallelism can be performed in a decentralized manner, where the updates are compressed and transferred between nodes. Combined model and data parallelism1206can be implemented, for example, in a distributed system in which each computational node includes multiple GPUs. Each node can have a complete instance of the model, with separate GPUs within each node used to train different portions of the model. Distributed training has increased overhead relative to training on a single machine. However, the parallel processors and GPGPUs described herein can each implement various techniques to reduce the overhead of distributed training, including techniques to enable high bandwidth GPU-to-GPU data transfer and accelerated remote data synchronization.
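As an illustration of the parameter-averaging form of data parallelism described above, the following Python sketch trains a toy one-weight model on three data shards and lets a central averaging step play the role of the parameter server; the shard contents, learning rate, and number of synchronization rounds are assumptions chosen only to keep the example small:

# Toy parameter-averaging data parallelism: each "node" trains a complete copy
# of a one-weight linear model on its own shard, then a central step averages.
def local_train(weights, data_subset, learning_rate=0.1):
    """One toy gradient step per node on its own shard of (input, label) pairs."""
    w = list(weights)
    for x, label in data_subset:
        error = w[0] * x - label
        w[0] -= learning_rate * error * x
    return w

def parameter_average(per_node_weights):
    """Parameter server: global weights = element-wise mean of node weights."""
    n = len(per_node_weights)
    return [sum(ws[i] for ws in per_node_weights) / n
            for i in range(len(per_node_weights[0]))]

global_weights = [0.0]
shards = [[(1.0, 2.0)], [(2.0, 4.0)], [(0.5, 1.0)]]        # one data shard per node
for _ in range(50):                                        # repeated rounds of train-then-average
    updated = [local_train(global_weights, shard) for shard in shards]
    global_weights = parameter_average(updated)

print(round(global_weights[0], 3))                          # converges toward 2.0

Update-based data parallelism would instead transfer the per-node weight deltas to the server, but the overall synchronization pattern is similar.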
Exemplary Machine Learning Applications Machine learning can be applied to solve a variety of technological problems, including but not limited to computer vision, autonomous driving and navigation, speech recognition, and language processing. Computer vision has traditionally been one of the most active research areas for machine learning applications. Applications of computer vision range from reproducing human visual abilities, such as recognizing faces, to creating new categories of visual abilities. For example, computer vision applications can be configured to recognize sound waves from the vibrations induced in objects visible in a video. Parallel processor accelerated machine learning enables computer vision applications to be trained using significantly larger training datasets than previously feasible and enables inferencing systems to be deployed using low power parallel processors. Parallel processor accelerated machine learning has autonomous driving applications including lane and road sign recognition, obstacle avoidance, navigation, and driving control. Accelerated machine learning techniques can be used to train driving models based on datasets that define the appropriate responses to specific training input. The parallel processors described herein can enable rapid training of the increasingly complex neural networks used for autonomous driving solutions and enable the deployment of low power inferencing processors in a mobile platform suitable for integration into autonomous vehicles. Parallel processor accelerated deep neural networks have enabled machine learning approaches to automatic speech recognition (ASR). ASR includes the creation of a function that computes the most probable linguistic sequence given an input acoustic sequence. Accelerated machine learning using deep neural networks has enabled the replacement of the hidden Markov models (HMMs) and Gaussian mixture models (GMMs) previously used for ASR. Parallel processor accelerated machine learning can also be used to accelerate natural language processing. Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to erroneous or unfamiliar input. Exemplary natural language processor applications include automatic machine translation between human languages. The parallel processing platforms used for machine learning can be divided into training platforms and deployment platforms. Training platforms are generally highly parallel and include optimizations to accelerate multi-GPU single node training and multi-node, multi-GPU training. Exemplary parallel processors suited for training include the general-purpose graphics processing unit700ofFIG.7and the multi-GPU computing system800ofFIG.8. In contrast, deployed machine learning platforms generally include lower power parallel processors suitable for use in products such as cameras, autonomous robots, and autonomous vehicles. FIG.13illustrates an exemplary inferencing system on a chip (SOC)1300suitable for performing inferencing using a trained model. The SOC1300can integrate processing components including a media processor1302, a vision processor1304, a GPGPU1306and a multi-core processor1308. The SOC1300can additionally include on-chip memory1305that can enable a shared on-chip data pool that is accessible by each of the processing components. The processing components can be optimized for low power operation to enable deployment to a variety of machine learning platforms, including autonomous vehicles and autonomous robots. For example, one implementation of the SOC1300can be used as a portion of the main control system for an autonomous vehicle. Where the SOC1300is configured for use in autonomous vehicles, the SOC is designed and configured for compliance with the relevant functional safety standards of the deployment jurisdiction. During operation, the media processor1302and vision processor1304can work in concert to accelerate computer vision operations. The media processor1302can enable low latency decode of multiple high-resolution (e.g.,4K,8K) video streams.
The decoded video streams can be written to a buffer in the on-chip-memory1305. The vision processor1304can then parse the decoded video and perform preliminary processing operations on the frames of the decoded video in preparation for processing the frames using a trained image recognition model. For example, the vision processor1304can accelerate convolution operations for a CNN that is used to perform image recognition on the high-resolution video data, while back end model computations are performed by the GPGPU1306. The multi-core processor1308can include control logic to assist with sequencing and synchronization of data transfers and shared memory operations performed by the media processor1302and the vision processor1304. The multi-core processor1308can also function as an application processor to execute software applications that can make use of the inferencing compute capability of the GPGPU1306. For example, at least a portion of the navigation and driving logic can be implemented in software executing on the multi-core processor1308. Such software can directly issue computational workloads to the GPGPU1306or the computational workloads can be issued to the multi-core processor1308, which can offload at least a portion of those operations to the GPGPU1306. The GPGPU1306can include compute clusters such as a low power configuration of the compute clusters706A-706H within general-purpose graphics processing unit700. The compute clusters within the GPGPU1306can support instructions that are specifically optimized to perform inferencing computations on a trained neural network. For example, the GPGPU1306can support instructions to perform low precision computations such as 8-bit and 4-bit integer vector operations. Memory Prefetching in Multiple Graphics Processor Environment In some embodiments, an apparatus, system, or process provides for improvements in memory prefetching for a multiple GPU environment. In some embodiments, rules are applied for use by a prefetcher when a multi-GPU workload is executing across a cluster of GPUs having unified virtual memory and non-unified physical memory. FIG.14is an illustration of a multiple GPU environment according to some embodiments. As illustrated inFIG.14, a computing system1400may include a host processor1403, such as a CPU, which may support an application driver1401that directs a workload to each of multiple GPUs for processing, illustrated in this example as workloads1408A-1408D being directed to the graphics interfaces1402A-1402D of GPUs1404A-1404D. Further, each GPU may be coupled with a memory of the illustrated physical memories1406A-1406D. However, while the memory includes non-unified physical memories1406A-1406D, these memories may be part of a unified virtual memory. As a result, prefetching of data that does not recognize the physical structure of the memory can result in inefficient use of system resources, as prefetches for a GPU are made to memory locations that are remote from the GPU. In some embodiments, a prefetcher for a GPU in a multiple GPU environment provides protected prefetching for cross-GPU coherency, wherein the prefetcher is limited to prefetching that: (1) accesses only pages that are owned by the local GPU or the host processor; (2) does not cross memory page boundaries; and (3) does not cross boundaries of allocated surfaces.
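The three restrictions listed above can be modeled in software as a simple admission check, as in the following Python sketch; the page-ownership table, surface ranges, and the decision to reject (rather than truncate) a request that would cross a boundary are simplifying assumptions, since hardware would instead halt the prefetch at the boundary as described below:

# Simplified software model of the protected-prefetch rules. The ownership
# encoding and surface table are hypothetical and exist only to make the
# three checks concrete.
PAGE_SIZE = 4096
page_owner = {0: "GPU0", 1: "GPU0", 2: "GPU1", 3: "HOST"}        # page number -> owner
surfaces = [(0x0000, 0x1FFF), (0x2000, 0x3FFF)]                  # allocated surface ranges

def surface_of(addr):
    for start, end in surfaces:
        if start <= addr <= end:
            return (start, end)
    return None

def allow_prefetch(local_gpu, start_addr, length):
    end_addr = start_addr + length - 1
    start_page, end_page = start_addr // PAGE_SIZE, end_addr // PAGE_SIZE
    # (1) only pages owned by the local GPU or the host processor
    if page_owner.get(start_page) not in (local_gpu, "HOST"):
        return False                                             # prefetch denied
    # (2) do not cross a memory page boundary
    if start_page != end_page:
        return False                                             # would be halted at the page boundary
    # (3) do not cross the boundary of the allocated surface
    if surface_of(start_addr) != surface_of(end_addr):
        return False                                             # would be halted at the surface boundary
    return True

print(allow_prefetch("GPU0", 0x0100, 256))    # True: local page, same page, same surface
print(allow_prefetch("GPU0", 0x2100, 256))    # False: page owned by GPU1
print(allow_prefetch("GPU0", 0x0F80, 512))    # False: would cross a page boundary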
FIG.15is an illustration of protected prefetching in a multiple GPU environment according to some embodiments. As illustrated, GPU1404A and GPU1404B each include a prefetcher1505A and1505B and a cache1503A and1503B, and are respectively coupled with memory1406A and memory1406B. While memory1406A and memory1406B are separate physical memories, these are a part of unified virtual memory1510. As a result, a prefetch operation could potentially result in prefetches from non-local memory if there were no restrictions in place. In some embodiments, the prefetchers1505A and1505B are prohibited from prefetching out of pages that are not owned by the local GPU or by the host processor, and are further prohibited from crossing memory page boundaries during a prefetch. In this manner, a prefetcher is prevented both from directing a prefetch to a page of a non-local GPU, and from crossing into a page that is owned by another GPU. For example,FIG.15illustrates pages1511and1512within physical memory1406A and owned by GPU1404A and pages1513and1514within physical memory1406B and owned by GPU1404B. If, for example, prefetcher1505A is prefetching data, then data within pages1511and1512may be prefetched, but not data within pages1513and1514. Further, a prefetch of page1511would be halted at1515A and a prefetch of page1512would be halted at1515B. In this manner, the prefetch of page1512would be prevented from continuing into page1513, which would result in prefetching from a non-local physical memory. FIG.16is an illustration of prefetching from memory surfaces in a multiple GPU environment according to some embodiments. In addition to the restrictions illustrated inFIG.15, a prefetcher may further be prevented from a prefetch that crosses boundaries of a memory surface. As illustrated, a memory may include creation of multiple memory surfaces1600, such as memory surface1and memory surface2. In some embodiments, a prefetcher is prohibited from prefetching that crosses a boundary of a memory surface. For example, a prefetch of memory surface1, shown as prefetch1610, is halted at a boundary of the memory surface, shown as prefetch halt1615, which prevents the prefetch of a memory surface from crossing into another memory surface, such as memory surface2. FIG.17is a flowchart to illustrate a process for prefetching in a multiple GPU environment according to some embodiments. In some embodiments, a process includes initiating an application1700, wherein the application may include processing in a computing system environment with multiple GPUs. Workloads are distributed to the multiple GPUs1705, and threads are processed by the GPUs receiving workloads1710. In some embodiments, upon initiation of a prefetch by a prefetcher of a GPU1715, there is a determination whether the prefetch is directed to a page that is owned by the local GPU of the prefetcher or the CPU (or other host processor) of the computing system1720. If not, the prefetch is denied1725, and the processing of the threads by the GPUs may continue1710. If so, then the processing of the prefetch may proceed1730. In some embodiments, the process may further include a determination whether the prefetch has reached a boundary of the page or a memory surface1735. If so, the prefetch is halted1740, and the processing of the threads by the GPUs may continue1710. This may continue until the prefetch is complete1745, with the processing of the threads by the GPUs then continuing1710. FIG.17illustrates these processes occurring in a certain order for ease of illustration. However, these processes may occur in a different order or in an overlapping or simultaneous manner.
Further, while a single prefetch is illustrated for simplicity, multiple prefetches may be occurring from each of the multiple GPUs in operation. FIG.18Ais an illustration of a gather/scatter prefetch instruction according to some embodiments. Prefetching commonly will prefetch a cache line or multiple contiguous cache lines. However, this process requires multiple prefetch instructions if there are multiple non-contiguous addresses that are to be subjects of prefetches. In some embodiments, instead of pre-fetching contiguous cache lines, an array of different non-contiguous addresses can be requested in a single instruction in order to reduce the number of instructions to be generated and transmitted in a GPU. The single instruction, which may be referred to as a gather/scatter prefetch instruction, includes multiple prefetch addresses. An exemplary gather/scatter prefetch instruction1800is illustrated inFIG.18A, the instruction including 32 different addresses illustrated as A0through A31. In some embodiments, a prefetcher of a GPU in a computing system, such as prefetcher1505A of GPU1404A illustrated inFIG.15, is to issue the gather/scatter prefetch instruction1800to provide up to 32 addresses to be prefetched. In response to the instruction1800, hardware of the computing system is to parse the instruction and issue multiple batched prefetch messages to memory for each of the prefetch addresses contained in the gather/scatter prefetch instruction1800. FIG.18Bis an illustration of a selective prefetch instruction according to some embodiments. As indicated inFIG.18A, a gather/scatter prefetch instruction may identify multiple addresses to be prefetched, wherein the addresses may be noncontiguous. In some embodiments, each address may comprise a selective prefetch instruction1850that includes both an address and a cache level for the prefetched data. In this manner, a gather/scatter prefetch instruction1800illustrated inFIG.18Amay be utilized to provide selective prefetch to multiple different levels in the cache hierarchy, such as L1, L2, and L3, within a single instruction, thus further improving efficiency of prefetching operations in a computing system. FIG.19is an illustration of prefetch operation with status notification according to some embodiments. In a computing system having multiple graphics processors, each processor may be providing prefetching instructions to memory. However, the multiple threads being processed in the graphics processors may potentially overwhelm a cache or otherwise create issues with numerous prefetch instructions. FIG.19illustrates an exemplary graphics processor1900and memory1930. The graphics processor may include multiple cores, such as the illustrated shader core1902and other cores1904. The graphics processor may further include a prefetcher1906to provide prefetch requests to memory and one or more caches such as cache1908. In some embodiments, upon completion of a prefetch from memory to the cache1908for a thread1910issuing the prefetch instruction, the prefetcher is to send back an optional notification, such as a 1-bit flag, to the thread1910indicating that the prefetch is complete, and thus data is loaded in the cache1908. In some embodiments, the thread1910can utilize this notification to synchronize its execution with other threads, such as thread1911. In some embodiments, the thread may also use the notification to throttle prefetches in order to prevent prefetching too far ahead and overwhelming the cache1908.
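A software model of the gather/scatter prefetch instruction and the per-address cache-level selection described above is sketched below in Python; the record layout, the issue_prefetch helper, and the completion callback are hypothetical constructs used only to make the flow concrete, not a description of the actual instruction encoding:

# Hypothetical model of a gather/scatter prefetch instruction carrying up to 32
# (address, cache level) entries, parsed into batched prefetch messages.
MAX_ADDRESSES = 32

def build_gather_scatter_prefetch(entries):
    """entries: list of (address, cache_level) pairs, e.g. (0x1000, "L2")."""
    if len(entries) > MAX_ADDRESSES:
        raise ValueError("a single instruction carries at most 32 addresses")
    return {"op": "GS_PREFETCH", "entries": list(entries)}

def issue_prefetch(address, cache_level):
    # Stand-in for the batched prefetch message sent to memory for one address.
    print(f"prefetch 0x{address:X} into {cache_level}")

def execute(instruction, notify=None):
    """Parse the single instruction and issue one batched message per address."""
    for address, cache_level in instruction["entries"]:
        issue_prefetch(address, cache_level)
    if notify is not None:
        notify()                       # optional completion flag back to the issuing thread

# One instruction covering non-contiguous addresses at different cache levels.
inst = build_gather_scatter_prefetch([(0x1000, "L1"), (0x8400, "L2"), (0x20080, "L3")])
execute(inst, notify=lambda: print("prefetch complete: thread may continue or throttle"))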
FIG.20is an illustration of an apparatus or system to provide for improved prefetching performance, according to some embodiments. As illustrated inFIG.20, a computing system2000, such as, for example, system100illustrated inFIG.1, includes one or more processors2005and multiple GPUs for the processing of data. The computing system2000further includes memory2010for the storage of data and one or more elements for the transfer of data, such as interface bus2015and transceiver2020. In some embodiments, the transceiver2020is a wireless transceiver with one or more antennas2025for transmission and reception of data, wherein the antennas2025may include a dipole antenna or other antenna structures. In some embodiments, the GPUs2030each include circuitry to support improved prefetching operation in the multiple GPU environment, including one or more of protected prefetch optimizations for cross-GPU coherency, as illustrated inFIGS.15-17, gather/scatter prefetch instructions, as illustrated inFIGS.18A and18B; and prefetch operation with status notification, as illustrated inFIG.19. System Overview FIG.21is a block diagram of a processing system2100, according to an embodiment. System2100may be used in a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors2102or processor cores2107. In one embodiment, the system2100is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices such as within Internet-of-things (IoT) devices with wired or wireless connectivity to a local or wide area network. In one embodiment, system2100can include, couple with, or be integrated within: a server-based gaming platform; a game console, including a game and media console; a mobile gaming console, a handheld game console, or an online game console. In some embodiments the system2100is part of a mobile phone, smart phone, tablet computing device or mobile Internet-connected device such as a laptop with low internal storage capacity. Processing system2100can also include, couple with, or be integrated within: a wearable device, such as a smart watch wearable device; smart eyewear or clothing enhanced with augmented reality (AR) or virtual reality (VR) features to provide visual, audio or tactile outputs to supplement real world visual, audio or tactile experiences or otherwise provide text, audio, graphics, video, holographic images or video, or tactile feedback; other augmented reality (AR) device; or other virtual reality (VR) device. In some embodiments, the processing system2100includes or is part of a television or set top box device. In some embodiments, system2100can include, couple with, or be integrated within a self-driving vehicle such as a bus, tractor trailer, car, motor or electric power cycle, plane or glider (or any combination thereof). The self-driving vehicle may use system2100to process the environment sensed around the vehicle. In some embodiments, the one or more processors2102each include one or more processor cores2107to process instructions which, when executed, perform operations for system or user software. In some embodiments, at least one of the one or more processor cores2107is configured to process a specific instruction set2109. In some embodiments, instruction set2109may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). 
One or more processor cores2107may process a different instruction set2109, which may include instructions to facilitate the emulation of other instruction sets. Processor core2107may also include other processing devices, such as a Digital Signal Processor (DSP). In some embodiments, the processor2102includes cache memory2104. Depending on the architecture, the processor2102can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor2102. In some embodiments, the processor2102also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores2107using known cache coherency techniques. A register file2106can be additionally included in processor2102and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor2102. In some embodiments, one or more processor(s)2102are coupled with one or more interface bus(es)2110to transmit communication signals such as address, data, or control signals between processor2102and other components in the system2100. The interface bus2110, in one embodiment, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory busses, or other types of interface busses. In one embodiment the processor(s)2102include an integrated memory controller2116and a platform controller hub2130. The memory controller2116facilitates communication between a memory device and other components of the system2100, while the platform controller hub (PCH)2130provides connections to I/O devices via a local I/O bus. The memory device2120can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device2120can operate as system memory for the system2100, to store data2122and instructions2121for use when the one or more processors2102executes an application or process. Memory controller2116also couples with an optional external graphics processor2112, which may communicate with the one or more graphics processors2108in processors2102to perform graphics and media operations. In some embodiments, graphics, media, and/or compute operations may be assisted by an accelerator2112, which is a coprocessor that can be configured to perform a specialized set of graphics, media, or compute operations. For example, in one embodiment the accelerator2112is a matrix multiplication accelerator used to optimize machine learning or compute operations. In one embodiment the accelerator2112is a ray-tracing accelerator that can be used to perform ray-tracing operations in concert with the graphics processor2108. In some embodiments a display device2111can connect to the processor(s)2102. The display device2111can be one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.).
In one embodiment the display device2111can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications. In some embodiments the platform controller hub2130enables peripherals to connect to memory device2120and processor2102via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller2146, a network controller2134, a firmware interface2128, a wireless transceiver2126, touch sensors2125, a data storage device2124(e.g., non-volatile memory, volatile memory, hard disk drive, flash memory, NAND, 3D NAND, 3D XPoint, etc.). The data storage device2124can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express). The touch sensors2125can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver2126can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, 5G, or Long Term Evolution (LTE) transceiver. The firmware interface2128enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). The network controller2134can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus2110. The audio controller2146, in one embodiment, is a multi-channel high definition audio controller. In one embodiment the system2100includes an optional legacy I/O controller2140for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub2130can also connect to one or more Universal Serial Bus (USB) controllers2142to connect input devices, such as keyboard and mouse2143combinations, a camera2144, or other USB input devices. It will be appreciated that the system2100shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller2116and platform controller hub2130may be integrated into a discrete external graphics processor, such as the external graphics processor2112. In one embodiment the platform controller hub2130and/or memory controller2116may be external to the one or more processor(s)2102. For example, the system2100can include an external memory controller2116and platform controller hub2130, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s)2102. For example, circuit boards (“sleds”) can be used on which components such as CPUs, memory, and other components are placed, with the sleds designed for increased thermal performance. In some examples, processing components such as the processors are located on a top side of a sled while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. 
Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity. A data center can utilize a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center may, in use, pool resources, such as memory, accelerators (e.g., GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local. A power supply or source can provide voltage and/or current to system2100or any component or system described herein. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be renewable energy (e.g., solar power) power source. In one example, power source includes a DC power source, such as an external AC to DC converter. In one example, power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source. FIG.22is a block diagram of an embodiment of a processor2200having one or more processor cores2202A-2202N, an integrated memory controller2214, and an integrated graphics processor2208. Those elements ofFIG.22having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor2200can include additional cores up to and including additional core2202N represented by the dashed lined boxes. Each of processor cores2202A-2202N includes one or more internal cache units2204A-2204N. In some embodiments each processor core also has access to one or more shared cached units2206. The internal cache units2204A-2204N and shared cache units2206represent a cache memory hierarchy within the processor2200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units2206and2204A-2204N. In some embodiments, processor2200may also include a set of one or more bus controller units2216and a system agent core2210. The one or more bus controller units2216manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core2210provides management functionality for the various processor components. 
In some embodiments, system agent core2210includes one or more integrated memory controllers2214to manage access to various external memory devices (not shown). In some embodiments, one or more of the processor cores2202A-2202N include support for simultaneous multi-threading. In such embodiment, the system agent core2210includes components for coordinating and operating cores2202A-2202N during multi-threaded processing. System agent core2210may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores2202A-2202N and graphics processor2208. In some embodiments, processor2200additionally includes graphics processor2208to execute graphics processing operations. In some embodiments, the graphics processor2208couples with the set of shared cache units2206, and the system agent core2210, including the one or more integrated memory controllers2214. In some embodiments, the system agent core2210also includes a display controller2211to drive graphics processor output to one or more coupled displays. In some embodiments, display controller2211may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor2208. In some embodiments, a ring based interconnect unit2212is used to couple the internal components of the processor2200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor2208couples with the ring interconnect2212via an I/O link2213. The exemplary I/O link2213represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module2218, such as an eDRAM module. In some embodiments, each of the processor cores2202A-2202N and graphics processor2208can use embedded memory modules2218as a shared Last Level Cache. In some embodiments, processor cores2202A-2202N are homogenous cores executing the same instruction set architecture. In another embodiment, processor cores2202A-2202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores2202A-2202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, processor cores2202A-2202N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. In one embodiment, processor cores2202A-2202N are heterogeneous in terms of computational capability. Additionally, processor2200can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components. FIG.23is a block diagram of a graphics processor2300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores, or other semiconductor devices such as, but not limited to, memory devices or network interfaces. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. 
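As a non-limiting illustration of the memory mapped I/O interface noted above forFIG.23, the following C++ sketch models driver software reading and writing graphics processor registers through a mapped aperture. The structure name, the example offset, and the notion of a ring tail pointer are hypothetical and serve only to make the register access path concrete; they do not describe the actual register map of graphics processor2300.

#include <cstddef>
#include <cstdint>

// Minimal sketch of a mapped register aperture; base would be obtained from
// the operating system (for example, by mapping a PCI BAR).
struct MmioAperture {
    volatile std::uint32_t* base;

    void write32(std::size_t offset_bytes, std::uint32_t value) {
        base[offset_bytes / sizeof(std::uint32_t)] = value;
    }
    std::uint32_t read32(std::size_t offset_bytes) const {
        return base[offset_bytes / sizeof(std::uint32_t)];
    }
};

// Hypothetical usage: advance a ring-buffer tail pointer register so that the
// graphics processor fetches newly written commands from processor memory.
//   MmioAperture regs{mapped_base};
//   regs.write32(kRingTailOffset /* invented offset */, new_tail_value);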
In some embodiments, graphics processor2300includes a memory interface2314to access memory. Memory interface2314can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory. In some embodiments, graphics processor2300also includes a display controller2302to drive display output data to a display device2320. Display controller2302includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device2320can be an internal or external display device. In one embodiment the display device2320is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In some embodiments, graphics processor2300includes a video codec engine2306to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, H.265/HEVC, Alliance for Open Media (AOMedia) VP8, VP9, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats. In some embodiments, graphics processor2300includes a block image transfer (BLIT) engine2304to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE)2310. In some embodiments, GPE2310is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations. In some embodiments, GPE2310includes a 3D pipeline2312for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline2312includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system2315. While 3D pipeline2312can be used to perform media operations, an embodiment of GPE2310also includes a media pipeline2316that is specifically used to perform media operations, such as video post-processing and image enhancement. In some embodiments, media pipeline2316includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of video codec engine2306. In some embodiments, media pipeline2316additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system2315. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system2315. In some embodiments, 3D/Media subsystem2315includes logic for executing threads spawned by 3D pipeline2312and media pipeline2316. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem2315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. 
In some embodiments, 3D/Media subsystem2315includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data. Graphics Processing Engine FIG.24is a block diagram of a graphics processing engine2410of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE)2410is a version of the GPE2310shown inFIG.23. Elements ofFIG.24having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline2312and media pipeline2316ofFIG.23are illustrated. The media pipeline2316is optional in some embodiments of the GPE2410and may not be explicitly included within the GPE2410. For example and in at least one embodiment, a separate media and/or image processor is coupled to the GPE2410. In some embodiments, GPE2410couples with or includes a command streamer2403, which provides a command stream to the 3D pipeline2312and/or media pipelines2316. In some embodiments, command streamer2403is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer2403receives commands from the memory and sends the commands to 3D pipeline2312and/or media pipeline2316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline2312and media pipeline2316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline2312can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline2312and/or image data and memory objects for the media pipeline2316. The 3D pipeline2312and media pipeline2316process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array2414. In one embodiment the graphics core array2414includes one or more blocks of graphics cores (e.g., graphics core(s)2415A, graphics core(s)2415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic. In various embodiments the 3D pipeline2312can include fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array2414. The graphics core array2414provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics core(s)2415A-2415B of the graphics core array2414includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders. 
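To make the command streamer and ring buffer behavior described above forFIG.24more concrete, the following C++ sketch models a software ring buffer from which a command-streamer-like loop fetches commands and routes each one to a 3D handler or a media handler. All type and function names are illustrative assumptions and do not represent the hardware interface of command streamer2403.

#include <array>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <optional>

enum class Pipeline : std::uint8_t { k3D, kMedia };

// Illustrative command record; payload stands in for references to vertex,
// geometry, or image data held in memory.
struct Command {
    Pipeline target;
    std::uint32_t opcode;
    std::uint64_t payload;
};

// Fixed-size software ring buffer of pending commands.
class CommandRing {
public:
    bool push(const Command& cmd) {
        const std::size_t next = (head_ + 1) % ring_.size();
        if (next == tail_) return false;          // ring is full
        ring_[head_] = cmd;
        head_ = next;
        return true;
    }
    std::optional<Command> pop() {
        if (tail_ == head_) return std::nullopt;  // ring is empty
        const Command cmd = ring_[tail_];
        tail_ = (tail_ + 1) % ring_.size();
        return cmd;
    }
private:
    std::array<Command, 256> ring_{};
    std::size_t head_ = 0;
    std::size_t tail_ = 0;
};

// Command-streamer-like loop: fetch commands and route them to a pipeline.
void stream_commands(CommandRing& ring,
                     const std::function<void(const Command&)>& dispatch_3d,
                     const std::function<void(const Command&)>& dispatch_media) {
    while (std::optional<Command> cmd = ring.pop()) {
        if (cmd->target == Pipeline::k3D) {
            dispatch_3d(*cmd);
        } else {
            dispatch_media(*cmd);
        }
    }
}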
In some embodiments, the graphics core array2414includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s)2107ofFIG.21or core2202A-2202N as inFIG.22. Output data generated by threads executing on the graphics core array2414can output data to memory in a unified return buffer (URB)2418. The URB2418can store data for multiple threads. In some embodiments the URB2418may be used to send data between different threads executing on the graphics core array2414. In some embodiments the URB2418may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic2420. In some embodiments, graphics core array2414is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE2410. In one embodiment the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed. The graphics core array2414couples with shared function logic2420that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic2420are hardware logic units that provide specialized supplemental functionality to the graphics core array2414. In various embodiments, shared function logic2420includes but is not limited to sampler2421, math2422, and inter-thread communication (ITC)2423logic. Additionally, some embodiments implement one or more cache(s)2425within the shared function logic2420. A shared function is implemented at least in a case where the demand for a given specialized function is insufficient for inclusion within the graphics core array2414. Instead a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic2420and shared among the execution resources within the graphics core array2414. The precise set of functions that are shared between the graphics core array2414and included within the graphics core array2414varies across embodiments. In some embodiments, specific shared functions within the shared function logic2420that are used extensively by the graphics core array2414may be included within shared function logic2416within the graphics core array2414. In various embodiments, the shared function logic2416within the graphics core array2414can include some or all logic within the shared function logic2420. In one embodiment, all logic elements within the shared function logic2420may be duplicated within the shared function logic2416of the graphics core array2414. In one embodiment the shared function logic2420is excluded in favor of the shared function logic2416within the graphics core array2414. FIG.25is a block diagram of hardware logic of a graphics processor core2500, according to some embodiments described herein. Elements ofFIG.25having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. 
The illustrated graphics processor core2500, in some embodiments, is included within the graphics core array2414ofFIG.24. The graphics processor core2500, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core2500is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core2500can include a fixed function block2530coupled with multiple sub-cores2501A-2501F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic. In some embodiments, the fixed function block2530includes a geometry/fixed function pipeline2536that can be shared by all sub-cores in the graphics processor core2500, for example, in lower performance and/or lower power graphics processor implementations. In various embodiments, the geometry/fixed function pipeline2536includes a 3D fixed function pipeline (e.g., 3D pipeline2312as inFIG.23andFIG.24), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers, such as the unified return buffer2418ofFIG.24. In one embodiment the fixed function block2530also includes a graphics SoC interface2537, a graphics microcontroller2538, and a media pipeline2539. The graphics SoC interface2537provides an interface between the graphics processor core2500and other processor cores within a system on a chip integrated circuit. The graphics microcontroller2538is a programmable sub-processor that is configurable to manage various functions of the graphics processor core2500, including thread dispatch, scheduling, and pre-emption. The media pipeline2539(e.g., media pipeline2316ofFIG.23andFIG.24) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. The media pipeline2539implements media operations via requests to compute or sampling logic within the sub-cores2501A-2501F. In one embodiment the SoC interface2537enables the graphics processor core2500to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface2537can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core2500and CPUs within the SoC. The SoC interface2537can also implement power management controls for the graphics processor core2500and enable an interface between a clock domain of the graphics core2500and other clock domains within the SoC. In one embodiment the SoC interface2537enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline2539, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline2536, geometry and fixed function pipeline2514) when graphics processing operations are to be performed. 
The graphics microcontroller2538can be configured to perform various scheduling and management tasks for the graphics processor core2500. In one embodiment the graphics microcontroller2538can perform graphics and/or compute workload scheduling on the various graphics parallel engines within execution unit (EU) arrays2502A-2502F,2504A-2504F within the sub-cores2501A-2501F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics processor core2500can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In one embodiment the graphics microcontroller2538can also facilitate low-power or idle states for the graphics processor core2500, providing the graphics processor core2500with the ability to save and restore registers within the graphics processor core2500across low-power state transitions independently from the operating system and/or graphics driver software on the system. The graphics processor core2500may have more or fewer than the illustrated sub-cores2501A-2501F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core2500can also include shared function logic2510, shared and/or cache memory2512, a geometry/fixed function pipeline2514, as well as additional fixed function logic2516to accelerate various graphics and compute processing operations. The shared function logic2510can include logic units associated with the shared function logic2420ofFIG.24(e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each N sub-cores within the graphics processor core2500. The shared and/or cache memory2512can be a last-level cache for the set of N sub-cores2501A-2501F within the graphics processor core2500, and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline2514can be included instead of the geometry/fixed function pipeline2536within the fixed function block2530and can include the same or similar logic units. In one embodiment the graphics processor core2500includes additional fixed function logic2516that can include various fixed function acceleration logic for use by the graphics processor core2500. In one embodiment the additional fixed function logic2516includes an additional geometry pipeline for use in position only shading. In position-only shading, two geometry pipelines exist, the full geometry pipeline within the geometry/fixed function pipeline2516,2536, and a cull pipeline, which is an additional geometry pipeline which may be included within the additional fixed function logic2516. In one embodiment the cull pipeline is a trimmed down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. 
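The doorbell-driven scheduling model described above can be pictured with the following host-side C++ sketch, in which the doorbell is modeled as an atomic counter that a scheduler loop compares against the submissions it has already consumed. The Engine structure, the submit and scheduler_step functions, and the mutex-protected queue are simplifying assumptions for illustration and are not the interface of graphics microcontroller2538.

#include <atomic>
#include <cstdint>
#include <deque>
#include <functional>
#include <mutex>

// Hypothetical software model of one engine's work queue and doorbell.
struct Engine {
    std::atomic<std::uint64_t> doorbell{0};
    std::mutex queue_mutex;
    std::deque<std::function<void()>> workloads;
};

// Host-side submission: enqueue the workload, then "ring" the doorbell.
void submit(Engine& engine, std::function<void()> workload) {
    {
        std::lock_guard<std::mutex> lock(engine.queue_mutex);
        engine.workloads.push_back(std::move(workload));
    }
    engine.doorbell.fetch_add(1, std::memory_order_release);
}

// Scheduler step: consume every submission the doorbell says has arrived,
// running each workload (determine next, run, monitor, notify on completion).
void scheduler_step(Engine& engine, std::uint64_t& submissions_seen) {
    const std::uint64_t rung = engine.doorbell.load(std::memory_order_acquire);
    while (submissions_seen < rung) {
        std::function<void()> next;
        {
            std::lock_guard<std::mutex> lock(engine.queue_mutex);
            if (engine.workloads.empty()) break;
            next = std::move(engine.workloads.front());
            engine.workloads.pop_front();
        }
        ++submissions_seen;
        next();
    }
}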
For example, and in one embodiment, the cull pipeline logic within the additional fixed function logic2516can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attribute of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline can use the generated critical results to compute visibility information for all the triangles without regard to whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip the culled triangles to shade only the visible triangles that are finally passed to the rasterization phase. In one embodiment the additional fixed function logic2516can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing. Each graphics sub-core2501A-2501F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. The graphics sub-cores2501A-2501F include multiple EU arrays2502A-2502F,2504A-2504F, thread dispatch and inter-thread communication (TD/IC) logic2503A-2503F, a 3D (e.g., texture) sampler2505A-2505F, a media sampler2506A-2506F, a shader processor2507A-2507F, and shared local memory (SLM)2508A-2508F. The EU arrays2502A-2502F,2504A-2504F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. The TD/IC logic2503A-2503F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D sampler2505A-2505F can read texture or other 3D graphics related data into memory. The 3D sampler can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media sampler2506A-2506F can perform similar read operations based on the type and format associated with media data. In one embodiment, each graphics sub-core2501A-2501F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores2501A-2501F can make use of shared local memory2508A-2508F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory. Execution Units FIG.26A-26Billustrate thread execution logic2600including an array of processing elements employed in a graphics processor core according to embodiments described herein. Elements ofFIG.26A-26Bhaving the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.FIG.26Aillustrates an overview of thread execution logic2600, which can include a variant of the hardware logic illustrated with each sub-core2501A-2501F ofFIG.25.FIG.26Billustrates exemplary internal details of an execution unit. 
As illustrated inFIG.26A, in some embodiments thread execution logic2600includes a shader processor2602, a thread dispatcher2604, instruction cache2606, a scalable execution unit array including a plurality of execution units2608A-2608N, a sampler2610, a data cache2612, and a data port2614. In one embodiment the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution unit2608A,2608B,2608C,2608D, through2608N-1and2608N) based on the computational requirements of a workload. In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic2600includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache2606, data port2614, sampler2610, and execution units2608A-2608N. In some embodiments, each execution unit (e.g.2608A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units2608A-2608N is scalable to include any number of individual execution units. In some embodiments, the execution units2608A-2608N are primarily used to execute shader programs. A shader processor2602can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher2604. In one embodiment the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution units in the execution units2608A-2608N. For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing. In some embodiments, thread dispatcher2604can also process runtime thread spawning requests from the executing shader programs. In some embodiments, the execution units2608A-2608N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). Each of the execution units2608A-2608N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units2608A-2608N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. 
For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader. Various embodiments can use Single Instruction Multiple Thread (SIMT) execution as an alternative to SIMD, or in addition to SIMD. Reference to a SIMD core or operation can also apply to SIMT, or to SIMD in combination with SIMT. Each execution unit in execution units2608A-2608N operates on arrays of data elements. The number of data elements is the “execution size,” or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units2608A-2608N support integer and floating-point data types. The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible. In one embodiment one or more execution units can be combined into a fused execution unit2609A-2609N having thread control logic (2607A-2607N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be executed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit2609A-2609N includes at least two execution units. For example, fused execution unit2609A includes a first EU2608A, second EU2608B, and thread control logic2607A that is common to the first EU2608A and the second EU2608B. The thread control logic2607A controls threads executed on the fused graphics execution unit2609A, allowing each EU within the fused execution units2609A-2609N to execute using a common instruction pointer register. One or more internal instruction caches (e.g.,2606) are included in the thread execution logic2600to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g.,2612) are included to cache thread data during thread execution. In some embodiments, a sampler2610is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler2610includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit. 
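The packed data element behavior described above, in which a single 256-bit wide vector register can be treated as four 64-bit, eight 32-bit, sixteen 16-bit, or thirty-two 8-bit elements, can be illustrated with the following C++ sketch. The Packed256 type and its view helper are purely illustrative; an actual execution unit selects the element size from the instruction encoding rather than from a template parameter.

#include <array>
#include <cstdint>
#include <cstring>

// A 256-bit container that can be reinterpreted at different element widths.
struct Packed256 {
    std::array<std::uint8_t, 32> bytes{};  // 256 bits of storage

    template <typename T>
    std::array<T, 32 / sizeof(T)> view() const {
        static_assert(32 % sizeof(T) == 0, "element size must divide 256 bits");
        std::array<T, 32 / sizeof(T)> out{};
        std::memcpy(out.data(), bytes.data(), bytes.size());
        return out;
    }
};

// Example interpretations of the same register contents:
//   reg.view<std::uint64_t>()  ->  4 Quad-Word (QW) elements
//   reg.view<std::uint32_t>()  ->  8 Double Word (DW) elements
//   reg.view<std::uint16_t>()  -> 16 Word (W) elements
//   reg.view<std::uint8_t>()   -> 32 byte (B) elements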
During execution, the graphics and media pipelines send thread initiation requests to thread execution logic2600via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor2602is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within the shader processor2602then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor2602dispatches threads to an execution unit (e.g.,2608A) via thread dispatcher2604. In some embodiments, shader processor2602uses texture sampling logic in the sampler2610to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing. In some embodiments, the data port2614provides a memory access mechanism for the thread execution logic2600to output processed data to memory for further processing on a graphics processor output pipeline. In some embodiments, the data port2614includes or couples to one or more cache memories (e.g., data cache2612) to cache data for memory access via the data port. As illustrated inFIG.26B, a graphics execution unit2608can include an instruction fetch unit2637, a general register file array (GRF)2624, an architectural register file array (ARF)2626, a thread arbiter2622, a send unit2630, a branch unit2632, a set of SIMD floating point units (FPUs)2634, and in one embodiment a set of dedicated integer SIMD ALUs2635. The GRF2624and ARF2626include the set of general register files and architecture register files associated with each simultaneous hardware thread that may be active in the graphics execution unit2608. In one embodiment, per thread architectural state is maintained in the ARF2626, while data used during thread execution is stored in the GRF2624. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF2626. In one embodiment the graphics execution unit2608has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). The architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads. In one embodiment, the graphics execution unit2608can co-issue multiple instructions, which may each be different instructions. The thread arbiter2622of the graphics execution unit2608can dispatch the instructions to one of the send unit2630, branch unit2632, or SIMD FPU(s)2634for execution. Each execution thread can access 128 general-purpose registers within the GRF2624, where each register can store 32 bytes, accessible as an 8-element vector of 32-bit data elements. 
In one embodiment, each execution unit thread has access to 4 Kbytes within the GRF2624, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In one embodiment up to seven threads can execute simultaneously, although the number of threads per execution unit can also vary according to embodiments. In an embodiment in which seven threads may access 4 Kbytes, the GRF2624can store a total of 28 Kbytes. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures. In one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via “send” instructions that are executed by the message passing send unit2630. In one embodiment, branch instructions are dispatched to a dedicated branch unit2632to facilitate SIMD divergence and eventual convergence. In one embodiment the graphics execution unit2608includes one or more SIMD floating point units (FPU(s))2634to perform floating-point operations. In one embodiment, the FPU(s)2634also support integer computation. In one embodiment the FPU(s)2634can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. In one embodiment, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In some embodiments, a set of 8-bit integer SIMD ALUs2635are also present and may be specifically optimized to perform operations associated with machine learning computations. In one embodiment, arrays of multiple instances of the graphics execution unit2608can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. In one embodiment the execution unit2608can execute instructions across a plurality of execution channels. In a further embodiment, each thread executed on the graphics execution unit2608is executed on a different channel. FIG.27is a block diagram illustrating graphics processor instruction formats2700according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, the instruction formats2700described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed. In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format2710. A 64-bit compacted instruction format2730is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format2710provides access to all instruction options, while some options and operations are restricted in the 64-bit format2730. The native instructions available in the 64-bit format2730vary by embodiment. 
In some embodiments, the instruction is compacted in part using a set of index values in an index field2713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format2710. Other sizes and formats of instruction can be used. For each format, instruction opcode2712defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field2714enables control over certain execution options, such as channels selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format2710an exec-size field2716limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field2716is not available for use in the 64-bit compact instruction format2730. Some execution unit instructions have up to three operands including two source operands, src02720, src12722, and one destination2718. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC22724), where the instruction opcode2712determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction. In some embodiments, the 128-bit instruction format2710includes an access/address mode field2726specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction. In some embodiments, the 128-bit instruction format2710includes an access/address mode field2726, which specifies an address mode and/or an access mode for the instruction. In one embodiment the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands. In one embodiment, the address mode portion of the access/address mode field2726determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction. In some embodiments, instructions are grouped based on opcode2712bit-fields to simplify Opcode decode2740. 
For an 8-bit opcode, bits4,5, and6allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group2742includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group2742shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group2744(e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group2746includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group2748includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group2748performs the arithmetic operations in parallel across data channels. The vector math group2750includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. Graphics Pipeline FIG.28is a block diagram of another embodiment of a graphics processor2800. Elements ofFIG.28having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. In some embodiments, graphics processor2800includes a geometry pipeline2820, a media pipeline2830, a display engine2840, thread execution logic2850, and a render output pipeline2870. In some embodiments, graphics processor2800is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor2800via a ring interconnect2802. In some embodiments, ring interconnect2802couples graphics processor2800to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect2802are interpreted by a command streamer2803, which supplies instructions to individual components of the geometry pipeline2820or the media pipeline2830. In some embodiments, command streamer2803directs the operation of a vertex fetcher2805that reads vertex data from memory and executes vertex-processing commands provided by command streamer2803. In some embodiments, vertex fetcher2805provides vertex data to a vertex shader2807, which performs coordinate space transformation and lighting operations to each vertex. In some embodiments, vertex fetcher2805and vertex shader2807execute vertex-processing instructions by dispatching execution threads to execution units2852A-2852B via a thread dispatcher2831. In some embodiments, execution units2852A-2852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units2852A-2852B have an attached L1 cache2851that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions. 
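Returning to the opcode grouping described above forFIG.27, the following C++ sketch classifies an 8-bit opcode into the move/logic, flow control, miscellaneous, parallel math, or vector math group by examining its upper bits, mirroring the 0000xxxxb through 0101xxxxb patterns listed above. The enumeration and function names are illustrative only.

#include <cstdint>

enum class OpcodeGroup {
    kMoveLogic, kFlowControl, kMisc, kParallelMath, kVectorMath, kUnknown
};

// Classify an 8-bit opcode by its upper bits, following the bit patterns
// given for opcode decode2740 (for example, 0x20 is flow control and 0x40
// is parallel math).
OpcodeGroup classify_opcode(std::uint8_t opcode) {
    switch (opcode >> 4) {
        case 0x0: return OpcodeGroup::kMoveLogic;     // 0000xxxxb: move (mov)
        case 0x1: return OpcodeGroup::kMoveLogic;     // 0001xxxxb: logic (cmp)
        case 0x2: return OpcodeGroup::kFlowControl;   // 0010xxxxb: call, jmp
        case 0x3: return OpcodeGroup::kMisc;          // 0011xxxxb: wait, send
        case 0x4: return OpcodeGroup::kParallelMath;  // 0100xxxxb: add, mul
        case 0x5: return OpcodeGroup::kVectorMath;    // 0101xxxxb: dp4
        default:  return OpcodeGroup::kUnknown;
    }
}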
In some embodiments, geometry pipeline2820includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader2811configures the tessellation operations. A programmable domain shader2817provides back-end evaluation of tessellation output. A tessellator2813operates at the direction of hull shader2811and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline2820. In some embodiments, if tessellation is not used, tessellation components (e.g., hull shader2811, tessellator2813, and domain shader2817) can be bypassed. In some embodiments, complete geometric objects can be processed by a geometry shader2819via one or more threads dispatched to execution units2852A-2852B, or can proceed directly to the clipper2829. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled, the geometry shader2819receives input from the vertex shader2807. In some embodiments, geometry shader2819is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled. Before rasterization, a clipper2829processes vertex data. The clipper2829may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component2873in the render output pipeline2870dispatches pixel shaders to convert the geometric objects into per pixel representations. In some embodiments, pixel shader logic is included in thread execution logic2850. In some embodiments, an application can bypass the rasterizer and depth test component2873and access un-rasterized vertex data via a stream out unit2823. The graphics processor2800has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units2852A-2852B and associated logic units (e.g., L1 cache2851, sampler2854, texture cache2858, etc.) interconnect via a data port2856to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler2854, L1 cache2851, texture cache2858, and execution units2852A-2852B each have separate memory access paths. In one embodiment the texture cache2858can also be configured as a sampler cache. In some embodiments, render output pipeline2870contains a rasterizer and depth test component2873that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache2878and depth cache2879are also available in some embodiments. A pixel operations component2877performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g. bit block image transfers with blending) are performed by the 2D engine2841, or substituted at display time by the display controller2843using overlay display planes. In some embodiments, a shared L3 cache2875is available to all graphics components, allowing the sharing of data without the use of main system memory. 
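As an illustrative aside, the depth test performed by the rasterizer and depth test component2873described above can be sketched in C++ as a per-pixel comparison against a depth buffer, with the render target updated only when the test passes. The buffer layout, the less-than comparison, and the function names are assumptions chosen for clarity; the fixed-function hardware additionally handles windowing, masking, and blending.

#include <cstddef>
#include <cstdint>
#include <vector>

struct Framebuffer {
    std::size_t width = 0;
    std::size_t height = 0;
    std::vector<float> depth;          // per-pixel depth values
    std::vector<std::uint32_t> color;  // packed per-pixel color values
};

// Write a shaded pixel only if it is closer than the value already stored,
// modeling a conventional less-than depth test.
void depth_tested_write(Framebuffer& fb, std::size_t x, std::size_t y,
                        float pixel_depth, std::uint32_t pixel_color) {
    if (x >= fb.width || y >= fb.height) return;  // outside the surface
    const std::size_t index = y * fb.width + x;
    if (pixel_depth < fb.depth[index]) {          // depth test passes
        fb.depth[index] = pixel_depth;
        fb.color[index] = pixel_color;
    }
}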
In some embodiments, graphics processor media pipeline2830includes a media engine2837and a video front-end2834. In some embodiments, video front-end2834receives pipeline commands from the command streamer2803. In some embodiments, media pipeline2830includes a separate command streamer. In some embodiments, video front-end2834processes media commands before sending the command to the media engine2837. In some embodiments, media engine2837includes thread spawning functionality to spawn threads for dispatch to thread execution logic2850via thread dispatcher2831. In some embodiments, graphics processor2800includes a display engine2840. In some embodiments, display engine2840is external to processor2800and couples with the graphics processor via the ring interconnect2802, or some other interconnect bus or fabric. In some embodiments, display engine2840includes a 2D engine2841and a display controller2843. In some embodiments, display engine2840contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller2843couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector. In some embodiments, the geometry pipeline2820and media pipeline2830are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor. Graphics Pipeline Programming FIG.29Ais a block diagram illustrating a graphics processor command format2900according to some embodiments.FIG.29Bis a block diagram illustrating a graphics processor command sequence2910according to an embodiment. The solid lined boxes inFIG.29Aillustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format2900ofFIG.29Aincludes data fields to identify a client2902, a command operation code (opcode)2904, and data2906for the command. A sub-opcode2905and a command size2908are also included in some commands. In some embodiments, client2902specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. 
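The command fields identified above forFIG.29A, namely client2902, opcode2904, sub-opcode2905, data2906, and command size2908, can be pictured with the following C++ sketch of a command header and a parser that routes each command to a client unit. The field widths, the ClientUnit enumeration, and the routing function are illustrative assumptions rather than an actual command encoding.

#include <cstdint>

enum class ClientUnit : std::uint8_t { kMemory, kRender, k2D, k3D, kMedia };

// Illustrative header modeled on the fields of command format2900.
struct GpuCommand {
    ClientUnit client;           // which client unit processes the command
    std::uint16_t opcode;        // command operation code
    std::uint16_t sub_opcode;    // optional sub-operation
    std::uint32_t size_dwords;   // explicit size, when one is required
    const std::uint32_t* data;   // command payload
};

// A parser examines the client field to condition further processing and to
// route the command data to the appropriate client unit.
void route_command(const GpuCommand& cmd) {
    switch (cmd.client) {
        case ClientUnit::kMemory: /* forward to the memory interface unit */ break;
        case ClientUnit::kRender: /* forward to the render unit */           break;
        case ClientUnit::k2D:     /* forward to the 2D unit */               break;
        case ClientUnit::k3D:     /* forward to the 3D unit */               break;
        case ClientUnit::kMedia:  /* forward to the media unit */            break;
    }
}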
Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode2904and, if present, sub-opcode2905to determine the operation to perform. The client unit performs the command using information in data field2906. For some commands an explicit command size2908is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word. Other command formats can be used. The flow diagram inFIG.29Billustrates an exemplary graphics processor command sequence2910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands at least partially concurrently. In some embodiments, the graphics processor command sequence2910may begin with a pipeline flush command2912to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline2922and the media pipeline2924do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked ‘dirty’ can be flushed to memory. In some embodiments, pipeline flush command2912can be used for pipeline synchronization or before placing the graphics processor into a low power state. In some embodiments, a pipeline select command2913is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command2913is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command2912is required immediately before a pipeline switch via the pipeline select command2913. In some embodiments, a pipeline control command2914configures a graphics pipeline for operation and is used to program the 3D pipeline2922and the media pipeline2924. In some embodiments, pipeline control command2914configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command2914is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands. In some embodiments, commands to configure the return buffer state2916are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing.
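A minimal parsing loop can illustrate the behavior described above: the parser reads the client, opcode, and sub-opcode from a command header, takes the explicit command size where one is present or derives the size from the opcode otherwise, and steps through the stream in double-word multiples. The bit positions and the opcode size table in this C++ sketch are assumptions for the example, not a documented encoding.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in for the parser table used when a command's length is implied by
// its opcode rather than stated explicitly.
static uint32_t implied_size_for_opcode(uint32_t opcode) {
    return (opcode == 0x01) ? 2u : 1u;
}

struct DecodedCommand {
    uint32_t client, opcode, sub_opcode, size_dwords;
};

// Decode one command header; the field positions are invented for the sketch.
static DecodedCommand decode_header(uint32_t header) {
    DecodedCommand c{};
    c.client      = (header >> 29) & 0x7;
    c.opcode      = (header >> 23) & 0x3F;
    c.sub_opcode  = (header >> 16) & 0x7F;
    uint32_t len  =  header        & 0xFF;   // explicit command size, if present
    c.size_dwords = len ? len : implied_size_for_opcode(c.opcode);
    return c;
}

int main() {
    // Two fabricated commands back to back in one double-word aligned stream.
    std::vector<uint32_t> stream = {
        (3u << 29) | (0x01u << 23) | (0x00u << 16) | 0u, 0xDEADBEEFu,  // size implied (2 dwords)
        (2u << 29) | (0x10u << 23) | (0x02u << 16) | 3u, 0x1u, 0x2u,   // explicit size 3 dwords
    };
    for (std::size_t pos = 0; pos < stream.size(); ) {
        DecodedCommand c = decode_header(stream[pos]);
        std::printf("client=%u opcode=0x%X sub=0x%X size=%u dwords\n",
                    c.client, c.opcode, c.sub_opcode, c.size_dwords);
        pos += c.size_dwords;   // advance in multiples of a double word
    }
}
```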
In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state2916includes selecting the size and number of return buffers to use for a set of pipeline operations. The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination2920, the command sequence is tailored to the 3D pipeline2922beginning with the 3D pipeline state2930or the media pipeline2924beginning at the media pipeline state2940. The commands to configure the 3D pipeline state2930include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state2930commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used. In some embodiments, 3D primitive2932command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive2932command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive2932command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, 3D primitive2932command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline2922dispatches shader execution threads to graphics processor execution units. In some embodiments, 3D pipeline2922is triggered via an execute2934command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a ‘go’ or ‘kick’ command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations. In some embodiments, the graphics processor command sequence2910follows the media pipeline2924path when performing media operations. In general, the specific use and manner of programming for the media pipeline2924depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives. In some embodiments, media pipeline2924is configured in a similar manner as the 3D pipeline2922. 
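The ordering of the 3D path of the command sequence described above (pipeline flush, pipeline select, pipeline control, return buffer state, 3D pipeline state, 3D primitive, execute) can be sketched as a small builder. The token values and the builder interface in the following C++ example are invented for the illustration; only the ordering reflects the description.

```cpp
#include <cstdint>
#include <vector>

// Sketch of assembling a command sequence in the order described for the 3D
// path. The token values are made up; only the ordering mirrors the text.
enum Token : uint32_t {
    PIPELINE_FLUSH = 1, PIPELINE_SELECT, PIPELINE_CONTROL,
    RETURN_BUFFER_STATE, STATE_3D, PRIMITIVE_3D, EXECUTE,
};

enum class Pipeline : uint32_t { ThreeD = 0, Media = 1 };

class CommandSequenceBuilder {
public:
    CommandSequenceBuilder& flush()                    { emit(PIPELINE_FLUSH); return *this; }
    CommandSequenceBuilder& select(Pipeline p)         { emit(PIPELINE_SELECT, static_cast<uint32_t>(p)); return *this; }
    CommandSequenceBuilder& control()                  { emit(PIPELINE_CONTROL); return *this; }
    CommandSequenceBuilder& return_buffers(uint32_t n) { emit(RETURN_BUFFER_STATE, n); return *this; }
    CommandSequenceBuilder& state_3d()                 { emit(STATE_3D); return *this; }
    CommandSequenceBuilder& primitive(uint32_t verts)  { emit(PRIMITIVE_3D, verts); return *this; }
    CommandSequenceBuilder& execute()                  { emit(EXECUTE); return *this; }
    const std::vector<uint32_t>& stream() const        { return buffer_; }
private:
    void emit(uint32_t token, uint32_t arg = 0) { buffer_.push_back(token); buffer_.push_back(arg); }
    std::vector<uint32_t> buffer_;
};

int main() {
    CommandSequenceBuilder seq;
    // Flush before switching pipelines, select the 3D pipeline once for the
    // context, then configure state and submit primitives before execute.
    seq.flush()
       .select(Pipeline::ThreeD)
       .control()
       .return_buffers(4)
       .state_3d()
       .primitive(/*vertex count*/ 3)
       .execute();
    return seq.stream().empty() ? 1 : 0;
}
```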
A set of commands to configure the media pipeline state2940are dispatched or placed into a command queue before the media object commands2942. In some embodiments, commands for the media pipeline state2940include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, commands for the media pipeline state2940also support the use of one or more pointers to “indirect” state elements that contain a batch of state settings. In some embodiments, media object commands2942supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command2942. Once the pipeline state is configured and media object commands2942are queued, the media pipeline2924is triggered via an execute command2944or an equivalent execute event (e.g., register write). Output from media pipeline2924may then be post processed by operations provided by the 3D pipeline2922or the media pipeline2924. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations. Graphics Software Architecture FIG.30illustrates an exemplary graphics software architecture for a data processing system3000according to some embodiments. In some embodiments, software architecture includes a 3D graphics application3010, an operating system3020, and at least one processor3030. In some embodiments, processor3030includes a graphics processor3032and one or more general-purpose processor core(s)3034. The graphics application3010and operating system3020each execute in the system memory3050of the data processing system. In some embodiments, 3D graphics application3010contains one or more shader programs including shader instructions3012. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) of Direct3D, the OpenGL Shader Language (GLSL), and so forth. The application also includes executable instructions3014in a machine language suitable for execution by the general-purpose processor core3034. The application also includes graphics objects3016defined by vertex data. In some embodiments, operating system3020is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system3020can support a graphics API3022such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system3020uses a front-end shader compiler3024to compile any shader instructions3012in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application3010. In some embodiments, the shader instructions3012are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API. In some embodiments, user mode graphics driver3026contains a back-end shader compiler3027to convert the shader instructions3012into a hardware specific representation. 
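The two shader delivery paths described above, shader source compiled by a front-end compiler into a lower-level form versus an intermediate representation such as SPIR handed to the back-end compiler in the user mode graphics driver, can be summarized with a short sketch. The function names and string placeholders in the following C++ example are assumptions for the illustration and do not correspond to any particular driver interface.

```cpp
#include <iostream>
#include <string>

// Illustrative only: models choosing between compiling high-level shader
// source up front and passing an intermediate form to the back-end compiler.
enum class ShaderForm { HighLevelSource, Intermediate };

std::string front_end_compile(const std::string& source) {
    return "lowered(" + source + ")";   // high-level language -> lower-level form
}
std::string back_end_compile(const std::string& ir) {
    return "isa(" + ir + ")";           // lower-level form -> hardware-specific code
}

std::string build_shader(ShaderForm form, const std::string& blob) {
    // Source shaders pass through the front-end compiler first; intermediate
    // forms go straight to the back-end compiler in the driver.
    std::string ir = (form == ShaderForm::HighLevelSource) ? front_end_compile(blob) : blob;
    return back_end_compile(ir);
}

int main() {
    std::cout << build_shader(ShaderForm::HighLevelSource, "ps_main") << "\n";
    std::cout << build_shader(ShaderForm::Intermediate, "spir_module") << "\n";
}
```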
When the OpenGL API is in use, shader instructions3012in the GLSL high-level language are passed to a user mode graphics driver3026for compilation. In some embodiments, user mode graphics driver3026uses operating system kernel mode functions3028to communicate with a kernel mode graphics driver3029. In some embodiments, kernel mode graphics driver3029communicates with graphics processor3032to dispatch commands and instructions. IP Core Implementations One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as “IP cores,” are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein. FIG.31Ais a block diagram illustrating an IP core development system3100that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system3100may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility3130can generate a software simulation3110of an IP core design in a high-level programming language (e.g., C/C++). The software simulation3110can be used to design, test, and verify the behavior of the IP core using a simulation model3112. The simulation model3112may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design3115can then be created or synthesized from the simulation model3112. The RTL design3115is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design3115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary. The RTL design3115or equivalent may be further synthesized by the design facility into a hardware model3120, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rdparty fabrication facility3165using non-volatile memory3140(e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection3150or wireless connection3160. The fabrication facility3165may then fabricate an integrated circuit that is based at least in part on the IP core design. 
The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein. FIG.31Billustrates a cross-section side view of an integrated circuit package assembly3170, according to some embodiments described herein. The integrated circuit package assembly3170illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly3170includes multiple units of hardware logic3172,3174connected to a substrate3180. The logic3172,3174may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic3172,3174can be implemented within a semiconductor die and coupled with the substrate3180via an interconnect structure3173. The interconnect structure3173may be configured to route electrical signals between the logic3172,3174and the substrate3180, and can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure3173may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic3172,3174. In some embodiments, the substrate3180is an epoxy-based laminate substrate. The substrate3180may include other suitable types of substrates in other embodiments. The package assembly3170can be connected to other electrical devices via a package interconnect3183. The package interconnect3183may be coupled to a surface of the substrate3180to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module. In some embodiments, the units of logic3172,3174are electrically coupled with a bridge3182that is configured to route electrical signals between the logic3172,3174. The bridge3182may be a dense interconnect structure that provides a route for electrical signals. The bridge3182may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic3172,3174. Although two units of logic3172,3174and a bridge3182are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge3182may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations. Exemplary System on a Chip Integrated Circuit FIG.32-33illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. FIG.32is a block diagram illustrating an exemplary system on a chip integrated circuit3200that may be fabricated using one or more IP cores, according to an embodiment.
Exemplary integrated circuit3200includes one or more application processor(s)3205(e.g., CPUs), at least one graphics processor3210, and may additionally include an image processor3215and/or a video processor3220, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit3200includes peripheral or bus logic including a USB controller3225, UART controller3230, an SPI/SDIO controller3235, and an I2S/I2C controller3240. Additionally, the integrated circuit can include a display device3245coupled to one or more of a high-definition multimedia interface (HDMI) controller3250and a mobile industry processor interface (MIPI) display interface3255. Storage may be provided by a flash memory subsystem3260including flash memory and a flash memory controller. Memory interface may be provided via a memory controller3265for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine3270. FIG.33A-33Bare block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein.FIG.33Aillustrates an exemplary graphics processor3310of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment.FIG.33Billustrates an additional exemplary graphics processor3340of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor3310ofFIG.33Ais an example of a low power graphics processor core. Graphics processor3340ofFIG.33Bis an example of a higher performance graphics processor core. Each of the graphics processors3310,3340can be variants of the graphics processor3210ofFIG.32. As shown inFIG.33A, graphics processor3310includes a vertex processor3305and one or more fragment processor(s)3315A-3315N (e.g.,3315A,3315B,3315C,3315D, through3315N-1, and3315N). Graphics processor3310can execute different shader programs via separate logic, such that the vertex processor3305is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s)3315A-3315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor3305performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s)3315A-3315N use the primitive and vertex data generated by the vertex processor3305to produce a framebuffer that is displayed on a display device. In one embodiment, the fragment processor(s)3315A-3315N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API. Graphics processor3310additionally includes one or more memory management units (MMUs)3320A-3320B, cache(s)3325A-3325B, and circuit interconnect(s)3330A-3330B. The one or more MMU(s)3320A-3320B provide for virtual to physical address mapping for the graphics processor3310, including for the vertex processor3305and/or fragment processor(s)3315A-3315N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s)3325A-3325B. 
In one embodiment the one or more MMU(s)3320A-3320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s)3205, image processor3215, and/or video processor3220ofFIG.32, such that each processor3205-3220can participate in a shared or unified virtual memory system. The one or more circuit interconnect(s)3330A-3330B enable graphics processor3310to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments. As shown inFIG.33B, graphics processor3340includes the one or more MMU(s)3320A-3320B, cache(s)3325A-3325B, and circuit interconnect(s)3330A-3330B of the graphics processor3310ofFIG.33A. Graphics processor3340includes one or more shader cores3355A-3355N (e.g.,3355A,3355B,3355C,3355D,3355E,3355F, through3355N-1, and3355N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor3340includes an inter-core task manager3345, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores3355A-3355N and a tiling unit3358to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches. In some embodiments, an apparatus includes a plurality of processors including a host processor and a plurality of graphics processing units (GPUs) to process data, each of the plurality of GPUs including a prefetcher and a cache; and a memory for storage of data, the memory including a plurality of memory elements, wherein the prefetcher of each of the plurality of GPUs is to prefetch data from the memory to the cache of the respective GPU; and wherein the prefetcher of a GPU of the plurality of GPUs is prohibited from prefetching from a page that is not owned by the GPU or by the host processor. In some embodiments, the memory includes a unified virtual memory. In some embodiments, a prefetch of a page by a prefetcher of a GPU of the plurality of GPUs is halted upon reaching a boundary of the page. In some embodiments, a prefetch of a page by a prefetcher of a GPU of the plurality of GPUs is halted upon reaching a boundary of a memory surface. In some embodiments, a prefetch instruction from a prefetcher of a GPU of the plurality of GPUs is a gather/scatter prefetch message including a plurality of prefetch addresses. In some embodiments, the apparatus is to parse the gather/scatter prefetch message and issue a prefetch message for each of the plurality of prefetch addresses. In some embodiments, the gather/scatter prefetch message further includes an entry for each of the plurality of addresses to indicate a cache level for prefetching. In some embodiments, a prefetcher of a GPU of the plurality of GPUs is to send a flag to a thread in a core of the GPU when a prefetch for the thread is complete.
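The prefetch restrictions summarized above, namely that a GPU's prefetcher does not prefetch from a page that is not owned by that GPU or by the host processor, and that a prefetch is halted upon reaching a page boundary, can be modeled with a short software sketch. The 4 KiB page size, the ownership bookkeeping, and the cache-line stepping in the following C++ example are assumptions introduced for the illustration.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative model of the ownership rule: a GPU's prefetcher only fetches
// from pages owned by that GPU or by the host processor, and a streaming
// prefetch halts at the page boundary.
constexpr uint64_t kPageSize = 4096;   // assumed page size for the sketch

enum class Owner : int { Host = -1, Unowned = -2 };   // GPU ids are >= 0

struct PageTable {
    std::unordered_map<uint64_t, int> owner_of_page;  // page number -> owner id
    int owner(uint64_t addr) const {
        auto it = owner_of_page.find(addr / kPageSize);
        return it == owner_of_page.end() ? static_cast<int>(Owner::Unowned) : it->second;
    }
};

// Returns the addresses the prefetcher is allowed to fetch, starting at
// `addr` and stepping by cache lines, stopping at the page boundary or at
// the first page not owned by this GPU or the host.
std::vector<uint64_t> plan_prefetch(const PageTable& pt, int gpu_id,
                                    uint64_t addr, uint64_t line_bytes, int lines) {
    std::vector<uint64_t> allowed;
    uint64_t page_end = (addr / kPageSize + 1) * kPageSize;
    for (int i = 0; i < lines; ++i) {
        uint64_t a = addr + i * line_bytes;
        if (a >= page_end) break;                               // halt at page boundary
        int owner = pt.owner(a);
        if (owner != gpu_id && owner != static_cast<int>(Owner::Host))
            break;                                              // prohibited page
        allowed.push_back(a);
    }
    return allowed;
}

int main() {
    PageTable pt;
    pt.owner_of_page[0x1000 / kPageSize] = 0;                   // page owned by GPU 0
    auto lines = plan_prefetch(pt, /*gpu_id=*/0, 0x1F00, 64, 16);
    return lines.empty();                                       // prefetch stops at 0x2000
}
```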
In some embodiments, one or more non-transitory computer-readable storage mediums having stored thereon executable computer program instructions that, when executed by one or more processors, cause the one or more processors to perform operations including generating a prefetch instruction by a prefetcher of a first graphics processing unit (GPU), the first GPU being one GPU of a plurality of GPUs in a computing system, the prefetch instruction being directed to a memory including a plurality of memory elements; and caching prefetched data in a cache of the first GPU, wherein the prefetcher of the first GPU is prohibited from prefetching from a page that is not owned by the first GPU or by a host processor of the computing system. In some embodiments, the memory includes a unified virtual memory. In some embodiments, the instructions further include instructions for halting a prefetch of a page by the prefetcher of the first GPU upon reaching a boundary of the page. In some embodiments, the instructions further include instructions for halting a prefetch of a page by the prefetcher of the first GPU upon reaching a boundary of a memory surface. In some embodiments, a prefetch instruction from the prefetcher of the first GPU is a gather/scatter prefetch message including a plurality of prefetch addresses. In some embodiments, the instructions further include instructions for parsing the gather/scatter prefetch message and issuing a prefetch message for each of the plurality of prefetch addresses. In some embodiments, the gather/scatter prefetch message further includes an entry for each of the plurality of addresses to indicate a cache level for prefetching. In some embodiments, the instructions further include instructions for sending a flag from the prefetcher of the first GPU to a thread in a core of the first GPU when a prefetch for the thread is complete. In some embodiments, a method includes generating a prefetch instruction by a prefetcher of a first graphics processing unit (GPU), the first GPU being one GPU of a plurality of GPUs in a computing system, the prefetch instruction being directed to a memory including a plurality of memory elements; and caching prefetched data in a cache of the first GPU, wherein the prefetcher of the first GPU is prohibited from prefetching from a page that is not owned by the first GPU or by a host processor of the computing system. In some embodiments, the memory includes a unified virtual memory. In some embodiments, the method further includes halting a prefetch of a page by the prefetcher of the first GPU upon reaching a boundary of the page. In some embodiments, the method further includes halting a prefetch of a page by the prefetcher of the first GPU upon reaching a boundary of a memory surface. In some embodiments, a prefetch instruction from the prefetcher of the first GPU is a gather/scatter prefetch message including a plurality of prefetch addresses. In some embodiments, the method further includes parsing the gather/scatter prefetch message and issuing a prefetch message for each of the plurality of prefetch addresses. In some embodiments, the gather/scatter prefetch message further includes an entry for each of the plurality of addresses to indicate a cache level for prefetching. In some embodiments, the method further includes sending a flag from the prefetcher of the first GPU to a thread in a core of the first GPU when a prefetch for the thread is complete. 
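The gather/scatter prefetch message described above, carrying a plurality of prefetch addresses, an entry per address indicating the cache level for prefetching, and a completion flag sent to the requesting thread, can likewise be illustrated in software. The message layout and the notification mechanism in the following C++ sketch are assumptions for the example and are not a hardware message format.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Invented structures for the sketch: one message carries several prefetch
// addresses, each with a target cache level, plus the id of the thread to
// flag when the prefetch completes.
enum class CacheLevel : uint8_t { L1 = 1, L2 = 2, L3 = 3 };

struct PrefetchEntry {
    uint64_t   address;
    CacheLevel level;     // per-address target cache level
};

struct GatherScatterPrefetchMessage {
    uint32_t thread_id;                  // thread to notify on completion
    std::vector<PrefetchEntry> entries;  // plurality of prefetch addresses
};

struct ThreadFlags {
    std::vector<bool> prefetch_done;
};

// Parses the message, issues one prefetch per address, and then raises the
// completion flag for the requesting thread.
void process_message(const GatherScatterPrefetchMessage& msg, ThreadFlags& flags) {
    for (const PrefetchEntry& e : msg.entries) {
        std::cout << "prefetch 0x" << std::hex << e.address << std::dec
                  << " into L" << static_cast<int>(e.level) << "\n";
    }
    flags.prefetch_done[msg.thread_id] = true;   // signal the waiting thread
}

int main() {
    ThreadFlags flags{std::vector<bool>(8, false)};
    GatherScatterPrefetchMessage msg{
        /*thread_id=*/3,
        {{0x1000, CacheLevel::L1}, {0x8040, CacheLevel::L3}, {0x20c0, CacheLevel::L2}}};
    process_message(msg, flags);
    return flags.prefetch_done[3] ? 0 : 1;
}
```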
In some embodiments, an apparatus includes means for generating a prefetch instruction by a prefetcher of a first graphics processing unit (GPU), the first GPU being one GPU of a plurality of GPUs in a computing system, the prefetch instruction being directed to a memory including a plurality of memory elements; and means for caching prefetched data in a cache of the first GPU, wherein the prefetcher of the first GPU is prohibited from prefetching from a page that is not owned by the first GPU or by a host processor of the computing system. In some embodiments, the memory includes a unified virtual memory. In some embodiments, the apparatus further includes means for halting a prefetch of a page by the prefetcher of the first GPU upon reaching a boundary of the page. In some embodiments, the apparatus further includes means for halting a prefetch of a page by the prefetcher of the first GPU upon reaching a boundary of a memory surface. In some embodiments, a prefetch instruction from the prefetcher of the first GPU is a gather/scatter prefetch message including a plurality of prefetch addresses. In some embodiments, the apparatus further includes means for parsing the gather/scatter prefetch message and issuing a prefetch message for each of the plurality of prefetch addresses. In some embodiments, the gather/scatter prefetch message further includes an entry for each of the plurality of addresses to indicate a cache level for prefetching. In some embodiments, the apparatus further includes means for sending a flag from the prefetcher of the first GPU to a thread in a core of the first GPU when a prefetch for the thread is complete. In the description above, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent, however, to one skilled in the art that embodiments may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form. There may be intermediate structure between illustrated components. The components described or illustrated herein may have additional inputs or outputs that are not illustrated or described. Various embodiments may include various processes. These processes may be performed by hardware components or may be embodied in computer program or machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or logic circuits programmed with the instructions to perform the processes. Alternatively, the processes may be performed by a combination of hardware and software. Portions of various embodiments may be provided as a computer program product, which may include a computer-readable medium having stored thereon computer program instructions, which may be used to program a computer (or other electronic devices) for execution by one or more processors to perform a process according to certain embodiments. The computer-readable medium may include, but is not limited to, magnetic disks, optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or other type of computer-readable medium suitable for storing electronic instructions.
Moreover, embodiments may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer. In some embodiments, a non-transitory computer-readable storage medium has stored thereon data representing sequences of instructions that, when executed by a processor, cause the processor to perform certain operations. Many of the methods are described in their most basic form, but processes can be added to or deleted from any of the methods and information can be added or subtracted from any of the described messages without departing from the basic scope of the present embodiments. It will be apparent to those skilled in the art that many further modifications and adaptations can be made. The particular embodiments are not provided to limit the concept but to illustrate it. The scope of the embodiments is not to be determined by the specific examples provided above but only by the claims below. If it is said that an element "A" is coupled to or with element "B," element A may be directly coupled to element B or be indirectly coupled through, for example, element C. When the specification or claims state that a component, feature, structure, process, or characteristic A "causes" a component, feature, structure, process, or characteristic B, it means that "A" is at least a partial cause of "B" but that there may also be at least one other component, feature, structure, process, or characteristic that assists in causing "B." If the specification indicates that a component, feature, structure, process, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, process, or characteristic is not required to be included. If the specification or claim refers to "a" or "an" element, this does not mean there is only one of the described elements. An embodiment is an implementation or example. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. It should be appreciated that in the foregoing description of exemplary embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various novel aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, novel aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims are hereby expressly incorporated into this description, with each claim standing on its own as a separate embodiment.
11861760
Like reference numerals are used for like components where appropriate in the drawings. DETAILED DESCRIPTION A first embodiment of the technology described herein comprises a method of operating a tile-based graphics processor that is operable to execute a graphics processing pipeline, the graphics processing pipeline including at least a primitive providing stage and one or more subsequent processing stages, wherein the primitive providing stage is operable to provide primitives to be processed for a rendering tile to the one or more subsequent processing stages for processing, and the one or more subsequent processing stages are operable to perform one or more processing steps in respect of primitives provided for processing by the primitive providing stage; the method comprising:the primitive providing stage providing primitives to be processed for a rendering tile to the one or more subsequent processing stages of the graphics processing pipeline for processing; andwhen there are no more primitives left for the primitive providing stage to provide for processing to the one or more subsequent processing stages of the graphics processing pipeline for the rendering tile, determining whether one or more processing steps to be performed by one or more of the one or more subsequent processing stages of the graphics processing pipeline in respect of a primitive provided for processing by the primitive providing stage for the rendering tile need not be performed; andwhen it is determined that one or more processing steps to be performed by one or more of the one or more subsequent processing stages of the graphics processing pipeline in respect of a primitive provided for processing by the primitive providing stage for the rendering tile need not be performed, causing the one or more of the one or more subsequent processing stages of the graphics processing pipeline to omit performing the one or more processing steps in respect of the primitive. A second embodiment of the technology described herein comprises a tile-based graphics processor that is operable to execute a graphics processing pipeline, the graphics processing pipeline including one or more processing stages that are operable to perform one or more processing steps in respect of primitives provided for processing; the tile-based graphics processor comprising:a primitive providing circuit configured to provide primitives to be processed for a rendering tile to the one or more processing stages of the graphics processing pipeline for processing; anda determining circuit configured to, when there are no more primitives left for the primitive providing circuit to provide for processing to the one or more processing stages of the graphics processing pipeline for a rendering tile, determine whether one or more processing steps to be performed by one or more of the one or more processing stages of the graphics processing pipeline in respect of a primitive provided for processing by the primitive providing circuit for the rendering tile need not be performed; and to:when it is determined that one or more processing steps to be performed by one or more of the one or more processing stages of the graphics processing pipeline in respect of a primitive provided for processing by the primitive providing circuit for the rendering tile need not be performed, cause the one or more of the one or more processing stages of the graphics processing pipeline to omit performing the one or more processing steps in respect of the primitive. 
The technology described herein is concerned with arrangements in which it is determined whether a graphics processing pipeline need not perform one or more processing steps (e.g. processing operations) in respect of a primitive (or primitives) that has already begun its processing in the graphics processing pipeline, such that a processing step or steps which the graphics processing pipeline would otherwise perform can be omitted (i.e. not performed) in respect of such a primitive. In the technology described herein, it is determined whether a processing step (operation) to be performed by the graphics processing pipeline need not be performed when there are (and in an embodiment in response to there being) no more primitives left to be provided for processing (e.g. rasterising and rendering) to the pipeline for a rendering tile. As will be discussed in more detail below, the Applicants have recognised that when the “end” of a rendering tile is reached (i.e. when there are no more primitives left to be provided for processing (e.g. rasterising and rendering) to the pipeline for that rendering tile), it may often be the case that there are primitives (fragments) already and still in the pipeline that can be discarded or “killed”, without affecting the desired output for that rendering tile. This may be the case, for example, due to such primitives (fragments) only affecting buffers that will not be output by the pipeline when the rendering tile is completed. For example, it may typically be the case that only colour buffer data is required to be output from a tile-based graphics processing pipeline once it has completed a rendering tile, whereas other data generated by the pipeline while generating the rendering tile, such as depth and stencil buffer data, will not be required to be output. As such, fragments that will update the depth and/or stencil buffer data, but not the output colour buffer, may be discardable (and in an embodiment are discarded) upon reaching the end of a rendering tile. Thus, in the technology described herein, reaching the end of a rendering tile is used to affect and control processing remaining to be performed in respect of primitives for that rendering tile that are already in the pipeline (and whose processing is still to be completed). This can help to reduce the processing effort required to generate a rendering tile, and thus the overall render output, e.g. a frame for display. This is generally advantageous, but may be particularly advantageous in contexts in which resources are limited, such as in portable devices, e.g. mobile phones and tablets. It will be appreciated, therefore, that the technology described herein provides an improved tile-based graphics processor. The tile-based graphics processor (and pipeline) should, and in an embodiment does, generate an overall render output on a tile-by-tile basis. The render output (area) should thus be, and in an embodiment is, divided into plural rendering tiles for rendering purposes. In an embodiment, each rendering tile that the graphics processor (and pipeline) generates for a render output is generated in the manner of the technology described herein.
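The end-of-tile behaviour described above can be pictured with a simple software model: once there are no more primitives to provide for the tile, in-flight work whose remaining writes touch only buffers that will not be written out (for example depth or stencil data where only the colour buffer is output) is discarded. The data structures in the following C++ sketch are assumptions introduced for the illustration and do not represent the actual pipeline interfaces.

```cpp
#include <cstdint>
#include <vector>

// Buffers a fragment's remaining processing may write to, as a bit mask.
enum BufferBits : uint32_t { COLOUR = 1u << 0, DEPTH = 1u << 1, STENCIL = 1u << 2 };

struct InFlightFragment {
    uint32_t writes_mask;   // which tile buffers this fragment will update
    bool     discarded = false;
};

// Called when the primitive providing stage has no primitives left for the
// tile: any in-flight fragment whose remaining writes do not touch an output
// buffer can have its remaining processing steps omitted.
int kill_non_output_work(std::vector<InFlightFragment>& in_flight, uint32_t output_mask) {
    int killed = 0;
    for (InFlightFragment& f : in_flight) {
        if (!f.discarded && (f.writes_mask & output_mask) == 0) {
            f.discarded = true;   // omit its remaining processing steps
            ++killed;
        }
    }
    return killed;
}

int main() {
    // Typical case from the description: only the colour buffer is output.
    std::vector<InFlightFragment> in_flight = {
        {COLOUR | DEPTH}, {DEPTH}, {STENCIL}, {COLOUR},
    };
    return kill_non_output_work(in_flight, COLOUR) == 2 ? 0 : 1;
}
```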
Thus, in an embodiment, the primitive providing stage (circuit) provides to the one or more (subsequent) processing stages of the graphics processing pipeline, the primitives for each rendering tile of a set of plural rendering tiles that a render output is divided into for rendering purposes, and it is in an embodiment determined whether one or more processing steps remaining to be performed for a respective rendering tile need not be performed (and processing step(s) are potentially omitted) each time there are no primitives left to be provided for a respective rendering tile of the render output. The overall render output may comprise any suitable render output, such as frame for display, or render-to-texture output, etc. The render output will typically comprise an array of data elements (sampling points) (e.g. pixels), for each of which appropriate render output data (e.g. a set of colour value data) is generated by the graphics processor. The render output data may comprise colour data, for example, a set of red, green and blue, RGB values and a transparency (alpha, a) value. However, the render output data may comprise data other than colour data, such as depth (Z) and/or stencil (S) data. The tiles that the render output is divided into for rendering purposes can be any suitable and desired such tiles. The size and shape of the rendering tiles may normally be dictated by the tile configuration that the graphics processor is configured to use and handle. The rendering tiles are in an embodiment all the same size and shape (i.e. regularly-sized and shaped tiles are in an embodiment used), although this is not essential. The tiles are in an embodiment rectangular, and in an embodiment square. The size and number of tiles can be selected as desired. In an embodiment, each tile is 16×16, or 32×32 data elements (e.g. fragments or pixels) in size (with the render output then being divided into however many such tiles as are required for the render output size and shape that is being used). To facilitate tile-based graphics processing, the tile-based graphics processor should, and in an embodiment does, include one or more tile buffers that store rendered data for a rendering tile being rendered by the tile-based graphics processor, until the tile-based graphics processor completes the rendering of the rendering tile. The tile buffer may store an array or arrays of sample values for the tile in question, with the sample values in an embodiment being grouped into sets of one or more sample values (such as groups of 2×2 sample values) that are each associated with a respective (e.g. display) pixel. The sample values may, e.g., comprise colour values (a colour buffer), depth values (a depth buffer), stencil values (a stencil buffer), etc. The tile buffer should be, and in an embodiment is, provided local to (i.e. on the same chip as) the tile-based graphics processor, for example, and in an embodiment, as part of RAM that is located on (local to) the graphics processor (chip). The tile buffer may accordingly have a fixed storage capacity, for example corresponding to the data (e.g. for an array or arrays of sample values) that the tile-based graphics processor needs to store for (only) a single rendering tile until the rendering of that tile is completed. Once a rendering tile is completed by the tile-based graphics processor, rendered data for the rendering tile should be, and in an embodiment is, written out from the tile buffer to other storage that is in an embodiment external to (i.e. 
on a different chip to) the tile-based graphics processor, such as a frame buffer in external memory, for use. Rendering tiles may be combined to form the render output in any suitable and desired manner. In an embodiment, when each rendering tile of a render output is completed by the graphics processor, output data for the rendering tile is written out from the tile buffer to external memory, such as a frame buffer, such that rendering tiles for the render output are combined in the external memory. The graphics processor in an embodiment includes a write out circuit coupled to the tile buffer for this purpose. The external memory could be, and is in an embodiment, on a different chip to the graphics processor, and may, for example, be a main memory of the overall graphics processing system that the graphics processor is part of. It may be dedicated memory for this purpose or it may be part of a memory that is used for other data as well. The primitive providing stage (circuit) can be any suitable and desired graphics processing stage (circuit) (of the graphics processing pipeline) that can provide primitives for processing to subsequent processing stages of the graphics processing pipeline. In an embodiment, the graphics processing pipeline includes (prior to the primitive providing stage) a primitive list preparing stage (a “tiler”) that prepares primitive lists for respective regions of the render output, and the primitive providing stage (circuit) comprises (or is) a primitive list reader that reads primitives listed for a rendering tile by the primitive list preparing stage (“tiler”), and passes those read primitives to the one or more (subsequent) stages of the graphics processing pipeline for processing. In this case, the regions of the render output that the primitive list preparing stage (“tiler”) can prepare primitive lists for may correspond e.g. to single rendering tiles, or to sets of plural rendering tiles (e.g. in the case of “hierarchical tiling” arrangements). Accordingly, depending on how primitives are listed, the primitive list reader may read primitives for any particular rendering tile from a single primitive list or from plural primitive lists. In this case, in an embodiment, the “end” of a rendering tile (that is, the lack of any primitives remaining to be provided for processing for the rendering tile) will be when there are no primitives (to be rasterised and rendered) listed in the primitive list(s) for the rendering tile that the primitive list reader has not read and passed to the one or more (subsequent) stages of the graphics processing pipeline for processing. Thus, in an embodiment, the determining of whether one or more processing steps need not be performed is performed when there are (in response to there being) no primitives listed in the primitive list(s) for a rendering tile that have not been read and passed to the one or more (subsequent) stages of the graphics processing pipeline for processing (e.g. rasterising and rendering). The determining of whether one or more processing steps need not be performed may be triggered upon reaching the “end” of a rendering tile in any suitable and desired manner. In an embodiment, the determining of whether one or more processing steps need not be performed is triggered when it is recognised (i.e. in response to it being recognised) that there are no primitives left for the primitive providing stage (e.g. 
primitive list reader) to provide for processing to the one or more subsequent processing stages for the rendering tile. For example, the e.g. primitive list reader could determine that the “end” of a rendering tile has been reached when there are no primitives left in the primitive list(s) for the rendering tile in question (that it has not read and passed on). In an embodiment, a marker included in the primitive list(s) for a rendering tile (explicitly) indicates the “end” of the rendering tile, and the e.g. primitive list reader recognises that the “end” of a rendering tile has been reached when it encounters such a marker. Such a marker may comprise, for example and in an embodiment, an end of tile flag, a new tile command, etc. The e.g. primitive list reader may, in response to recognising the “end” of a rendering tile, cause the determining of whether one or more processing steps need not be performed, e.g. by issuing an appropriate command. Thus, in an embodiment the determining of whether one or more processing steps for a rendering tile need not be performed is performed in response to (the recognition of) there being no more primitives left for the primitive providing stage to provide for processing to the one or more subsequent processing stages of the graphics processing pipeline for the rendering tile. In another embodiment, the determining of whether one or more processing steps need not be performed is triggered by a (e.g. the final) command listed in the primitive lists(s) for the rendering tile in question. In this case, the command may, for example, be included in the primitive list(s) for a (and in an embodiment each) rendering tile, in an embodiment by the primitive list preparing stage (“tiler”). Thus in an embodiment, the determining of whether one or more processing steps for a rendering tile need not be performed is performed in response to a command issued to the one or more subsequent processing stages of the graphics processing pipeline for the rendering tile. The command is in an embodiment the last command that the primitive providing stage (e.g. primitive list reader) provides to the one or more subsequent stages for the rendering tile in question. The one or more (subsequent) processing stages of the graphics processing pipeline can be any suitable and desired graphics processing stages (circuits) that can process primitives provided for processing by the primitive providing stage (circuit) (e.g. primitive list reader). In an embodiment, the one or more (subsequent) processing stages comprise (or are) one or more earlier processing stages and one or more later processing stages. It will be appreciated here that a later processing stage refers to a processing stage of the graphics processing pipeline that may perform its respective processing step(s) in respect of a primitive after an earlier processing stage, and thus may use a processing result generated by an earlier processing stage. For example, and in an embodiment, the one or more earlier processing stages comprise (or are) a rasteriser, and the one or more later processing stages comprise (or are) one or more fragment processing stages (circuits). In this case, the rasteriser in an embodiment receives primitives from the primitive providing stage (circuit) (e.g. primitive list reader), rasterises those primitives to fragments, and provides the fragments to the (first one of the) one or more fragment processing stages (circuits) for processing. 
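The triggering arrangement described above, in which the primitive list reader passes primitives for a tile to the subsequent stages and, on encountering an end-of-tile marker in the primitive list, issues a command that starts the determination, can be sketched as follows. The list encoding and the interface to the later stages in this C++ example are assumptions made for the illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Invented encoding: a primitive list holds primitive entries followed by an
// end-of-tile marker (e.g. an end of tile flag or new tile command).
struct ListEntry {
    enum Kind { Primitive, EndOfTile } kind;
    uint32_t primitive_id;              // valid when kind == Primitive
};

// Stand-in for the subsequent processing stages of the pipeline.
struct LaterStages {
    void submit_primitive(uint32_t id) { std::cout << "submit primitive " << id << "\n"; }
    void begin_end_of_tile_determination() {
        std::cout << "end of tile: determine which remaining steps can be omitted\n";
    }
};

void read_primitive_list(const std::vector<ListEntry>& list, LaterStages& stages) {
    for (const ListEntry& e : list) {
        if (e.kind == ListEntry::Primitive) {
            stages.submit_primitive(e.primitive_id);
        } else {
            // No primitives left for this tile: trigger the determination.
            stages.begin_end_of_tile_determination();
            break;
        }
    }
}

int main() {
    LaterStages stages;
    read_primitive_list({{ListEntry::Primitive, 1},
                         {ListEntry::Primitive, 2},
                         {ListEntry::EndOfTile, 0}}, stages);
}
```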
The one or more fragment processing stages (circuits) in an embodiment receive fragments from the rasteriser, and are operable to perform one or more fragment processing (e.g. rendering) steps (operations) on those fragments to generate rendered fragment data, which rendered fragment data may be written to (an appropriate buffer of) the tile buffer. The rasteriser can rasterise primitives provided by the primitive providing stage (circuit) (e.g. primitive list reader) to fragments in any suitable and desired manner. In an embodiment, the rasteriser is configured to perform “hierarchical rasterisation”. Thus, the rasteriser is in an embodiment operable to test primitives to be rasterised against progressively smaller patches (regions) of the render output area, e.g. and in an embodiment, in an iterative manner. Other arrangements for the rasteriser would be possible. For example, in other embodiments, the rasteriser rasterises primitives to fragments in a non-hierarchical manner. The one or more fragment processing stages (circuits) can be any suitable graphics processing pipeline stages that can perform any appropriate fragment processing (e.g. rendering) steps (operations) in respect of fragments generated by the rasteriser. The one or more fragment processing stages may comprise, for example, any one or more of: a fragment output buffer that is operable to issue fragments to a next stage of the graphics processing system; an (early and/or late) fragment depth or stencil testing stage that is operable to perform a fragment depth or stencil test, and to write out fragment depth or stencil test results to a (e.g. depth or stencil tile) buffer of the graphics processing system; and a fragment renderer that is operable to perform fragment rendering to generate rendered fragment data, and to write out rendered fragment data to a (e.g. colour tile) buffer of the graphics processing system. The fragment rendering may comprise, for example, texture mapping, blending, shading, etc. As discussed above, in embodiments of the technology described herein, when there are no primitives left to be provided for processing (e.g. rasterising and rendering) for a rendering tile (i.e. when the “end” of a rendering tile has been reached), a determining operation is triggered in which it is determined (by the determining circuit) whether one or more processing steps (operations) that remain to be performed for the rendering tile can be omitted. It will be appreciated here that the determining operation of the technology described herein should be performed in respect of a rendering tile after the primitive providing stage (circuit) (e.g. the primitive list reader) has provided all of the primitives that are to be processed (e.g. rasterised and rendered) for the rendering tile to the one or more (subsequent) processing stages for processing, but in an embodiment before the one or more (subsequent) processing stages have completed all of the processing steps in respect of those primitives, e.g. such that at the point in time at which it is determined whether a processing step can be omitted in respect of the rendering tile, there remains processing step(s) left to be performed (and so potentially omitted) in respect of the rendering tile in question. Correspondingly, in an embodiment, the determining operation in respect of a rendering tile is performed after all of the primitives that the rasteriser is to rasterise for the rendering tile in question have been received by the rasteriser. 
In an embodiment, the determining operation in respect of a rendering tile is performed after the rasteriser has completed the rasterisation of all of the primitives that the rasteriser is to rasterise for the rendering tile in question. Correspondingly, in an embodiment, the determining operation in respect of a rendering tile is performed after all of the fragments that the one or more fragment processing stages are to process (e.g. render) for the rendering tile in question have been received by (at least the first of) the one or more fragment processing stages, but in an embodiment before the one or more fragment processing stages have completed all of the fragment processing in respect of those fragments. It may be determined (by the determining circuit) that one or more processing steps (operations) need not be performed for any suitable and desired reasons, and based on any suitable and desired criteria. In an embodiment, a processing step (e.g. operation) that remains to be performed for a rendering tile upon reaching the “end” of the rendering tile can be omitted if it can be determined that that processing step will not affect a buffer (of the tiles buffers) that will be output (to the main memory) when the graphics processor completes the rendering tile (i.e. when all of the processing steps to be performed by the one or more (subsequent) processing stages for the rendering tile have been completed). Thus, in an embodiment, it is determined (by the determining circuit) that a processing step (e.g. operation) need not be performed when it is determined that the processing step will not affect an output buffer, i.e. a buffer to be written out from the tile buffer upon completing the rendering tile in question. It will be appreciated here that it may typically be the case that only a colour buffer (in the tile buffer) will be output to (a frame buffer in) the e.g. main memory once a rendering tile has been completed by the graphics processor, and other buffers, e.g. depth (Z) and stencil (S) buffers, will typically not be output to the e.g. main memory. Thus, in an embodiment, an output buffer (i.e. a buffer to be written out from the tile buffer upon completing the rendering tile in question) is a colour buffer, and other buffers, such as depth and/or stencil buffers are not output buffers. However, other output buffers are possible. For example, there could be plural different, e.g. colour, output buffers. Moreover, a depth and/or stencil buffer(s) could be an output buffer. For example, in the case of a deferred rendering scheme, a depth buffer may be an output buffer for an initial depth only rendering pass. It may be determined (by the determining circuit) that a processing step (e.g. operation) will not affect an output buffer (i.e. a buffer to be written out from the tile buffer upon completing the rendering tile in question) in any suitable and desired manner. In an embodiment, it may be determined (by the determining circuit) that a processing step (e.g. operation) will not affect an output buffer when the processing step is a processing step in respect of a primitive that does not (directly) write to an output buffer (e.g. when the processing step is a processing step in respect of a primitive that writes to another buffer, such as a depth and/or stencil buffer). Thus, in an embodiment, it is determined whether a primitive does not write data to an output buffer (i.e. 
a buffer to be written out from the tile buffer upon completing the rendering tile in question), and it is determined that a processing step to be performed in respect of the primitive need not be performed when it is determined that the primitive does not write data to an output buffer. As well as considering whether a primitive will directly write to an output buffer, the possibility of a primitive indirectly affecting a write to an output buffer may also be taken into account. For example, it could be the case that even though a first primitive does not itself directly write to an output buffer, the processing of the first primitive will affect the processing of a second primitive that will write to an output buffer. This may be achieved as desired. In an embodiment, it may be determined (by the determining circuit) that a processing step (e.g. operation) cannot (directly or indirectly) affect an output buffer when the processing step is a processing step in respect of a primitive that does not itself write data to an output buffer, and which was provided for processing by the primitive providing stage (circuit) after any other primitives for the rendering tile that do write data to an output buffer. Thus, in an embodiment, it is determined whether a primitive was provided for processing by the primitive providing stage (circuit) after any other primitives for the rendering tile that write data to an output buffer (i.e. a buffer to be written out from the tile buffer upon completing the rendering tile in question), and it is determined that a processing step (e.g. operation) to be performed in respect of the primitive need not be performed when it is determined that the primitive (does not write data to an output buffer and) was provided for processing by the primitive providing stage (circuit) after any other primitives for the rendering tile that write data to an output buffer. A processing step (e.g. operation) in respect of a primitive may be caused to be omitted in any suitable and desired manner. In an embodiment, a command is issued to the one or more (subsequent) processing stages that triggers one or more of the one or more (subsequent) processing stages to determine that a processing step or steps to be performed in respect of a primitive that does not affect a buffer to be written out from the tile buffer need not be performed. The command may be provided by the primitive list providing stage (e.g. primitive list reader), e.g. as described above. In an embodiment, the one or more later (e.g. fragment) processing stages of the graphics processing pipeline can be caused, by the one or more earlier processing stages (e.g. the rasteriser), to omit performing one or more processing steps in respect of (fragments for) a primitive. It will be appreciated that in these embodiments, the one or more earlier processing stages may comprise the determining circuit. In this case, in an embodiment, the one or more later (e.g. fragment) processing stages of the graphics processing pipeline are caused, by the one or more earlier processing stages (e.g. the rasteriser), to omit performing one or more processing steps in respect of (fragments for) a primitive that does not affect (that has been determined as not affecting) an output buffer (i.e. a buffer to be written out from the tile buffer upon completing the rendering tile in question). In one such embodiment, the one or more earlier processing stages (e.g. 
the rasteriser) of the graphics processing pipeline are operable to determine whether a primitive will be overdrawn such that one or more processing steps (e.g. operations) to be performed by the one or more later (e.g. fragment) processing stages of the graphics processing pipeline in respect of the primitive need not be performed, and to, when it is determined that a primitive will be overdrawn such that one or more processing steps to be performed by the one or more later (e.g. fragment) processing stages in respect of the primitive need not be performed, cause the one or more later (e.g. fragment) processing stages to omit performing the one or more processing steps in respect of the primitive. In this case, in an embodiment, when there are no more primitives left for the primitive providing stage (circuit) to provide for processing to the one or more (subsequent) processing stages of the graphics processing pipeline for the rendering tile, the one or more earlier processing stages (e.g. the rasteriser) are caused to determine that a primitive that does not affect an output buffer will be overdrawn such that one or more processing steps (operations) to be performed by the one or more later (e.g. fragment) processing stages in respect of the primitive that does not affect an output buffer need not be performed, and to therefore cause the one or more later (e.g. fragment) processing stages to omit performing the one or more processing steps in respect of the primitive that does not affect an output buffer. The one or more earlier processing stages (e.g. the rasteriser) may be caused to determine that a primitive that does not affect an output buffer will be overdrawn in any suitable and desired manner. In an embodiment, a (the) command is issued to the one or more earlier processing stages (e.g. the rasteriser) that triggers the one or more earlier processing stages to determine that the primitive that does not affect a buffer to be written out from the tile buffer will be overdrawn. In this case, the command may indicate that the entirety of one or more, or all, non-output buffers (i.e. one or more, or all, buffers that will not be written out from the tile buffer upon completing the rendering tile in question) are to be overwritten. In an embodiment, the one or more earlier processing stages (e.g. the rasteriser) of the graphics processing pipeline are operable to determine whether a later primitive will overdraw an earlier primitive such that one or more processing steps (e.g. operations) to be performed by the one or more later (e.g. fragment) processing stages of the graphics processing pipeline in respect of the earlier primitive need not be performed, and to, when it is determined that a later primitive will overdraw an earlier primitive such that one or more processing steps to be performed by the one or more later (e.g. fragment) processing stages in respect of the earlier primitive need not be performed, cause the one or more later (e.g. fragment) processing stages to omit performing the one or more processing steps in respect of the earlier primitive. It will be appreciated here that a later primitive refers to a primitive that was provided for processing by the primitive providing stage (circuit) (e.g. primitive list reader) (and e.g. received by the one or more earlier processing stages (e.g. the rasteriser)) after an earlier primitive was provided for processing by the primitive providing stage (circuit) (e.g. primitive list reader) (and e.g. 
received by the one or more earlier processing stages (e.g. the rasteriser)). In an embodiment, the command issued to the one or more earlier processing stages (e.g. the rasteriser) is in the form of a “dummy primitive”. Thus, in an embodiment, the one or more earlier processing stages (e.g. the rasteriser) are caused to determine that a primitive that does not affect an output buffer will be overdrawn by providing to the one or more earlier processing stages (e.g. the rasteriser), a dummy primitive that will trigger the one or more earlier processing stages (e.g. the rasteriser) to determine that the dummy primitive will overdraw a primitive that does not affect an output buffer. In this case, the dummy primitive may be a primitive covering the entire rendering tile, and that is set to overwrite one or more non-output buffers (i.e. one or more buffers that will not be written out from the tile buffer upon completing the rendering tile in question), e.g. all buffers except output buffer(s). The dummy primitive should be provided to the one or more earlier processing stages (e.g. the rasteriser) after all of the “regular” (e.g. primitive list(s) listed) primitives have been provided to the one or more earlier processing stages (e.g. the rasteriser) by the primitive providing stage (circuit) (e.g. primitive list reader) for the rendering tile in question, and the one or more earlier processing stages (e.g. the rasteriser) may respond to the dummy primitive substantially as they would do for a “regular” primitive, and thus, in particular, trigger one or more later (e.g. fragment) processing stages to omit performing the appropriate processing step(s) in respect of primitive(s) for the rendering tile. The one or more earlier processing stages (e.g. the rasteriser) can indicate to the one or more later (e.g. fragment) processing stages that one or more processing steps should be omitted (and thereby trigger the one or more later (e.g. fragment) processing stages to omit performing those steps) in any suitable and desired manner, and using any suitable mechanism. In an embodiment, one or more earlier processing stages (e.g. the rasteriser) can send a signal to one or more later (e.g. fragment) processing stages that indicates to the one or more later (e.g. fragment) processing stages that one or more (e.g. fragment) processing steps should be omitted. Thus, in an embodiment, the one or more earlier processing stages (e.g. the rasteriser) are operable to cause the one or more later (e.g. fragment) processing stages to omit performing one or more processing steps in respect of a primitive by sending a signal to the one or more later (e.g. fragment) processing stages that indicates that the one or more processing steps in respect of the primitive need not be performed, and the one or more later (e.g. fragment) processing stages are in an embodiment operable to omit performing one or more processing steps in respect of a primitive in response to receiving a signal from the one or more earlier processing stages (e.g. the rasteriser) that indicates that the one or more processing steps in respect of the primitive need not be performed. In this case, the one or more earlier processing stages (e.g. the rasteriser) in an embodiment cause the one or more later processing stages to omit performing one or more processing steps (e.g. operations) in respect of a primitive that does not affect an output buffer by sending a signal to the one or more later (e.g. 
fragment) processing stages that indicates that the one or more processing steps in respect of the primitive that does not affect an output buffer need not be performed. The graphics processing pipeline may include such a signalling mechanism solely for the purposes of omitting processing steps upon reaching the “end” of a rendering tile. However, in an embodiment, the signalling mechanism is used (also) for other purposes, such as, and in an embodiment, “Forward Pixel Kill” (FPK) purposes, e.g. and in an embodiment as described in US 2014/0168220 and/or US 2014/0354654, the entire contents of which are hereby incorporated by reference. Thus, the graphics processor of the technology described herein may be configured to perform “Forward Pixel Kill” (FPK), and include any one or more, or all, of the features as described in any one or both of these documents, as appropriate. For example, it may be determined that a later primitive will overdraw an earlier primitive as described in any one or both of these documents. In another embodiment, one or more earlier processing stages (e.g. the rasteriser) can indicate to one or more later (e.g. fragment) processing stages that one or more processing steps should be omitted by updating metadata that the one or more later (e.g. fragment) processing stages query to determine whether a processing step to be performed should be omitted. Thus, in an embodiment, the one or more later (e.g. fragment) processing stages are operable to, prior to performing (in an embodiment each of) one or more (e.g. fragment) processing steps (e.g. operations) in respect of a primitive, determine whether metadata stored for the primitive indicates that the one or more (e.g. fragment) processing steps need not be performed, and to, when it is determined that metadata stored for the primitive indicates that the one or more (e.g. fragment) processing steps need not be performed, omit performing the one or more (e.g. fragment) processing steps; and the one or more earlier processing stages (e.g. the rasteriser) are in an embodiment operable to cause the one or more later (e.g. fragment) processing stages to omit performing one or more (e.g. fragment) processing steps in respect of a primitive by storing metadata for the primitive that indicates that the one or more (e.g. fragment) processing steps need not be performed. In this case, the one or more earlier processing stages (e.g. the rasteriser) in an embodiment cause the one or more later (e.g. fragment) processing stages to omit performing one or more (e.g. fragment) processing steps in respect of a primitive that does not affect an output buffer by storing metadata for the primitive that does not affect an output buffer that indicates that the one or more (e.g. fragment) processing steps in respect of the primitive that does not affect an output buffer need not be performed. The graphics processing pipeline may include such a metadata mechanism solely for the purposes of omitting processing steps upon reaching the “end” of a rendering tile. However, in an embodiment, the metadata mechanism is used (also) for other purposes, such as, and in an embodiment, “Patch Forward Pixel Kill” (PFPK) purposes, e.g. and in an embodiment as described in US 2020/0074721, the entire contents of which is hereby incorporated by reference. 
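By way of illustration of the metadata ("kill check") mechanism just described, the following minimal C++ sketch shows how a later (e.g. fragment) processing stage might query per-primitive metadata before performing a processing step; the structure and function names (PrimitiveMeta, TrackingRecord, stepCanBeOmitted) are assumptions for the purposes of illustration and are not the actual hardware interface of the embodiments.

```cpp
// Minimal sketch of the metadata-query ("kill check") mechanism: before a
// processing step, a later stage looks up metadata stored for the primitive and
// skips the step if an earlier stage has marked it as omittable.
#include <cstdio>
#include <unordered_map>

struct PrimitiveMeta { bool valid = false; bool discard = false; };
using TrackingRecord = std::unordered_map<int, PrimitiveMeta>;

// Query made by a later (e.g. fragment) stage prior to performing a step.
static bool stepCanBeOmitted(const TrackingRecord& record, int primitiveId) {
    auto it = record.find(primitiveId);
    return it != record.end() && it->second.valid && it->second.discard;
}

int main() {
    TrackingRecord record;
    record[5] = {true, true};   // earlier stage marked primitive 5 as omittable
    record[6] = {true, false};  // primitive 6 must still be processed
    std::printf("primitive 5: %s\n", stepCanBeOmitted(record, 5) ? "omit" : "process");
    std::printf("primitive 6: %s\n", stepCanBeOmitted(record, 6) ? "omit" : "process");
}
```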
Thus, the graphics processor of the technology described herein may be configured to perform "Patch Forward Pixel Kill" (PFPK), and include any one or more, or all, of the features as described in this document, as appropriate. For example, it may be determined that a later primitive will overdraw an earlier primitive as described in this document. Thus, in an embodiment, as well as determining whether one or more processing steps can be omitted (and (potentially) omitting those processing steps) when there are (in response to there being) no more primitives left for the primitive providing stage (circuit) to provide for processing to the one or more (subsequent) processing stages for the rendering tile, it is determined whether one or more processing steps can be omitted (and those processing steps (potentially) omitted) in response to the one or more (subsequent) processing stages receiving one or more, and in an embodiment each, primitive provided for processing by the primitive providing stage (circuit) for the rendering tile. As well as indicating whether one or more processing steps in respect of a primitive need not be performed, the metadata may include any other suitable information. In an embodiment, the metadata is arranged and used as described in US 2020/0074721. In an embodiment, the metadata (also) indicates the one or more buffers (of the tile buffer) that a primitive writes to. Thus, in an embodiment, the metadata is consulted when determining whether a primitive directly affects, i.e. writes data to, an output buffer, in an embodiment by querying the metadata for the primitive in question. In an embodiment, when determining whether a primitive that does not affect an output buffer will be (apparently) overdrawn (by a dummy primitive), plural sets of metadata stored respectively for plural earlier primitives for the rendering tile are searched through to identify whether any of those earlier primitives for the rendering tile are primitives that do not write to an output buffer and will be (apparently) overdrawn (by a dummy primitive). The metadata may also be consulted to determine whether a primitive could indirectly affect an output buffer. In an embodiment, the metadata searching is conducted sequentially backwards (e.g. in an opposite order to the order in which the primitives were provided by the primitive providing stage (circuit)) through the plural sets of metadata stored respectively for the plural earlier primitives for the rendering tile, and the searching is terminated when it is determined that at least one of the plural earlier primitives for the rendering tile will not be (apparently) overdrawn (by a dummy primitive). This may help to avoid inadvertently identifying earlier primitives whose processing may impact on the processing of later primitives. The one or more processing steps that are (potentially) omitted in respect of a primitive can be any suitable processing steps. (It will be appreciated here that a processing step that is omitted in respect of a primitive may be performed (i.e. not omitted) for another primitive.)
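By way of illustration of the backwards "kill search" just described, the following C++ sketch walks the metadata entries in reverse submission order for the end-of-tile (dummy primitive) case and stops at the first earlier primitive that is not (apparently) overdrawn; the data structures (Meta, backwardsKillSearch) are simplified assumptions, not the actual tracker of the embodiments.

```cpp
// Sketch of the backwards "kill search" triggered by a full-tile dummy primitive
// that overwrites every non-output buffer: entries are scanned newest-first and
// the search stops at the first primitive that survives (e.g. a colour write).
#include <cstdio>
#include <vector>

struct Meta {
    int  primitiveId;
    bool writesOutputBuffer; // e.g. writes to the colour buffer
    bool discard = false;
};

static void backwardsKillSearch(std::vector<Meta>& record) {
    // record is ordered oldest-first; walk newest-first.
    for (auto it = record.rbegin(); it != record.rend(); ++it) {
        bool overdrawnByDummy = !it->writesOutputBuffer; // dummy only touches non-output buffers
        if (!overdrawnByDummy)
            break;                 // stop: this primitive (and anything earlier) must be kept
        it->discard = true;        // depth/stencil-only work can be omitted
    }
}

int main() {
    std::vector<Meta> record = {
        {0, true},   // writes colour: kept, and terminates the search
        {1, false},  // depth-only: discarded
        {2, false},  // depth-only: discarded
    };
    backwardsKillSearch(record);
    for (const auto& m : record)
        std::printf("primitive %d discard=%d\n", m.primitiveId, m.discard ? 1 : 0);
}
```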
For example, the one or more processing steps that may be omitted may comprise any one or more, or all, of: a step of performing a fragment depth or stencil test; a step of writing out fragment depth or stencil test results to a buffer of the graphics processing system; a step of performing fragment rendering to generate rendered fragment data; and a step of writing out rendered fragment data to a buffer of the graphics processing system. The one or more processing steps that may be omitted may comprise one or more specific processing operations that the one or more (subsequent) processing stages of the graphics processing pipeline would otherwise perform in respect of a primitive, for example, and in an embodiment, as described in US 2020/0074721. In an embodiment, the one or more processing steps that may be omitted comprise all of the processing steps that remain to be performed in respect of a primitive for the rendering tile. Accordingly, in an embodiment, the effect of omitting the one or more processing steps will be that a primitive (that does not affect an output buffer) is discarded from further processing, i.e. “killed”, for the particular rendering tile in question. (It will be appreciated here that a primitive that has been discarded (“killed”) for a particular rendering tile may be processed for a different rendering tile). This may be achieved in any suitable and desired manner. In an embodiment, a (and in an embodiment each) fragment processing stage is operable to issue fragments to a next fragment processing stage or buffer of the graphics processing system, and a (and in an embodiment each) fragment processing stage is operable to “kill” one or more fragments by not issuing (omitting issuing) the one or more fragments to the next fragment processing stage or buffer. Thus, the one or more processing steps that may be omitted in an embodiment comprise a step of issuing one or more fragments for a primitive to a next processing stage or buffer of the graphics processing system. It will be appreciated here that the effect of omitting issuing a fragment to a next processing stage or buffer of the graphics processing system will be that no further processing steps will be performed in respect of that fragment. Thus, the fragment will be, in effect, “killed”. Correspondingly, omitting issuing all fragments for a primitive to a next processing stage or buffer of the graphics processing system will (and in an embodiment does) result in no further processing steps being performed in respect of that primitive. That is, the primitive will be, in effect, “killed”. It will be appreciated, therefore, that in an embodiment of the technology described herein, when there are (and in an embodiment in response to there being) no more primitives left for the primitive providing stage (circuit) to provide for processing to the one or more (subsequent) processing stages of the graphics processing pipeline for the rendering tile, one or more primitives in the pipeline for the rendering tile that will not affect an output buffer (i.e. a buffer to be written out from the tile buffer upon completing the rendering tile in question) are identified, and (fragments for) the identified one or more primitives are discarded from further processing (i.e. “killed”). It will be appreciated that in embodiments of the technology described herein, processing steps may be omitted in respect of plural primitives that are, e.g. identified as not affecting an output buffer. 
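By way of illustration of "killing" fragments by omitting the issue step, the following C++ sketch shows a stage that simply does not forward flagged fragments to the next stage or buffer, so that no further processing is performed for them; the names (Fragment, issueFragments, killCheck) are illustrative assumptions only.

```cpp
// Sketch of "killing" a fragment by omitting the step of issuing it onwards:
// a stage does not forward flagged fragments, so no later work happens for them.
#include <cstdio>
#include <functional>
#include <vector>

struct Fragment { int primitiveId; int x, y; };

static void issueFragments(const std::vector<Fragment>& in,
                           const std::function<bool(int)>& killCheck,
                           std::vector<Fragment>& nextStageQueue) {
    for (const auto& f : in) {
        if (killCheck(f.primitiveId))
            continue;                 // omit the issue step: the fragment is "killed"
        nextStageQueue.push_back(f);  // otherwise forward to the next stage/buffer
    }
}

int main() {
    std::vector<Fragment> in = {{7, 0, 0}, {8, 1, 0}};
    std::vector<Fragment> next;
    issueFragments(in, [](int id) { return id == 7; }, next); // primitive 7 flagged
    std::printf("%zu fragment(s) forwarded\n", next.size());  // prints 1
}
```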
Similarly, it will be appreciated that in embodiments of the technology described herein, the render output may be a render output, e.g. output frame, in a sequence of plural such render outputs, e.g. output frames, that the graphics processor generates. In this case, each render output that the graphics processor (and pipeline) generates is in an embodiment generated in the manner of the technology described herein. It will furthermore be appreciated that, the graphics processor of the technology described herein may be part of an overall graphics processing system that includes, e.g., and in an embodiment, a host processor that, e.g., executes applications that require processing by the graphics processor (and, optionally, a display). The host processor will send appropriate commands and data to the graphics processor to control it to perform graphics processing operations and to produce graphics processing output required by applications executing on the host processor. To facilitate this, the host processor should, and in an embodiment does, also execute a driver for the graphics processor. The host processor may also execute a compiler or compilers for compiling programs to be executed by (e.g., a programmable processing stage (shader) of the) graphics processor. The graphics processor may also comprise, and/or be in communication with, one or more memories and/or memory devices that store the data described herein, and/or the output data generated by the graphics processor, and/or store software (e.g. program) for performing the processes described herein. The graphics processor may also be in communication with a host microprocessor, and/or with a display for displaying images based on the data generated by the graphics processor. The technology described herein can be used for all forms of output that a graphics processor may be used to generate. For example, the graphics processor may generate frames for display, render-to-texture outputs, etc. The output data values from the processing are in an embodiment exported to external, e.g. main, memory, for storage and use, such as to a frame buffer for a display. In an embodiment, the various functions of the technology described herein are carried out on a single graphics processing platform that generates and outputs data (such as rendered fragment data that is, e.g., written to the frame buffer), for example for a display device. The technology described herein can be implemented in any suitable system, such as a suitably operable micro-processor based system. In some embodiments, the technology described herein is implemented in a computer and/or micro-processor based system. The technology described herein is in an embodiment implemented in a portable device, such as, and in an embodiment, a mobile phone or tablet. The various functions of the technology described herein can be carried out in any desired and suitable manner. For example, the functions of the technology described herein can be implemented in hardware or software, as desired. 
Thus, for example, the various functional elements, stages, units, and “means” of the technology described herein may comprise a suitable processor or processors, controller or controllers, functional units, circuitry, circuits, processing logic, microprocessor arrangements, etc., that are operable to perform the various functions, etc., such as appropriately dedicated hardware elements (processing circuits/circuitry) and/or programmable hardware elements (processing circuits/circuitry) that can be programmed to operate in the desired manner. It should also be noted here that the various functions, etc., of the technology described herein may be duplicated and/or carried out in parallel on a given processor. Equally, the various processing stages may share processing circuits/circuitry, etc., if desired. Furthermore, any one or more or all of the processing stages or units of the technology described herein may be embodied as processing stage or unit circuits/circuitry, e.g., in the form of one or more fixed-function units (hardware) (processing circuits/circuitry), and/or in the form of programmable processing circuitry that can be programmed to perform the desired operation. Equally, any one or more of the processing stages or units and processing stage or unit circuits/circuitry of the technology described herein may be provided as a separate circuit element to any one or more of the other processing stages or units or processing stage or unit circuits/circuitry, and/or any one or more or all of the processing stages or units and processing stage or unit circuits/circuitry may be at least partially formed of shared processing circuit/circuitry. It will also be appreciated by those skilled in the art that all of the described embodiments of the technology described herein can include, as appropriate, any one or more or all of the optional features described herein. The methods in accordance with the technology described herein may be implemented at least partially using software e.g. computer programs. Thus, further embodiments of the technology described herein comprise computer software specifically adapted to carry out the methods herein described when installed on a data processor, a computer program element comprising computer software code portions for performing the methods herein described when the program element is run on a data processor, and a computer program comprising code adapted to perform all the steps of a method or of the methods herein described when the program is run on a data processing system. The data processing system may be a microprocessor, a programmable FPGA (Field Programmable Gate Array), etc. The technology described herein also extends to a computer software carrier comprising such software which when used to operate a graphics processor, renderer or other system comprising a data processor causes in conjunction with said data processor said processor, renderer or system to carry out the steps of the methods of the technology described herein. Such a computer software carrier could be a physical storage medium such as a ROM chip, CD ROM, RAM, flash memory, or disk, or could be a signal such as an electronic signal over wires, an optical signal or a radio signal such as to a satellite or the like. 
It will further be appreciated that not all steps of the methods of the technology described herein need be carried out by computer software and thus further embodiments of the technology described herein comprise computer software and such software installed on a computer software carrier for carrying out at least one of the steps of the methods set out herein. The technology described herein may accordingly suitably be embodied as a computer program product for use with a computer system. Such an implementation may comprise a series of computer readable instructions fixed on a tangible, non-transitory medium, such as a computer readable medium, for example, diskette, CD ROM, ROM, RAM, flash memory, or hard disk. It could also comprise a series of computer readable instructions transmittable to a computer system, via a modem or other interface device, over a tangible medium, including but not limited to optical or analogue communications lines, or intangibly using wireless techniques, including but not limited to microwave, infrared or other transmission techniques. The series of computer readable instructions embodies all or part of the functionality previously described herein. Those skilled in the art will appreciate that such computer readable instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Further, such instructions may be stored using any memory technology, present or future, including but not limited to, semiconductor, magnetic, or optical, or transmitted using any communications technology, present or future, including but not limited to optical, infrared, or microwave. It is contemplated that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation, for example, shrink wrapped software, pre-loaded with a computer system, for example, on a system ROM or fixed disk, or distributed from a server or electronic bulletin board over a network, for example, the Internet or World Wide Web. A number of embodiments of the technology described herein will now be described. As discussed above, the technology described herein relates to arrangements in which, upon reaching the end of a rendering tile, it is determined whether any processing steps (e.g. operations) left to be performed by a graphics processing pipeline for the rendering tile can be omitted, e.g. because they will not affect a buffer that will be output when the rendering tile is complete. When it is determined that a processing step can be omitted, that processing step is omitted. This means that processing steps which the pipeline would otherwise perform can be omitted, such that the overall processing effort required to generate a rendering tile can be reduced. FIG.1shows schematically a graphics processing system that may be operated in accordance with an embodiment of the technology described herein. An application2, such as a game, executing on a host processor1may require graphics processing operations to be performed by an associated graphics pipeline that is implemented by means of a graphics processing unit (GPU)3. To do this, the application will generate API (Application Programming Interface) calls that are interpreted by a driver4for the graphics processing pipeline3that is running on the host processor1to generate appropriate commands to the graphics processor3to generate graphics output required by the application2.
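By way of illustration of this application-to-driver-to-processor flow, the following very rough C++ sketch shows an application-side call being translated by a driver into a command consumed by the graphics processor; all names (GpuCommand, Driver, drawFrame) are illustrative assumptions and do not correspond to any particular API.

```cpp
// Very rough sketch of the FIG. 1 flow: an application makes an API-style call,
// a driver translates it into a command, and the graphics processor consumes it.
#include <cstdio>
#include <queue>

struct GpuCommand { int frameId; };  // stand-in for a command generated by the driver

class Driver {
public:
    explicit Driver(std::queue<GpuCommand>& q) : queue_(q) {}
    void drawFrame(int frameId) {    // "API call" made by the application
        queue_.push({frameId});      // translated into a command for the graphics processor
    }
private:
    std::queue<GpuCommand>& queue_;
};

int main() {
    std::queue<GpuCommand> commandStream;  // consumed by the graphics processor
    Driver driver(commandStream);
    driver.drawFrame(0);                   // application asks for a frame to be generated
    std::printf("commands queued: %zu\n", commandStream.size());
}
```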
To facilitate this, a set of "commands" will be provided to the graphics processor3in response to commands from the application2running on the host system1for graphics output (e.g. to generate a frame to be displayed). FIG.2shows schematically the graphics processor3in more detail. The graphics processor3shown inFIG.2is a tile-based graphics processor that executes a tile-based graphics processing pipeline, and will thus produce rendering tiles of a render output data array, such as an output frame to be generated. The render output data array may typically be an output frame intended for display on a display device, such as a screen or printer, but may also, for example, comprise intermediate data intended for use in later rendering passes (also known as a "render to texture" output), etc. FIG.2shows the main elements and pipeline stages of the tile-based graphics processing pipeline executed by the tile-based graphics processor3. As will be appreciated by those skilled in the art, there may be other elements of the graphics processing pipeline that are not illustrated inFIG.2. It should also be noted here thatFIG.2is only schematic, and that, for example, in practice the shown functional units and pipeline stages may share significant hardware circuits, even though they are shown schematically as separate stages inFIG.2. It will also be appreciated that each of the stages, elements and units, etc., of the graphics processing pipeline as shown inFIG.2may be implemented as desired and will accordingly comprise, e.g., appropriate circuitry and/or processing logic, etc., for performing the necessary operation and functions. As shown inFIG.2, the tile-based graphics processor3includes a geometry processor21, and a renderer22, both of which can access a memory23, which, in the present embodiment, is a main memory of the overall graphics processing system. The memory23stores, inter alia, and as shown inFIG.2, a set of raw geometry data24(which is, for example, provided by the graphics processor driver4or an API running on the host system1(microprocessor)), a set of transformed geometry data25(which is the result of various transformation and processing operations carried out on the raw geometry24), and a set of primitive lists26. The primitive lists26contain data, commands, etc., for the respective primitives. The transformed geometry data25comprises, for example, transformed vertices (vertex data), etc. The geometry processor21comprises, inter alia, a programmable vertex shader27, and a primitive list building unit28. The programmable vertex shader27takes as its input the raw geometry data24stored in the memory23, and processes that data to provide transformed geometry data25(which it then stores in the memory23) comprising the geometry data in a form that is ready for 2D placement in the render output (e.g. frame to be displayed). The primitive list building unit ("tiler")28performs the process of "tiling" to allocate primitives to the primitive lists which are then used by the renderer22to identify the primitives that should be rendered for each rendering tile that is to be rendered to generate the render output (frame to be rendered for display). To do this, the primitive list building unit28takes as its input the transformed and processed vertex (geometry) data25from the programmable vertex shader27(i.e. the positions of the primitives in the frame), builds primitive lists using that data, and stores those lists as the primitive lists26in the memory23.
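By way of illustration of the "tiling" step performed by the primitive list building unit, the following C++ sketch allocates primitives to per-tile lists based on their 2D bounding boxes; the tile size, data structures, and function names here are assumptions for illustration, not the actual operation of the tiler of the embodiments.

```cpp
// Sketch of "tiling": a primitive's 2D bounding box decides which per-tile
// primitive lists it is added to.
#include <algorithm>
#include <cstdio>
#include <vector>

constexpr int kTileSize = 16; // e.g. 16x16 fragments per rendering tile

struct Primitive { int id; float minX, minY, maxX, maxY; }; // transformed 2D bounds

static void buildPrimitiveLists(const std::vector<Primitive>& prims,
                                int tilesX, int tilesY,
                                std::vector<std::vector<int>>& lists) {
    lists.assign(static_cast<size_t>(tilesX) * tilesY, std::vector<int>{});
    for (const auto& p : prims) {
        int tx0 = std::max(0, static_cast<int>(p.minX) / kTileSize);
        int ty0 = std::max(0, static_cast<int>(p.minY) / kTileSize);
        int tx1 = std::min(tilesX - 1, static_cast<int>(p.maxX) / kTileSize);
        int ty1 = std::min(tilesY - 1, static_cast<int>(p.maxY) / kTileSize);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                lists[static_cast<size_t>(ty) * tilesX + tx].push_back(p.id);
    }
}

int main() {
    std::vector<std::vector<int>> lists;
    buildPrimitiveLists({{0, 2.f, 2.f, 30.f, 10.f}}, 4, 4, lists);
    std::printf("tile (0,0) lists %zu primitive(s)\n", lists[0].size()); // prints 1
}
```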
The renderer22includes a primitive selection unit29, a primitive list cache30, a vertex selection unit31, a vertex data cache32, a rasterising unit33, a rendering unit34, and tile buffers35. The primitive selection unit29of the renderer22determines which primitive is to be rendered next. It does this by considering the primitive lists26stored in the memory23, and selecting from one of those lists the next primitive to be rendered. The primitive selection unit29can also place one or more primitive lists in the primitive list cache30as appropriate. The primitive selection unit29passes the primitive that it has selected for rendering next to the vertex selection unit31. In response to this, the vertex selection unit31retrieves the appropriate transformed vertex data for the primitive in question from the transformed geometry data25stored in the memory23, and then provides the primitive (i.e. its transformed vertex data) to the rasterising unit33for processing. The vertex selection unit31can cache vertex data that it has retrieved from the memory23in the vertex data cache32, if desired. The rasterising unit33then rasterises the primitive to fragments, and provides those fragments to the rendering unit34for rendering. The rendering unit34performs a number of fragment processing (rendering) operations, such as texture mapping, blending, shading, etc. on the fragments, to generate rendered fragment data for the fragments representing the primitive, and stores the rendered fragment data in the tile buffers35. The tile buffers35are provided as part of RAM that is located on (local to) the graphics processor3(chip). The tile buffers35store colour buffers that store an appropriate colour (and other appropriate data, such as Multiple Render Target data, e.g. a surface normal, etc.) for each sampling point that the buffers represent (in essence for each sampling point of a tile that is being processed). Once each tile has been processed, its data is, e.g., exported from the tile buffers35to the main memory23(e.g. to a frame buffer (not shown) in the main memory23) for storage, and the next tile is then processed, and so on, until sufficient tiles have been processed to generate the entire render output (e.g. frame (image) to be displayed). FIG.3shows schematically the rasterising unit33, rendering unit34, and tile buffers35of the renderer22of the tile-based graphics processing pipeline3of the present embodiment in more detail. As illustrated inFIG.3, the rasteriser33operates to rasterise primitives102making up the render output (e.g. the image to be displayed) into graphics fragments for processing. To do this, the rasteriser33receives graphics primitives102to be rendered selected by the primitive selection unit29from the vertex selection unit31, rasterises the primitives102to sampling points, and generates graphics fragments having appropriate positions (representing appropriate sampling positions) for rendering the primitives102. In the present embodiment, a graphics fragment that is generated by the rasteriser33can represent (have associated with it) a set of one or more, such as four, sampling positions.
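By way of illustration of the per-tile flow just described, the following compact C++ sketch selects primitives from a tile's list, rasterises and renders them into a local (on-chip) tile buffer, and writes that buffer out to "main memory" once the tile is finished; the stand-in functions rasterise() and render(), and all other names, are assumptions made purely for illustration.

```cpp
// Compact sketch of the per-tile flow: primitives selected from the tile's list
// are rasterised and rendered into an on-chip tile buffer, which is written out
// to main memory once the tile is finished.
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int kTile = 16;
using TileBuffer = std::array<uint32_t, kTile * kTile>; // on-chip colour buffer

struct Frag { int x, y; uint32_t colour; };

// Stand-ins for the rasterising and rendering units.
static std::vector<Frag> rasterise(int primId) { return {{primId % kTile, 0, 0xff00ff00u}}; }
static void render(const Frag& f, TileBuffer& tb) { tb[f.y * kTile + f.x] = f.colour; }

static void processTile(const std::vector<int>& primitiveList,
                        std::vector<uint32_t>& mainMemory) {
    TileBuffer tile{};                       // local tile buffer, cleared
    for (int prim : primitiveList)           // primitive selection
        for (const Frag& f : rasterise(prim))
            render(f, tile);                 // fragment processing
    mainMemory.insert(mainMemory.end(), tile.begin(), tile.end()); // tile write-out
}

int main() {
    std::vector<uint32_t> frameBuffer;
    processTile({0, 1, 2}, frameBuffer);
    std::printf("wrote %zu pixels for the tile\n", frameBuffer.size()); // 256
}
```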
In the present embodiment, the rasteriser33is a hierarchical rasteriser that may iteratively test primitives102against progressively smaller patches (regions) of the render output (target) area (and thus, correspondingly, patches of fragments), starting from a largest patch size, down (potentially) to a minimum patch size, discarding (culling) any patches that are not (at least in part) covered by the primitive102. Each patch that is tested corresponds to a given area of the render output, e.g. frame, being generated (and thus to a given set of fragments). In the present embodiment, the largest patch size that the rasteriser33can test primitives against corresponds to the size of an entire rendering tile, which in the present embodiment corresponds to 16×16 fragments. The minimum patch size corresponds, in the present embodiment, to a 2×2 group of fragments (i.e. to an array of sampling points that would be rasterised to a 2×2 group of fragments). Other arrangements would be possible. The rasterisation stage118of the rasteriser33performs this render output patch testing. To do this, it starts with a largest (16×16) patch of the render output area and tests the patch against the edges of the primitive102in question to determine if the primitive completely covers the largest patch or at least partially covers the largest patch (i.e. at least partially covers any patch of a 2×2 set of smaller patches of the render output that the largest patch is divided into (encompasses)). The edges of the primitive102are represented by appropriate line (edge) equations that have been derived from the vertices of the primitive, and a grid of sampling points is derived for the patch (and for each patch) being tested. The patch sampling points are then used with the line equations representing the edges of the primitive in question to perform an edge test for the edges to determine if the patch is at least partially covered by the primitive. In the present embodiment, the rasterisation stage118determines that a patch of the render output is at least partially covered by a primitive if at least one of the following conditions is met: at least one edge of the patch is within the primitive; at least one edge of the patch is crossed by an edge of the primitive; at least one vertex of the primitive is within the patch; or at least one vertex of the primitive is on a patch edge and, if the vertex is on the patch edge, another vertex of the primitive is on another edge of the patch, or if the vertex is on a corner of the patch, another vertex is on the opposite corner or on one of the opposite edges of the patch. The rasterisation stage determines that a patch of the render output is completely covered by a primitive if that patch is found to entirely pass the edge test for each of (for all of) the edges of the primitive. If it is found that the largest patch is not covered by the primitive102at all, then the patch is not processed further in respect of the primitive102in question (i.e. the entire patch is discarded for the primitive in question), and another (the next) largest patch may then be tested against the primitive, and so on. On the other hand, if the primitive is found to at least partially cover the largest (i.e. 
at least partially cover any of the smaller patches of the set of plural smaller patches of the render output that the largest patch encompasses (is divided into)), then the largest patch is forwarded by the rasterisation stage118to the hierarchical depth and stencil (ZS) test stage120of the rasteriser33. The hierarchical depth and stencil (ZS) test stage120is operable to perform initial hierarchical depth and stencil tests on each of the patches considered by the rasteriser33to see if those patches can be culled. To do this, the hierarchical ZS test stage120performs an initial depth (Z) test on each at least partially covered patch to see if the patch can be discarded or “culled” at this stage. At the same time, an initial stencil (S) test is carried out. The rasteriser33is accordingly in communication with hierarchical ZS buffer(s)112. The hierarchical ZS buffer(s)112can store depth data (such as a range of depth values and/or depth function data) and a stencil value for each patch size and position that the buffer represents (essentially for each patch size and position that the rasteriser33could consider for the tile that is being processed). In the present embodiment, the hierarchical ZS test stage120performs a hierarchical depth test on a patch, using a depth value range representative of the primitive102that at least partially covers that patch, by taking appropriate depth samples for the patch in respect of the primitive, and comparing the depth samples for the patch with the depth range data already stored in the corresponding entry for that patch position, to try to determine whether that patch will be occluded by or will overdraw other fragments and sampling points to be rendered. If the patch passes the hierarchical depth test, then the depth value ranges stored in that entry of the hierarchical ZS buffer(s)112are updated accordingly. According to the outcome of the depth and stencil tests performed by the hierarchical test stage120, the largest patch may be returned to the rasterisation stage118to be subdivided into its four smaller patches, with each covered such smaller patch (“sub-patch”) then tested against the primitive and processed in the same way (i.e. discarded, or forwarded to the hierarchical depth testing stage120and later returned to the rasterisation stage118and subdivided into a set of smaller patches). This patch testing and discarding or subdivision is continued until the minimum patch size is reached. The present embodiment supports four levels of subdivision (three sub-division iterations) and so can start with largest patches having an area corresponding to 16×16 fragments (corresponding to the size of an entire tile). A 16×16 fragment patch is then (if appropriate) subdivided into four 8×8 fragment patches. Each of those 8×8 fragment patches is then subdivided into respective 4×4 fragment patches (if appropriate). Finally, each 4×4 fragment patch is subdivided into respective 2×2 fragment patches (if appropriate). As in the present embodiment, a 2×2 fragment patch is the minimum patch size that is used, the (potential) subdivision process stops at this point. Other arrangements would be possible. For example, other numbers of subdivision levels, such as three or five, could be used. Once the minimum patch size has been reached (i.e. 
a patch of 2×2 fragments that covers, at least in part, the primitive has been identified), the rasterisation stage118then tests the individual sampling points in that final patch to see if the sampling points are covered by the primitive102. The rasteriser33then generates and outputs individual fragments for rendering corresponding to the sampling points found to be covered by the primitive (so four fragments if all the 2×2 fragments in the minimum size patch are at least partially covered by the primitive). The rasteriser33also associates with each fragment a coverage mask in the form of a bitmap that indicates, for each sample position of the set of one or more sample positions that is associated with the fragment, whether that sample position is covered (i.e., in effect, whether the fragment is being used to render that sampling point (i.e. whether its data should be stored for that sampling point)). Once a primitive has been tested in this manner, then the rasterisation process moves on to the next primitive for the tile being generated and so on, until all the primitives for the tile in question have been rasterised. The process then moves on to the next rendering tile to be generated, and so on. Once all the primitives for the render output in question have been rasterised, the process then moves on to the next render output, e.g. frame, to be generated, and so on. The rasteriser33is configured in the present embodiment as a pipeline that can contain and process plural patches at the same time. The rasteriser33is also configured to be able to generate plural fragments at a time (simultaneously) (e.g. where a primitive is found to completely cover a patch of the render output that encompasses plural fragments (e.g. plural sampling points or sets of sampling points)). The fragments are still processed individually by the fragment processing parts of the pipeline, such as the fragment shader108. Having the rasteriser produce plural fragments simultaneously helps to create back pressure to thereby keep the rendering pipeline “filled up” with fragments. Other arrangements would be possible. For example, other embodiments are contemplated in which a non-hierarchical rasteriser is used. In these embodiments, the rasteriser may still perform primitive coverage testing and initial depth and/or stencil testing in respect of a region (e.g. tile) of the render output as discussed above, but without the capability to iteratively subdivide the region for further testing. FIG.4shows the primitive processing stages for implementing the rasteriser33of the graphics processing pipeline of the present embodiment in more detail. In the present embodiment, the graphics processing pipeline is configured to perform Patch Forward Pixel Kill (PFPK), e.g. as described in US 2020/0074721. Thus, as shown inFIG.4, the rasterisation stage118is operable to receive an input primitive102, and if that primitive passes the coverage and hierarchical ZS testing discussed above, include the input primitive102in a fragment tracking record or “PFPK (Patch Forward Pixel Kill) tracker”202, as illustrated by arrow204. The graphics processing system maintains metadata for each primitive included in the fragment tracking record202that will be rasterised to fragments. The metadata in the fragment tracking record202is used in order to determine whether a primitive has fragments that can be discarded or “killed” (e.g. removed from any further processing). This can facilitate a reduction in processing effort, e.g. 
in respect of primitives that will eventually be overdrawn by newly received primitives that enter the rendering pipeline. FIG.5schematically shows the fragment tracking record202according to this embodiment. As shown inFIG.5, the fragment tracking record202comprises plural sets of metadata, such as set of metadata300, with each set of metadata being assigned to a primitive currently being processed. In the present embodiment, each set of metadata in the fragment tracking record202comprises: a valid flag, such as valid flag304, that indicates whether or not that set of metadata is assigned to a primitive that is currently being processed; a discard flag, such as discard flag306, that indicates whether or not the primitive in question has fragments that can be discarded or "killed", e.g. not processed; and a data field, such as data field302, for storing a primitive identifier associated with the primitive in question, and information indicating which buffer(s) the primitive writes to. In the present embodiment, a discard flag306can either indicate that all of the fragments generated from the respective primitive cannot be "killed" (discarded) at all, or can be "killed" (discarded) entirely (e.g. removed from any further processing). However, in other embodiments, more sophisticated arrangements can be used, such as those described in US 2020/0074721. For example, the metadata may indicate specifically which fragment processing operations (e.g. such as ZS testing, ZS results writing, fragment rendering and fragment data writing) can be omitted. In the present embodiment, when a new primitive102is received by the rasteriser33and passes the coverage and initial ZS testing, and thus is to be added to the fragment tracking record202by the rasteriser33, a primitive identifier associated with that primitive, together with information indicating one or more buffers that the primitive writes to, is included in an entry in the fragment tracking record202that is pointed to by a pointer308that indicates the next available entry (as indicated by arrow204). The corresponding valid flag304is also set to indicate that the entry corresponds to a valid primitive. The corresponding discard flag306is also initially set to indicate that the primitive has fragments that cannot (at least for the time being) be discarded or "killed". The pointer308is then moved to the next available entry in the fragment tracking record202as indicated by arrow310. If, during the hierarchical rasterisation process discussed above, the newly added primitive was determined as only partially covering the largest patch (corresponding to the rendering tile being processed), then nothing more needs to be done for the time being. However, if (as is the case in the embodiment shown inFIG.4) the newly added primitive102was determined as fully covering the largest patch (i.e. corresponding to rendering tile200) and passes the initial ZS testing, then the inclusion of the input primitive102in the fragment tracking record202(as illustrated by arrow204) triggers a search through the metadata entries in the fragment tracking record202, to identify any previously added primitives that can now be discarded or "killed" because they write data to one or more buffers that will inevitably be overwritten by the newly added primitive102. In the present embodiment, as indicated by arrow312, this "kill search" involves a sequential search backwards (i.e.
in an opposite order to the order in which the primitives were received and included in the fragment tracking record202) through the metadata entries in the fragment tracking record202. When the sequential search encounters a particular previously added primitive for which data will inevitably be overwritten by the newly added primitive, the discard flag306for that particular previously added primitive is set to indicate that the primitive has fragments that can be discarded or “killed”. The sequential search may terminate when a previously added primitive for which data cannot be overwritten is found, so as to avoid inadvertently discarding or “killing” fragments for earlier received primitives whose processing may impact on the processing of fragments for later received primitives. As shown inFIG.4, the fragments generated by the rasterisation stage118are passed to an output buffer206. At this stage, a fragment discard check or “kill check” signal208is sent between the output buffer206and the fragment tracking record202to determine if any fragments in the output buffer206for a particular primitive can be discarded or “killed” at this early stage, for example because data for those fragments will be overwritten by a subsequently received primitive that has since been input to the rasteriser33. When a kill check is made by the output buffer206in respect of fragments of a particular primitive, a search is conducted through the metadata entries in the fragment tracking record202for the primitive identifier corresponding to that particular primitive. When that particular primitive is found, the status of the discard flag is checked to see whether the fragments of the particular primitive can be discarded or “killed”. If the metadata (discard flag306) indicates that the fragments of the particular primitive can be discarded, then the fragments of the particular primitive are removed from the output buffer206and are not output from the output buffer206for further processing. However, if the metadata (discard flag306) indicates that the fragments of the particular primitive cannot be discarded, then the fragments of the particular primitive can be output from the output buffer206for further processing by the graphics processing system. The generated fragments210that survive the kill check are then passed to the rendering unit34, as shown inFIG.3. In the present embodiment, the rendering unit34includes an early depth and stencil (ZS) test stage106, a rendering stage in the form of a fragment shading pipeline stage108, and a late depth and stencil (ZS) test stage110. Fragments issued (output) by the rasteriser33are subject to an early depth and stencil test in the early ZS testing stage106. This early ZS testing stage106performs depth and stencil tests on the individual (covered) sampling positions associated with the fragments issued by the rasteriser33(i.e. at per sampling point resolution). To do this, the early ZS testing stage106uses per-sampling position depth and stencil values stored in the ZS buffers114. Thus, the ZS buffers114store an appropriate depth (Z) value and stencil (S) value, respectively, for each sampling point that the buffer represents (essentially for each sampling point position of the tile that is being processed). These values are stored in the ZS buffers114when sampling points being tested by early ZS testing stage106and the late ZS testing stage110pass the respective depth and stencil tests (the stencil values can be stored/updated when the tests are failed as well). 
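By way of illustration of the per-sampling-position depth testing just described, the following simplified C++ sketch performs a "less-than" early depth test against stored per-sample depth values and updates the stored value on a pass; the buffer layout and names are assumptions for illustration, and stencil testing and conservative behaviour are omitted for brevity.

```cpp
// Simplified sketch of the early depth test at per-sample resolution: a sample
// survives only if it is nearer than the stored depth, which is then updated.
#include <cstdio>
#include <vector>

constexpr int kTile = 16;

struct Sample { int x, y; float depth; };

// Returns true if the sample passes a "less-than" depth test.
static bool earlyDepthTest(const Sample& s, std::vector<float>& zBuffer) {
    float& stored = zBuffer[s.y * kTile + s.x];
    if (s.depth < stored) { stored = s.depth; return true; } // pass: update Z
    return false;                                            // fail: cull the sample
}

int main() {
    std::vector<float> zBuffer(kTile * kTile, 1.0f); // cleared to the far depth
    Sample a{3, 4, 0.25f}, b{3, 4, 0.5f};
    std::printf("a: %s\n", earlyDepthTest(a, zBuffer) ? "pass" : "cull"); // pass
    std::printf("b: %s\n", earlyDepthTest(b, zBuffer) ? "pass" : "cull"); // cull
}
```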
The early ZS testing stage106is configured to operate in an appropriately conservative manner. Fragments that fail the early ZS testing stage106are culled by the early ZS testing stage106. Fragments that pass the early ZS testing stage106(i.e. fragments having at least one associated covered sampling position that passes the early ZS testing stage106) are sent onwards to the fragment shading stage108. As with the output buffer206, prior to performing (e.g. one or more or each of) one or more fragment processing (depth and/or stencil testing) operations on the fragments it receives, the early ZS testing stage106can send an additional fragment discard or "kill check" signal212to the fragment tracking record202to determine if any fragments for the primitive in question can be discarded or "killed" at this stage, for example because data for those fragments will be overwritten by a subsequent primitive that has since been input to the rasteriser33. If the metadata (discard flag306) for the primitive in question indicates that the fragments of that primitive can be killed, then those fragments will not be ZS tested by the early ZS testing stage106or sent onwards to the fragment shading stage108. If the metadata (discard flag306) for the primitive in question indicates that the fragments of that primitive cannot be killed, then those fragments will be ZS tested by the early ZS testing stage106, and fragments passing the early ZS testing stage106will be sent onwards to the fragment shading stage108. The fragment shading stage108then performs the appropriate fragment processing (rendering) operations on the fragments it receives, so as to process the fragments to generate the appropriate fragment data, etc., for the render output (e.g. for display of the fragments). This fragment processing may include any suitable and desired fragment shading processes, such as executing fragment shader programs on the fragments, applying textures to the fragments, applying blending, fogging or other operations to the fragments, etc., to generate the appropriate fragment data. In the present embodiment, the fragment shading stage108is in the form of a shader pipeline (a programmable fragment shader), but other arrangements, such as the use also or instead of fixed function fragment shading units, would be possible, if desired. Again, prior to performing (e.g. one or more or each of) one or more fragment processing (rendering) operations on the fragments it receives, the fragment shading stage108can send an additional fragment discard or "kill check" signal212to the fragment tracking record202to determine if any fragments for the primitive in question can be discarded or "killed" at this stage, for example because data for those fragments will be overwritten by a subsequent primitive that has since been input to the rasteriser33. If the metadata (discard flag306) for the primitive in question indicates that the fragments of that primitive can be killed, then those fragments will not be processed by the fragment shading stage108or sent onwards to the late fragment depth and stencil (ZS) test stage110. If the metadata (discard flag306) for the primitive in question indicates that the fragments of that primitive cannot be killed, then those fragments will be processed by the fragment shading stage108, and fragments will be passed to the late fragment depth and stencil (ZS) test stage110. The late fragment depth and stencil (ZS) test stage110then (if it is to be performed, e.g.
where early depth and stencil testing for a fragment has not taken place before shading) carries out, inter alia, the end of pipeline depth test on the shaded fragments (on the covered sampling points associated with shaded fragments) to determine whether the sampling points that a rendered fragment represents will overdraw the fragments whose values are currently stored in the ZS buffers114(i.e. determines whether the fragment data for the fragments issuing from the fragment shading stage108should be stored in the tile buffers35(should replace or modify the fragment data in the tile buffer(s)35of the fragments that have already been rendered)). To do this, the late ZS test stage110compares the depth values of (associated with) the fragments issued from the fragment shading stage108with the (per-sampling position) depth values stored in the ZS buffers114for the sampling points in question. The depth values for sampling points that pass the late depth test are also written appropriately to the ZS buffer114to update it. This late ZS test stage110also carries out any necessary “late” alpha and/or stencil tests on the fragments. Alternatively, any necessary “late” alpha and/or stencil tests may be performed by the fragment shading stage108. Fragments that fail the late ZS test stage110are culled by the late ZS test stage110. The fragments that pass the late fragment ZS test are then subjected to any remaining operations necessary on the fragments, such as blending with the framebuffer, dither, etc. (not shown). Again, prior to performing (e.g. one or more or each of) one or more fragment processing (depth and/or stencil testing) operations on the fragments it receives, the late ZS testing stage110can send an additional fragment discard or “kill check” signal212to the fragment tracking record202to determine if any fragments for the primitive in question can be discarded or “killed” at this stage, for example because data for those fragments will be overwritten by a subsequent primitive that has since been input to the rasteriser33. If the metadata (discard flag306) for the primitive in question indicates that the fragments of that primitive can be killed, then those fragments will not be ZS tested by the late ZS testing stage110or have their rendered fragment data output to the tile buffer35. If the metadata (discard flag306) for the primitive in question indicates that the fragments of that primitive cannot be killed, then those fragments will be ZS tested by the late ZS testing stage110, and fragments passing the late ZS testing stage110will have their rendered fragment data output to the tile buffer35. When a particular primitive is invalidated, for example because processing of the primitive or all of its fragments is terminated (e.g. the primitive or all of its fragments have been discarded or “killed”) or because processing of the primitive or all of its fragments is completed, a search (as indicated by arrow314inFIG.5) is conducted through the metadata entries in the fragment tracking record202for the primitive identifier corresponding to that particular primitive. When that particular primitive is found, the status of the valid flag304corresponding to that primitive is set to invalid. This makes the entry assigned to the particular primitive available for use by another primitive. The above describes the basic process of the graphics processing pipeline shown inFIGS.2to4. 
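For clarity, a compact sketch of the per-sampling-position depth test that the early (106) and late (110) ZS test stages perform against the ZS buffers114is given below; the "less than" depth comparison and the float depth buffer are assumptions chosen purely for illustration.

```cpp
// Sketch of a per-sampling-position depth test against a ZS buffer. A "less
// than" depth function and a 32-bit float depth buffer are assumed here.
#include <cstdint>
#include <vector>

struct ZSBuffer {
    int width = 0, height = 0;
    std::vector<float>   depth;    // one depth value per sampling position
    std::vector<uint8_t> stencil;  // one stencil value per sampling position
};

// Returns true (and updates the stored depth) if the sampling position passes.
inline bool depthTestAndUpdate(ZSBuffer& zs, int x, int y, float fragmentDepth) {
    const std::size_t i = static_cast<std::size_t>(y) * zs.width + x;
    if (fragmentDepth < zs.depth[i]) {   // assumed depth function: LESS
        zs.depth[i] = fragmentDepth;     // passing depths update the buffer
        return true;
    }
    return false;                        // failing sampling positions are culled
}
```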
The operation of the graphics processing pipeline in accordance with an embodiment of the technology described herein will now be described. As discussed above, when a primitive102that fully covers the rendering tile200that is being generated is added to the fragment tracking record202, a backwards “kill search”312through the metadata entries in the fragment tracking record202is triggered, to identify any previously added primitives that can now be discarded or “killed” because they write data to one or more buffers that will inevitably be overwritten by the newly added primitive. In the present embodiment, in addition to such a “kill search”312being triggered by a newly received primitive102to be rendered, a “kill search”312is triggered by reaching the end of a rendering tile200(and so, after all of the “new primitive triggered” “kill searches” in respect of that tile200have been performed). In particular, in the present embodiment, a “kill search”312is triggered in response to the primitive selection unit29identifying that there are no primitives in the primitive lists26left to be selected for the rendering tile200currently being generated. As discussed above, since primitives are processed by the graphics processing pipeline in a pipelined manner, when the end of a rendering tile is identified (i.e. when there are no primitives in the primitive lists26left to be selected for the rendering tile), there will typically still be primitives for the rendering tile that have yet to complete their processing through the pipeline. The Applicants have found that it may sometimes be the case that at least some of this processing that remains to be performed when the end of a rendering tile is reached is redundant. They have accordingly found that triggering a “kill search”312at this point in the rendering process can achieve significant savings in terms of the processing effort required to generate a rendering tile, and thus overall render output, e.g. frame for display. FIG.6schematically illustrates the process of performing such a “kill search”, according to the present embodiment. As shown inFIG.6, at step601, primitives for a rendering tile are provided by the primitive and vertex selection units29,31to subsequent stages of the graphics processing pipeline3for processing. Then, at step602, when there are no primitives in the primitive lists26left to be selected for the rendering tile, it is determined that the end of the current rendering tile has been reached. At this point, as already mentioned above, there should be at least some primitives in the pipeline with processing left to be performed for the rendering tile. This then triggers, at step603, a check to determine which buffers (e.g. which buffers of tile buffers35, and ZS buffers112,114) are to be written out to the main memory23once processing for the rendering tile has been completed (i.e. once all primitives for the rendering tile that are to pass all the way through the pipeline have passed all the way through the pipeline), and which buffers (e.g. which buffers of tile buffers35, and ZS buffers112,114) are not to be written out to the main memory23. It will be appreciated here that it may typically be the case that only a colour buffer stored in the tile buffer35will be output to a frame buffer in the main memory23once a rendering tile has been completed, and other buffers, e.g. depth (Z) and stencil (S) buffers, will not be output to the main memory23. However, it can be the case that multiple e.g. 
colour, buffers and/or other buffer(s) are output to the memory23. For example, in the case of a depth (Z) only rendering pass, e.g. of a deferred rendering scheme, a depth (Z) buffer may be output to the main memory23for use in a subsequent rendering pass. Other outputs of the pipeline would be possible. At step604, based on the determination of which buffers are and are not to be written out to the main memory23(of step603), a command is created that will trigger a “kill search” through metadata entries in the fragment tracking record202to identify any primitives that can be discarded or “killed” because they will not affect any buffer(s) that will be written out to the main memory23. Then, at step605, the command triggers the “kill search”. This will have the effect that when the so-triggered sequential backwards “kill search” encounters a primitive for which data will not be written out to the main memory23, the discard flag306for that primitive will be set to indicate that the primitive has fragments that can be discarded or “killed”. The fragments for the primitive may then be “killed” (discarded from further processing) in response to a subsequent “kill check”512, in the manner as discussed above. As shown inFIG.6, at step606, once processing for the current rendering tile has been completed (and the output buffer(s) for the rendering tile written out to memory23), the process may be repeated for the next rendering tile making up the current render output, e.g. frame for display, and so on. Once the current render output has been completed, the process may be repeated for the next render output, and so on. FIG.7schematically illustrates the “kill search” command generation604and triggering605steps according to the present embodiment. In the present embodiment, as illustrated inFIG.7, a “kill search” command generated in response to reaching the end of a rendering tile200(at step604) is effectively a “dummy” or “clean-up” primitive502that is provided to the rasterisation stage118of the rasteriser33, after all of the “regular” primitives (that were listed in the primitive list(s)26for the rendering tile) have been provided to the rasterisation stage118for processing. As shown inFIG.7, the dummy primitive502is, in effect, a primitive that fully covers the rendering tile200that is being generated, and that will overwrite all of the buffer(s) that have been determined (at step603) as being buffer(s) that are not going to be written out to the memory23(but will not write to any buffer(s) that are to be written out to the memory23). When the dummy primitive502is received by the rasteriser33, it will accordingly pass the initial ZS testing and will be determined as fully covering the largest patch (i.e. rendering tile200). The dummy primitive502will thus, in the manner described above, trigger a “kill search”504through the metadata entries in the fragment tracking record202, which will identify any primitives that can be discarded or “killed” because they write data (only) to a buffer or buffers that will apparently be overwritten by the dummy primitive502. This “kill search” will accordingly have the effect of identifying zero or more primitives that can be discarded or “killed” because they do not affect (write to) a buffer that will be output to the memory23when the rendering tile200is completed. As discussed above, this “kill search” involves a sequential search backwards (i.e. 
in an opposite order to the order in which the primitives were received and included in the fragment tracking record202) through the metadata entries in the fragment tracking record202. When the sequential search encounters a primitive for which data will apparently be overwritten by the dummy primitive502, the discard flag306for that particular primitive is set to indicate that the primitive has fragments that can be discarded or “killed”. The sequential search may terminate when a primitive for which data will not apparently be overwritten is found, so as to avoid inadvertently discarding or “killing” fragments for earlier received primitives whose processing may impact on the processing of fragments for later received primitives. Subsequently, when a kill check512is made by a later stage of the pipeline (such as output buffer206, early depth and stencil (ZS) test stage106, fragment shading stage108, or late depth and stencil (ZS) test stage110) in respect of fragments of a particular primitive that remains to be processed for the rendering tile200, a search will be conducted through the metadata entries in the fragment tracking record202for the primitive identifier corresponding to that particular primitive. When that particular primitive is found, the status of the discard flag306will be checked to see whether the fragments of the particular primitive can be discarded or “killed”. If the metadata (discard flag306) now indicates that the fragments of the particular primitive can be discarded (as a result of the effect of the dummy primitive502), then the fragments of the particular primitive are discarded, in a manner as discussed above. In this way, fragment processing that remains to be performed when the end of the rendering tile200is reached may be omitted, such that overall processing effort required to generate the rendering tile200may be saved. Although the above has been described with reference to a graphics processing pipeline that performs Patch Forward Pixel Kill (PFPK), e.g. as described in US 2020/0074721, in other embodiments the graphics processor may be configured according to other Forward Pixel Kill (FPK) arrangements. For example, rather than triggering an update to metadata202, reaching the end of a rendering tile may trigger a signal to fragment processing stages to “kill” the appropriate fragments, with the signal being e.g. as described in US 2014/0168220 and/or US 2014/0354654. It can be seen from the above that embodiments of the technology described herein can reduce the amount of fragment processing performed by a graphics processing pipeline to generate a rendering tile, and thus overall render output, e.g. frame for display. This is achieved, in embodiments of the technology described herein by, when there are (in response to there being) no primitives left to be provided for processing to the graphics processing pipeline for a rendering tile, identifying one or more primitives left in the pipeline for the rendering tile that will not affect an output buffer, and discarding (i.e. “killing”) (fragments for) those primitives from further processing. The foregoing detailed description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in the light of the above teaching. 
The described embodiments were chosen in order to best explain the principles of the technology and its practical application, to thereby enable others skilled in the art to best utilise the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope be defined by the claims appended hereto.
96,232
11861761
DETAILED DESCRIPTION Embodiments described herein are generally directed to improvements relating to power, latency, bandwidth and performance issues in GPU processing/caching. As noted above, atomic reduction (a type of operation used in histogram-like applications) is currently performed either in Shared Local Memory (SLM) or in the L3 cache; however, the current atomic reduction approach is performed/kept in one place, which consumes bandwidth and creates latency. While media and the GPU share a cache in which media writes through the cache to memory, there is a possibility for the GPU to find the needed data via a cache hit, thus reducing some DRAM bandwidth; however, there is no guarantee of a cache hit, as other traffic can evict the cache lines desired by the GPU. Standalone IP cores could have their own internal cache in certain scenarios where there is no need to share with other IP cores, and may take a non-snoop path by-passing the mid-level cache and traversing through the fabric to memory. A disadvantage of such an approach is that it demands an additional cache structure inside the IP core, which comes at an area cost. Secondly, the IP core continues to go through the central fabric to access main memory even though it does not use the mid-level cache, which results in waking up the central fabric. As a result, the central fabric cannot be shut off completely and thus consumes extra power, which is reflected in higher overall SoC power for such workloads. There is no good previous solution for reducing the power of the last level cache (LLC). One way to mitigate this is to make the upper level caches larger (to reduce traffic to the LLC), but that increases area and causes data replication across multiple caches. As such, performance scaling of larger GPUs is currently limited. System Overview FIG.1is a block diagram illustrating a computing system100configured to implement one or more aspects of the embodiments described herein. The computing system100includes a processing subsystem101having one or more processor(s)102and a system memory104communicating via an interconnection path that may include a memory hub105. The memory hub105may be a separate component within a chipset component or may be integrated within the one or more processor(s)102. The memory hub105couples with an I/O subsystem111via a communication link106. The I/O subsystem111includes an I/O hub107that can enable the computing system100to receive input from one or more input device(s)108. Additionally, the I/O hub107can enable a display controller, which may be included in the one or more processor(s)102, to provide outputs to one or more display device(s)110A. In one embodiment the one or more display device(s)110A coupled with the I/O hub107can include a local, internal, or embedded display device. The processing subsystem101, for example, includes one or more parallel processor(s)112coupled to memory hub105via a bus or other communication link113. The communication link113may be one of any number of standards-based communication link technologies or protocols, such as, but not limited to PCI Express, or may be a vendor specific communications interface or communications fabric. The one or more parallel processor(s)112may form a computationally focused parallel or vector processing system that can include a large number of processing cores and/or processing clusters, such as a many integrated core (MIC) processor. 
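To illustrate the atomic reduction issue noted above, the following CPU-side C++ sketch shows a histogram-like reduction in which every thread updates one shared set of bins through atomic operations, so all updates contend on a single copy of the data. The bin count, thread count and the std::atomic-based CPU analogue are assumptions for illustration and are not the SLM/L3 mechanism itself.

```cpp
// CPU-side analogue of an atomic histogram reduction: all worker threads
// atomically increment one shared set of bins, so every update targets the
// same single copy of the histogram (the source of the bandwidth/latency
// concern discussed above). Bin and thread counts are assumptions.
#include <algorithm>
#include <array>
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

constexpr int kBins = 256;  // assumed 8-bit input values

void histogram(const std::vector<uint8_t>& data,
               std::array<std::atomic<uint32_t>, kBins>& bins,
               int numThreads = 4) {
    for (auto& b : bins) b.store(0);                     // reset the shared bins
    std::vector<std::thread> workers;
    const std::size_t chunk = (data.size() + numThreads - 1) / numThreads;
    for (int t = 0; t < numThreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end   = std::min(data.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                bins[data[i]].fetch_add(1, std::memory_order_relaxed);  // every update hits the one shared histogram
        });
    }
    for (auto& w : workers) w.join();
}
```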
For example, the one or more parallel processor(s)112form a graphics processing subsystem that can output pixels to one of the one or more display device(s)110A coupled via the I/O Hub107. The one or more parallel processor(s)112can also include a display controller and display interface (not shown) to enable a direct connection to one or more display device(s)110B. Within the I/O subsystem111, a system storage unit114can connect to the I/O hub107to provide a storage mechanism for the computing system100. An I/O switch116can be used to provide an interface mechanism to enable connections between the I/O hub107and other components, such as a network adapter118and/or wireless network adapter119that may be integrated into the platform, and various other devices that can be added via one or more add-in device(s)120. The add-in device(s)120may also include, for example, one or more external graphics processor devices, graphics cards, and/or compute accelerators. The network adapter118can be an Ethernet adapter or another wired network adapter. The wireless network adapter119can include one or more of a Wi-Fi, Bluetooth, near field communication (NFC), or other network device that includes one or more wireless radios. The computing system100can include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, and the like, which may also be connected to the I/O hub107. Communication paths interconnecting the various components inFIG.1may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect) based protocols (e.g., PCI-Express), or any other bus or point-to-point communication interfaces and/or protocol(s), such as the NVLink high-speed interconnect, Compute Express Link™ (CXL™) (e.g., CXL.mem), Infinity Fabric (IF), Ethernet (IEEE 802.3), remote direct memory access (RDMA), InfiniBand, Internet Wide Area RDMA Protocol (iWARP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), quick UDP Internet Connections (QUIC), RDMA over Converged Ethernet (RoCE), Intel QuickPath Interconnect (QPI), Intel Ultra Path Interconnect (UPI), Intel On-Chip System Fabric (IOSF), Omnipath, HyperTransport, Advanced Microcontroller Bus Architecture (AMBA) interconnect, OpenCAPI, Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), 3GPP Long Term Evolution (LTE) (4G), 3GPP 5G, and variations thereof, or wired or wireless interconnect protocols known in the art. In some examples, data can be copied or stored to virtualized storage nodes using a protocol such as non-volatile memory express (NVMe) over Fabrics (NVMe-oF) or NVMe. The one or more parallel processor(s)112may incorporate circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). Alternatively or additionally, the one or more parallel processor(s)112can incorporate circuitry optimized for general purpose processing, while preserving the underlying computational architecture, described in greater detail herein. Components of the computing system100may be integrated with one or more other system elements on a single integrated circuit. For example, the one or more parallel processor(s)112, memory hub105, processor(s)102, and I/O hub107can be integrated into a system on chip (SoC) integrated circuit. Alternatively, the components of the computing system100can be integrated into a single package to form a system in package (SIP) configuration. 
In one embodiment at least a portion of the components of the computing system100can be integrated into a multi-chip module (MCM), which can be interconnected with other multi-chip modules into a modular computing system. It will be appreciated that the computing system100shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of processor(s)102, and the number of parallel processor(s)112, may be modified as desired. For instance, system memory104can be connected to the processor(s)102directly rather than through a bridge, while other devices communicate with system memory104via the memory hub105and the processor(s)102. In other alternative topologies, the parallel processor(s)112are connected to the I/O hub107or directly to one of the one or more processor(s)102, rather than to the memory hub105. In other embodiments, the I/O hub107and memory hub105may be integrated into a single chip. It is also possible that two or more sets of processor(s)102are attached via multiple sockets, which can couple with two or more instances of the parallel processor(s)112. Some of the particular components shown herein are optional and may not be included in all implementations of the computing system100. For example, any number of add-in cards or peripherals may be supported, or some components may be eliminated. Furthermore, some architectures may use different terminology for components similar to those illustrated inFIG.1. For example, the memory hub105may be referred to as a Northbridge in some architectures, while the I/O hub107may be referred to as a Southbridge. FIG.2Aillustrates a parallel processor200. The parallel processor200may be a GPU, GPGPU or the like as described herein. The various components of the parallel processor200may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or field programmable gate arrays (FPGA). The illustrated parallel processor200may be one or more of the parallel processor(s)112shown inFIG.1. The parallel processor200includes a parallel processing unit202. The parallel processing unit includes an I/O unit204that enables communication with other devices, including other instances of the parallel processing unit202. The I/O unit204may be directly connected to other devices. For instance, the I/O unit204connects with other devices via the use of a hub or switch interface, such as memory hub105. The connections between the memory hub105and the I/O unit204form a communication link113. Within the parallel processing unit202, the I/O unit204connects with a host interface206and a memory crossbar216, where the host interface206receives commands directed to performing processing operations and the memory crossbar216receives commands directed to performing memory operations. When the host interface206receives a command buffer via the I/O unit204, the host interface206can direct work operations to perform those commands to a front end208. In one embodiment the front end208couples with a scheduler210, which is configured to distribute commands or other work items to a processing cluster array212. The scheduler210ensures that the processing cluster array212is properly configured and in a valid state before tasks are distributed to the processing clusters of the processing cluster array212. The scheduler210may be implemented via firmware logic executing on a microcontroller. 
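As a rough illustration of the scheduler210distributing work items to the processing cluster array212, the following C++ sketch uses a simple round-robin policy; the WorkItem contents, the queue-per-cluster arrangement and the round-robin choice are assumptions for illustration, since the scheduler may use any scheduling and/or work distribution algorithm.

```cpp
// Sketch of a scheduler distributing work items across "N" processing
// clusters (214A-214N). The WorkItem contents, per-cluster queues and the
// round-robin policy are illustrative assumptions only.
#include <cstddef>
#include <deque>
#include <vector>

struct WorkItem { int commandId; };      // e.g. a command decoded by the front end (208)

struct Cluster { std::deque<WorkItem> queue; };

class Scheduler {
public:
    explicit Scheduler(std::size_t numClusters) : clusters_(numClusters) {}

    // Distribute an incoming work item to the next cluster in round-robin order.
    void submit(const WorkItem& item) {
        clusters_[next_].queue.push_back(item);
        next_ = (next_ + 1) % clusters_.size();
    }

    const std::vector<Cluster>& clusters() const { return clusters_; }

private:
    std::vector<Cluster> clusters_;
    std::size_t next_ = 0;
};
```

A dynamic scheduler would instead steer each item to the least-loaded cluster; the round-robin choice here simply keeps the sketch short.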
The microcontroller implemented scheduler210is configurable to perform complex scheduling and work distribution operations at coarse and fine granularity, enabling rapid preemption and context switching of threads executing on the processing cluster array212. Preferably, the host software can provide workloads for scheduling on the processing cluster array212via one of multiple graphics processing doorbells. In other examples, polling for new workloads or interrupts can be used to identify or indicate availability of work to perform. The workloads can then be automatically distributed across the processing cluster array212by the scheduler210logic within the scheduler microcontroller. The processing cluster array212can include up to “N” processing clusters (e.g., cluster214A, cluster214B, through cluster214N). Each cluster214A-214N of the processing cluster array212can execute a large number of concurrent threads. The scheduler210can allocate work to the clusters214A-214N of the processing cluster array212using various scheduling and/or work distribution algorithms, which may vary depending on the workload arising for each type of program or computation. The scheduling can be handled dynamically by the scheduler210, or can be assisted in part by compiler logic during compilation of program logic configured for execution by the processing cluster array212. Optionally, different clusters214A-214N of the processing cluster array212can be allocated for processing different types of programs or for performing different types of computations. The processing cluster array212can be configured to perform various types of parallel processing operations. For example, the processing cluster array212is configured to perform general-purpose parallel compute operations. For example, the processing cluster array212can include logic to execute processing tasks including filtering of video and/or audio data, performing modeling operations, including physics operations, and performing data transformations. The processing cluster array212is configured to perform parallel graphics processing operations. In such embodiments in which the parallel processor200is configured to perform graphics processing operations, the processing cluster array212can include additional logic to support the execution of such graphics processing operations, including, but not limited to texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. Additionally, the processing cluster array212can be configured to execute graphics processing related shader programs such as, but not limited to vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. The parallel processing unit202can transfer data from system memory via the I/O unit204for processing. During processing, the transferred data can be stored to on-chip memory (e.g., parallel processor memory222), then written back to system memory. In embodiments in which the parallel processing unit202is used to perform graphics processing, the scheduler210may be configured to divide the processing workload into approximately equal sized tasks, to better enable distribution of the graphics processing operations to multiple clusters214A-214N of the processing cluster array212. In some of these embodiments, portions of the processing cluster array212can be configured to perform different types of processing. 
For example a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations, to produce a rendered image for display. Intermediate data produced by one or more of the clusters214A-214N may be stored in buffers to allow the intermediate data to be transmitted between clusters214A-214N for further processing. During operation, the processing cluster array212can receive processing tasks to be executed via the scheduler210, which receives commands defining processing tasks from front end208. For graphics processing operations, processing tasks can include indices of data to be processed, e.g., surface (patch) data, primitive data, vertex data, and/or pixel data, as well as state parameters and commands defining how the data is to be processed (e.g., what program is to be executed). The scheduler210may be configured to fetch the indices corresponding to the tasks or may receive the indices from the front end208. The front end208can be configured to ensure the processing cluster array212is configured to a valid state before the workload specified by incoming command buffers (e.g., batch-buffers, push buffers, etc.) is initiated. Each of the one or more instances of the parallel processing unit202can couple with parallel processor memory222. The parallel processor memory222can be accessed via the memory crossbar216, which can receive memory requests from the processing cluster array212as well as the I/O unit204. The memory crossbar216can access the parallel processor memory222via a memory interface218. The memory interface218can include multiple partition units (e.g., partition unit220A, partition unit220B, through partition unit220N) that can each couple to a portion (e.g., memory unit) of parallel processor memory222. The number of partition units220A-220N may be configured to be equal to the number of memory units, such that a first partition unit220A has a corresponding first memory unit224A, a second partition unit220B has a corresponding second memory unit224B, and an Nth partition unit220N has a corresponding Nth memory unit224N. In other embodiments, the number of partition units220A-220N may not be equal to the number of memory devices. The memory units224A-224N can include various types of memory devices, including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. Optionally, the memory units224A-224N may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Persons skilled in the art will appreciate that the specific implementation of the memory units224A-224N can vary, and can be selected from one of various conventional designs. Render targets, such as frame buffers or texture maps may be stored across the memory units224A-224N, allowing partition units220A-220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory222. In some embodiments, a local instance of the parallel processor memory222may be excluded in favor of a unified memory design that utilizes system memory in conjunction with local cache memory. 
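The following sketch illustrates one possible way render-target data could be interleaved across the partition units220A-220N and their memory units224A-224N so that different tiles can be written in parallel; the 64×64 tile size, RGBA8 format and modulo mapping are illustrative assumptions rather than the actual mapping used by the memory interface218.

```cpp
// Sketch of interleaving a render target across N partition/memory units so
// that neighbouring tiles land on different units and can be written in
// parallel. Tile size, pixel format and the modulo mapping are assumptions.
#include <cstdint>

struct PartitionTarget {
    uint32_t partitionIndex;   // which partition unit / memory unit (220x / 224x)
    uint64_t offsetInUnit;     // byte offset within that memory unit
};

constexpr uint32_t kTileSize      = 64;   // assumed 64x64-pixel tiles
constexpr uint32_t kBytesPerPixel = 4;    // assumed RGBA8
constexpr uint64_t kTileBytes     = uint64_t(kTileSize) * kTileSize * kBytesPerPixel;

PartitionTarget mapPixel(uint32_t x, uint32_t y,
                         uint32_t surfaceWidth, uint32_t numPartitions) {
    const uint32_t tilesPerRow  = (surfaceWidth + kTileSize - 1) / kTileSize;
    const uint32_t tileIndex    = (y / kTileSize) * tilesPerRow + (x / kTileSize);
    const uint32_t inTileOffset =
        ((y % kTileSize) * kTileSize + (x % kTileSize)) * kBytesPerPixel;
    return { tileIndex % numPartitions,                       // spread tiles across the units
             (tileIndex / numPartitions) * kTileBytes + inTileOffset };
}
```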
Optionally, any one of the clusters214A-214N of the processing cluster array212has the ability to process data that will be written to any of the memory units224A-224N within parallel processor memory222. The memory crossbar216can be configured to transfer the output of each cluster214A-214N to any partition unit220A-220N or to another cluster214A-214N, which can perform additional processing operations on the output. Each cluster214A-214N can communicate with the memory interface218through the memory crossbar216to read from or write to various external memory devices. In one of the embodiments with the memory crossbar216the memory crossbar216has a connection to the memory interface218to communicate with the I/O unit204, as well as a connection to a local instance of the parallel processor memory222, enabling the processing units within the different processing clusters214A-214N to communicate with system memory or other memory that is not local to the parallel processing unit202. Generally, the memory crossbar216may, for example, be able to use virtual channels to separate traffic streams between the clusters214A-214N and the partition units220A-220N. While a single instance of the parallel processing unit202is illustrated within the parallel processor200, any number of instances of the parallel processing unit202can be included. For example, multiple instances of the parallel processing unit202can be provided on a single add-in card, or multiple add-in cards can be interconnected. For example, the parallel processor200can be an add-in device, such as add-in device120ofFIG.1, which may be a graphics card such as a discrete graphics card that includes one or more GPUs, one or more memory devices, and device-to-device or network or fabric interfaces. The different instances of the parallel processing unit202can be configured to inter-operate even if the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. Optionally, some instances of the parallel processing unit202can include higher precision floating point units relative to other instances. Systems incorporating one or more instances of the parallel processing unit202or the parallel processor200can be implemented in a variety of configurations and form factors, including but not limited to desktop, laptop, or handheld personal computers, servers, workstations, game consoles, and/or embedded systems. An orchestrator can form composite nodes for workload performance using one or more of: disaggregated processor resources, cache resources, memory resources, storage resources, and networking resources. FIG.2Bis a block diagram of a partition unit220. The partition unit220may be an instance of one of the partition units220A-220N ofFIG.2A. As illustrated, the partition unit220includes an L2 cache221, a frame buffer interface225, and a ROP226(raster operations unit). The L2 cache221is a read/write cache that is configured to perform load and store operations received from the memory crossbar216and ROP226. Read misses and urgent write-back requests are output by L2 cache221to frame buffer interface225for processing. Updates can also be sent to the frame buffer via the frame buffer interface225for processing. In one embodiment the frame buffer interface225interfaces with one of the memory units in parallel processor memory, such as the memory units224A-224N ofFIG.2A(e.g., within parallel processor memory222). 
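As an aside on the L2 cache221behaviour described above (hits serviced locally, read misses filled and dirty lines written back via the frame buffer interface225), a minimal C++ sketch of such a read/write cache follows; the direct-mapped organisation and 64-byte lines are assumptions for illustration, not the actual cache design.

```cpp
// Minimal read/write cache model: hits are serviced from the cache, read
// misses are filled from memory, and dirty evicted lines are written back
// (standing in for traffic to the frame buffer interface). Direct-mapped
// organisation and 64-byte lines are assumptions.
#include <array>
#include <cstdint>
#include <functional>
#include <utility>

struct CacheLine {
    bool     valid = false, dirty = false;
    uint64_t tag   = 0;
    std::array<uint8_t, 64> data{};
};

class L2CacheModel {
public:
    using MemoryFetch     = std::function<void(uint64_t lineAddr, std::array<uint8_t, 64>&)>;
    using MemoryWriteBack = std::function<void(uint64_t lineAddr, const std::array<uint8_t, 64>&)>;

    L2CacheModel(MemoryFetch fetch, MemoryWriteBack writeBack)
        : fetch_(std::move(fetch)), writeBack_(std::move(writeBack)) {}

    uint8_t read(uint64_t addr)             { return line(addr).data[addr % 64]; }
    void    write(uint64_t addr, uint8_t v) { auto& l = line(addr); l.data[addr % 64] = v; l.dirty = true; }

private:
    CacheLine& line(uint64_t addr) {
        const uint64_t lineAddr = addr / 64;
        CacheLine& l = lines_[lineAddr % lines_.size()];
        if (!l.valid || l.tag != lineAddr) {          // miss (or conflict eviction)
            if (l.valid && l.dirty)                   // write-back of the evicted dirty line
                writeBack_(l.tag, l.data);
            fetch_(lineAddr, l.data);                 // fill the line from memory
            l.valid = true; l.dirty = false; l.tag = lineAddr;
        }
        return l;
    }

    std::array<CacheLine, 1024> lines_{};
    MemoryFetch fetch_;
    MemoryWriteBack writeBack_;
};
```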
The partition unit220may additionally or alternatively also interface with one of the memory units in parallel processor memory via a memory controller (not shown). In graphics applications, the ROP226is a processing unit that performs raster operations such as stencil, z test, blending, and the like. The ROP226then outputs processed graphics data that is stored in graphics memory. In some embodiments the ROP226includes or couples with a CODEC227that includes compression logic to compress depth or color data that is written to memory or the L2 cache221and decompress depth or color data that is read from memory or the L2 cache221. The compression logic can be lossless compression logic that makes use of one or more of multiple compression algorithms. The type of compression that is performed by the CODEC227can vary based on the statistical characteristics of the data to be compressed. For example, in one embodiment, delta color compression is performed on depth and color data on a per-tile basis. In one embodiment the CODEC227includes compression and decompression logic that can compress and decompress compute data associated with machine learning operations. The CODEC227can, for example, compress sparse matrix data for sparse machine learning operations. The CODEC227can also compress sparse matrix data that is encoded in a sparse matrix format (e.g., coordinate list encoding (COO), compressed sparse row (CSR), compressed sparse column (CSC), etc.) to generate compressed and encoded sparse matrix data. The compressed and encoded sparse matrix data can be decompressed and/or decoded before being processed by processing elements or the processing elements can be configured to consume compressed, encoded, or compressed and encoded data for processing. The ROP226may be included within each processing cluster (e.g., cluster214A-214N ofFIG.2A) instead of within the partition unit220. In such an embodiment, read and write requests for pixel data are transmitted over the memory crossbar216instead of pixel fragment data. The processed graphics data may be displayed on a display device, such as one of the one or more display device(s)110ofFIG.1, routed for further processing by the processor(s)102, or routed for further processing by one of the processing entities within the parallel processor200ofFIG.2A. FIG.2Cis a block diagram of a processing cluster214within a parallel processing unit. For example, the processing cluster is an instance of one of the processing clusters214A-214N ofFIG.2A. The processing cluster214can be configured to execute many threads in parallel, where the term “thread” refers to an instance of a particular program executing on a particular set of input data. Optionally, single-instruction, multiple-data (SIMD) instruction issue techniques may be used to support parallel execution of a large number of threads without providing multiple independent instruction units. Alternatively, single-instruction, multiple-thread (SIMT) techniques may be used to support parallel execution of a large number of generally synchronized threads, using a common instruction unit configured to issue instructions to a set of processing engines within each one of the processing clusters. Unlike a SIMD execution regime, where all processing engines typically execute identical instructions, SIMT execution allows different threads to more readily follow divergent execution paths through a given thread program. 
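Returning to the sparse-matrix encodings mentioned above for the CODEC227, the following sketch shows a minimal compressed sparse row (CSR) encoder; the float value type and dense-matrix input are assumptions for illustration.

```cpp
// Minimal sketch of the compressed sparse row (CSR) layout: only non-zero
// values are stored, together with their column indices and per-row extents.
// The float value type and dense input are assumptions for illustration.
#include <cstddef>
#include <vector>

struct CsrMatrix {
    std::size_t              rows = 0, cols = 0;
    std::vector<float>       values;    // non-zero values, row by row
    std::vector<std::size_t> colIndex;  // column of each stored value
    std::vector<std::size_t> rowStart;  // rows + 1 entries; row r spans [rowStart[r], rowStart[r+1])
};

CsrMatrix toCsr(const std::vector<std::vector<float>>& dense) {
    CsrMatrix m;
    m.rows = dense.size();
    m.cols = m.rows ? dense[0].size() : 0;
    m.rowStart.push_back(0);
    for (const auto& row : dense) {
        for (std::size_t c = 0; c < row.size(); ++c) {
            if (row[c] != 0.0f) {           // zeros are simply not stored
                m.values.push_back(row[c]);
                m.colIndex.push_back(c);
            }
        }
        m.rowStart.push_back(m.values.size());
    }
    return m;
}
```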
Persons skilled in the art will understand that a SIMD processing regime represents a functional subset of a SIMT processing regime. Operation of the processing cluster214can be controlled via a pipeline manager232that distributes processing tasks to SIMT parallel processors. The pipeline manager232receives instructions from the scheduler210ofFIG.2Aand manages execution of those instructions via a graphics multiprocessor234and/or a texture unit236. The illustrated graphics multiprocessor234is an exemplary instance of a SIMT parallel processor. However, various types of SIMT parallel processors of differing architectures may be included within the processing cluster214. One or more instances of the graphics multiprocessor234can be included within a processing cluster214. The graphics multiprocessor234can process data and a data crossbar240can be used to distribute the processed data to one of multiple possible destinations, including other shader units. The pipeline manager232can facilitate the distribution of processed data by specifying destinations for processed data to be distributed via the data crossbar240. Each graphics multiprocessor234within the processing cluster214can include an identical set of functional execution logic (e.g., arithmetic logic units, load-store units, etc.). The functional execution logic can be configured in a pipelined manner in which new instructions can be issued before previous instructions are complete. The functional execution logic supports a variety of operations including integer and floating-point arithmetic, comparison operations, Boolean operations, bit-shifting, and computation of various algebraic functions. The same functional-unit hardware could be leveraged to perform different operations and any combination of functional units may be present. The instructions transmitted to the processing cluster214constitute a thread. A set of threads executing across the set of parallel processing engines is a thread group. A thread group executes the same program on different input data. Each thread within a thread group can be assigned to a different processing engine within a graphics multiprocessor234. A thread group may include fewer threads than the number of processing engines within the graphics multiprocessor234. When a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during cycles in which that thread group is being processed. A thread group may also include more threads than the number of processing engines within the graphics multiprocessor234. When the thread group includes more threads than the number of processing engines within the graphics multiprocessor234, processing can be performed over consecutive clock cycles. Optionally, multiple thread groups can be executed concurrently on the graphics multiprocessor234. The graphics multiprocessor234may include an internal cache memory to perform load and store operations. Optionally, the graphics multiprocessor234can forego an internal cache and use a cache memory (e.g., level 1 (L1) cache248) within the processing cluster214. Each graphics multiprocessor234also has access to level 2 (L2) caches within the partition units (e.g., partition units220A-220N ofFIG.2A) that are shared among all processing clusters214and may be used to transfer data between threads. The graphics multiprocessor234may also access off-chip global memory, which can include one or more of local parallel processor memory and/or system memory. 
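The relationship between thread group size and the number of processing engines described above can be expressed as a small worked calculation: issuing a group takes ceil(threads/engines) consecutive cycles, and any shortfall on the final cycle leaves engines idle. The engine count used below is an assumption for illustration.

```cpp
// Worked example of the thread-group-to-processing-engine relationship: a
// group with more threads than engines is issued over consecutive cycles,
// and a shortfall on the final cycle leaves engines idle. The 32-engine
// count is an illustrative assumption.
#include <cstdio>

struct IssuePlan {
    int cycles;         // consecutive clock cycles needed to issue the group
    int idleLastCycle;  // processing engines left idle on the final cycle
};

constexpr IssuePlan planThreadGroup(int threadsInGroup, int processingEngines) {
    const int cycles = (threadsInGroup + processingEngines - 1) / processingEngines; // ceil
    const int used   = threadsInGroup - (cycles - 1) * processingEngines;
    return { cycles, processingEngines - used };
}

int main() {
    constexpr auto plan = planThreadGroup(/*threadsInGroup=*/48, /*processingEngines=*/32);
    static_assert(plan.cycles == 2 && plan.idleLastCycle == 16, "48 threads on 32 engines");
    std::printf("cycles=%d idle=%d\n", plan.cycles, plan.idleLastCycle);
}
```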
Any memory external to the parallel processing unit202may be used as global memory. Embodiments in which the processing cluster214includes multiple instances of the graphics multiprocessor234can share common instructions and data, which may be stored in the L1 cache248. Each processing cluster214may include an MMU245(memory management unit) that is configured to map virtual addresses into physical addresses. In other embodiments, one or more instances of the MMU245may reside within the memory interface218ofFIG.2A. The MMU245includes a set of page table entries (PTEs) used to map a virtual address to a physical address of a tile and optionally a cache line index. The MMU245may include address translation lookaside buffers (TLB) or caches that may reside within the graphics multiprocessor234or the L1 cache or processing cluster214. The physical address is processed to distribute surface data access locality to allow efficient request interleaving among partition units. The cache line index may be used to determine whether a request for a cache line is a hit or miss. In graphics and computing applications, a processing cluster214may be configured such that each graphics multiprocessor234is coupled to a texture unit236for performing texture mapping operations, e.g., determining texture sample positions, reading texture data, and filtering the texture data. Texture data is read from an internal texture L1 cache (not shown) or in some embodiments from the L1 cache within graphics multiprocessor234and is fetched from an L2 cache, local parallel processor memory, or system memory, as needed. Each graphics multiprocessor234outputs processed tasks to the data crossbar240to provide the processed task to another processing cluster214for further processing or to store the processed task in an L2 cache, local parallel processor memory, or system memory via the memory crossbar216. A preROP242(pre-raster operations unit) is configured to receive data from graphics multiprocessor234, direct data to ROP units, which may be located with partition units as described herein (e.g., partition units220A-220N ofFIG.2A). The preROP242unit can perform optimizations for color blending, organize pixel color data, and perform address translations. It will be appreciated that the core architecture described herein is illustrative and that variations and modifications are possible. Any number of processing units, e.g., graphics multiprocessor234, texture units236, preROPs242, etc., may be included within a processing cluster214. Further, while only one processing cluster214is shown, a parallel processing unit as described herein may include any number of instances of the processing cluster214. Optionally, each processing cluster214can be configured to operate independently of other processing clusters214using separate and distinct processing units, L1 caches, L2 caches, etc. FIG.2Dshows an example of the graphics multiprocessor234in which the graphics multiprocessor234couples with the pipeline manager232of the processing cluster214. The graphics multiprocessor234has an execution pipeline including but not limited to an instruction cache252, an instruction unit254, an address mapping unit256, a register file258, one or more general purpose graphics processing unit (GPGPU) cores262, and one or more load/store units266. The GPGPU cores262and load/store units266are coupled with cache memory272and shared memory270via a memory and cache interconnect268. 
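As an illustration of the MMU245mapping described above (a virtual address translated, via page table entries, into a physical address plus a cache line index), a minimal C++ sketch follows; the 4 KiB page size, 64-byte cache lines and the hash-map "page table" standing in for the PTE set (and for a TLB that would cache it) are assumptions.

```cpp
// Sketch of virtual-to-physical translation via page table entries, yielding
// a physical address and a cache line index. Page size, line size and the
// hash-map page table are illustrative assumptions only.
#include <cstdint>
#include <optional>
#include <unordered_map>

struct Translation {
    uint64_t physicalAddress;
    uint64_t cacheLineIndex;   // used to determine whether the request hits or misses
};

class MmuModel {
public:
    void addPte(uint64_t virtualPage, uint64_t physicalPage) {
        pageTable_[virtualPage] = physicalPage;
    }

    std::optional<Translation> translate(uint64_t virtualAddress) const {
        const uint64_t page   = virtualAddress >> 12;        // assumed 4 KiB pages
        const uint64_t offset = virtualAddress & 0xFFFu;
        const auto it = pageTable_.find(page);
        if (it == pageTable_.end()) return std::nullopt;     // no PTE: fault / miss
        const uint64_t phys = (it->second << 12) | offset;
        return Translation{ phys, phys / 64 };               // assumed 64-byte cache lines
    }

private:
    std::unordered_map<uint64_t, uint64_t> pageTable_;  // stands in for the PTE set; a TLB would cache these
};
```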
The graphics multiprocessor234may additionally include tensor and/or ray-tracing cores263that include hardware logic to accelerate matrix and/or ray-tracing operations. The instruction cache252may receive a stream of instructions to execute from the pipeline manager232. The instructions are cached in the instruction cache252and dispatched for execution by the instruction unit254. The instruction unit254can dispatch instructions as thread groups (e.g., warps), with each thread of the thread group assigned to a different execution unit within GPGPU core262. An instruction can access any of a local, shared, or global address space by specifying an address within a unified address space. The address mapping unit256can be used to translate addresses in the unified address space into a distinct memory address that can be accessed by the load/store units266. The register file258provides a set of registers for the functional units of the graphics multiprocessor234. The register file258provides temporary storage for operands connected to the data paths of the functional units (e.g., GPGPU cores262, load/store units266) of the graphics multiprocessor234. The register file258may be divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file258. For example, the register file258may be divided between the different warps being executed by the graphics multiprocessor234. The GPGPU cores262can each include floating point units (FPUs) and/or integer arithmetic logic units (ALUs) that are used to execute instructions of the graphics multiprocessor234. In some implementations, the GPGPU cores262can include hardware logic that may otherwise reside within the tensor and/or ray-tracing cores263. The GPGPU cores262can be similar in architecture or can differ in architecture. For example and in one embodiment, a first portion of the GPGPU cores262include a single precision FPU and an integer ALU while a second portion of the GPGPU cores include a double precision FPU. Optionally, the FPUs can implement the IEEE 754-2008 standard for floating point arithmetic or enable variable precision floating point arithmetic. The graphics multiprocessor234can additionally include one or more fixed function or special function units to perform specific functions such as copy rectangle or pixel blending operations. One or more of the GPGPU cores can also include fixed or special function logic. The GPGPU cores262may include SIMD logic capable of performing a single instruction on multiple sets of data. Optionally, GPGPU cores262can physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. The SIMD instructions for the GPGPU cores can be generated at compile time by a shader compiler or automatically generated when executing programs written and compiled for single program multiple data (SPMD) or SIMT architectures. Multiple threads of a program configured for the SIMT execution model can be executed via a single SIMD instruction. For example and in one embodiment, eight SIMT threads that perform the same or similar operations can be executed in parallel via a single SIMD8 logic unit. The memory and cache interconnect268is an interconnect network that connects each of the functional units of the graphics multiprocessor234to the register file258and to the shared memory270. 
For example, the memory and cache interconnect268is a crossbar interconnect that allows the load/store unit266to implement load and store operations between the shared memory270and the register file258. The register file258can operate at the same frequency as the GPGPU cores262, thus data transfer between the GPGPU cores262and the register file258is very low latency. The shared memory270can be used to enable communication between threads that execute on the functional units within the graphics multiprocessor234. The cache memory272can be used as a data cache, for example, to cache texture data communicated between the functional units and the texture unit236. The shared memory270can also be used as a program managed cache. The shared memory270and the cache memory272can couple with the data crossbar240to enable communication with other components of the processing cluster. Threads executing on the GPGPU cores262can programmatically store data within the shared memory in addition to the automatically cached data that is stored within the cache memory272. FIG.3A-3Cillustrate additional graphics multiprocessors, according to embodiments.FIG.3A-3Billustrate graphics multiprocessors325,350, which are related to the graphics multiprocessor234ofFIG.2Cand may be used in place of one of those. Therefore, the disclosure of any features in combination with the graphics multiprocessor234herein also discloses a corresponding combination with the graphics multiprocessor(s)325,350, but is not limited to such.FIG.3Cillustrates a graphics processing unit (GPU)380which includes dedicated sets of graphics processing resources arranged into multi-core groups365A-365N, which correspond to the graphics multiprocessors325,350. The illustrated graphics multiprocessors325,350and the multi-core groups365A-365N can be streaming multiprocessors (SM) capable of simultaneous execution of a large number of execution threads. The graphics multiprocessor325ofFIG.3Aincludes multiple additional instances of execution resource units relative to the graphics multiprocessor234ofFIG.2D. For example, the graphics multiprocessor325can include multiple instances of the instruction unit332A-332B, register file334A-334B, and texture unit(s)344A-344B. The graphics multiprocessor325also includes multiple sets of graphics or compute execution units (e.g., GPGPU core336A-336B, tensor core337A-337B, ray-tracing core338A-338B) and multiple sets of load/store units340A-340B. The execution resource units have a common instruction cache330, texture and/or data cache memory342, and shared memory346. The various components can communicate via an interconnect fabric327. The interconnect fabric327may include one or more crossbar switches to enable communication between the various components of the graphics multiprocessor325. The interconnect fabric327may be a separate, high-speed network fabric layer upon which each component of the graphics multiprocessor325is stacked. The components of the graphics multiprocessor325communicate with remote components via the interconnect fabric327. For example, the cores336A-336B,337A-337B, and338A-338B can each communicate with shared memory346via the interconnect fabric327. The interconnect fabric327can arbitrate communication within the graphics multiprocessor325to ensure a fair bandwidth allocation between components. 
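To illustrate the contrast drawn above between the automatically managed cache memory272and using the shared memory270as a program managed cache, the following CPU-side sketch stages data into an explicit local buffer, operates on it there and writes it back; in a real kernel the staged block (e.g. a matrix tile) would typically be reused many times before write-back. The 256-element staging size is an assumption.

```cpp
// CPU-side analogue of a program-managed cache: data is explicitly staged
// into a small local buffer (standing in for shared memory), processed there,
// then explicitly written back, rather than relying on an automatic data
// cache. The 256-element staging size is an assumption.
#include <algorithm>
#include <array>
#include <vector>

void scaleInBlocks(std::vector<float>& data, float scale) {
    std::array<float, 256> staging{};                            // stands in for shared memory
    for (std::size_t base = 0; base < data.size(); base += staging.size()) {
        const std::size_t n = std::min(staging.size(), data.size() - base);
        std::copy_n(data.begin() + base, n, staging.begin());    // explicit load into the staging buffer
        for (std::size_t i = 0; i < n; ++i)                      // all reuse hits the staging buffer
            staging[i] *= scale;
        std::copy_n(staging.begin(), n, data.begin() + base);    // explicit write-back
    }
}
```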
The graphics multiprocessor350ofFIG.3Bincludes multiple sets of execution resources356A-356D, where each set of execution resource includes multiple instruction units, register files, GPGPU cores, and load store units, as illustrated inFIG.2DandFIG.3A. The execution resources356A-356D can work in concert with texture unit(s)360A-360D for texture operations, while sharing an instruction cache354, and shared memory353. For example, the execution resources356A-356D can share an instruction cache354and shared memory353, as well as multiple instances of a texture and/or data cache memory358A-358B. The various components can communicate via an interconnect fabric352similar to the interconnect fabric327ofFIG.3A. Persons skilled in the art will understand that the architecture described inFIGS.1,2A-2D, and3A-3Bare descriptive and not limiting as to the scope of the present embodiments. Thus, the techniques described herein may be implemented on any properly configured processing unit, including, without limitation, one or more mobile application processors, one or more desktop or server central processing units (CPUs) including multi-core CPUs, one or more parallel processing units, such as the parallel processing unit202ofFIG.2A, as well as one or more graphics processors or special purpose processing units, without departure from the scope of the embodiments described herein. The parallel processor or GPGPU as described herein may be communicatively coupled to host/processor cores to accelerate graphics operations, machine-learning operations, pattern analysis operations, and various general-purpose GPU (GPGPU) functions. The GPU may be communicatively coupled to the host processor/cores over a bus or other interconnect (e.g., a high-speed interconnect such as PCIe, NVLink, or other known protocols, standardized protocols, or proprietary protocols). In other embodiments, the GPU may be integrated on the same package or chip as the cores and communicatively coupled to the cores over an internal processor bus/interconnect (i.e., internal to the package or chip). Regardless of the manner in which the GPU is connected, the processor cores may allocate work to the GPU in the form of sequences of commands/instructions contained in a work descriptor. The GPU then uses dedicated circuitry/logic for efficiently processing these commands/instructions. FIG.3Cillustrates a graphics processing unit (GPU)380which includes dedicated sets of graphics processing resources arranged into multi-core groups365A-365N. While the details of only a single multi-core group365A are provided, it will be appreciated that the other multi-core groups365B-365N may be equipped with the same or similar sets of graphics processing resources. Details described with respect to the multi-core groups365A-365N may also apply to any graphics multiprocessor234,325,350described herein. As illustrated, a multi-core group365A may include a set of graphics cores370, a set of tensor cores371, and a set of ray tracing cores372. A scheduler/dispatcher368schedules and dispatches the graphics threads for execution on the various cores370,371,372. A set of register files369store operand values used by the cores370,371,372when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating point registers for storing floating point values, vector registers for storing packed data elements (integer and/or floating-point data elements) and tile registers for storing tensor/matrix values. 
The tile registers may be implemented as combined sets of vector registers. One or more combined level 1 (L1) caches and shared memory units373store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group365A. One or more texture units374can also be used to perform texturing operations, such as texture mapping and sampling. A Level 2 (L2) cache375shared by all or a subset of the multi-core groups365A-365N stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache375may be shared across a plurality of multi-core groups365A-365N. One or more memory controllers367couple the GPU380to a memory366which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory). Input/output (I/O) circuitry363couples the GPU380to one or more I/O devices362such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices362to the GPU380and memory366. One or more I/O memory management units (IOMMUs)364of the I/O circuitry363couple the I/O devices362directly to the system memory366. Optionally, the IOMMU364manages multiple sets of page tables to map virtual addresses to physical addresses in system memory366. The I/O devices362, CPU(s)361, and GPU(s)380may then share the same virtual address space. In one implementation of the IOMMU364, the IOMMU364supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within system memory366). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated inFIG.3C, each of the cores370,371,372and/or multi-core groups365A-365N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations. The CPUs361, GPUs380, and I/O devices362may be integrated on a single semiconductor chip and/or chip package. The illustrated memory366may be integrated on the same chip or may be coupled to the memory controllers367via an off-chip interface. In one implementation, the memory366comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles described herein are not limited to this specific implementation. The tensor cores371may include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operation used to perform deep learning operations. For example, simultaneous matrix multiplication operations may be used for neural network training and inferencing. The tensor cores371may perform matrix processing using a variety of operand precisions including single precision floating-point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes (4 bits). 
For example, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image. In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores371. The training of neural networks, in particular, requires a significant number of matrix dot product operations. In order to process an inner-product formulation of an N×N×N matrix multiply, the tensor cores371may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers and at least one column of a second matrix is loaded each cycle for N cycles. Each cycle, there are N dot products that are processed. Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8) and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores371to ensure that the most efficient precision is used for different workloads (e.g., inferencing workloads which can tolerate quantization to bytes and half-bytes). Supported formats additionally include 64-bit floating point (FP64) and non-IEEE floating point formats such as the bfloat16 format (e.g., Brain floating point), a 16-bit floating point format with one sign bit, eight exponent bits, and eight significand bits, of which seven are explicitly stored. One embodiment includes support for a reduced precision tensor-float format (TF32), which has the range of FP32 (8-bits) with the precision of FP16 (10-bits). Reduced precision TF32 operations can be performed on FP32 inputs and produce FP32 outputs at higher performance relative to FP32 and increased precision relative to FP16. In one embodiment the tensor cores371support a sparse mode of operation for matrices in which the vast majority of values are zero. The tensor cores371include support for sparse input matrices that are encoded in a sparse matrix representation (e.g., coordinate list encoding (COO), compressed sparse row (CSR), compressed sparse column (CSC), etc.). The tensor cores371also include support for compressed sparse matrix representations in the event that the sparse matrix representation may be further compressed. Compressed, encoded, and/or compressed and encoded matrix data, along with associated compression and/or encoding metadata, can be read by the tensor cores371and the non-zero values can be extracted. For example, for a given input matrix A, a non-zero value can be loaded from the compressed and/or encoded representation of at least a portion of matrix A. Based on the location in matrix A for the non-zero value, which may be determined from index or coordinate metadata associated with the non-zero value, a corresponding value in input matrix B may be loaded. Depending on the operation to be performed (e.g., multiply), the load of the value from input matrix B may be bypassed if the corresponding value is a zero value. In one embodiment, the pairings of values for certain operations, such as multiply operations, may be pre-scanned by scheduler logic and only operations between non-zero inputs are scheduled. Depending on the dimensions of matrix A and matrix B and the operation to be performed, output matrix C may be dense or sparse.
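As a purely illustrative sketch of one of the sparse encodings named above, the following CUDA kernel multiplies a matrix stored in compressed sparse row (CSR) form by a dense vector; only stored non-zero values and their column-index metadata are read, so multiplications against zero operands are never issued. The structure and field names are illustrative assumptions, not the compressed format consumed by the tensor cores371.

// Minimal sketch: sparse matrix A in CSR form multiplied by a dense
// vector x. The column index metadata selects the matching operand
// from x, so products against zeros are never issued.
struct CsrMatrix {
    int rows;
    const int   *rowPtr;   // rows + 1 offsets into vals/cols
    const int   *cols;     // column index of each non-zero
    const float *vals;     // non-zero values
};

__global__ void spmvCsr(CsrMatrix A, const float *x, float *y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= A.rows) return;
    float acc = 0.0f;
    for (int i = A.rowPtr[row]; i < A.rowPtr[row + 1]; ++i)
        acc += A.vals[i] * x[A.cols[i]];   // load of x driven by metadata
    y[row] = acc;
}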
Where output matrix C is sparse, and depending on the configuration of the tensor cores371, output matrix C may be output in a compressed format, a sparse encoding, or a compressed sparse encoding. The ray tracing cores372may accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores372may include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores372may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores372perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores371. For example, the tensor cores371may implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores372. However, the CPU(s)361, graphics cores370, and/or ray tracing cores372may also implement all or a portion of the denoising and/or deep learning algorithms. In addition, as described above, a distributed approach to denoising may be employed in which the GPU380is in a computing device coupled to other computing devices over a network or high-speed interconnect. In this distributed approach, the interconnected computing devices may share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications. The ray tracing cores372may process all BVH traversal and/or ray-primitive intersections, saving the graphics cores370from being overloaded with thousands of instructions per ray. For example, each ray tracing core372includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and/or a second set of specialized circuitry for performing the ray-triangle intersection tests (e.g., intersecting rays which have been traversed). Thus, for example, the multi-core group365A can simply launch a ray probe, and the ray tracing cores372independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores370,371are freed to perform other graphics or compute work while the ray tracing cores372perform the traversal and intersection operations. Optionally, each ray tracing core372may include a traversal unit to perform BVH testing operations and/or an intersection unit which performs ray-primitive intersection tests. The intersection unit generates a “hit”, “no hit”, or “multiple hit” response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores370and tensor cores371) are freed to perform other forms of graphics work. In one optional embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores370and ray tracing cores372. 
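The bounding box test at the heart of the BVH traversal described above can be pictured as the conventional "slab" test, sketched below in CUDA C++. The structures and the precomputed reciprocal ray direction are assumptions made for the example; the sketch does not describe the fixed-function traversal circuitry of the ray tracing cores372.

// Minimal sketch of the axis-aligned bounding box (slab) test used during
// BVH traversal: a ray hits the box if the intervals in which it is inside
// each pair of slabs overlap.
struct Ray  { float3 origin, invDir; float tMin, tMax; };  // invDir = 1 / direction
struct Aabb { float3 lo, hi; };

__host__ __device__ inline bool rayHitsAabb(const Ray &r, const Aabb &b) {
    float t0 = r.tMin, t1 = r.tMax;
    float lo[3]  = {b.lo.x, b.lo.y, b.lo.z};
    float hi[3]  = {b.hi.x, b.hi.y, b.hi.z};
    float o[3]   = {r.origin.x, r.origin.y, r.origin.z};
    float inv[3] = {r.invDir.x, r.invDir.y, r.invDir.z};
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (lo[axis] - o[axis]) * inv[axis];
        float tFar  = (hi[axis] - o[axis]) * inv[axis];
        if (tNear > tFar) { float tmp = tNear; tNear = tFar; tFar = tmp; }
        t0 = tNear > t0 ? tNear : t0;   // latest entry into a slab
        t1 = tFar  < t1 ? tFar  : t1;   // earliest exit from a slab
        if (t0 > t1) return false;      // slab intervals do not overlap: miss
    }
    return true;                        // hit within [tMin, tMax]
}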
The ray tracing cores372(and/or other cores370,371) may include hardware support for a ray tracing instruction set such as Microsoft's DirectX Ray Tracing (DXR) which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform which may be supported by the ray tracing cores372, graphics cores370and tensor cores371is Vulkan 1.1.85. Note, however, that the underlying principles described herein are not limited to any particular ray tracing ISA. In general, the various cores372,371,370may support a ray tracing instruction set that includes instructions/functions for one or more of ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions. More specifically, a preferred embodiment includes ray tracing instructions to perform one or more of the following functions: Ray Generation—Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment. Closest Hit—A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene. Any Hit—An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point. Intersection—An intersection instruction performs a ray-primitive intersection test and outputs a result. Per-primitive Bounding box Construction—This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure). Miss—Indicates that a ray misses all geometry within a scene, or specified region of a scene. Visit—Indicates the children volumes a ray will traverse. Exceptions—Includes various types of exception handlers (e.g., invoked for various error conditions). In one embodiment the ray tracing cores372may be adapted to accelerate general-purpose compute operations that can be accelerated using computational techniques that are analogous to ray intersection tests. A compute framework can be provided that enables shader programs to be compiled into low level instructions and/or primitives that perform general-purpose compute operations via the ray tracing cores. Exemplary computational problems that can benefit from compute operations performed on the ray tracing cores372include computations involving beam, wave, ray, or particle propagation within a coordinate space. Interactions associated with that propagation can be computed relative to a geometry or mesh within the coordinate space. For example, computations associated with electromagnetic signal propagation through an environment can be accelerated via the use of instructions or primitives that are executed via the ray tracing cores. Diffraction and reflection of the signals by objects in the environment can be computed as direct ray-tracing analogies. Ray tracing cores372can also be used to perform computations that are not directly analogous to ray tracing. For example, mesh projection, mesh refinement, and volume sampling computations can be accelerated using the ray tracing cores372. Generic coordinate space calculations, such as nearest neighbor calculations can also be performed. For example, the set of points near a given point can be discovered by defining a bounding box in the coordinate space around the point. 
BVH and ray probe logic within the ray tracing cores372can then be used to determine the set of point intersections within the bounding box. The intersections constitute the origin point and the nearest neighbors to that origin point. Computations that are performed using the ray tracing cores372can be performed in parallel with computations performed on the graphics cores370and tensor cores371. A shader compiler can be configured to compile a compute shader or other general-purpose graphics processing program into low level primitives that can be parallelized across the graphics cores370, tensor cores371, and ray tracing cores372.

Techniques for GPU to Host Processor Interconnection

FIG.4Aillustrates an exemplary architecture in which a plurality of GPUs410-413, such as the parallel processors200shown inFIG.2A, are communicatively coupled to a plurality of multi-core processors405-406over high-speed links440A-440D (e.g., buses, point-to-point interconnects, etc.). The high-speed links440A-440D may support a communication throughput of 4 GB/s, 30 GB/s, 80 GB/s or higher, depending on the implementation. Various interconnect protocols may be used including, but not limited to, PCIe 4.0 or 5.0 and NVLink 2.0. However, the underlying principles described herein are not limited to any particular communication protocol or throughput. Two or more of the GPUs410-413may be interconnected over high-speed links442A-442B, which may be implemented using the same or different protocols/links than those used for high-speed links440A-440D. Similarly, two or more of the multi-core processors405-406may be connected over high-speed link443, which may be a symmetric multi-processor (SMP) bus operating at 20 GB/s, 30 GB/s, 120 GB/s or lower or higher speeds. Alternatively, all communication between the various system components shown inFIG.4Amay be accomplished using the same protocols/links (e.g., over a common interconnection fabric). As mentioned, however, the underlying principles described herein are not limited to any particular type of interconnect technology. Each multi-core processor405-406may be communicatively coupled to a processor memory401-402, via memory interconnects430A-430B, respectively, and each GPU410-413is communicatively coupled to GPU memory420-423over GPU memory interconnects450A-450D, respectively. The memory interconnects430A-430B and450A-450D may utilize the same or different memory access technologies. By way of example, and not limitation, the processor memories401-402and GPU memories420-423may be volatile memories such as dynamic random-access memories (DRAMs) (including stacked DRAMs), Graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR6), or High Bandwidth Memory (HBM) and/or may be non-volatile memories such as 3D XPoint/Optane or Nano-Ram. For example, some portion of the memories may be volatile memory and another portion may be non-volatile memory (e.g., using a two-level memory (2LM) hierarchy). A memory subsystem as described herein may be compatible with a number of memory technologies, such as Double Data Rate versions released by JEDEC (Joint Electronic Device Engineering Council). As described below, although the various processors405-406and GPUs410-413may be physically coupled to a particular memory401-402,420-423, respectively, a unified memory architecture may be implemented in which the same virtual system address space (also referred to as the "effective address" space) is distributed among all of the various physical memories.
For example, processor memories401-402may each comprise 64 GB of the system memory address space and GPU memories420-423may each comprise 32 GB of the system memory address space (resulting in a total of 256 GB addressable memory in this example). FIG.4Billustrates additional optional details for an interconnection between a multi-core processor407and a graphics acceleration module446. The graphics acceleration module446may include one or more GPU chips integrated on a line card which is coupled to the processor407via the high-speed link440. Alternatively, the graphics acceleration module446may be integrated on the same package or chip as the processor407. The illustrated processor407includes a plurality of cores460A-460D, each with a translation lookaside buffer461A-461D and one or more caches462A-462D. The cores may include various other components for executing instructions and processing data which are not illustrated to avoid obscuring the underlying principles of the components described herein (e.g., instruction fetch units, branch prediction units, decoders, execution units, reorder buffers, etc.). The caches462A-462D may comprise level 1 (L1) and level 2 (L2) caches. In addition, one or more shared caches456may be included in the caching hierarchy and shared by sets of the cores460A-460D. For example, one embodiment of the processor407includes 24 cores, each with its own L1 cache, twelve shared L2 caches, and twelve shared L3 caches. In this embodiment, one L2 cache and one L3 cache are shared by two adjacent cores. The processor407and the graphics accelerator integration module446connect with system memory441, which may include processor memories401-402. Coherency is maintained for data and instructions stored in the various caches462A-462D,456and system memory441via inter-core communication over a coherence bus464. For example, each cache may have cache coherency logic/circuitry associated therewith to communicate over the coherence bus464in response to detected reads or writes to particular cache lines. In one implementation, a cache snooping protocol is implemented over the coherence bus464to snoop cache accesses. Cache snooping/coherency techniques are well understood by those of skill in the art and will not be described in detail here to avoid obscuring the underlying principles described herein. A proxy circuit425may be provided that communicatively couples the graphics acceleration module446to the coherence bus464, allowing the graphics acceleration module446to participate in the cache coherence protocol as a peer of the cores. In particular, an interface435provides connectivity to the proxy circuit425over high-speed link440(e.g., a PCIe bus, NVLink, etc.) and an interface437connects the graphics acceleration module446to the high-speed link440. In one implementation, an accelerator integration circuit436provides cache management, memory access, context management, and interrupt management services on behalf of a plurality of graphics processing engines431,432, N of the graphics acceleration module446. The graphics processing engines431,432, N may each comprise a separate graphics processing unit (GPU). Alternatively, the graphics processing engines431,432, N may comprise different types of graphics processing engines within a GPU such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit engines.
In other words, the graphics acceleration module may be a GPU with a plurality of graphics processing engines431-432, N or the graphics processing engines431-432, N may be individual GPUs integrated on a common package, line card, or chip. The accelerator integration circuit436may include a memory management unit (MMU)439for performing various memory management functions such as virtual-to-physical memory translations (also referred to as effective-to-real memory translations) and memory access protocols for accessing system memory441. The MMU439may also include a translation lookaside buffer (TLB) (not shown) for caching the virtual/effective to physical/real address translations. In one implementation, a cache438stores commands and data for efficient access by the graphics processing engines431,432, N. The data stored in cache438and graphics memories433-434, M may be kept coherent with the core caches462A-462D,456and system memory441. As mentioned, this may be accomplished via proxy circuit425which takes part in the cache coherency mechanism on behalf of cache438and memories433-434, M (e.g., sending updates to the cache438related to modifications/accesses of cache lines on processor caches462A-462D,456and receiving updates from the cache438). A set of registers445store context data for threads executed by the graphics processing engines431-432, N and a context management circuit448manages the thread contexts. For example, the context management circuit448may perform save and restore operations to save and restore contexts of the various threads during context switches (e.g., where a first thread is saved and a second thread is restored so that the second thread can be executed by a graphics processing engine). For example, on a context switch, the context management circuit448may store current register values to a designated region in memory (e.g., identified by a context pointer). It may then restore the register values when returning to the context. An interrupt management circuit447, for example, may receive and process interrupts received from system devices. In one implementation, virtual/effective addresses from a graphics processing engine431are translated to real/physical addresses in system memory441by the MMU439. Optionally, the accelerator integration circuit436supports multiple (e.g., 4, 8, 16) graphics accelerator modules446and/or other accelerator devices. The graphics accelerator module446may be dedicated to a single application executed on the processor407or may be shared between multiple applications. Optionally, a virtualized graphics execution environment is provided in which the resources of the graphics processing engines431-432, N are shared with multiple applications, virtual machines (VMs), or containers. The resources may be subdivided into "slices" which are allocated to different VMs and/or applications based on the processing requirements and priorities associated with the VMs and/or applications. VMs and containers can be used interchangeably herein. A virtual machine (VM) can be software that runs an operating system and one or more applications. A VM can be defined by a specification, configuration files, a virtual disk file, a non-volatile random access memory (NVRAM) setting file, and a log file, and is backed by the physical resources of a host computing platform. A VM can include an operating system (OS) or application environment that is installed on software, which imitates dedicated hardware.
The end user has the same experience on a virtual machine as they would have on dedicated hardware. Specialized software, called a hypervisor, emulates the PC client or server's CPU, memory, hard disk, network and other hardware resources completely, enabling virtual machines to share the resources. The hypervisor can emulate multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux®, Windows® Server, VMware ESXi, and other operating systems on the same underlying physical host. A container can be a software package of applications, configurations and dependencies so the applications run reliably from one computing environment to another. Containers can share an operating system installed on the server platform and run as isolated processes. A container can be a software package that contains everything the software needs to run, such as system tools, libraries, and settings. Containers are not installed like traditional software programs, which allows them to be isolated from the other software and the operating system itself. The isolated nature of containers provides several benefits. First, the software in a container will run the same in different environments. For example, a container that includes PHP and MySQL can run identically on both a Linux® computer and a Windows® machine. Second, containers provide added security since the software will not affect the host operating system. While an installed application may alter system settings and modify resources, such as the Windows registry, a container can only modify settings within the container. Thus, the accelerator integration circuit436acts as a bridge to the system for the graphics acceleration module446and provides address translation and system memory cache services. In one embodiment, to facilitate the bridging functionality, the accelerator integration circuit436may also include shared I/O497(e.g., PCIe, USB, or others) and hardware to enable system control of voltage, clocking, performance, thermals, and security. The shared I/O497may utilize separate physical connections or may traverse the high-speed link440. In addition, the accelerator integration circuit436may provide virtualization facilities for the host processor to manage virtualization of the graphics processing engines, interrupts, and memory management. Because hardware resources of the graphics processing engines431-432, N are mapped explicitly to the real address space seen by the host processor407, any host processor can address these resources directly using an effective address value. One optional function of the accelerator integration circuit436is the physical separation of the graphics processing engines431-432, N so that they appear to the system as independent units. One or more graphics memories433-434, M may be coupled to each of the graphics processing engines431-432, N, respectively. The graphics memories433-434, M store instructions and data being processed by each of the graphics processing engines431-432, N. The graphics memories433-434, M may be volatile memories such as DRAMs (including stacked DRAMs), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memories such as 3D XPoint/Optane, Samsung Z-NAND, or Nano-Ram.
To reduce data traffic over the high-speed link440, biasing techniques may be used to ensure that the data stored in graphics memories433-434, M is data which will be used most frequently by the graphics processing engines431-432, N and preferably not used by the cores460A-460D (at least not frequently). Similarly, the biasing mechanism attempts to keep data needed by the cores (and preferably not the graphics processing engines431-432, N) within the caches462A-462D,456of the cores and system memory441. According to a variant shown inFIG.4C, the accelerator integration circuit436is integrated within the processor407. The graphics processing engines431-432, N communicate directly over the high-speed link440to the accelerator integration circuit436via interface437and interface435(which, again, may utilize any form of bus or interface protocol). The accelerator integration circuit436may perform the same operations as those described with respect toFIG.4B, but potentially at a higher throughput given its close proximity to the coherence bus464and caches462A-462D,456. The embodiments described may support different programming models including a dedicated-process programming model (no graphics acceleration module virtualization) and shared programming models (with virtualization). The latter may include programming models which are controlled by the accelerator integration circuit436and programming models which are controlled by the graphics acceleration module446. In the embodiments of the dedicated process model, graphics processing engines431,432, . . . N may be dedicated to a single application or process under a single operating system. The single application can funnel other application requests to the graphics engines431,432, . . . N, providing virtualization within a VM/partition. In the shared programming models, the graphics processing engines431,432, N may be shared by multiple VM/application partitions. The shared models require a system hypervisor to virtualize the graphics processing engines431-432, N to allow access by each operating system. For single-partition systems without a hypervisor, the graphics processing engines431-432, N are owned by the operating system. In both cases, the operating system can virtualize the graphics processing engines431-432, N to provide access to each process or application. For the shared programming model, the graphics acceleration module446or an individual graphics processing engine431-432, N selects a process element using a process handle. The process elements may be stored in system memory441and be addressable using the effective address to real address translation techniques described herein. The process handle may be an implementation-specific value provided to the host process when registering its context with the graphics processing engine431-432, N (that is, calling system software to add the process element to the process element linked list). The lower 16 bits of the process handle may be the offset of the process element within the process element linked list. FIG.4Dillustrates an exemplary accelerator integration slice490. As used herein, a "slice" comprises a specified portion of the processing resources of the accelerator integration circuit436. Application effective address space482within system memory441stores process elements483. The process elements483may be stored in response to GPU invocations481from applications480executed on the processor407.
A process element483contains the process state for the corresponding application480. A work descriptor (WD)484contained in the process element483can be a single job requested by an application or may contain a pointer to a queue of jobs. In the latter case, the WD484is a pointer to the job request queue in the application's address space482. The graphics acceleration module446and/or the individual graphics processing engines431-432, N can be shared by all or a subset of the processes in the system. For example, the technologies described herein may include an infrastructure for setting up the process state and sending a WD484to a graphics acceleration module446to start a job in a virtualized environment. In one implementation, the dedicated-process programming model is implementation-specific. In this model, a single process owns the graphics acceleration module446or an individual graphics processing engine431. Because the graphics acceleration module446is owned by a single process, the hypervisor initializes the accelerator integration circuit436for the owning partition and the operating system initializes the accelerator integration circuit436for the owning process at the time when the graphics acceleration module446is assigned. In operation, a WD fetch unit491in the accelerator integration slice490fetches the next WD484which includes an indication of the work to be done by one of the graphics processing engines of the graphics acceleration module446. Data from the WD484may be stored in registers445and used by the MMU439, interrupt management circuit447and/or context management circuit448as illustrated. For example, the MMU439may include segment/page walk circuitry for accessing segment/page tables486within the OS virtual address space485. The interrupt management circuit447may process interrupt events492received from the graphics acceleration module446. When performing graphics operations, an effective address493generated by a graphics processing engine431-432, N is translated to a real address by the MMU439. The same set of registers445may be duplicated for each graphics processing engine431-432, N and/or graphics acceleration module446and may be initialized by the hypervisor or operating system. Each of these duplicated registers may be included in an accelerator integration slice490. In one embodiment, each graphics processing engine431-432, N may be presented to the hypervisor496as a distinct graphics processor device. QoS settings can be configured for clients of a specific graphics processing engine431-432, N and data isolation between the clients of each engine can be enabled. Exemplary registers that may be initialized by the hypervisor are shown in Table 1.

TABLE 1
Hypervisor Initialized Registers
1 Slice Control Register
2 Real Address (RA) Scheduled Processes Area Pointer
3 Authority Mask Override Register
4 Interrupt Vector Table Entry Offset
5 Interrupt Vector Table Entry Limit
6 State Register
7 Logical Partition ID
8 Real Address (RA) Hypervisor Accelerator Utilization Record Pointer
9 Storage Description Register

Exemplary registers that may be initialized by the operating system are shown in Table 2.

TABLE 2
Operating System Initialized Registers
1 Process and Thread Identification
2 Effective Address (EA) Context Save/Restore Pointer
3 Virtual Address (VA) Accelerator Utilization Record Pointer
4 Virtual Address (VA) Storage Segment Table Pointer
5 Authority Mask
6 Work Descriptor

Each WD484may be specific to a particular graphics acceleration module446and/or graphics processing engine431-432, N.
It contains all the information a graphics processing engine431-432, N requires to do its work or it can be a pointer to a memory location where the application has set up a command queue of work to be completed. FIG.4Eillustrates additional optional details of a shared model. It includes a hypervisor real address space498in which a process element list499is stored. The hypervisor real address space498is accessible via a hypervisor496which virtualizes the graphics acceleration module engines for the operating system495. The shared programming models allow for all or a subset of processes from all or a subset of partitions in the system to use a graphics acceleration module446. There are two programming models where the graphics acceleration module446is shared by multiple processes and partitions: time-sliced shared and graphics directed shared. In this model, the system hypervisor496owns the graphics acceleration module446and makes its function available to all operating systems495. For a graphics acceleration module446to support virtualization by the system hypervisor496, the graphics acceleration module446may adhere to the following requirements: 1) An application's job request must be autonomous (that is, the state does not need to be maintained between jobs), or the graphics acceleration module446must provide a context save and restore mechanism. 2) An application's job request is guaranteed by the graphics acceleration module446to complete in a specified amount of time, including any translation faults, or the graphics acceleration module446provides the ability to preempt the processing of the job. 3) The graphics acceleration module446must be guaranteed fairness between processes when operating in the directed shared programming model. For the shared model, the application480may be required to make an operating system495system call with a graphics acceleration module446type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). The graphics acceleration module446type describes the targeted acceleration function for the system call. The graphics acceleration module446type may be a system-specific value. The WD is formatted specifically for the graphics acceleration module446and can be in the form of a graphics acceleration module446command, an effective address pointer to a user-defined structure, an effective address pointer to a queue of commands, or any other data structure to describe the work to be done by the graphics acceleration module446. In one embodiment, the AMR value is the AMR state to use for the current process. The value passed to the operating system is similar to an application setting the AMR. If the accelerator integration circuit436and graphics acceleration module446implementations do not support a User Authority Mask Override Register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. The hypervisor496may optionally apply the current Authority Mask Override Register (AMOR) value before placing the AMR into the process element483. The CSRP may be one of the registers445containing the effective address of an area in the application's address space482for the graphics acceleration module446to save and restore the context state. This pointer is optional if no state is required to be saved between jobs or when a job is preempted. The context save/restore area may be pinned system memory. 
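The parameters of the system call described above can be pictured as a plain structure. The following is a minimal sketch with hypothetical field names and widths; it is not the calling convention or ABI of any particular operating system or accelerator.

// Minimal sketch of the information an application might hand to the
// operating system for the call described above. All names and widths
// are hypothetical and do not reflect a real interface.
#include <cstdint>

struct GfxAccelOsCall {
    uint32_t moduleType;  // graphics acceleration module type (system-specific value)
    uint64_t workDesc;    // WD: a command, or an effective-address pointer to the work
    uint64_t amr;         // Authority Mask Register value for the current process
    uint64_t csrp;        // effective address of the context save/restore area
};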
Upon receiving the system call, the operating system495may verify that the application480has registered and been given the authority to use the graphics acceleration module446. The operating system495then calls the hypervisor496with the information shown in Table 3.

TABLE 3
OS to Hypervisor Call Parameters
1 A work descriptor (WD)
2 An Authority Mask Register (AMR) value (potentially masked)
3 An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4 A process ID (PID) and optional thread ID (TID)
5 A virtual address (VA) accelerator utilization record pointer (AURP)
6 The virtual address of the storage segment table pointer (SSTP)
7 A logical interrupt service number (LISN)

Upon receiving the hypervisor call, the hypervisor496verifies that the operating system495has registered and been given the authority to use the graphics acceleration module446. The hypervisor496then puts the process element483into the process element linked list for the corresponding graphics acceleration module446type. The process element may include the information shown in Table 4.

TABLE 4
Process Element Information
1 A work descriptor (WD)
2 An Authority Mask Register (AMR) value (potentially masked)
3 An effective address (EA) Context Save/Restore Area Pointer (CSRP)
4 A process ID (PID) and optional thread ID (TID)
5 A virtual address (VA) accelerator utilization record pointer (AURP)
6 The virtual address of the storage segment table pointer (SSTP)
7 A logical interrupt service number (LISN)
8 Interrupt vector table, derived from the hypervisor call parameters
9 A state register (SR) value
10 A logical partition ID (LPID)
11 A real address (RA) hypervisor accelerator utilization record pointer
12 The Storage Descriptor Register (SDR)

The hypervisor may initialize a plurality of accelerator integration slice490registers445. As illustrated inFIG.4F, in one optional implementation a unified memory, addressable via a common virtual memory address space used to access the physical processor memories401-402and GPU memories420-423, is employed. In this implementation, operations executed on the GPUs410-413utilize the same virtual/effective memory address space to access the processor memories401-402and vice versa, thereby simplifying programmability. A first portion of the virtual/effective address space may be allocated to the processor memory401, a second portion to the second processor memory402, a third portion to the GPU memory420, and so on. The entire virtual/effective memory space (sometimes referred to as the effective address space) may thereby be distributed across each of the processor memories401-402and GPU memories420-423, allowing any processor or GPU to access any physical memory with a virtual address mapped to that memory. Bias/coherence management circuitry494A-494E within one or more of the MMUs439A-439E may be provided that ensures cache coherence between the caches of the host processors (e.g.,405) and the GPUs410-413and implements biasing techniques indicating the physical memories in which certain types of data should be stored. While multiple instances of bias/coherence management circuitry494A-494E are illustrated inFIG.4F, the bias/coherence circuitry may be implemented within the MMU of one or more host processors405and/or within the accelerator integration circuit436. The GPU-attached memory420-423may be mapped as part of system memory, and accessed using shared virtual memory (SVM) technology, but without suffering the typical performance drawbacks associated with full system cache coherence.
The ability for GPU-attached memory420-423to be accessed as system memory without onerous cache coherence overhead provides a beneficial operating environment for GPU offload. This arrangement allows the host processor405software to set up operands and access computation results, without the overhead of traditional I/O DMA data copies. Such traditional copies involve driver calls, interrupts and memory mapped I/O (MMIO) accesses that are all inefficient relative to simple memory accesses. At the same time, the ability to access GPU-attached memory420-423without cache coherence overheads can be critical to the execution time of an offloaded computation. In cases with substantial streaming write memory traffic, for example, cache coherence overhead can significantly reduce the effective write bandwidth seen by a GPU410-413. The efficiency of operand setup, the efficiency of results access, and the efficiency of GPU computation all play a role in determining the effectiveness of GPU offload. A selection between GPU bias and host processor bias may be driven by a bias tracker data structure. A bias table may be used, for example, which may be a page-granular structure (i.e., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. The bias table may be implemented in a stolen memory range of one or more GPU-attached memories420-423, with or without a bias cache in the GPU410-413(e.g., to cache frequently/recently used entries of the bias table). Alternatively, the entire bias table may be maintained within the GPU. In one implementation, the bias table entry associated with each access to the GPU-attached memory420-423is accessed prior to the actual access to the GPU memory, causing the following operations. First, local requests from the GPU410-413that find their page in GPU bias are forwarded directly to a corresponding GPU memory420-423. Local requests from the GPU that find their page in host bias are forwarded to the processor405(e.g., over a high-speed link as discussed above). Optionally, requests from the processor405that find the requested page in host processor bias complete the request like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to the GPU410-413. The GPU may then transition the page to a host processor bias if it is not currently using the page. The bias state of a page can be changed either by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, a purely hardware-based mechanism. One mechanism for changing the bias state employs an API call (e.g., OpenCL), which, in turn, calls the GPU's device driver which, in turn, sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, perform a cache flushing operation in the host. The cache flushing operation is required for a transition from host processor405bias to GPU bias, but is not required for the opposite transition. Cache coherency may be maintained by temporarily rendering GPU-biased pages uncacheable by the host processor405. To access these pages, the processor405may request access from the GPU410which may or may not grant access right away, depending on the implementation. Thus, to reduce communication between the host processor405and GPU410, it is beneficial to ensure that GPU-biased pages are those which are required by the GPU but not the host processor405and vice versa.
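As a minimal sketch of the page-granular bias lookup just described, the following host-side helpers assume a hypothetical 64 KB page granularity and one bias bit per GPU-attached page; the table layout, page size, and routing names are illustrative assumptions rather than a real interface.

// Minimal sketch of a page-granular bias lookup for GPU-attached memory.
// One bit per page: 1 = GPU bias, 0 = host bias.
#include <cstdint>
#include <cstddef>

constexpr size_t kPageSize = 64 * 1024;            // assumed granularity

enum class Route { GpuLocalMemory, HostOverLink };

inline bool pageIsGpuBiased(const uint8_t *biasTable, uintptr_t addr,
                            uintptr_t gpuMemBase) {
    size_t page = (addr - gpuMemBase) / kPageSize;  // index into the bias table
    return (biasTable[page >> 3] >> (page & 7)) & 1u;
}

// Decide where a GPU-originated access to its attached memory is sent.
inline Route routeGpuAccess(const uint8_t *biasTable, uintptr_t addr,
                            uintptr_t gpuMemBase) {
    return pageIsGpuBiased(biasTable, addr, gpuMemBase)
               ? Route::GpuLocalMemory   // GPU bias: go straight to local memory
               : Route::HostOverLink;    // host bias: forward over the link
}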
Graphics Processing Pipeline

FIG.5illustrates a graphics processing pipeline500. A graphics multiprocessor, such as graphics multiprocessor234as inFIG.2D, graphics multiprocessor325ofFIG.3A, or graphics multiprocessor350ofFIG.3B, can implement the illustrated graphics processing pipeline500. The graphics multiprocessor can be included within the parallel processing subsystems as described herein, such as the parallel processor200ofFIG.2A, which may be related to the parallel processor(s)112ofFIG.1and may be used in place of one of those. The various parallel processing systems can implement the graphics processing pipeline500via one or more instances of the parallel processing unit (e.g., parallel processing unit202ofFIG.2A) as described herein. For example, a shader unit (e.g., graphics multiprocessor234ofFIG.2C) may be configured to perform the functions of one or more of a vertex processing unit504, a tessellation control processing unit508, a tessellation evaluation processing unit512, a geometry processing unit516, and a fragment/pixel processing unit524. The functions of data assembler502, primitive assemblers506,514,518, tessellation unit510, rasterizer522, and raster operations unit526may also be performed by other processing engines within a processing cluster (e.g., processing cluster214ofFIG.2A) and a corresponding partition unit (e.g., partition unit220A-220N ofFIG.2A). The graphics processing pipeline500may also be implemented using dedicated processing units for one or more functions. It is also possible that one or more portions of the graphics processing pipeline500are performed by parallel processing logic within a general-purpose processor (e.g., CPU). Optionally, one or more portions of the graphics processing pipeline500can access on-chip memory (e.g., parallel processor memory222as inFIG.2A) via a memory interface528, which may be an instance of the memory interface218ofFIG.2A. The graphics processing pipeline500may also be implemented via a multi-core group365A as inFIG.3C. The data assembler502is a processing unit that may collect vertex data for surfaces and primitives. The data assembler502then outputs the vertex data, including the vertex attributes, to the vertex processing unit504. The vertex processing unit504is a programmable execution unit that executes vertex shader programs, lighting and transforming vertex data as specified by the vertex shader programs. The vertex processing unit504reads data that is stored in cache, local or system memory for use in processing the vertex data and may be programmed to transform the vertex data from an object-based coordinate representation to a world space coordinate space or a normalized device coordinate space. A first instance of a primitive assembler506receives vertex attributes from the vertex processing unit504. The primitive assembler506reads stored vertex attributes as needed and constructs graphics primitives for processing by tessellation control processing unit508. The graphics primitives include triangles, line segments, points, patches, and so forth, as supported by various graphics processing application programming interfaces (APIs). The tessellation control processing unit508treats the input vertices as control points for a geometric patch. The control points are transformed from the patch's input representation (e.g., the patch's bases) to a representation that is suitable for use in surface evaluation by the tessellation evaluation processing unit512.
The tessellation control processing unit508can also compute tessellation factors for edges of geometric patches. A tessellation factor applies to a single edge and quantifies a view-dependent level of detail associated with the edge. A tessellation unit510is configured to receive the tessellation factors for edges of a patch and to tessellate the patch into multiple geometric primitives such as line, triangle, or quadrilateral primitives, which are transmitted to a tessellation evaluation processing unit512. The tessellation evaluation processing unit512operates on parameterized coordinates of the subdivided patch to generate a surface representation and vertex attributes for each vertex associated with the geometric primitives. A second instance of a primitive assembler514receives vertex attributes from the tessellation evaluation processing unit512, reading stored vertex attributes as needed, and constructs graphics primitives for processing by the geometry processing unit516. The geometry processing unit516is a programmable execution unit that executes geometry shader programs to transform graphics primitives received from primitive assembler514as specified by the geometry shader programs. The geometry processing unit516may be programmed to subdivide the graphics primitives into one or more new graphics primitives and calculate parameters used to rasterize the new graphics primitives. The geometry processing unit516may be able to add or delete elements in the geometry stream. The geometry processing unit516outputs the parameters and vertices specifying new graphics primitives to primitive assembler518. The primitive assembler518receives the parameters and vertices from the geometry processing unit516and constructs graphics primitives for processing by a viewport scale, cull, and clip unit520. The geometry processing unit516reads data that is stored in parallel processor memory or system memory for use in processing the geometry data. The viewport scale, cull, and clip unit520performs clipping, culling, and viewport scaling and outputs processed graphics primitives to a rasterizer522. The rasterizer522can perform depth culling and other depth-based optimizations. The rasterizer522also performs scan conversion on the new graphics primitives to generate fragments and outputs those fragments and associated coverage data to the fragment/pixel processing unit524. The fragment/pixel processing unit524is a programmable execution unit that is configured to execute fragment shader programs or pixel shader programs. The fragment/pixel processing unit524transforms fragments or pixels received from rasterizer522, as specified by the fragment or pixel shader programs. For example, the fragment/pixel processing unit524may be programmed to perform operations including but not limited to texture mapping, shading, blending, texture correction and perspective correction to produce shaded fragments or pixels that are output to a raster operations unit526. The fragment/pixel processing unit524can read data that is stored in either the parallel processor memory or the system memory for use when processing the fragment data. Fragment or pixel shader programs may be configured to shade at sample, pixel, tile, or other granularities depending on the sampling rate configured for the processing units.
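As a purely illustrative sketch of the kind of per-fragment computation performed at this stage, the following function shades a fragment from an interpolated normal, a light direction, and an albedo using a simple Lambert term. Real fragment or pixel shader programs are compiled, application-supplied programs; the function below is only a hypothetical example.

// Minimal sketch of per-fragment shading from interpolated attributes:
// a clamped Lambert (N dot L) term scales the fragment's albedo.
#include <cmath>

__host__ __device__ inline float3 shadeFragment(float3 n, float3 lightDir,
                                                float3 albedo) {
    float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
    n = make_float3(n.x / len, n.y / len, n.z / len);           // normalize the normal
    float ndotl = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
    ndotl = ndotl > 0.0f ? ndotl : 0.0f;                        // clamp to front-facing light
    return make_float3(albedo.x * ndotl, albedo.y * ndotl, albedo.z * ndotl);
}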
The raster operations unit526is a processing unit that performs raster operations including, but not limited to, stencil, z-test, blending, and the like, and outputs pixel data as processed graphics data to be stored in graphics memory (e.g., parallel processor memory222as inFIG.2A, and/or system memory104as inFIG.1), to be displayed on the one or more display device(s)110or for further processing by one of the one or more processor(s)102or parallel processor(s)112. The raster operations unit526may be configured to compress z or color data that is written to memory and decompress z or color data that is read from memory.

Machine Learning Overview

The architecture described above can be applied to perform training and inference operations using machine learning models. Machine learning has been successful at solving many kinds of tasks. The computations that arise when training and using machine learning algorithms (e.g., neural networks) lend themselves naturally to efficient parallel implementations. Accordingly, parallel processors such as general-purpose graphics processing units (GPGPUs) have played a significant role in the practical implementation of deep neural networks. Parallel graphics processors with single instruction, multiple thread (SIMT) architectures are designed to maximize the amount of parallel processing in the graphics pipeline. In an SIMT architecture, groups of parallel threads attempt to execute program instructions synchronously together as often as possible to increase processing efficiency. The efficiency provided by parallel machine learning algorithm implementations allows the use of high capacity networks and enables those networks to be trained on larger datasets. A machine learning algorithm is an algorithm that can learn based on a set of data. For example, machine learning algorithms can be designed to model high-level abstractions within a data set. For example, image recognition algorithms can be used to determine to which of several categories a given input belongs; regression algorithms can output a numerical value given an input; and pattern recognition algorithms can be used to generate translated text or perform text to speech and/or speech recognition. An exemplary type of machine learning algorithm is a neural network. There are many types of neural networks; a simple type of neural network is a feedforward network. A feedforward network may be implemented as an acyclic graph in which the nodes are arranged in layers. Typically, a feedforward network topology includes an input layer and an output layer that are separated by at least one hidden layer. The hidden layer transforms input received by the input layer into a representation that is useful for generating output in the output layer. The network nodes are fully connected via edges to the nodes in adjacent layers, but there are no edges between nodes within each layer. Data received at the nodes of an input layer of a feedforward network are propagated (i.e., "fed forward") to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients ("weights") respectively associated with each of the edges connecting the layers. Depending on the specific model being represented by the algorithm being executed, the output from the neural network algorithm can take various forms.
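The feedforward propagation just described can be sketched as a single fully connected layer, in which each output node applies an activation function to a weighted sum of its inputs plus a bias. The kernel below is a minimal illustration; the ReLU activation and the row-major weight layout are assumptions made for the example, not a statement about any particular network.

// Minimal sketch of one fully connected layer of a feedforward network:
// each output node applies an activation to a weighted sum of the inputs
// plus a bias, matching the "fed forward" propagation described above.
__global__ void denseLayerForward(const float *w,    // outDim x inDim weights, row-major
                                  const float *bias, // one bias per output node
                                  const float *x,    // inDim inputs
                                  float *y,          // outDim outputs
                                  int inDim, int outDim) {
    int o = blockIdx.x * blockDim.x + threadIdx.x;
    if (o >= outDim) return;
    float acc = bias[o];
    for (int i = 0; i < inDim; ++i)
        acc += w[o * inDim + i] * x[i];   // weighted sum over the connecting edges
    y[o] = acc > 0.0f ? acc : 0.0f;       // ReLU chosen as one possible activation
}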
Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output produced by the network in response to the input representing an instance in a training data set is compared to the “correct” labeled output for that instance, an error signal representing the difference between the output and the labeled output is calculated, and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered “trained” when the errors for each of the outputs generated from the instances of the training data set are minimized. The accuracy of a machine learning algorithm can be affected significantly by the quality of the data set used to train the algorithm. The training process can be computationally intensive and may require a significant amount of time on a conventional general-purpose processor. Accordingly, parallel processing hardware is used to train many types of machine learning algorithms. This is particularly useful for optimizing the training of neural networks, as the computations performed in adjusting the coefficients in neural networks lend themselves naturally to parallel implementations. Specifically, many machine learning algorithms and software applications have been adapted to make use of the parallel processing hardware within general-purpose graphics processing devices. FIG.6is a generalized diagram of a machine learning software stack600. A machine learning application602is any logic that can be configured to train a neural network using a training dataset or to use a trained deep neural network to implement machine intelligence. The machine learning application602can include training and inference functionality for a neural network and/or specialized software that can be used to train a neural network before deployment. The machine learning application602can implement any type of machine intelligence including but not limited to image recognition, mapping and localization, autonomous navigation, speech synthesis, medical imaging, or language translation. Example machine learning applications602include, but are not limited to, voice-based virtual assistants, image or facial recognition algorithms, autonomous navigation, and the software tools that are used to train the machine learning models used by the machine learning applications602. Hardware acceleration for the machine learning application602can be enabled via a machine learning framework604. The machine learning framework604can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework604, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. 
Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework604. Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN). The machine learning framework604can also provide primitives to implement basic linear algebra subprograms performed by many machine-learning algorithms, such as matrix and vector operations. Examples of a machine learning framework604include, but are not limited to, TensorFlow, TensorRT, PyTorch, MXNet, Caffe, and other high-level machine learning frameworks. The machine learning framework604can process input data received from the machine learning application602and generate the appropriate input to a compute framework606. The compute framework606can abstract the underlying instructions provided to the GPGPU driver608to enable the machine learning framework604to take advantage of hardware acceleration via the GPGPU hardware610without requiring the machine learning framework604to have intimate knowledge of the architecture of the GPGPU hardware610. Additionally, the compute framework606can enable hardware acceleration for the machine learning framework604across a variety of types and generations of the GPGPU hardware610. Exemplary compute frameworks606include the CUDA compute framework and associated machine learning libraries, such as the CUDA Deep Neural Network (cuDNN) library. The machine learning software stack600can also include communication libraries or frameworks to facilitate multi-GPU and multi-node compute.

GPGPU Machine Learning Acceleration

FIG.7illustrates a general-purpose graphics processing unit700, which may be the parallel processor200ofFIG.2Aor the parallel processor(s)112ofFIG.1. The general-purpose graphics processing unit (GPGPU)700may be configured to provide support for hardware acceleration of primitives provided by a machine learning framework to accelerate the processing of the types of computational workloads associated with training deep neural networks. Additionally, the GPGPU700can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks. Primitives are also supported to accelerate inference operations for deployed neural networks. The GPGPU700includes a host interface702to enable a connection with a host processor. The host interface702may be a PCI Express interface. However, the host interface can also be a vendor-specific communications interface or communications fabric. The GPGPU700receives commands from the host processor and uses a global scheduler704to distribute execution threads associated with those commands to a set of processing clusters706A-706H. The processing clusters706A-706H share a cache memory708. The cache memory708can serve as a higher-level cache for cache memories within the processing clusters706A-706H. The illustrated processing clusters706A-706H may correspond with processing clusters214A-214N as inFIG.2A. The GPGPU700includes memory714A-714B coupled with the processing clusters706A-706H via a set of memory controllers712A-712B. The memory714A-714B can include various types of memory devices including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory.
The memory714A-714B may also include 3D stacked memory, including but not limited to high bandwidth memory (HBM). Each of the processing clusters706A-706H may include a set of graphics multiprocessors, such as the graphics multiprocessor234ofFIG.2D, graphics multiprocessor325ofFIG.3A, graphics multiprocessor350ofFIG.3B, or may include a multi-core group365A-365N as inFIG.3C. The graphics multiprocessors of the compute cluster include multiple types of integer and floating-point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, at least a subset of the floating-point units in each of the processing clusters706A-706H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating-point units can be configured to perform 64-bit floating point operations. Multiple instances of the GPGPU700can be configured to operate as a compute cluster. The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. For example, the multiple instances of the GPGPU700communicate over the host interface702. In one embodiment the GPGPU700includes an I/O hub709that couples the GPGPU700with a GPU link710that enables a direct connection to other instances of the GPGPU. The GPU link710may be coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU700. Optionally, the GPU link710couples with a high-speed interconnect to transmit data to and receive data from other GPGPUs or parallel processors. The multiple instances of the GPGPU700may be located in separate data processing systems and communicate via a network device that is accessible via the host interface702. The GPU link710may be configured to enable a connection to a host processor in addition to or as an alternative to the host interface702. While the illustrated configuration of the GPGPU700can be configured to train neural networks, an alternate configuration of the GPGPU700can be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration, the GPGPU700includes fewer of the processing clusters706A-706H relative to the training configuration. Additionally, memory technology associated with the memory714A-714B may differ between inferencing and training configurations. In one embodiment, the inferencing configuration of the GPGPU700can support inferencing specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks. FIG.8illustrates a multi-GPU computing system800. The multi-GPU computing system800can include a processor802coupled to multiple GPGPUs806A-806D via a host interface switch804. The host interface switch804may be a PCI express switch device that couples the processor802to a PCI express bus over which the processor802can communicate with the set of GPGPUs806A-806D. Each of the multiple GPGPUs806A-806D can be an instance of the GPGPU700ofFIG.7. The GPGPUs806A-806D can interconnect via a set of high-speed point to point GPU to GPU links816. The high-speed GPU to GPU links can connect to each of the GPGPUs806A-806D via a dedicated GPU link, such as the GPU link710as inFIG.7.
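As a non-limiting illustration of the kind of low-precision arithmetic an inferencing configuration may accelerate, the following NumPy sketch quantizes floating-point values to 8-bit integers and computes an integer dot product; the symmetric scaling scheme and array sizes are assumptions chosen for illustration and do not describe any particular instruction:

    import numpy as np

    def quantize(x, scale):
        # Map float values to int8 using a simple symmetric scale (illustrative scheme).
        return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

    weights = np.random.randn(256).astype(np.float32)
    activations = np.random.randn(256).astype(np.float32)

    w_scale = np.abs(weights).max() / 127
    a_scale = np.abs(activations).max() / 127
    w_q = quantize(weights, w_scale)
    a_q = quantize(activations, a_scale)

    # 8-bit integer dot product accumulated at higher precision, then rescaled to float.
    acc = np.dot(w_q.astype(np.int32), a_q.astype(np.int32))
    approx = acc * w_scale * a_scale
    print(float(np.dot(weights, activations)), float(approx))  # the two values should be close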
The P2P GPU links816enable direct communication between each of the GPGPUs806A-806D without requiring communication over the host interface bus to which the processor802is connected. With GPU-to-GPU traffic directed to the P2P GPU links, the host interface bus remains available for system memory access or to communicate with other instances of the multi-GPU computing system800, for example, via one or more network devices. While inFIG.8the GPGPUs806A-806D connect to the processor802via the host interface switch804, the processor802may alternatively include direct support for the P2P GPU links816and connect directly to the GPGPUs806A-806D. In one embodiment the P2P GPU links816enable the multi-GPU computing system800to operate as a single logical GPU.
Machine Learning Neural Network Implementations
The computing architecture described herein can be configured to perform the types of parallel processing that are particularly suited for training and deploying neural networks for machine learning. A neural network can be generalized as a network of functions having a graph relationship. As is well-known in the art, there are a variety of types of neural network implementations used in machine learning. One exemplary type of neural network is the feedforward network, as previously described. A second exemplary type of neural network is the Convolutional Neural Network (CNN). A CNN is a specialized feedforward neural network for processing data having a known, grid-like topology, such as image data. Accordingly, CNNs are commonly used for computer vision and image recognition applications, but they also may be used for other types of pattern recognition such as speech and language processing. The nodes in the CNN input layer are organized into a set of “filters” (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed by two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first function to the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various color components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network. Recurrent neural networks (RNNs) are a family of feedforward neural networks that include feedback connections between layers. RNNs enable modeling of sequential data by sharing parameter data across different parts of the neural network. The architecture for an RNN includes cycles. The cycles represent the influence of a present value of a variable on its own value at a future time, as at least a portion of the output data from the RNN is used as feedback for processing subsequent input in a sequence. This feature makes RNNs particularly useful for language processing due to the variable nature in which language data can be composed.
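The feedback behavior of an RNN can be illustrated with the following minimal NumPy sketch, in which a hidden state is computed from the current input and the previous state; the parameter matrices and input sequence are random placeholders rather than trained values:

    import numpy as np

    rng = np.random.default_rng(1)
    U = rng.normal(scale=0.1, size=(16, 8))   # input-to-hidden parameter matrix
    W = rng.normal(scale=0.1, size=(16, 16))  # hidden-to-hidden (recurrent) parameter matrix

    def rnn_step(x_t, s_prev):
        # One time step: the new state depends on the current input and the previous state.
        return np.tanh(U @ x_t + W @ s_prev)

    state = np.zeros(16)                      # initial hidden state
    sequence = rng.normal(size=(5, 8))        # five illustrative input vectors
    for x_t in sequence:                      # the feedback carries a 'memory' of earlier inputs
        state = rnn_step(x_t, state)
    print(state.shape)                        # (16,)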
The figures described below present exemplary feedforward, CNN, and RNN networks, as well as describe a general process for respectively training and deploying each of those types of networks. It will be understood that these descriptions are exemplary and non-limiting as to any specific embodiment described herein and the concepts illustrated can be applied generally to deep neural networks and machine learning techniques in general. The exemplary neural networks described above can be used to perform deep learning. Deep learning is machine learning using deep neural networks. The deep neural networks used in deep learning are artificial neural networks composed of multiple hidden layers, as opposed to shallow neural networks that include only a single hidden layer. Deeper neural networks are generally more computationally intensive to train. However, the additional hidden layers of the network enable multistep pattern recognition that results in reduced output error relative to shallow machine learning techniques. Deep neural networks used in deep learning typically include a front-end network to perform feature recognition coupled to a back-end network which represents a mathematical model that can perform operations (e.g., object classification, speech recognition, etc.) based on the feature representation provided to the model. Deep learning enables machine learning to be performed without requiring hand-crafted feature engineering to be performed for the model. Instead, deep neural networks can learn features based on statistical structure or correlation within the input data. The learned features can be provided to a mathematical model that can map detected features to an output. The mathematical model used by the network is generally specialized for the specific task to be performed, and different models will be used to perform different tasks. Once the neural network is structured, a learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights within the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. An input vector is presented to the network for processing. The output of the network is compared to the desired output using a loss function and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards until each neuron has an associated error value which roughly represents its contribution to the original output. The network can then learn from those errors using an algorithm, such as the stochastic gradient descent algorithm, to update the weights of the neural network. FIG.9A-9Billustrate an exemplary convolutional neural network.FIG.9Aillustrates various layers within a CNN. As shown inFIG.9A, an exemplary CNN used to model image processing can receive input902describing the red, green, and blue (RGB) components of an input image. The input902can be processed by multiple convolutional layers (e.g., convolutional layer904, convolutional layer906). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers908. Neurons in a fully connected layer have full connections to all activations in the previous layer, as previously described for a feedforward network. The output from the fully connected layers908can be used to generate an output result from the network.
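The convolution operation described above, in which an input array and a convolution kernel produce a feature map, can be illustrated for a single channel with the following NumPy sketch; as is conventional for CNN implementations, the kernel is applied without flipping, and all sizes are illustrative assumptions:

    import numpy as np

    def conv2d_single(input_image, kernel):
        # Valid-mode 2D convolution of one channel; the output is the feature map.
        kh, kw = kernel.shape
        oh = input_image.shape[0] - kh + 1
        ow = input_image.shape[1] - kw + 1
        feature_map = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                region = input_image[i:i + kh, j:j + kw]   # local receptive field
                feature_map[i, j] = np.sum(region * kernel)
        return feature_map

    image = np.random.rand(8, 8)          # illustrative single-channel input
    kernel = np.random.rand(3, 3)         # parameters adapted by the training process
    print(conv2d_single(image, kernel).shape)  # (6, 6) feature map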
The activations within the fully connected layers908can be computed using matrix multiplication instead of convolution. Not all CNN implementations make use of fully connected layers908. For example, in some implementations the convolutional layer906can generate output for the CNN. The convolutional layers are sparsely connected, which differs from the traditional neural network configuration found in the fully connected layers908. Traditional neural network layers are fully connected, such that every output unit interacts with every input unit. However, the convolutional layers are sparsely connected because the output of the convolution of a field is input (instead of the respective state value of each of the nodes in the field) to the nodes of the subsequent layer, as illustrated. The kernels associated with the convolutional layers perform convolution operations, the output of which is sent to the next layer. The dimensionality reduction performed within the convolutional layers is one aspect that enables the CNN to scale to process large images. FIG.9Billustrates exemplary computation stages within a convolutional layer of a CNN. Input to a convolutional layer912of a CNN can be processed in three stages of a convolutional layer914. The three stages can include a convolution stage916, a detector stage918, and a pooling stage920. The convolutional layer914can then output data to a successive convolutional layer. The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate a classification value for the input to the CNN. The convolution stage916performs several convolutions in parallel to produce a set of linear activations. The convolution stage916can include an affine transformation, which is any transformation that can be specified as a linear transformation plus a translation. Affine transformations include rotations, translations, scaling, and combinations of these transformations. The convolution stage computes the output of functions (e.g., neurons) that are connected to specific regions in the input, which can be determined as the local region associated with the neuron. The neurons compute a dot product between the weights of the neurons and the region in the local input to which the neurons are connected. The output from the convolution stage916defines a set of linear activations that are processed by successive stages of the convolutional layer914. The linear activations can be processed by a detector stage918. In the detector stage918, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the nonlinear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. One particular type is the rectified linear unit (ReLU), which uses an activation function defined as ƒ(x)=max(0, x), such that the activation is thresholded at zero. The pooling stage920uses a pooling function that replaces the output of the convolutional layer906with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature.
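The detector and pooling stages can be illustrated with the following NumPy sketch, which applies the ReLU function ƒ(x)=max(0, x) to a feature map and then replaces each 2×2 neighborhood with its maximum; the 2×2 max-pooling window is an illustrative choice rather than a requirement of any embodiment:

    import numpy as np

    def relu(feature_map):
        # Detector stage: elementwise f(x) = max(0, x).
        return np.maximum(feature_map, 0.0)

    def max_pool2x2(feature_map):
        # Pooling stage: replace each 2x2 neighborhood with its maximum value.
        h, w = feature_map.shape
        h2, w2 = h // 2, w // 2
        trimmed = feature_map[:h2 * 2, :w2 * 2]
        return trimmed.reshape(h2, 2, w2, 2).max(axis=(1, 3))

    linear_activations = np.random.randn(6, 6)     # output of the convolution stage
    detected = relu(linear_activations)            # activations thresholded at zero
    pooled = max_pool2x2(detected)                 # summary statistic of nearby outputs
    print(pooled.shape)                            # (3, 3)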
Various types of pooling functions can be used during the pooling stage920, including max pooling, average pooling, and L2-norm pooling. Additionally, some CNN implementations do not include a pooling stage. Instead, such implementations substitute an additional convolution stage having an increased stride relative to previous convolution stages. The output from the convolutional layer914can then be processed by the next layer922. The next layer922can be an additional convolutional layer or one of the fully connected layers908. For example, the first convolutional layer904ofFIG.9Acan output to the second convolutional layer906, while the second convolutional layer can output to a first layer of the fully connected layers908. FIG.10illustrates an exemplary recurrent neural network1000. In a recurrent neural network (RNN), the previous state of the network influences the output of the current state of the network. RNNs can be built in a variety of ways using a variety of functions. The use of RNNs generally revolves around using mathematical models to predict the future based on a prior sequence of inputs. For example, an RNN may be used to perform statistical language modeling to predict an upcoming word given a previous sequence of words. The illustrated RNN1000can be described as having an input layer1002that receives an input vector, hidden layers1004to implement a recurrent function, a feedback mechanism1005to enable a ‘memory’ of previous states, and an output layer1006to output a result. The RNN1000operates based on time-steps. The state of the RNN at a given time step is influenced based on the previous time step via the feedback mechanism1005. For a given time step, the state of the hidden layers1004is defined by the previous state and the input at the current time step. An initial input (x1) at a first time step can be processed by the hidden layer1004. A second input (x2) can be processed by the hidden layer1004using state information that is determined during the processing of the initial input (x1). A given state can be computed as s_t=ƒ(Ux_t+Ws_(t−1)), where U and W are parameter matrices. The function ƒ is generally a nonlinearity, such as the hyperbolic tangent function (Tanh) or a variant of the rectifier function ƒ(x)=max(0, x). However, the specific mathematical function used in the hidden layers1004can vary depending on the specific implementation details of the RNN1000. In addition to the basic CNN and RNN networks described, acceleration for variations on those networks may be enabled. One example RNN variant is the long short-term memory (LSTM) RNN. LSTM RNNs are capable of learning long-term dependencies that may be necessary for processing longer sequences of language. A variant on the CNN is a convolutional deep belief network, which has a structure similar to a CNN and is trained in a manner similar to a deep belief network. A deep belief network (DBN) is a generative neural network that is composed of multiple layers of stochastic (random) variables. DBNs can be trained layer-by-layer using greedy unsupervised learning. The learned weights of the DBN can then be used to pre-train neural networks by determining an optimal initial set of weights for the neural network. In further embodiments, acceleration for reinforcement learning is enabled. In reinforcement learning, an artificial agent learns by interacting with its environment. The agent is configured to optimize certain objectives to maximize cumulative rewards.
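Although the passage above does not name a particular reinforcement learning algorithm, the idea of an agent that interacts with an environment to maximize cumulative reward can be illustrated with a minimal tabular Q-learning sketch; the toy environment, reward structure, and hyperparameters are assumptions chosen only for illustration:

    import numpy as np

    rng = np.random.default_rng(3)
    num_states, num_actions = 3, 2
    Q = np.zeros((num_states, num_actions))     # the agent's estimate of cumulative reward
    alpha, gamma, epsilon = 0.1, 0.9, 0.3       # learning rate, discount, exploration rate

    def step(state, action):
        # Toy environment (an assumption): moving 'right' from the next-to-last state
        # yields a reward of 1; every other transition yields 0.
        next_state = min(state + 1, num_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if (state == num_states - 2 and action == 1) else 0.0
        return next_state, reward

    for episode in range(500):
        state = 0
        for t in range(10):
            # Epsilon-greedy interaction: the agent balances exploration against
            # exploiting its current reward estimates.
            action = rng.integers(num_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
            next_state, reward = step(state, action)
            # Update toward the maximum expected cumulative reward of the next state.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state
    print(np.argmax(Q[:2], axis=1))   # states 0 and 1 learn to prefer action 1 ('right')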
FIG.11illustrates training and deployment of a deep neural network. Once a given network has been structured for a task, the neural network is trained using a training dataset1102. Various training frameworks1104have been developed to enable hardware acceleration of the training process. For example, the machine learning framework604ofFIG.6may be configured as a training framework604. The training framework604can hook into an untrained neural network1106and enable the untrained neural net to be trained using the parallel processing resources described herein to generate a trained neural net1108. To start the training process, the initial weights may be chosen randomly or by pre-training using a deep belief network. The training cycle can then be performed in either a supervised or unsupervised manner. Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset1102includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework1104can adjust the weights that control the untrained neural network1106. The training framework1104can provide tools to monitor how well the untrained neural network1106is converging towards a model suitable for generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy associated with a trained neural net1108. The trained neural network1108can then be deployed to implement any number of machine learning operations to generate an inference result1114based on input of new data1112. Unsupervised learning is a learning method in which the network attempts to train itself using unlabeled data. Thus, for unsupervised learning the training dataset1102will include input data without any associated output data. The untrained neural network1106can learn groupings within the unlabeled input and can determine how individual inputs are related to the overall dataset. Unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network1108capable of performing operations useful in reducing the dimensionality of data. Unsupervised training can also be used to perform anomaly detection, which allows the identification of data points in an input dataset that deviate from the normal patterns of the data. Variations on supervised and unsupervised training may also be employed. Semi-supervised learning is a technique in which the training dataset1102includes a mix of labeled and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is continuously used to further train the model. Incremental learning enables the trained neural network1108to adapt to the new data1112without forgetting the knowledge instilled within the network during initial training. Whether supervised or unsupervised, the training process for particularly deep neural networks may be too computationally intensive for a single compute node.
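Deployment of a trained network to generate an inference result from new data can be sketched as follows (Python with PyTorch; the model shape and the commented-out checkpoint path are illustrative assumptions, not part of any described embodiment):

    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    # In practice the weights would come from the training process, for example:
    # model.load_state_dict(torch.load("trained_net.pt"))

    model.eval()                               # switch the network to inference behavior
    new_data = torch.randn(1, 16)              # new, previously unseen input
    with torch.no_grad():                      # no error backpropagation at inference time
        inference_result = model(new_data).argmax(dim=1)
    print(inference_result)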
Instead of using a single compute node, a distributed network of computational nodes can be used to accelerate the training process. FIG.12Ais a block diagram illustrating distributed learning. Distributed learning is a training model that uses multiple distributed computing nodes to perform supervised or unsupervised training of a neural network. The distributed computational nodes can each include one or more host processors and one or more of the general-purpose processing nodes, such as the highly parallel general-purpose graphics processing unit700as inFIG.7. As illustrated, distributed learning can be performed with model parallelism1202, data parallelism1204, or a combination of model and data parallelism1206. In model parallelism1202, different computational nodes in a distributed system can perform training computations for different parts of a single network. For example, each layer of a neural network can be trained by a different processing node of the distributed system. The benefits of model parallelism include the ability to scale to particularly large models. Splitting the computations associated with different layers of the neural network enables the training of very large neural networks in which the weights of all layers would not fit into the memory of a single computational node. In some instances, model parallelism can be particularly useful in performing unsupervised training of large neural networks. In data parallelism1204, the different nodes of the distributed network have a complete instance of the model and each node receives a different portion of the data. The results from the different nodes are then combined. While different approaches to data parallelism are possible, data parallel training approaches all require a technique of combining results and synchronizing the model parameters between nodes. Exemplary approaches to combining data include parameter averaging and update-based data parallelism. Parameter averaging trains each node on a subset of the training data and sets the global parameters (e.g., weights, biases) to the average of the parameters from each node. Parameter averaging uses a central parameter server that maintains the parameter data. Update-based data parallelism is similar to parameter averaging except that instead of transferring parameters from the nodes to the parameter server, the updates to the model are transferred. Additionally, update-based data parallelism can be performed in a decentralized manner, where the updates are compressed and transferred between nodes. Combined model and data parallelism1206can be implemented, for example, in a distributed system in which each computational node includes multiple GPUs. Each node can have a complete instance of the model, with separate GPUs within each node used to train different portions of the model. Distributed training has increased overhead relative to training on a single machine. However, the parallel processors and GPGPUs described herein can each implement various techniques to reduce the overhead of distributed training, including techniques to enable high bandwidth GPU-to-GPU data transfer and accelerated remote data synchronization. FIG.12Bis a block diagram illustrating a programmable network interface1210and data processing unit. The programmable network interface1210is a programmable network engine that can be used to accelerate network-based compute tasks within a distributed environment.
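Before turning toFIG.12B, the parameter-averaging and update-based approaches to data parallelism described above can be illustrated with the following NumPy sketch; the number of nodes and the synthetic weights and updates are assumptions chosen only for illustration:

    import numpy as np

    rng = np.random.default_rng(2)
    num_nodes = 4

    # Each node holds a complete instance of the model (here, one weight matrix)
    # and trains on its own portion of the data, so the copies drift apart slightly.
    global_weights = rng.normal(size=(8, 8))
    node_weights = [global_weights + 0.01 * rng.normal(size=(8, 8)) for _ in range(num_nodes)]

    # Parameter averaging: the parameter server sets the global parameters to the
    # mean of the parameters reported by each node.
    global_weights = np.mean(node_weights, axis=0)

    # Update-based data parallelism instead averages the updates (e.g., gradients)
    # produced by each node and applies them to the global parameters.
    node_updates = [rng.normal(scale=0.001, size=(8, 8)) for _ in range(num_nodes)]
    global_weights -= np.mean(node_updates, axis=0)
    print(global_weights.shape)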
The programmable network interface1210can couple with a host system via host interface1270. The programmable network interface1210can be used to accelerate network or storage operations for CPUs or GPUs of the host system. The host system can be, for example, a node of a distributed learning system used to perform distributed training, for example, as shown inFIG.12A. The host system can also be a data center node within a data center. In one embodiment, access to remote storage containing model data can be accelerated by the programmable network interface1210. For example, the programmable network interface1210can be configured to present remote storage devices as local storage devices to the host system. The programmable network interface1210can also accelerate remote direct memory access (RDMA) operations performed between GPUs of the host system with GPUs of remote systems. In one embodiment, the programmable network interface1210can enable storage functionality such as, but not limited to, NVMe-oF. The programmable network interface1210can also accelerate encryption, data integrity, compression, and other operations for remote storage on behalf of the host system, allowing remote storage to approach the latencies of storage devices that are directly attached to the host system. The programmable network interface1210can also perform resource allocation and management on behalf of the host system. Storage security operations can be offloaded to the programmable network interface1210and performed in concert with the allocation and management of remote storage resources. Network-based operations to manage access to the remote storage that would otherwise be performed by a processor of the host system can instead be performed by the programmable network interface1210. In one embodiment, network and/or data security operations can be offloaded from the host system to the programmable network interface1210. Data center security policies for a data center node can be handled by the programmable network interface1210instead of the processors of the host system. For example, the programmable network interface1210can detect and mitigate an attempted network-based attack (e.g., DDoS) on the host system, preventing the attack from compromising the availability of the host system. The programmable network interface1210can include a system on a chip (SoC1220) that executes an operating system via multiple processor cores1222. The processor cores1222can include general-purpose processor (e.g., CPU) cores. In one embodiment the processor cores1222can also include one or more GPU cores. The SoC1220can execute instructions stored in a memory device1240. A storage device1250can store local operating system data. The storage device1250and memory device1240can also be used to cache remote data for the host system. Network ports1260A-1260B enable a connection to a network or fabric and facilitate network access for the SoC1220and, via the host interface1270, for the host system. The programmable network interface1210can also include an I/O interface1275, such as a USB interface. The I/O interface1275can be used to couple external devices to the programmable network interface1210or as a debug interface. The programmable network interface1210also includes a management interface1230that enables software on the host device to manage and configure the programmable network interface1210and/or SoC1220.
In one embodiment the programmable network interface1210may also include one or more accelerators or GPUs1245to accept offload of parallel compute tasks from the SoC1220, host system, or remote systems coupled via the network ports1260A-1260B.
Exemplary Machine Learning Applications
Machine learning can be applied to solve a variety of technological problems, including but not limited to computer vision, autonomous driving and navigation, speech recognition, and language processing. Computer vision has traditionally been one of the most active research areas for machine learning applications. Applications of computer vision range from reproducing human visual abilities, such as recognizing faces, to creating new categories of visual abilities. For example, computer vision applications can be configured to recognize sound waves from the vibrations induced in objects visible in a video. Parallel processor accelerated machine learning enables computer vision applications to be trained using significantly larger training datasets than previously feasible and enables inferencing systems to be deployed using low-power parallel processors. Parallel processor accelerated machine learning has autonomous driving applications including lane and road sign recognition, obstacle avoidance, navigation, and driving control. Accelerated machine learning techniques can be used to train driving models based on datasets that define the appropriate responses to specific training input. The parallel processors described herein can enable rapid training of the increasingly complex neural networks used for autonomous driving solutions and enable the deployment of low-power inferencing processors in a mobile platform suitable for integration into autonomous vehicles. Parallel processor accelerated deep neural networks have enabled machine learning approaches to automatic speech recognition (ASR). ASR includes the creation of a function that computes the most probable linguistic sequence given an input acoustic sequence. Accelerated machine learning using deep neural networks has enabled the replacement of the hidden Markov models (HMMs) and Gaussian mixture models (GMMs) previously used for ASR. Parallel processor accelerated machine learning can also be used to accelerate natural language processing. Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to erroneous or unfamiliar input. Exemplary natural language processor applications include automatic machine translation between human languages. The parallel processing platforms used for machine learning can be divided into training platforms and deployment platforms. Training platforms are generally highly parallel and include optimizations to accelerate multi-GPU single node training and multi-node, multi-GPU training. Exemplary parallel processors suited for training include the general-purpose graphics processing unit700ofFIG.7and the multi-GPU computing system800ofFIG.8. In contrast, deployed machine learning platforms generally include lower-power parallel processors suitable for use in products such as cameras, autonomous robots, and autonomous vehicles. Additionally, machine learning techniques can be applied to accelerate or enhance graphics processing activities. For example, a machine learning model can be trained to recognize output generated by a GPU accelerated application and generate an upscaled version of that output.
Such techniques can be applied to accelerate the generation of high-resolution images for a gaming application. Various other graphics pipeline activities can benefit from the use of machine learning. For example, machine learning models can be trained to perform tessellation operations on geometry data to increase the complexity of geometric models, allowing fine-detailed geometry to be automatically generated from geometry of relatively lower detail. FIG.13illustrates an exemplary inferencing system on a chip (SOC)1300suitable for performing inferencing using a trained model. The SOC1300can integrate processing components including a media processor1302, a vision processor1304, a GPGPU1306and a multi-core processor1308. The GPGPU1306may be a GPGPU as described herein, such as the GPGPU700, and the multi-core processor1308may be a multi-core processor described herein, such as the multi-core processors405-406. The SOC1300can additionally include on-chip memory1305that can enable a shared on-chip data pool that is accessible by each of the processing components. The processing components can be optimized for low power operation to enable deployment to a variety of machine learning platforms, including autonomous vehicles and autonomous robots. For example, one implementation of the SOC1300can be used as a portion of the main control system for an autonomous vehicle. Where the SOC1300is configured for use in autonomous vehicles, the SOC is designed and configured for compliance with the relevant functional safety standards of the deployment jurisdiction. During operation, the media processor1302and vision processor1304can work in concert to accelerate computer vision operations. The media processor1302can enable low latency decode of multiple high-resolution (e.g., 4K, 8K) video streams. The decoded video streams can be written to a buffer in the on-chip memory1305. The vision processor1304can then parse the decoded video and perform preliminary processing operations on the frames of the decoded video in preparation for processing the frames using a trained image recognition model. For example, the vision processor1304can accelerate convolution operations for a CNN that is used to perform image recognition on the high-resolution video data, while back-end model computations are performed by the GPGPU1306. The multi-core processor1308can include control logic to assist with sequencing and synchronization of data transfers and shared memory operations performed by the media processor1302and the vision processor1304. The multi-core processor1308can also function as an application processor to execute software applications that can make use of the inferencing compute capability of the GPGPU1306. For example, at least a portion of the navigation and driving logic can be implemented in software executing on the multi-core processor1308. Such software can directly issue computational workloads to the GPGPU1306or the computational workloads can be issued to the multi-core processor1308, which can offload at least a portion of those operations to the GPGPU1306. The GPGPU1306can include compute clusters such as a low power configuration of the processing clusters706A-706H within general-purpose graphics processing unit700. The compute clusters within the GPGPU1306can support instructions that are specifically optimized to perform inferencing computations on a trained neural network.
For example, the GPGPU1306can support instructions to perform low-precision computations such as 8-bit and 4-bit integer vector operations.
Additional System Overview
FIG.14is a block diagram of a processing system1400. The elements ofFIG.14having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. System1400may be used in a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors1402or processor cores1407. The system1400may be a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices such as within Internet-of-things (IoT) devices with wired or wireless connectivity to a local or wide area network. The system1400may be a processing system having components that correspond with those ofFIG.1. For example, in different configurations, processor(s)1402or processor core(s)1407may correspond with processor(s)102ofFIG.1. Graphics processor(s)1408may correspond with parallel processor(s)112ofFIG.1. External graphics processor1418may be one of the add-in device(s)120ofFIG.1. The system1400can include, couple with, or be integrated within: a server-based gaming platform; a game console, including a game and media console; a mobile gaming console, a handheld game console, or an online game console. The system1400may be part of a mobile phone, smart phone, tablet computing device or mobile Internet-connected device such as a laptop with low internal storage capacity. Processing system1400can also include, couple with, or be integrated within: a wearable device, such as a smart watch wearable device; smart eyewear or clothing enhanced with augmented reality (AR) or virtual reality (VR) features to provide visual, audio or tactile outputs to supplement real world visual, audio or tactile experiences or otherwise provide text, audio, graphics, video, holographic images or video, or tactile feedback; other augmented reality (AR) device; or other virtual reality (VR) device. The processing system1400may include or be part of a television or set top box device. The system1400can include, couple with, or be integrated within a self-driving vehicle such as a bus, tractor trailer, car, motor or electric power cycle, plane or glider (or any combination thereof). The self-driving vehicle may use system1400to process the environment sensed around the vehicle. The one or more processors1402may include one or more processor cores1407to process instructions which, when executed, perform operations for system or user software. At least one of the one or more processor cores1407may be configured to process a specific instruction set1409. The instruction set1409may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). One or more processor cores1407may process a different instruction set1409, which may include instructions to facilitate the emulation of other instruction sets. Processor core1407may also include other processing devices, such as a Digital Signal Processor (DSP). The processor1402may include cache memory1404.
Depending on the architecture, the processor1402can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor1402. In some embodiments, the processor1402also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores1407using known cache coherency techniques. A register file1406can be additionally included in processor1402and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor1402. The one or more processor(s)1402may be coupled with one or more interface bus(es)1410to transmit communication signals such as address, data, or control signals between processor1402and other components in the system1400. The interface bus1410, in one of these embodiments, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI express), memory busses, or other types of interface busses. For example, the processor(s)1402may include an integrated memory controller1416and a platform controller hub1430. The memory controller1416facilitates communication between a memory device and other components of the system1400, while the platform controller hub (PCH)1430provides connections to I/O devices via a local I/O bus. The memory device1420can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. The memory device1420can, for example, operate as system memory for the system1400, to store data1422and instructions1421for use when the one or more processors1402executes an application or process. Memory controller1416also couples with an optional external graphics processor1418, which may communicate with the one or more graphics processors1408in processors1402to perform graphics and media operations. In some embodiments, graphics, media, and or compute operations may be assisted by an accelerator1412which is a coprocessor that can be configured to perform a specialized set of graphics, media, or compute operations. For example, the accelerator1412may be a matrix multiplication accelerator used to optimize machine learning or compute operations. The accelerator1412can be a ray-tracing accelerator that can be used to perform ray-tracing operations in concert with the graphics processor1408. In one embodiment, an external accelerator1419may be used in place of or in concert with the accelerator1412. A display device1411may be provided that can connect to the processor(s)1402. The display device1411can be one or more of an internal display device, as in a mobile electronic device or a laptop device or an external display device attached via a display interface (e.g., DisplayPort, etc.). The display device1411can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications. 
The platform controller hub1430may enable peripherals to connect to memory device1420and processor1402via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller1446, a network controller1434, a firmware interface1428, a wireless transceiver1426, touch sensors1425, a data storage device1424(e.g., non-volatile memory, volatile memory, hard disk drive, flash memory, NAND, 3D NAND, 3D XPoint/Optane, etc.). The data storage device1424can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI express). The touch sensors1425can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver1426can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, 5G, or Long-Term Evolution (LTE) transceiver. The firmware interface1428enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). The network controller1434can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus1410. The audio controller1446may be a multi-channel high definition audio controller. In some of these embodiments the system1400includes an optional legacy I/O controller1440for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub1430can also connect to one or more Universal Serial Bus (USB) controllers1442that connect input devices, such as keyboard and mouse1443combinations, a camera1444, or other USB input devices. It will be appreciated that the system1400shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller1416and platform controller hub1430may be integrated into a discrete external graphics processor, such as the external graphics processor1418. The platform controller hub1430and/or memory controller1416may be external to the one or more processor(s)1402. For example, the system1400can include an external memory controller1416and platform controller hub1430, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s)1402. For example, circuit boards (“sleds”) on which components such as CPUs, memory, and other components are placed can be designed for increased thermal performance. Processing components such as the processors may be located on a top side of a sled while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
A data center can utilize a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted-pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center may, in use, pool resources, such as memory, accelerators (e.g., GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local. A power supply or source can provide voltage and/or current to system1400or any component or system described herein. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can come from a renewable energy (e.g., solar power) source. In one example, the power source includes a DC power source, such as an external AC to DC converter. A power source or power supply may also include wireless charging hardware to charge via proximity to a charging field. The power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source. FIG.15A-15Cillustrate computing systems and graphics processors. The elements ofFIG.15A-15Chaving the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. FIG.15Ais a block diagram of a processor1500, which may be a variant of one of the processors1402and may be used in place of one of those. Therefore, the disclosure of any features in combination with the processor1500herein also discloses a corresponding combination with the processor(s)1402, but is not limited to such. The processor1500may have one or more processor cores1502A-1502N, an integrated memory controller1514, and an integrated graphics processor1508. Where an integrated graphics processor1508is excluded, the system that includes the processor will include a graphics processor device within a system chipset or coupled via a system bus. Processor1500can include additional cores up to and including additional core1502N represented by the dashed lined boxes. Each of processor cores1502A-1502N includes one or more internal cache units1504A-1504N. In some embodiments each processor core1502A-1502N also has access to one or more shared cache units1506. The internal cache units1504A-1504N and shared cache units1506represent a cache memory hierarchy within the processor1500. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units1506and1504A-1504N.
The processor1500may also include a set of one or more bus controller units1516and a system agent core1510. The one or more bus controller units1516manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core1510provides management functionality for the various processor components. The system agent core1510may include one or more integrated memory controllers1514to manage access to various external memory devices (not shown). For example, one or more of the processor cores1502A-1502N may include support for simultaneous multi-threading. The system agent core1510includes components for coordinating and operating cores1502A-1502N during multi-threaded processing. System agent core1510may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores1502A-1502N and graphics processor1508. The processor1500may additionally include graphics processor1508to execute graphics processing operations. In some of these embodiments, the graphics processor1508couples with the set of shared cache units1506, and the system agent core1510, including the one or more integrated memory controllers1514. The system agent core1510may also include a display controller1511to drive graphics processor output to one or more coupled displays. The display controller1511may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor1508. A ring-based interconnect unit1512may be used to couple the internal components of the processor1500. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some of these embodiments with a ring-based interconnect1512, the graphics processor1508couples with the ring-based interconnect1512via an I/O link1513. The exemplary I/O link1513represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module1518, such as an eDRAM module. Optionally, each of the processor cores1502A-1502N and graphics processor1508can use embedded memory modules1518as a shared Last Level Cache. The processor cores1502A-1502N may, for example, be homogenous cores executing the same instruction set architecture. Alternatively, the processor cores1502A-1502N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores1502A-1502N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. The processor cores1502A-1502N may be heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. As another example, the processor cores1502A-1502N are heterogeneous in terms of computational capability. Additionally, processor1500can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components. FIG.15Bis a block diagram of hardware logic of a graphics processor core1519, according to some embodiments described herein. 
The graphics processor core1519, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core1519is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core1519can include a fixed function block1530coupled with multiple sub-cores1521A-1521F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic. The fixed function block1530may include a geometry/fixed function pipeline1531that can be shared by all sub-cores in the graphics processor core1519, for example, in lower performance and/or lower power graphics processor implementations. The geometry/fixed function pipeline1531may include a 3D fixed function pipeline (e.g., 3D pipeline1612as inFIG.16Adescribed below), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers (e.g., unified return buffer1718inFIG.17, as described below). The fixed function block1530may also include a graphics SoC interface1532, a graphics microcontroller1533, and a media pipeline1534. The graphics SoC interface1532provides an interface between the graphics processor core1519and other processor cores within a system on a chip integrated circuit. The graphics microcontroller1533is a programmable sub-processor that is configurable to manage various functions of the graphics processor core1519, including thread dispatch, scheduling, and pre-emption. The media pipeline1534(e.g., media pipeline1616ofFIG.16AandFIG.17) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. The media pipeline1534implements media operations via requests to compute or sampling logic within the sub-cores1521A-1521F. The SoC interface1532may enable the graphics processor core1519to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface1532can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core1519and CPUs within the SoC. The SoC interface1532can also implement power management controls for the graphics processor core1519and enable an interface between a clock domain of the graphics processor core1519and other clock domains within the SoC. Optionally, the SoC interface1532enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline1534, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline1531, geometry and fixed function pipeline1537) when graphics processing operations are to be performed. The graphics microcontroller1533can be configured to perform various scheduling and management tasks for the graphics processor core1519.
In one configuration the graphics microcontroller1533can, for example, perform graphics and/or compute workload scheduling on the various graphics parallel engines within execution unit (EU) arrays1522A-1522F,1524A-1524F within the sub-cores1521A-1521F. In this workload scheduling, host software executing on a CPU core of an SoC including the graphics processor core1519can submit workloads to one of multiple graphic processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. Optionally, the graphics microcontroller1533can also facilitate low-power or idle states for the graphics processor core1519, providing the graphics processor core1519with the ability to save and restore registers within the graphics processor core1519across low-power state transitions independently from the operating system and/or graphics driver software on the system. The graphics processor core1519may have more than or fewer than the illustrated sub-cores1521A-1521F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core1519can also include shared function logic1535, shared and/or cache memory1536, a geometry/fixed function pipeline1537, as well as additional fixed function logic1538to accelerate various graphics and compute processing operations. The shared function logic1535can include logic units associated with the shared function logic1720ofFIG.17(e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each N sub-cores within the graphics processor core1519. The shared and/or cache memory1536can be a last-level cache for the set of N sub-cores1521A-1521F within the graphics processor core1519, and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline1537can be included instead of the geometry/fixed function pipeline1531within the fixed function block1530and can include the same or similar logic units. The graphics processor core1519may include additional fixed function logic1538that can include various fixed function acceleration logic for use by the graphics processor core1519. Optionally, the additional fixed function logic1538includes an additional geometry pipeline for use in position only shading. In position-only shading, two geometry pipelines exist, the full geometry pipeline within the geometry/fixed function pipeline1538,1531, and a cull pipeline, which is an additional geometry pipeline which may be included within the additional fixed function logic1538. For example, the cull pipeline may be a trimmed down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, the cull pipeline logic within the additional fixed function logic1538can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attribute of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. 
The cull pipeline can use the generated critical results to compute visibility information for all the triangles without regard to whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip the culled triangles to shade only the visible triangles that are finally passed to the rasterization phase. Optionally, the additional fixed function logic1538can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing. Within each graphics sub-core1521A-1521F, a set of execution resources is included that may be used to perform graphics, media, and compute operations in response to requests by the graphics pipeline, media pipeline, or shader programs. The graphics sub-cores1521A-1521F include multiple EU arrays1522A-1522F,1524A-1524F, thread dispatch and inter-thread communication (TD/IC) logic1523A-1523F, a 3D (e.g., texture) sampler1525A-1525F, a media sampler1526A-1526F, a shader processor1527A-1527F, and shared local memory (SLM)1528A-1528F. The EU arrays1522A-1522F,1524A-1524F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. The TD/IC logic1523A-1523F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D sampler1525A-1525F can read texture or other 3D graphics-related data into memory. The 3D sampler can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media sampler1526A-1526F can perform similar read operations based on the type and format associated with media data. For example, each graphics sub-core1521A-1521F can alternatively include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores1521A-1521F can make use of shared local memory1528A-1528F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory.
FIG.15Cis a block diagram of a general-purpose graphics processing unit (GPGPU)1570that can be configured as a graphics processor, e.g., the graphics processor1508, and/or compute accelerator, according to embodiments described herein. The GPGPU1570can interconnect with host processors (e.g., one or more CPU(s)1546) and memory1571,1572via one or more system and/or memory busses. Memory1571may be system memory that can be shared with the one or more CPU(s)1546, while memory1572is device memory that is dedicated to the GPGPU1570. For example, components within the GPGPU1570and device memory1572may be mapped into memory addresses that are accessible to the one or more CPU(s)1546. Access to memory1571and1572may be facilitated via a memory controller1568. The memory controller1568may include an internal direct memory access (DMA) controller1569or can include logic to perform operations that would otherwise be performed by a DMA controller.
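The shared local memory1528A-1528F described above is conceptually similar to the per-thread-group shared memory exposed by general-purpose GPU programming models, and the constant cache1567introduced in the next paragraph plays a role similar to constant memory. The CUDA sketch below is offered only as an analogy, not as the claimed hardware: threads in one thread group cooperate through a __shared__ scratch buffer while reading a coefficient that remains constant for the duration of the kernel.

```
// CUDA analogy for a thread group using a common pool of on-chip memory
// (__shared__) and for constant data that does not change during a kernel
// launch (__constant__). Illustrative only; not the patented design.
#include <cstdio>
#include <cuda_runtime.h>

__constant__ float kScale;                      // constant for the whole launch

__global__ void scaledBlockSum(const float* in, float* out, int n) {
    __shared__ float slm[256];                  // on-chip memory shared by the group
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    slm[tid] = (i < n) ? in[i] * kScale : 0.0f; // each thread loads and scales one element
    __syncthreads();                            // make all writes visible to the group
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) slm[tid] += slm[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = slm[0];     // one partial sum per thread group
}

int main() {
    const int n = 1024, threads = 256, blocks = n / threads;
    float scale = 2.0f;
    cudaMemcpyToSymbol(kScale, &scale, sizeof(scale));
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;
    scaledBlockSum<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();
    std::printf("partial sum of block 0 = %f\n", out[0]);   // expect 512
    cudaFree(in); cudaFree(out);
    return 0;
}
```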
The GPGPU1570includes multiple cache memories, including an L2 cache1553, L1 cache1554, an instruction cache1555, and shared memory1556, at least a portion of which may also be partitioned as a cache memory. The GPGPU1570also includes multiple compute units1560A-1560N. Each compute unit1560A-1560N includes a set of vector registers1561, scalar registers1562, vector logic units1563, and scalar logic units1564. The compute units1560A-1560N can also include local shared memory1565and a program counter1566. The compute units1560A-1560N can couple with a constant cache1567, which can be used to store constant data, which is data that will not change during the run of a kernel or shader program that executes on the GPGPU1570. The constant cache1567may be a scalar data cache and cached data can be fetched directly into the scalar registers1562. During operation, the one or more CPU(s)1546can write commands into registers or memory in the GPGPU1570that has been mapped into an accessible address space. The command processors1557can read the commands from registers or memory and determine how those commands will be processed within the GPGPU1570. A thread dispatcher1558can then be used to dispatch threads to the compute units1560A-1560N to perform those commands. Each compute unit1560A-1560N can execute threads independently of the other compute units. Additionally, each compute unit1560A-1560N can be independently configured for conditional computation and can conditionally output the results of computation to memory. The command processors1557can interrupt the one or more CPU(s)1546when the submitted commands are complete.
FIG.16A-16Cillustrate block diagrams of additional graphics processor and compute accelerator architectures provided by embodiments described herein, e.g., in accordance withFIG.15A-15C. The elements ofFIG.16A-16Chaving the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.
FIG.16Ais a block diagram of a graphics processor1600, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores, or other semiconductor devices such as, but not limited to, memory devices or network interfaces. The graphics processor1600may be a variant of the graphics processor1508and may be used in place of the graphics processor1508. Therefore, the disclosure of any features in combination with the graphics processor1508herein also discloses a corresponding combination with the graphics processor1600, but is not limited to such. The graphics processor may communicate via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. Graphics processor1600may include a memory interface1614to access memory. Memory interface1614can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory. Optionally, graphics processor1600also includes a display controller1602to drive display output data to a display device1618. Display controller1602includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device1618can be an internal or external display device.
In one embodiment the display device1618is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. Graphics processor1600may include a video codec engine1606to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, H.265/HEVC, Alliance for Open Media (AOMedia) VP8, VP9, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats. Graphics processor1600may include a block image transfer (BLIT) engine1603to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, alternatively, 2D graphics operations may be performed using one or more components of graphics processing engine (GPE)1610. In some embodiments, GPE1610is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations. GPE1610may include a 3D pipeline1612for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline1612includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media subsystem1615. While 3D pipeline1612can be used to perform media operations, an embodiment of GPE1610also includes a media pipeline1616that is specifically used to perform media operations, such as video post-processing and image enhancement. Media pipeline1616may include fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of video codec engine1606. Media pipeline1616may additionally include a thread spawning unit to spawn threads for execution on 3D/Media subsystem1615. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media subsystem1615. The 3D/Media subsystem1615may include logic for executing threads spawned by 3D pipeline1612and media pipeline1616. The pipelines may send thread execution requests to 3D/Media subsystem1615, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. The 3D/Media subsystem1615may include one or more internal caches for thread instructions and data. Additionally, the 3D/Media subsystem1615may also include shared memory, including registers and addressable memory, to share data between threads and to store output data. FIG.16Billustrates a graphics processor1620, being a variant of the graphics processor1600and may be used in place of the graphics processor1600and vice versa. Therefore, the disclosure of any features in combination with the graphics processor1600herein also discloses a corresponding combination with the graphics processor1620, but is not limited to such. The graphics processor1620has a tiled architecture, according to embodiments described herein. 
The graphics processor1620may include a graphics processing engine cluster1622having multiple instances of the graphics processing engine1610ofFIG.16Awithin a graphics engine tile1610A-1610D. Each graphics engine tile1610A-1610D can be interconnected via a set of tile interconnects1623A-1623F. Each graphics engine tile1610A-1610D can also be connected to a memory module or memory device1626A-1626D via memory interconnects1625A-1625D. The memory devices1626A-1626D can use any graphics memory technology. For example, the memory devices1626A-1626D may be graphics double data rate (GDDR) memory. The memory devices1626A-1626D may be high-bandwidth memory (HBM) modules that can be on-die with their respective graphics engine tile1610A-1610D. The memory devices1626A-1626D may be stacked memory devices that can be stacked on top of their respective graphics engine tile1610A-1610D. Each graphics engine tile1610A-1610D and associated memory1626A-1626D may reside on separate chiplets, which are bonded to a base die or base substrate, as described in further detail inFIG.24B-24D. The graphics processor1620may be configured with a non-uniform memory access (NUMA) system in which memory devices1626A-1626D are coupled with associated graphics engine tiles1610A-1610D. A given memory device may be accessed by graphics engine tiles other than the tile to which it is directly connected. However, access latency to the memory devices1626A-1626D may be lowest when accessing a local tile. In one embodiment, a cache coherent NUMA (ccNUMA) system is enabled that uses the tile interconnects1623A-1623F to enable communication between cache controllers within the graphics engine tiles1610A-1610D to keep a consistent memory image when more than one cache stores the same memory location. The graphics processing engine cluster1622can connect with an on-chip or on-package fabric interconnect1624. In one embodiment the fabric interconnect1624includes a network processor, network on a chip (NoC), or another switching processor to enable the fabric interconnect1624to act as a packet switched fabric interconnect that switches data packets between components of the graphics processor1620. The fabric interconnect1624can enable communication between graphics engine tiles1610A-1610D and components such as the video codec engine1606and one or more copy engines1604. The copy engines1604can be used to move data out of, into, and between the memory devices1626A-1626D and memory that is external to the graphics processor1620(e.g., system memory). The fabric interconnect1624can also be used to interconnect the graphics engine tiles1610A-1610D. The graphics processor1620may optionally include a display controller1602to enable a connection with an external display device1618. The graphics processor may also be configured as a graphics or compute accelerator. In the accelerator configuration, the display controller1602and display device1618may be omitted. The graphics processor1620can connect to a host system via a host interface1628. The host interface1628can enable communication between the graphics processor1620, system memory, and/or other system components. The host interface1628can be, for example, a PCI express bus or another type of host system interface. For example, the host interface1628may be an NVLink or NVSwitch interface. The host interface1628and fabric interconnect1624can cooperate to enable multiple instances of the graphics processor1620to act as a single logical device.
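The NUMA arrangement described above, in which each graphics engine tile1610A-1610D has lowest-latency access to its locally attached memory device1626A-1626D but can still reach memory attached to other tiles, has a rough software-visible analogue in multi-device GPU systems. The CUDA sketch below uses standard peer-access calls only to illustrate that analogy; it is not the patented interconnect, and the two CUDA devices merely stand in for two tiles.

```
// Multi-GPU peer access used only as an analogy to tile-local vs. remote
// memory: device 1's allocation is "local" to device 1 and "remote" to
// device 0, which can still reach it once peer access is enabled.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount < 2) {
        std::printf("analogy needs at least two devices\n");
        return 0;
    }
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);   // can device 0 reach device 1's memory?
    std::printf("peer access 0 -> 1 supported: %d\n", canAccess);
    if (canAccess) {
        float* remote = nullptr;
        cudaSetDevice(1);
        cudaMalloc(&remote, 4 * sizeof(float));  // memory "local" to device 1
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);        // map the remote memory for device 0
        // Kernels launched on device 0 could now dereference 'remote' directly,
        // at higher latency than device 0's own memory, mirroring the NUMA
        // behavior of a tile accessing a non-local memory device.
        cudaDeviceDisablePeerAccess(1);
        cudaSetDevice(1);
        cudaFree(remote);
    }
    return 0;
}
```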
Cooperation between the host interface1628and fabric interconnect1624can also enable the individual graphics engine tiles1610A-1610D to be presented to the host system as distinct logical graphics devices.
FIG.16Cillustrates a compute accelerator1630, according to embodiments described herein. The compute accelerator1630can include architectural similarities with the graphics processor1620ofFIG.16Band is optimized for compute acceleration. A compute engine cluster1632can include a set of compute engine tiles1640A-1640D that include execution logic that is optimized for parallel or vector-based general-purpose compute operations. The compute engine tiles1640A-1640D may not include fixed function graphics processing logic, although in some embodiments one or more of the compute engine tiles1640A-1640D can include logic to perform media acceleration. The compute engine tiles1640A-1640D can connect to memory1626A-1626D via memory interconnects1625A-1625D. The memory1626A-1626D and memory interconnects1625A-1625D may use similar technology to that in graphics processor1620, or can be different. The compute engine tiles1640A-1640D can also be interconnected via a set of tile interconnects1623A-1623F and may be connected with and/or interconnected by a fabric interconnect1624. In one embodiment the compute accelerator1630includes a large L3 cache1636that can be configured as a device-wide cache. The compute accelerator1630can also connect to a host processor and memory via a host interface1628in a similar manner as the graphics processor1620ofFIG.16B. The compute accelerator1630can also include an integrated network interface1642. In one embodiment the network interface1642includes a network processor and controller logic that enables the compute engine cluster1632to communicate over a physical layer interconnect1644without requiring data to traverse memory of a host system. In one embodiment, one of the compute engine tiles1640A-1640D is replaced by network processor logic and data to be transmitted or received via the physical layer interconnect1644may be transmitted directly to or from memory1626A-1626D. Multiple instances of the compute accelerator1630may be joined via the physical layer interconnect1644into a single logical device. Alternatively, the various compute engine tiles1640A-1640D may be presented as distinct network accessible compute accelerator devices.
Graphics Processing Engine
FIG.17is a block diagram of a graphics processing engine1710of a graphics processor in accordance with some embodiments. The graphics processing engine (GPE)1710may be a version of the GPE1610shown inFIG.16A, and may also represent a graphics engine tile1610A-1610D ofFIG.16B. The elements ofFIG.17having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. For example, the 3D pipeline1612and media pipeline1616ofFIG.16Aare also illustrated inFIG.17. The media pipeline1616is optional in some embodiments of the GPE1710and may not be explicitly included within the GPE1710. For example and in at least one embodiment, a separate media and/or image processor is coupled to the GPE1710. GPE1710may couple with or include a command streamer1703, which provides a command stream to the 3D pipeline1612and/or media pipelines1616.
Alternatively or additionally, the command streamer1703may be directly coupled to a unified return buffer1718. The unified return buffer1718may be communicatively coupled to a graphics core array1714. Optionally, the command streamer1703is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. The command streamer1703may receive commands from the memory and send the commands to 3D pipeline1612and/or media pipeline1616. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline1612and media pipeline1616. The ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline1612can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline1612and/or image data and memory objects for the media pipeline1616. The 3D pipeline1612and media pipeline1616process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to the graphics core array1714. The graphics core array1714may include one or more blocks of graphics cores (e.g., graphics core(s)1715A, graphics core(s)1715B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic. In various embodiments the 3D pipeline1612can include fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array1714. The graphics core array1714provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics core(s)1715A-1715B of the graphics core array1714includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders. The graphics core array1714may include execution logic to perform media functions, such as video and/or image processing. The execution units may include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s)1407ofFIG.14or core1502A-1502N as inFIG.15A. Threads executing on the graphics core array1714can output generated data to memory in a unified return buffer (URB)1718. The URB1718can store data for multiple threads. The URB1718may be used to send data between different threads executing on the graphics core array1714. The URB1718may additionally be used for synchronization between threads on the graphics core array1714and fixed function logic within the shared function logic1720. Optionally, the graphics core array1714may be scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE1710.
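A host-side model can make the command streamer1703and its ring buffer easier to picture. The sketch below is hypothetical C++ written only for illustration (the Command, RingBuffer, and commandStreamer names are invented): directives are pushed into a fixed-size ring and then fetched and routed to either the 3D pipeline or the media pipeline, as described above. It is not the actual command encoding or dispatch logic.

```
// Conceptual model of a command streamer pulling directives from a ring
// buffer and routing them to the 3D or media pipeline. Names are hypothetical.
#include <cstdio>
#include <cstdint>
#include <vector>

enum class Target : uint8_t { Pipeline3D, PipelineMedia };
struct Command { Target target; uint32_t opcode; uint64_t dataRef; };

class RingBuffer {
public:
    explicit RingBuffer(size_t capacity) : slots_(capacity) {}
    bool push(const Command& c) {
        if (count_ == slots_.size()) return false;        // ring full
        slots_[(head_ + count_++) % slots_.size()] = c;
        return true;
    }
    bool pop(Command& c) {
        if (count_ == 0) return false;                     // ring empty
        c = slots_[head_];
        head_ = (head_ + 1) % slots_.size();
        --count_;
        return true;
    }
private:
    std::vector<Command> slots_;
    size_t head_ = 0, count_ = 0;
};

void commandStreamer(RingBuffer& ring) {
    Command c;
    while (ring.pop(c)) {                                  // fetch from the ring
        const char* dst = (c.target == Target::Pipeline3D) ? "3D" : "media";
        std::printf("dispatch opcode 0x%x to %s pipeline (data @ 0x%llx)\n",
                    c.opcode, dst, (unsigned long long)c.dataRef);
    }
}

int main() {
    RingBuffer ring(8);
    ring.push({Target::Pipeline3D, 0x10, 0x1000});         // e.g. vertex data reference
    ring.push({Target::PipelineMedia, 0x20, 0x2000});      // e.g. image data reference
    commandStreamer(ring);
    return 0;
}
```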
The execution resources may be dynamically scalable, such that execution resources may be enabled or disabled as needed. The graphics core array1714couples with shared function logic1720that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic1720are hardware logic units that provide specialized supplemental functionality to the graphics core array1714. In various embodiments, shared function logic1720includes but is not limited to sampler1721, math1722, and inter-thread communication (ITC)1723logic. Additionally, one or more cache(s)1725within the shared function logic1720may be implemented. A shared function is implemented at least where the demand for a given specialized function is insufficient to justify inclusion within the graphics core array1714. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic1720and shared among the execution resources within the graphics core array1714. The precise set of functions that are shared between the graphics core array1714and included within the graphics core array1714varies across embodiments. Specific shared functions within the shared function logic1720that are used extensively by the graphics core array1714may be included within shared function logic1716within the graphics core array1714. Optionally, the shared function logic1716within the graphics core array1714can include some or all logic within the shared function logic1720. All logic elements within the shared function logic1720may be duplicated within the shared function logic1716of the graphics core array1714. Alternatively, the shared function logic1720is excluded in favor of the shared function logic1716within the graphics core array1714.
Execution Units
FIG.18A-18Billustrate thread execution logic1800including an array of processing elements employed in a graphics processor core according to embodiments described herein. The elements ofFIG.18A-18Bhaving the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such.FIG.18A-18Billustrate an overview of thread execution logic1800, which may be representative of hardware logic illustrated with each sub-core1521A-1521F ofFIG.15B.FIG.18Ais representative of an execution unit within a general-purpose graphics processor, whileFIG.18Bis representative of an execution unit that may be used within a compute accelerator. As illustrated inFIG.18A, thread execution logic1800may include a shader processor1802, a thread dispatcher1804, instruction cache1806, a scalable execution unit array including a plurality of graphics execution units1808A-1808N, a sampler1810, shared local memory1811, a data cache1812, and a data port1814. Optionally, the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of graphics execution units1808A,1808B,1808C,1808D, through1808N-1and1808N) based on the computational requirements of a workload. The included components may be interconnected via an interconnect fabric that links to each of the components.
Thread execution logic1800may include one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache1806, data port1814, sampler1810, and graphics execution units1808A-1808N. Each execution unit (e.g.1808A) may be a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units1808A-1808N is scalable to include any number of individual execution units. In some embodiments the graphics execution units1808A-1808N may be primarily used to execute shader programs. A shader processor1802can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher1804. The thread dispatcher may include logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution units in the graphics execution units1808A-1808N. For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing. Optionally, the thread dispatcher1804can also process runtime thread spawning requests from the executing shader programs. In some embodiments, the graphics execution units1808A-1808N may support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders) and general-purpose processing (e.g., compute and media shaders). Each of the graphics execution units1808A-1808N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units1808A-1808N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader, such as vertex shader2107illustrated inFIG.21. Various embodiments can employ execution using Single Instruction Multiple Thread (SIMT) as an alternative to SIMD, or in addition to SIMD. Reference to a SIMD core or operation can apply also to SIMT or apply to SIMD in combination with SIMT. Each execution unit in graphics execution units1808A-1808N operates on arrays of data elements. The number of data elements is the “execution size,” or the number of channels for the instruction.
An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs), Floating-Point Units (FPUs), or other logic units (e.g., tensor cores, ray tracing cores, etc.) for a particular graphics processor. Additionally, the graphics execution units1808A-1808N may support integer and floating-point data types. The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible. Optionally, one or more execution units can be combined into a fused graphics execution unit1809A-1809N having thread control logic (1807A-1807N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit1809A-1809N includes at least two execution units. For example, fused execution unit1809A includes a first EU1808A, second EU1808B, and thread control logic1807A that is common to the first EU1808A and the second EU1808B. The thread control logic1807A controls threads executed on the fused graphics execution unit1809A, allowing each EU within the fused execution units1809A-1809N to execute using a common instruction pointer register. One or more internal instruction caches (e.g.,1806) are included in the thread execution logic1800to cache thread instructions for the execution units. One or more data caches (e.g.,1812) may be included in the thread execution logic1800to cache thread data during thread execution. Threads executing on the execution logic1800can also store explicitly managed data in the shared local memory1811. A sampler1810may be included to provide texture sampling for 3D operations and media sampling for media operations. Sampler1810may include specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit. During execution, the graphics and media pipelines send thread initiation requests to thread execution logic1800via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor1802is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.).
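The packed-data interpretations listed above (a 256-bit vector processed as four 64-bit, eight 32-bit, sixteen 16-bit, or thirty-two 8-bit elements) can be pictured with a simple host-side union. This is purely illustrative; hardware SIMD registers are not C unions, and the addDW() helper is a software stand-in for an execution unit operating across eight 32-bit channels.

```
// Illustrative only: one 256-bit bit pattern viewed as several packed element
// widths. Type punning through a union is shown for clarity of layout.
#include <cstdint>
#include <cstdio>

union Vec256 {
    uint64_t qw[4];    // Quad-Word elements
    uint32_t dw[8];    // Double Word elements
    uint16_t w[16];    // Word elements
    uint8_t  b[32];    // Byte elements
};

// SIMD-style add of two vectors interpreted as eight 32-bit channels.
Vec256 addDW(const Vec256& a, const Vec256& b) {
    Vec256 r{};
    for (int ch = 0; ch < 8; ++ch) r.dw[ch] = a.dw[ch] + b.dw[ch];
    return r;
}

int main() {
    Vec256 a{}, b{};
    for (int ch = 0; ch < 8; ++ch) { a.dw[ch] = ch; b.dw[ch] = 100; }
    Vec256 r = addDW(a, b);
    // The same 256 bits can be reinterpreted as bytes of the packed result.
    std::printf("channel 3 = %u; its low bytes: %u %u\n",
                r.dw[3], (unsigned)r.b[12], (unsigned)r.b[13]);
    return 0;
}
```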
A pixel shader or fragment shader may calculate the values of the various vertex attributes that are to be interpolated across the rasterized object. The pixel processor logic within the shader processor1802may then execute an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor1802dispatches threads to an execution unit (e.g.,1808A) via thread dispatcher1804. Shader processor1802may use texture sampling logic in the sampler1810to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing. In addition, the data port1814may provide a memory access mechanism for the thread execution logic1800to output processed data to memory for further processing on a graphics processor output pipeline. The data port1814may include or couple to one or more cache memories (e.g., data cache1812) to cache data for memory access via the data port1814. Optionally, the execution logic1800can also include a ray tracer1805that can provide ray tracing acceleration functionality. The ray tracer1805can support a ray tracing instruction set that includes instructions/functions for ray generation. The ray tracing instruction set can be similar to or different from the ray-tracing instruction set supported by the ray tracing cores372inFIG.3C.
FIG.18Billustrates exemplary internal details of an execution unit1808. A graphics execution unit1808can include an instruction fetch unit1837, a general register file array (GRF)1824, an architectural register file array (ARF)1826, a thread arbiter1822, a send unit1830, a branch unit1832, a set of SIMD floating point units (FPUs)1834, and optionally a set of dedicated integer SIMD ALUs1835. The GRF1824and ARF1826include the set of general register files and architecture register files associated with each simultaneous hardware thread that may be active in the graphics execution unit1808. Per thread architectural state may be maintained in the ARF1826, while data used during thread execution is stored in the GRF1824. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF1826. The graphics execution unit1808may have an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). The architecture may have a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads. The number of logical threads that may be executed by the graphics execution unit1808is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread. Optionally, the graphics execution unit1808can co-issue multiple instructions, which may each be different instructions. The thread arbiter1822of the graphics execution unit1808can dispatch the instructions to one of the send unit1830, branch unit1832, or SIMD FPU(s)1834for execution. Each execution thread can access 128 general-purpose registers within the GRF1824, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements.
Each execution unit thread may have access to 4 Kbytes within the GRF1824, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. The graphics execution unit1808may be partitioned into seven hardware threads that can independently perform computational operations, although the number of threads per execution unit can also vary according to embodiments, for example, up to 16 hardware threads may be supported. In an exemplary embodiment, in which seven threads may access 4 Kbytes, the GRF1824can store a total of 28 Kbytes. In another exemplary embodiment, where 16 threads may access 4 Kbytes, the GRF1824can store a total of 64 Kbytes. The number of threads per execution unit is, however, not limited to those examples and may be more or less than the given numbers. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures. Additionally or alternatively, memory operations, sampler operations, and other longer-latency system communications may be dispatched via “send” instructions that are executed by the message passing send unit1830. Branch instructions may be dispatched to a dedicated branch unit1832to facilitate SIMD divergence and eventual convergence. The graphics execution unit1808may include one or more SIMD floating point units (FPU(s))1834to perform floating-point operations. The FPU(s)1834may also support integer computation. In some instances, the FPU(s)1834can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. Optionally, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. A set of 8-bit integer SIMD ALUs1835may also be present, and may be specifically optimized to perform operations associated with machine learning computations. Optionally, arrays of multiple instances of the graphics execution unit1808can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. The execution unit1808may execute instructions across a plurality of execution channels. In addition, each thread executed on the graphics execution unit1808may be executed on a different channel.
FIG.19illustrates a further exemplary execution unit1900. The elements ofFIG.19having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. The execution unit1900may be a compute-optimized execution unit for use in, for example, a compute engine tile1640A-1640D as inFIG.16C, but is not limited as such. The execution unit1900may also be used in a graphics engine tile1610A-1610D as inFIG.16B. The execution unit1900may include a thread control unit1901, a thread state unit1902, an instruction fetch/prefetch unit1903, and an instruction decode unit1904. The execution unit1900may additionally include a register file1906that stores registers that can be assigned to hardware threads within the execution unit.
The execution unit1900may additionally include a send unit1907and a branch unit1908. The send unit1907and branch unit1908may operate similarly as the send unit1830and a branch unit1832of the graphics execution unit1808ofFIG.18B. The execution unit1900can also include a compute unit1910that includes multiple different types of functional units. The compute unit1910may also include an ALU1911, a systolic array1912, and a math unit1913. The ALU1911includes an array of arithmetic logic units. The ALU1911can be configured to perform 64-bit, 32-bit, and 16-bit integer and floating-point operations across multiple processing lanes and data channels and for multiple hardware and/or software threads. The ALU1911can perform integer and floating-point operations simultaneously (e.g., within the same clock cycle). The systolic array1912includes a W wide and D deep network of data processing units that can be used to perform vector or other data-parallel operations in a systolic manner. The systolic array1912can be configured to perform various matrix operations, including dot product, outer product, and general matrix-matrix multiplication (GEMM) operations. The systolic array1912may support 16-bit floating point operations, as well as 8-bit, 4-bit, 2-bit, and binary integer operations. The systolic array1912may be configured to accelerate machine learning operations. The systolic array1912can be configured with support for the bfloat16 (brain floating point) 16-bit floating point format or a tensor float 32-bit floating point format (TF32) that have different numbers of mantissa and exponent bits relative to Institute of Electrical and Electronics Engineers (IEEE) 754 formats. FP64 formats can also be supported. In one embodiment, the systolic array1912includes hardware to accelerate sparse matrix operations. Multiplication operations for sparse regions of input data can be bypassed without sacrificing throughput. Block sparsity within input matrices can be detected and operations having known output values can be bypassed. In one embodiment, the systolic array1912includes hardware to enable operations on sparse data having a compressed representation. A compressed representation of a sparse matrix stores non-zero values and metadata that defines the position of the non-zero values within the matrix. Exemplary compressed representations include but are not limited to compressed tensor representations such as compressed sparse row (CSR), compressed sparse column (CSC), and compressed sparse fiber (CSF) representations. Support for compressed representations enables operations to be performed on input in a compressed tensor format without requiring the compressed representation to be decompressed or decoded. In such an embodiment, operations can be performed only on non-zero input values and the resulting non-zero output values can be mapped into an output matrix. In some embodiments, hardware support is also provided for machine-specific lossless data compression formats that are used when transmitting data within hardware or across system busses. Such data may be retained in a compressed format for sparse input data and the systolic array1912can use the compression metadata for the compressed data to enable operations to be performed on only non-zero values, or to enable blocks of zero data input to be bypassed for multiply operations. The math unit1913can be configured to perform a specific subset of mathematical operations in a more efficient and lower-power manner than the ALU1911.
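The compressed sparse row (CSR) representation mentioned above can be summarized with a small host-side sketch: only the non-zero values are stored, together with metadata giving their positions, and a matrix-vector product then touches non-zero entries only. The CSRMatrix and spmv() names are hypothetical, and the code is a software illustration of the storage format rather than the systolic array1912hardware.

```
// CSR sketch: non-zero values plus position metadata, with a sparse
// matrix-vector product that skips zero entries entirely.
#include <cstdio>
#include <vector>

struct CSRMatrix {
    int rows = 0, cols = 0;
    std::vector<float> values;    // non-zero values only
    std::vector<int>   colIdx;    // column position of each non-zero value
    std::vector<int>   rowPtr;    // rowPtr[r]..rowPtr[r+1] spans row r's non-zeros
};

// y = A * x, performed only on the stored non-zero entries.
std::vector<float> spmv(const CSRMatrix& A, const std::vector<float>& x) {
    std::vector<float> y(A.rows, 0.0f);
    for (int r = 0; r < A.rows; ++r)
        for (int k = A.rowPtr[r]; k < A.rowPtr[r + 1]; ++k)
            y[r] += A.values[k] * x[A.colIdx[k]];
    return y;
}

int main() {
    // Dense form:  [ 1 0 0 ]
    //              [ 0 0 2 ]
    //              [ 3 0 4 ]
    CSRMatrix A{3, 3, {1, 2, 3, 4}, {0, 2, 0, 2}, {0, 1, 2, 4}};
    std::vector<float> x = {1, 1, 1};
    std::vector<float> y = spmv(A, x);
    std::printf("y = [%g, %g, %g]\n", y[0], y[1], y[2]);   // expect [1, 2, 7]
    return 0;
}
```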
The math unit1913can include math logic found in shared function logic of a graphics processing engine provided by other embodiments described, e.g., the math logic1722of the shared function logic1720ofFIG.17. The math unit1913can be configured to perform 32-bit and 64-bit floating point operations. The thread control unit1901includes logic to control the execution of threads within the execution unit. The thread control unit1901can include thread arbitration logic to start, stop, and preempt execution of threads within the execution unit1900. The thread state unit1902can be used to store thread state for threads assigned to execute on the execution unit1900. Storing the thread state within the execution unit1900enables the rapid pre-emption of threads when those threads become blocked or idle. The instruction fetch/prefetch unit1903can fetch instructions from an instruction cache of higher-level execution logic (e.g., instruction cache1806as inFIG.18A). The instruction fetch/prefetch unit1903can also issue prefetch requests for instructions to be loaded into the instruction cache based on an analysis of currently executing threads. The instruction decode unit1904can be used to decode instructions to be executed by the compute units. The instruction decode unit1904can be used as a secondary decoder to decode complex instructions into constituent micro-operations. The execution unit1900additionally includes a register file1906that can be used by hardware threads executing on the execution unit1900. Registers in the register file1906can be divided across the logic used to execute multiple simultaneous threads within the compute unit1910of the execution unit1900. The number of logical threads that may be executed by the graphics execution unit1900is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread. The size of the register file1906can vary across embodiments based on the number of supported hardware threads. Register renaming may be used to dynamically allocate registers to hardware threads.
FIG.20is a block diagram illustrating graphics processor instruction formats2000. The graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments the graphics processor instruction formats2000described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed. Thus, a single instruction may cause hardware to perform multiple micro-operations. The graphics processor execution units as described herein may natively support instructions in a 128-bit instruction format2010. A 64-bit compacted instruction format2030is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format2010provides access to all instruction options, while some options and operations are restricted in the 64-bit format2030. The native instructions available in the 64-bit format2030vary by embodiment. The instruction is compacted in part using a set of index values in an index field2013.
The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format2010. Other sizes and formats of instruction can be used. For each format, instruction opcode2012defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. Instruction control field2014may enable control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format2010an exec-size field2016limits the number of data channels that will be executed in parallel. An exec-size field2016may not be available for use in the 64-bit compact instruction format2030. Some execution unit instructions have up to three operands including two source operands, src02020, src12022, and one destination2018. The execution units may support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC22024), where the instruction opcode2012determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction. The 128-bit instruction format2010may include an access/address mode field2026specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. The access/address mode field2026may also specify an access mode for the instruction. The access mode may be used to define a data access alignment for the instruction. Access modes including a 16-byte aligned access mode and a 1-byte aligned access mode may be supported, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands. The address mode portion of the access/address mode field2026may determine whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction. Instructions may be grouped based on opcode2012bit-fields to simplify Opcode decode2040. For an 8-bit opcode, bits4,5, and6allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. A move and logic opcode group2042may include data movement and logic instructions (e.g., move (mov), compare (cmp)).
Move and logic group2042may share the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group2044(e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group2046includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group2048includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math instruction group2048performs the arithmetic operations in parallel across data channels. The vector math group2050includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. The illustrated opcode decode2040, in one embodiment, can be used to determine which portion of an execution unit will be used to execute a decoded instruction. For example, some instructions may be designated as systolic instructions that will be performed by a systolic array. Other instructions, such as ray-tracing instructions (not shown) can be routed to a ray-tracing core or ray-tracing logic within a slice or partition of execution logic.
Graphics Pipeline
FIG.21is a block diagram of graphics processor2100, according to another embodiment. The elements ofFIG.21having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. The graphics processor2100may include different types of graphics processing pipelines, such as a geometry pipeline2120, a media pipeline2130, a display engine2140, thread execution logic2150, and a render output pipeline2170. Graphics processor2100may be a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor may be controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor2100via a ring interconnect2102. Ring interconnect2102may couple graphics processor2100to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect2102are interpreted by a command streamer2103, which supplies instructions to individual components of the geometry pipeline2120or the media pipeline2130. Command streamer2103may direct the operation of a vertex fetcher2105that reads vertex data from memory and executes vertex-processing commands provided by command streamer2103. The vertex fetcher2105may provide vertex data to a vertex shader2107, which performs coordinate space transformation and lighting operations on each vertex. Vertex fetcher2105and vertex shader2107may execute vertex-processing instructions by dispatching execution threads to execution units2152A-2152B via a thread dispatcher2131. The execution units2152A-2152B may be an array of vector processors having an instruction set for performing graphics and media operations. The execution units2152A-2152B may have an attached L1 cache2151that is specific for each array or shared between the arrays.
The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions. A geometry pipeline2120may include tessellation components to perform hardware-accelerated tessellation of 3D objects. A programmable hull shader2111may configure the tessellation operations. A programmable domain shader2117may provide back-end evaluation of tessellation output. A tessellator2113may operate at the direction of hull shader2111and contain special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline2120. In addition, if tessellation is not used, tessellation components (e.g., hull shader2111, tessellator2113, and domain shader2117) can be bypassed. The tessellation components can operate based on data received from the vertex shader2107. Complete geometric objects may be processed by a geometry shader2119via one or more threads dispatched to execution units2152A-2152B, or can proceed directly to the clipper2129. The geometry shader may operate on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled the geometry shader2119receives input from the vertex shader2107. The geometry shader2119may be programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled. Before rasterization, a clipper2129processes vertex data. The clipper2129may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. A rasterizer and depth test component2173in the render output pipeline2170may dispatch pixel shaders to convert the geometric objects into per pixel representations. The pixel shader logic may be included in thread execution logic2150. Optionally, an application can bypass the rasterizer and depth test component2173and access un-rasterized vertex data via a stream out unit2123. The graphics processor2100has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units2152A-2152B and associated logic units (e.g., L1 cache2151, sampler2154, texture cache2158, etc.) interconnect via a data port2156to perform memory access and communicate with render output pipeline components of the processor. A sampler2154, caches2151,2158and execution units2152A-2152B each may have separate memory access paths. Optionally, the texture cache2158can also be configured as a sampler cache. The render output pipeline2170may contain a rasterizer and depth test component2173that converts vertex-based objects into an associated pixel-based representation. The rasterizer logic may include a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache2178and depth cache2179are also available in some embodiments. A pixel operations component2177performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g. bit block image transfers with blending) are performed by the 2D engine2141, or substituted at display time by the display controller2143using overlay display planes. A shared L3 cache2175may be available to all graphics components, allowing the sharing of data without the use of main system memory. 
The media pipeline2130may include a media engine2137and a video front-end2134. Video front-end2134may receive pipeline commands from the command streamer2103. The media pipeline2130may include a separate command streamer. Video front-end2134may process media commands before sending the command to the media engine2137. Media engine2137may include thread spawning functionality to spawn threads for dispatch to thread execution logic2150via thread dispatcher2131. The graphics processor2100may include a display engine2140. The display engine2140may be external to processor2100and may couple with the graphics processor via the ring interconnect2102, or some other interconnect bus or fabric. Display engine2140may include a 2D engine2141and a display controller2143. Display engine2140may contain special purpose logic capable of operating independently of the 3D pipeline. Display controller2143may couple with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector. The geometry pipeline2120and media pipeline2130may be configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). Driver software for the graphics processor may translate API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. Support may be provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. Support may also be provided for the Direct3D library from the Microsoft Corporation. A combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.
Graphics Pipeline Programming
FIG.22Ais a block diagram illustrating a graphics processor command format2200used for programming graphics processing pipelines, such as, for example, the pipelines described herein in conjunction withFIG.16A,17,21.FIG.22Bis a block diagram illustrating a graphics processor command sequence2210according to an embodiment. The solid lined boxes inFIG.22Aillustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format2200ofFIG.22Aincludes data fields to identify a client2202, a command operation code (opcode)2204, and data2206for the command. A sub-opcode2205and a command size2208are also included in some commands. Client2202may specify the client unit of the graphics device that processes the command data. A graphics processor command parser may examine the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. The graphics processor client units may include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit may have a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode2204and, if present, sub-opcode2205to determine the operation to perform.
The client unit performs the command using information in data field2206. For some commands an explicit command size2208is expected to specify the size of the command. The command parser may automatically determine the size of at least some of the commands based on the command opcode. Commands may be aligned via multiples of a double word. Other command formats can also be used. The flow diagram inFIG.22Billustrates an exemplary graphics processor command sequence2210. Software or firmware of a data processing system that features an exemplary graphics processor may use a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only and is not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands at least partially concurrently. The graphics processor command sequence2210may begin with a pipeline flush command2212to cause any active graphics pipeline to complete the currently pending commands for the pipeline. Optionally, the 3D pipeline2222and the media pipeline2224may not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked ‘dirty’ can be flushed to memory. Pipeline flush command2212can be used for pipeline synchronization or before placing the graphics processor into a low power state. A pipeline select command2213may be used when a command sequence requires the graphics processor to explicitly switch between pipelines. A pipeline select command2213may be required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. A pipeline flush command2212may be required immediately before a pipeline switch via the pipeline select command2213. A pipeline control command2214may configure a graphics pipeline for operation and may be used to program the 3D pipeline2222and the media pipeline2224. The pipeline control command2214may configure the pipeline state for the active pipeline. The pipeline control command2214may be used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands. Commands related to the return buffer state2216may be used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. The graphics processor may also use one or more return buffers to store output data and to perform cross thread communication. The return buffer state2216may include selecting the size and number of return buffers to use for a set of pipeline operations. The remaining commands in the command sequence differ based on the active pipeline for operations.
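For purposes of illustration, the command framing and sequencing described above can be modeled in software. The following C++ sketch is a simplified, hypothetical model: the enum values, opcode numbers, and field widths are assumptions made for this sketch and do not represent an actual hardware command encoding.

#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative only: names, field widths, and opcode values are assumptions, not a real encoding.
enum class ClientUnit { Memory, Render, TwoD, ThreeD, Media };

struct GfxCommand {
    ClientUnit client;          // routes the command to a client unit (cf. client 2202)
    uint8_t    opcode;          // operation to perform (cf. opcode 2204)
    uint8_t    subOpcode;       // optional refinement of the operation (cf. sub-opcode 2205)
    uint32_t   sizeDwords;      // explicit size when required (cf. command size 2208); 0 = implied by opcode
    std::vector<uint32_t> data; // command payload (cf. data 2206)
};

// Hypothetical parser: examine the client field, determine the command size, and route the command.
void parseAndRoute(const GfxCommand& cmd) {
    // When no explicit size is supplied, derive a size from the opcode/payload (invented rule).
    uint32_t size = cmd.sizeDwords ? cmd.sizeDwords : 1 + static_cast<uint32_t>(cmd.data.size());
    switch (cmd.client) {
        case ClientUnit::ThreeD: std::cout << "route to 3D unit, " << size << " dwords\n"; break;
        case ClientUnit::Media:  std::cout << "route to media unit, " << size << " dwords\n"; break;
        default:                 std::cout << "route to another client unit\n"; break;
    }
}

int main() {
    // A minimal sequence mirroring the flow of FIG. 22B:
    // pipeline flush, pipeline select, pipeline control, return buffer state.
    std::vector<GfxCommand> sequence = {
        {ClientUnit::Render, 0x01, 0, 0, {}},          // pipeline flush (cf. 2212)
        {ClientUnit::Render, 0x02, 0, 0, {0}},         // pipeline select, 3D path (cf. 2213)
        {ClientUnit::ThreeD, 0x03, 0, 0, {0xA, 0xB}},  // pipeline control (cf. 2214)
        {ClientUnit::ThreeD, 0x04, 0, 3, {0x10, 0x4}}, // return buffer state, explicit size (cf. 2216)
    };
    for (const auto& c : sequence) parseAndRoute(c);
    return 0;
}

In this model the parser mirrors the behavior described above: it examines the client field to route the command and derives the command size from the opcode and payload when no explicit size is supplied.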
Based on a pipeline determination2220, the command sequence is tailored to the 3D pipeline2222beginning with the 3D pipeline state2230or the media pipeline2224beginning at the media pipeline state2240. The commands to configure the 3D pipeline state2230include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. The 3D pipeline state2230commands may also be able to selectively disable or bypass certain pipeline elements if those elements will not be used. A 3D primitive2232command may be used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive2232command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive2232command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. The 3D primitive2232command may be used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline2222dispatches shader execution threads to graphics processor execution units. The 3D pipeline2222may be triggered via an execute2234command or event. A register write may trigger command execution. An execution may be triggered via a ‘go’ or ‘kick’ command in the command sequence. Command execution may be triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations. The graphics processor command sequence2210may follow the media pipeline2224path when performing media operations. In general, the specific use and manner of programming for the media pipeline2224depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. The media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. The media pipeline may also include elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives. Media pipeline2224may be configured in a similar manner as the 3D pipeline2222. A set of commands to configure the media pipeline state2240are dispatched or placed into a command queue before the media object commands2242. Commands for the media pipeline state2240may include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. Commands for the media pipeline state2240may also support the use of one or more pointers to “indirect” state elements that contain a batch of state settings.
Media object commands2242may supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. Optionally, all media pipeline states must be valid before issuing a media object command2242. Once the pipeline state is configured and media object commands2242are queued, the media pipeline2224is triggered via an execute command2244or an equivalent execute event (e.g., register write). Output from media pipeline2224may then be post processed by operations provided by the 3D pipeline2222or the media pipeline2224. GPGPU operations may be configured and executed in a similar manner as media operations. Graphics Software Architecture FIG.23illustrates an exemplary graphics software architecture for a data processing system2300. Such a software architecture may include a 3D graphics application2310, an operating system2320, and at least one processor2330. Processor2330may include a graphics processor2332and one or more general-purpose processor core(s)2334. The processor2330may be a variant of the processor1402or any other of the processors described herein. The processor2330may be used in place of the processor1402or any other of the processors described herein. Therefore, the disclosure of any features in combination with the processor1402or any other of the processors described herein also discloses a corresponding combination with the graphics processor2330, but is not limited to such. Moreover, the elements ofFIG.23having the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. The graphics application2310and operating system2320are each executed in the system memory2350of the data processing system. 3D graphics application2310may contain one or more shader programs including shader instructions2312. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) of Direct3D, the OpenGL Shader Language (GLSL), and so forth. The application may also include executable instructions2314in a machine language suitable for execution by the general-purpose processor core2334. The application may also include graphics objects2316defined by vertex data. The operating system2320may be a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system2320can support a graphics API2322such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system2320uses a front-end shader compiler2324to compile any shader instructions2312in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. High-level shaders may be compiled into low-level shaders during the compilation of the 3D graphics application2310. The shader instructions2312may be provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API. User mode graphics driver2326may contain a back-end shader compiler2327to convert the shader instructions2312into a hardware specific representation. 
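Before turning to the OpenGL path, the front-end/back-end compiler split described above can be sketched in software. The types and function names below are hypothetical stand-ins that do not correspond to any real runtime or driver interface, and the stub bodies merely stand in for real compilation.

#include <cstdint>
#include <string>
#include <vector>

// Hypothetical types; an intermediate form (e.g., a SPIR-like binary) and a hardware-specific result.
using IntermediateIL = std::vector<uint32_t>;
using GpuBinary      = std::vector<uint8_t>;
enum class GraphicsApi { Direct3D, OpenGL, Vulkan };

// Stubs standing in for real compilers: the front end lowers high-level source to an intermediate
// form (cf. front-end shader compiler 2324); the back end in the user mode graphics driver produces
// the hardware-specific representation (cf. back-end shader compiler 2327).
IntermediateIL frontEndCompile(const std::string& source) { return IntermediateIL(source.size()); }
GpuBinary      backEndCompile(const IntermediateIL& il)   { return GpuBinary(il.size()); }

GpuBinary buildShader(GraphicsApi api, const std::string& source,
                      const IntermediateIL* precompiled = nullptr) {
    (void)api; // in this sketch the same two stages run for every API; only their location differs
    if (precompiled) {
        // Intermediate form shipped with the application (pre-compiled path): skip the front end.
        return backEndCompile(*precompiled);
    }
    // JIT path: lower the high-level source, then produce the hardware-specific representation.
    return backEndCompile(frontEndCompile(source));
}

int main() {
    GpuBinary jitCompiled = buildShader(GraphicsApi::Direct3D, "/* high-level shader source */");
    IntermediateIL shipped{0x1234u}; // placeholder intermediate form delivered with an application
    GpuBinary fromIL = buildShader(GraphicsApi::Vulkan, "", &shipped);
    return (jitCompiled.empty() || fromIL.empty()) ? 1 : 0; // 0 on the expected path
}

The sketch only captures the division of labor: a front end lowers high-level source to an intermediate form, and the user mode driver's back end converts that form to hardware-specific code, whether the front end runs just-in-time or ahead of time.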
When the OpenGL API is in use, shader instructions2312in the GLSL high-level language are passed to a user mode graphics driver2326for compilation. The user mode graphics driver2326may use operating system kernel mode functions2328to communicate with a kernel mode graphics driver2329. The kernel mode graphics driver2329may communicate with graphics processor2332to dispatch commands and instructions. IP Core Implementations One or more aspects may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as “IP cores,” are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein. FIG.24Ais a block diagram illustrating an IP core development system2400that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system2400may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility2430can generate a software simulation2410of an IP core design in a high-level programming language (e.g., C/C++). The software simulation2410can be used to design, test, and verify the behavior of the IP core using a simulation model2412. The simulation model2412may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design2415can then be created or synthesized from the simulation model2412. The RTL design2415is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design2415, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary. The RTL design2415or equivalent may be further synthesized by the design facility into a hardware model2420, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rdparty fabrication facility2465using non-volatile memory2440(e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection2450or wireless connection2460. The fabrication facility2465may then fabricate an integrated circuit that is based at least in part on the IP core design. 
The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein. FIG.24Billustrates a cross-section side view of an integrated circuit package assembly2470. The integrated circuit package assembly2470illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly2470includes multiple units of hardware logic2472,2474connected to a substrate2480. The logic2472,2474may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic2472,2474can be implemented within a semiconductor die and coupled with the substrate2480via an interconnect structure2473. The interconnect structure2473may be configured to route electrical signals between the logic2472,2474and the substrate2480, and can include interconnects such as, but not limited to bumps or pillars. The interconnect structure2473may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic2472,2474. Optionally, the substrate2480may be an epoxy-based laminate substrate. The substrate2480may also include other suitable types of substrates. The package assembly2470can be connected to other electrical devices via a package interconnect2483. The package interconnect2483may be coupled to a surface of the substrate2480to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module. The units of logic2472,2474may be electrically coupled with a bridge2482that is configured to route electrical signals between the logic2472,2474. The bridge2482may be a dense interconnect structure that provides a route for electrical signals. The bridge2482may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic2472,2474. Although two units of logic2472,2474and a bridge2482are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge2482may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations. FIG.24Cillustrates a package assembly2490that includes multiple units of hardware logic chiplets connected to a substrate2480(e.g., base die). A graphics processing unit, parallel processor, and/or compute accelerator as described herein can be composed from diverse silicon chiplets that are separately manufactured. In this context, a chiplet is an at least partially packaged integrated circuit that includes distinct units of logic that can be assembled with other chiplets into a larger package. A diverse set of chiplets with different IP core logic can be assembled into a single device. Additionally the chiplets can be integrated into a base die or base chiplet using active interposer technology. 
The concepts described herein enable the interconnection and communication between the different forms of IP within the GPU. IP cores can be manufactured using different process technologies and composed during manufacturing, which avoids the complexity of converging multiple IPs, especially on a large SoC with several flavors of IPs, to the same manufacturing process. Enabling the use of multiple process technologies improves the time to market and provides a cost-effective way to create multiple product SKUs. Additionally, the disaggregated IPs are more amenable to being power gated independently; components that are not in use on a given workload can be powered off, reducing overall power consumption. In various embodiments, a package assembly2490can include a fewer or greater number of components and chiplets that are interconnected by a fabric2485or one or more bridges2487. The chiplets within the package assembly2490may have a 2.5D arrangement using Chip-on-Wafer-on-Substrate stacking in which multiple dies are stacked side-by-side on a silicon interposer that includes through-silicon vias (TSVs) to couple the chiplets with the substrate2480, which includes electrical connections to the package interconnect2483. In one embodiment, the silicon interposer is an active interposer2489that includes embedded logic in addition to TSVs. In such an embodiment, the chiplets within the package assembly2490are arranged using 3D face-to-face die stacking on top of the active interposer2489. The active interposer2489can include hardware logic for I/O2491, cache memory2492, and other hardware logic2493, in addition to interconnect fabric2485and a silicon bridge2487. The fabric2485enables communication between the various logic chiplets2472,2474and the logic2491,2493within the active interposer2489. The fabric2485may be an NoC interconnect or another form of packet-switched fabric that switches data packets between components of the package assembly. For complex assemblies, the fabric2485may be a dedicated chiplet that enables communication between the various hardware logic of the package assembly2490. Bridge structures2487within the active interposer2489may be used to facilitate a point-to-point interconnect between, for example, logic or I/O chiplets2474and memory chiplets2475. In some implementations, bridge structures2487may also be embedded within the substrate2480. The hardware logic chiplets can include special purpose hardware logic chiplets2472, logic or I/O chiplets2474, and/or memory chiplets2475. The hardware logic chiplets2472and logic or I/O chiplets2474may be implemented at least partly in configurable logic or fixed-functionality logic hardware and can include one or more portions of any of the processor core(s), graphics processor(s), parallel processors, or other accelerator devices described herein. The memory chiplets2475can be DRAM (e.g., GDDR, HBM) memory or cache (SRAM) memory. Cache memory2492within the active interposer2489(or substrate2480) can act as a global cache for the package assembly2490, part of a distributed global cache, or as a dedicated cache for the fabric2485. Each chiplet can be fabricated as a separate semiconductor die and coupled with a base die that is embedded within or coupled with the substrate2480. The coupling with the substrate2480can be performed via an interconnect structure2473. The interconnect structure2473may be configured to route electrical signals between the various chiplets and logic within the substrate2480.
The interconnect structure2473can include interconnects such as, but not limited to bumps or pillars. In some embodiments, the interconnect structure2473may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic, I/O and memory chiplets. In one embodiment, an additional interconnect structure couples the active interposer2489with the substrate2480. The substrate2480may be an epoxy-based laminate substrate, however, it is not limited to that and the substrate2480may also include other suitable types of substrates. The package assembly2490can be connected to other electrical devices via a package interconnect2483. The package interconnect2483may be coupled to a surface of the substrate2480to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module. A logic or I/O chiplet2474and a memory chiplet2475may be electrically coupled via a bridge2487that is configured to route electrical signals between the logic or I/O chiplet2474and a memory chiplet2475. The bridge2487may be a dense interconnect structure that provides a route for electrical signals. The bridge2487may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic or I/O chiplet2474and a memory chiplet2475. The bridge2487may also be referred to as a silicon bridge or an interconnect bridge. For example, the bridge2487is an Embedded Multi-die Interconnect Bridge (EMIB). Alternatively, the bridge2487may simply be a direct connection from one chiplet to another chiplet. FIG.24Dillustrates a package assembly2494including interchangeable chiplets2495, according to an embodiment. The interchangeable chiplets2495can be assembled into standardized slots on one or more base chiplets2496,2498. The base chiplets2496,2498can be coupled via a bridge interconnect2497, which can be similar to the other bridge interconnects described herein and may be, for example, an EMIB. Memory chiplets can also be connected to logic or I/O chiplets via a bridge interconnect. I/O and logic chiplets can communicate via an interconnect fabric. The base chiplets can each support one or more slots in a standardized format for one of logic or I/O or memory/cache. SRAM and power delivery circuits may be fabricated into one or more of the base chiplets2496,2498, which can be fabricated using a different process technology relative to the interchangeable chiplets2495that are stacked on top of the base chiplets. For example, the base chiplets2496,2498can be fabricated using a larger process technology, while the interchangeable chiplets can be manufactured using a smaller process technology. One or more of the interchangeable chiplets2495may be memory (e.g., DRAM) chiplets. Different memory densities can be selected for the package assembly2494based on the power, and/or performance targeted for the product that uses the package assembly2494. Additionally, logic chiplets with a different number of type of functional units can be selected at time of assembly based on the power, and/or performance targeted for the product. Additionally, chiplets containing IP logic cores of differing types can be inserted into the interchangeable chiplet slots, enabling hybrid processor designs that can mix and match different technology IP blocks. 
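The assembly-time selection described above can be illustrated with a simplified sketch. The chiplet descriptors, power numbers, and greedy selection policy below are assumptions made for illustration only; a real assembly flow would consider many more constraints (thermals, interconnect topology, yield, and so forth).

#include <iostream>
#include <string>
#include <vector>

// Hypothetical descriptor for an interchangeable chiplet option that could occupy a standardized slot.
struct ChipletOption {
    std::string name;   // e.g., "DRAM-high-density", "logic-A"
    double      powerW; // estimated power contribution
    double      perf;   // abstract performance score
};

// For each standardized slot, pick the option with the best performance that still fits within the
// remaining power target (cf. selecting memory density and logic chiplet types at assembly time
// based on the power and/or performance targeted for the product).
std::vector<ChipletOption> populateSlots(const std::vector<std::vector<ChipletOption>>& perSlotOptions,
                                         double powerBudgetW) {
    std::vector<ChipletOption> chosen;
    double used = 0.0;
    for (const auto& options : perSlotOptions) {
        const ChipletOption* best = nullptr;
        for (const auto& opt : options) {
            if (used + opt.powerW <= powerBudgetW && (!best || opt.perf > best->perf)) best = &opt;
        }
        if (!best) best = &options.front(); // fall back to the first option in this sketch
        used += best->powerW;
        chosen.push_back(*best);
    }
    return chosen;
}

int main() {
    std::vector<std::vector<ChipletOption>> slots = {
        {{"DRAM-low-density", 2.0, 1.0}, {"DRAM-high-density", 3.5, 2.0}},
        {{"logic-A", 4.0, 3.0}, {"logic-B", 6.0, 5.0}},
    };
    for (const auto& c : populateSlots(slots, 8.0)) std::cout << c.name << "\n";
    return 0;
}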
Exemplary System on a Chip Integrated Circuit FIG.25-26Billustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. The elements ofFIG.25-26Bhaving the same or similar names as the elements of any other figure herein describe the same elements as in the other figures, can operate or function in a manner similar to that, can comprise the same components, and can be linked to other entities, as those described elsewhere herein, but are not limited to such. FIG.25is a block diagram illustrating an exemplary system on a chip integrated circuit2500that may be fabricated using one or more IP cores. Exemplary integrated circuit2500includes one or more application processor(s)2505(e.g., CPUs), at least one graphics processor2510, which may be a variant of the graphics processor1408,1508,2510, or of any graphics processor described herein and may be used in place of any graphics processor described. Therefore, the disclosure of any features in combination with a graphics processor herein also discloses a corresponding combination with the graphics processor2510, but is not limited to such. The integrated circuit2500may additionally include an image processor2515and/or a video processor2520, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit2500may include peripheral or bus logic including a USB controller2525, UART controller2530, an SPI/SDIO controller2535, and an I2S/I2C controller2540. Additionally, the integrated circuit can include a display device2545coupled to one or more of a high-definition multimedia interface (HDMI) controller2550and a mobile industry processor interface (MIPI) display interface2555. Storage may be provided by a flash memory subsystem2560including flash memory and a flash memory controller. Memory interface may be provided via a memory controller2565for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine2570. FIG.26A-26Bare block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. The graphics processors may be variants of the graphics processor1408,1508,2510, or any other graphics processor described herein. The graphics processors may be used in place of the graphics processor1408,1508,2510, or any other of the graphics processors described herein. Therefore, the disclosure of any features in combination with the graphics processor1408,1508,2510, or any other of the graphics processors described herein also discloses a corresponding combination with the graphics processors ofFIG.26A-26B, but is not limited to such.FIG.26Aillustrates an exemplary graphics processor2610of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment.FIG.26Billustrates an additional exemplary graphics processor2640of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor2610ofFIG.26Ais an example of a low power graphics processor core. Graphics processor2640ofFIG.26Bis an example of a higher performance graphics processor core. 
For example, each of graphics processor2610and graphics processor2640can be a variant of the graphics processor2510ofFIG.25, as mentioned at the outset of this paragraph. As shown inFIG.26A, graphics processor2610includes a vertex processor2605and one or more fragment processor(s)2615A-2615N (e.g.,2615A,2615B,2615C,2615D, through2615N-1, and2615N). Graphics processor2610can execute different shader programs via separate logic, such that the vertex processor2605is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s)2615A-2615N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor2605performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s)2615A-2615N use the primitive and vertex data generated by the vertex processor2605to produce a framebuffer that is displayed on a display device. The fragment processor(s)2615A-2615N may be optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API. Graphics processor2610additionally includes one or more memory management units (MMUs)2620A-2620B, cache(s)2625A-2625B, and circuit interconnect(s)2630A-2630B. The one or more MMU(s)2620A-2620B provide for virtual to physical address mapping for the graphics processor2610, including for the vertex processor2605and/or fragment processor(s)2615A-2615N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s)2625A-2625B. The one or more MMU(s)2620A-2620B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s)2505, image processor2515, and/or video processor2520ofFIG.25, such that each processor2505-2520can participate in a shared or unified virtual memory system. Components of graphics processor2610may correspond with components of other graphics processors described herein. The one or more MMU(s)2620A-2620B may correspond with MMU245ofFIG.2C. Vertex processor2605and fragment processor2615A-2615N may correspond with graphics multiprocessor234. The one or more circuit interconnect(s)2630A-2630B enable graphics processor2610to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments. The one or more circuit interconnect(s)2630A-2630B may correspond with the data crossbar240ofFIG.2C. Further correspondence may be found between analogous components of the graphics processor2610and the various graphics processor architectures described herein. As shownFIG.26B, graphics processor2640includes the one or more MMU(s)2620A-2620B, cache(s)2625A-2625B, and circuit interconnect(s)2630A-2630B of the graphics processor2610ofFIG.26A. Graphics processor2640includes one or more shader cores2655A-2655N (e.g.,2655A,2655B,2655C,2655D,2655E,2655F, through2655N-1, and2655N), which provides for a unified shader core architecture in which a single core or type or core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. 
Additionally, graphics processor2640includes an inter-core task manager2645, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores2655A-2655N and a tiling unit2658to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches. Shader cores2655A-2655N may correspond with, for example, graphics multiprocessor234as inFIG.2D, or graphics multiprocessors325,350ofFIGS.3A and3Brespectively, or multi-core group365A ofFIG.3C. The foregoing description and drawings are to be regarded in an illustrative rather than a restrictive sense. Persons skilled in the art will understand that various modifications and changes may be made to the embodiments described herein without departing from the broader spirit and scope of the features set forth in the appended claims.
Atomic Reduction
According to one embodiment, atomic reduction can be improved by performing it hierarchically (e.g., (i) performing reduction in the execution unit, (ii) performing the next level of reduction in L1, and (iii) performing a third level of reduction in L3). In this manner, both latency and bandwidth can be reduced.
Streaming Buffer
In one embodiment, a streaming buffer can be logically interposed between multiple GPU IP cores (e.g., a media IP core and a GPU shader core). Depending upon the particular implementation, the streaming buffer may be a memory-side cache shared between the IP cores. FIG.27is a block diagram illustrating the bandwidth and latency costs sought to be avoided by various embodiments. In this example, the GPU2700includes a media IP core (e.g., media2710) and a compute core2730(e.g., a GPU, a CPU, an AI core of a GPU), both of which may be involved in the processing of data relating to the same media stream (e.g., a video stream, an audio stream, or a multimedia stream, containing both video and audio tracks). The media stream may be produced by an external device (e.g., an autonomous vehicle sensor (Light Detection and Ranging (LiDAR)), a video camera, a streaming server, or the like) and stored in a memory (e.g., DRAM2740) for processing by various IP cores (e.g., the media IP core2710and the compute core2730). Assuming the end-to-end processing to be performed on the media stream at issue involves sequential processing by the media IP core2710and the compute core2730, the media2710may perform one or more read operations (e.g., read2711) from memory to retrieve a unit of data for processing, process the unit of data, and then perform one or more write operations (e.g., write2712) to store the results of the processing performed by the media IP core2710. For its part, the compute core2730may be reliant on the output of the media IP core2710and may perform one or more read operations (e.g., read2731) to retrieve a unit of data (e.g., data previously processed by the media IP core2710) for processing, process the unit of data, and then perform one or more write operations (e.g., write2711) to store the results of the processing performed by the compute core2730. As will be appreciated by those skilled in the art, in such a scenario in which the compute core2730processes output of the media IP core2710, the write2712to DRAM2740and the read2731from DRAM2740represent unnecessary use of memory bandwidth.
In an effort to reduce such unnecessary memory bandwidth and reduce latency, an intermediate buffer may be interposed between a media IP core and a compute core as described below with reference toFIGS.28A-D. FIGS.28A-Dare block diagrams illustrating use of a streaming buffer2870between a producer IP2860and a consumer IP2880according to an embodiment. In the context of the present example, when pixels (e.g., from video tiles) are written to DRAM2890, which may be a system memory and/or a dedicated graphics memory, to be used as input for processing by the producer IP core2860, the producer IP core2860can write2862its output results to streaming buffer2870, thereby allowing the consumer IP2880to avoid reading from DRAM2890for various usage scenarios. Depending upon the particular implementation, the producer IP2860and/or the consumer IP2880may represent a compute core (e.g., a CPU or a GPU). Alternatively, one or more of the producer IP2860, the streaming buffer2870, and the consumer IP2880may be contained within a GPU. For example, the consumer IP2880may be an AI core (e.g., a shader core of GPU2850). In one embodiment, the producer IP core2860may represent a media IP operable to write2862its output results (e.g., results of media decoding, media encoding, media transcoding, media downscaling, and/or media color space conversion) to streaming buffer2870, thereby allowing the consumer IP2880(which may represent an AI core, such as a GPU shader core) to avoid reading from DRAM2890for various usage scenarios, including performing media analytics processing (e.g., AI inference and the like). A non-limiting example of media analytics processing is described below with reference toFIG.29. Depending upon the particular implementation, the streaming buffer2870may be one or a combination of SRAM, DRAM, and cache, sized to the working set (e.g., an analytical processing unit, such as one or more image frames or a portion of an image frame) of the consumer IP2880. Signaling regarding the availability of data in the streaming buffer2870for processing by the consumer IP2880and signaling regarding consumption of the data by the consumer IP2880may involve the use of empty signal2874and full signal2876originated by the streaming buffer2870or a handshake2875between the producer IP2860and the consumer IP2880. In some embodiments, the streaming buffer2870includes some hardware/software hooks to notify the producer IP2860and the consumer IP2880via the full signal2876and the empty signal2874when the streaming buffer2870is full or empty, respectively. Alternatively, the producer IP2860and the consumer IP2880may implement the handshake2875through the exchange of information or signals. In this manner, the respective IPs can be notified to either consume the data from the streaming buffer2870or fill the streaming buffer2870with data. This proposed arrangement provides lower latency and lower SoC power due to reduced traffic to the DRAM2890. FIG.29is a flow diagram illustrating media analytics processing according to an embodiment.
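Before walking through the flow of FIG.29, the empty/full signaling described above can be modeled in software as a small bounded buffer. The following C++ sketch is a functional model of the synchronization behavior only, not of the hardware mechanism; the class, thread, and signal names are illustrative.

#include <condition_variable>
#include <cstdint>
#include <deque>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Functional model of the streaming buffer's full/empty signaling (cf. full signal 2876 and
// empty signal 2874). Capacity is sized to the consumer's working set; with a capacity of two
// slots this behaves like the double buffer described later with reference to FIGS. 30A-30B.
class StreamingBuffer {
public:
    explicit StreamingBuffer(size_t capacity) : capacity_(capacity) {}

    // Producer side: blocks while the buffer is full (the "full" condition).
    void push(std::vector<uint8_t> portion) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [&] { return slots_.size() < capacity_; });
        slots_.push_back(std::move(portion));
        notEmpty_.notify_one(); // data is now available for the consumer
    }

    // Consumer side: blocks while the buffer is empty (the "empty" condition).
    std::vector<uint8_t> pop() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [&] { return !slots_.empty(); });
        std::vector<uint8_t> portion = std::move(slots_.front());
        slots_.pop_front();
        notFull_.notify_one(); // room is now available for the producer
        return portion;
    }

private:
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
    std::deque<std::vector<uint8_t>> slots_;
    size_t capacity_;
};

int main() {
    StreamingBuffer buf(2); // two slots sized to the consumer's working set
    std::thread producer([&] {    // stands in for the media IP (cf. producer IP 2860)
        for (int i = 0; i < 4; ++i) buf.push(std::vector<uint8_t>(16, static_cast<uint8_t>(i)));
    });
    std::thread consumer([&] {    // stands in for the AI/shader core (cf. consumer IP 2880)
        for (int i = 0; i < 4; ++i) std::cout << "consumed portion " << int(buf.pop()[0]) << "\n";
    });
    producer.join();
    consumer.join();
    return 0;
}

In this model the blocking push and pop calls play the roles of the full and empty signals (or, equivalently, of handshake2875): each side is stalled exactly when the other side must act first.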
In the context of the present example, the processing and decision blocks on the left-hand side of the flow diagram represent processing performed by a producer IP (e.g., producer IP2860, such as a media IP) (and potentially a streaming buffer (e.g., streaming buffer2870)) and the processing blocks on the right-hand side of the flow diagram represent processing performed by a consumer IP (e.g., a CPU, a GPU, or an AI-specific core, such as a shader core, of a GPU (e.g., GPU2850)) (and potentially the streaming buffer2870). For sake of brevity, processing of a single unit of data (e.g., one or more video frames) is shown; however, it is to be understood that the media analytics processing may be repeated for any number of units of data. At block2910, a unit of data is read from memory (e.g., DRAM2890). In one embodiment, a media IP reads a working set (e.g., one or more image frames) from memory, which may be populated from an external source (e.g., one or more closed-circuit television (CCTV) cameras, traffic cameras, and/or online video feeds). At block2920, depending upon the iteration, a first/next portion of the unit of data is processed by the media IP. According to one embodiment, the working set of an AI-specific core is smaller than that of the media IP. For example, a discrete unit of data processed by the media IP may be one or more image frames and the discrete unit processed by the AI-specific core may be a portion of an image frame (e.g., half of an image frame, a line of an image frame, or some other portion of an image frame). According to one embodiment, the image frames may be compressed and stored in a first of multiple potential digital compression and coding formats (e.g., in accordance with one of the Moving Picture Experts Group (MPEG) or Joint Photographic Experts Group (JPEG) standards). The processing performed by the media IP may involve one or more of media decoding, media downscaling, media color space conversion, and/or media transcoding (e.g., to a second of the multiple potential digital compression and coding formats). According to one embodiment, the one or more frames may be processed in raster scan order to facilitate processing by the AI-specific core. In the context of media analytics, the results of media decoding, media downscaling, and media color space conversion performed by a producer IP may be written to the streaming buffer for use by a consumer IP. At block2930, the processed portion of the unit of data is written by the media IP core to an intermediate streaming buffer (e.g., streaming buffer2870) logically interposed between the media IP and the AI-specific core. Depending upon the particular implementation, the portion may be a subset of an image frame (e.g., half of an image frame or one or more lines of an image frame). At block2940, the AI-specific core is notified of the availability of data for processing in the streaming buffer. As those skilled in the art will appreciate, there are a variety of mechanisms by which such a notification may be performed. For example, in one embodiment, the media IP may signal the availability of the data to the AI-specific core via a data available signal of a handshake (e.g., handshake2875) I/O control method that synchronizes the processing of the media IP and the AI-specific core. Alternatively, this portion of the synchronization method may be implemented by the streaming buffer in the form of a full signal (e.g., full signal2876) to the AI-specific core.
At block2950, responsive to the notification of block2940, the available portion of the data is read by the AI-specific core from the streaming buffer. At block2960, the media IP is notified that the data has been read. As those skilled in the art will appreciate, there are a variety of mechanisms by which such a notification may be performed. For example, in one embodiment, the AI-specific core may signal the consumption of the data to the media IP via a ready signal of the handshake I/O control method. Alternatively, this portion of the synchronization method may be implemented by the streaming buffer in the form of an empty signal (e.g., empty signal2874) to the media IP. At block2980, media analytics processing may be performed by the AI-specific core. The media analytics processing may represent any of a broad variety of media analytics. Non-limiting examples of media analytics include real-time video content analysis, video mining, real-time monitoring, video surveillance, object recognition (e.g., recognition of vehicles, license plates, pedestrians, bicycle riders, and the like), vehicle counting, monitoring vehicle traffic, medical image processing, facial recognition, and detecting walking patterns and/or direction of gaze of people. At block2990, the media analytics processing results for the portion of data at issue are written to memory. At decision block2970, a determination is made regarding whether more portions of the unit of data read in block2910are available for processing. If so, media analytics processing continues with the next portion at block2920; otherwise processing of the unit of data read at block2910is complete. While difficult to depict in flow diagram form, those skilled in the art will appreciate that, subject to the particular synchronization method employed, the processing and decision blocks on the left-hand side of the flow diagram and the processing blocks on the right-hand side of the flow diagram may be performed in parallel, thereby reducing overall latency as the AI-specific core need not wait for an entire frame to be processed by the media IP, but rather may begin processing individual portions of the frame (e.g., half a frame or one or more lines of the frame) as the media IP outputs them to the streaming buffer. While the above example is described with reference to media analytics processing involving a media IP as the producer IP and an AI-specific core as the consumer IP, the systems and methods described herein are equally applicable to other configurations and applications. For example, in the context of video recording, a producer IP may output video data to the streaming buffer for use by a media IP that performs media encoding. In the context of screen recording, a display may store a raw frame to the streaming buffer for use by a media IP that performs media encoding. In the context of game streaming, a GPU (e.g., GPU2850) may write data to the streaming buffer for use by a media IP that performs media encoding. In the context of super-resolution-based transcoding, a first media IP may store the results of media decoding to the streaming buffer for processing by a GPU, CPU, or AI-specific core, the output of which is stored to the streaming buffer for further processing by the first media IP or a second media IP that performs media encoding. FIGS.30A-30Bare block diagrams illustrating a streaming buffer implementing a double buffering technique in accordance with an embodiment.
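A minimal, single-threaded sketch of such a ping-pong arrangement follows. In an actual implementation the producer and consumer operate concurrently and the swap is driven by the synchronization signals described above; here the swap is reduced to an index flip, and all names are illustrative.

#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

// Single-threaded model of the write-pointer/read-pointer states of FIGS. 30A-30B:
// the producer fills one slot while the consumer drains the other, then the roles swap.
class DoubleBuffer {
public:
    std::vector<uint8_t>& writeSlot() { return slots_[writeIdx_]; }                 // cf. write pointer 3072
    const std::vector<uint8_t>& readSlot() const { return slots_[1 - writeIdx_]; }  // cf. read pointer 3074
    void swap() { writeIdx_ = 1 - writeIdx_; } // transition between the first and second states
private:
    std::array<std::vector<uint8_t>, 2> slots_; // cf. buffers 3071a-b, sized to the consumer's working set
    int writeIdx_ = 0;
};

int main() {
    DoubleBuffer db;
    for (int portion = 0; portion < 4; ++portion) {
        db.writeSlot() = std::vector<uint8_t>(8, static_cast<uint8_t>(portion)); // media IP writes a portion
        db.swap();                                                                // state transition
        std::cout << "consumer reads portion " << int(db.readSlot()[0]) << "\n";  // shader core reads it
    }
    return 0;
}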
In the context of the present example,FIG.30Ashows a write pointer3072and a read pointer3074of a streaming buffer3070(which may correspond to streaming buffer2870ofFIGS.28A-D) in a first state in which data (e.g., media processing results) originating from a media IP core3060(which may correspond to producer IP2860) is written to a first buffer (e.g., buffer3071a) of a double buffer and in which data read by a shader core3080(which may correspond to consumer IP2880) is read from a second buffer (e.g., buffer3071b) of the double buffer. As noted above, the buffers3071a-bmay be sized in accordance with the working set of the shader core3080. FIG.30Bshows the write pointer3072and the read pointer3074in a second state in which data originating from a media IP core3060(which may correspond to producer IP2860) is written to the second buffer of the double buffer and in which data read by a shader core3080(which may correspond to consumer IP2880) is read from the first buffer of the double buffer. The streaming buffer may transition between the first state and the second state in accordance with the synchronization method employed (e.g., (i) handshake2875or (ii) empty signal2874and full signal2876). Those skilled in the art will appreciate there are numerous buffering schemes that may be used. For example, rather than a double buffering scheme, a circular buffer may be implemented within the streaming buffer.
Low Power Local Cache
According to one embodiment, a cache (e.g., a mid-level cache) that is shared by multiple IP cores via a fabric can be used as a local cache when only a single IP core is active. FIG.31Ais a block diagram illustrating a first usage scenario for a cache3125shared by multiple IP cores3110a-nthat are part of a central fabric3120, according to an embodiment. In the context of the present example, a GPU3100is shown including multiple IP cores3110a-n, in which the workload at issue involves communication among one or more of the IP cores3110a-n, access to cache3125, and/or other use of fabric3120. In such a usage scenario, power is provided to the fabric3120and the cache3125. For sake of brevity, numerous potential graphics execution resources are not shown inFIGS.31A and31B. Those skilled in the art will appreciate that the GPU3100may include one or more graphics engine(s), graphics processor cores, and other graphics execution resources (not shown) as described herein. Such graphics execution resources can be presented in forms including, but not limited to, execution units, shader engines, fragment processors, vertex processors, streaming multiprocessors, graphics processor clusters, or any collection of computing resources suitable for the processing of graphics resources or image resources, or performing general purpose computational operations in a heterogeneous processor. FIG.31Bis a block diagram illustrating a second usage scenario for a cache3125shared by multiple IP cores3110a-nthat are part of a central fabric3120, according to an embodiment. In this example, only a single IP (e.g., IP3110a) is active (as illustrated by the dashed lines). This situation may arise, for example, when a media IP core is processing certain workloads, for example, media decode/encode/transcode, that don't involve communication with other IPs (e.g., IPs3110b-n) and represent standalone workloads.
In such a scenario, going through the central fabric3120to talk to the mid-level cache would incur not only cache power but also the central fabric power, thus penalizing the SoC power for the workload and impacting battery life. As such, according to one embodiment, the fabric3120can be powered off (as indicated by the dashed outline) and a low power access path3111is provided to the cache3125outside of the fabric3120, thus avoiding burning the entire fabric power and resulting in lower SoC power usage when only a single IP core requires use of the cache. This allows the active IP (e.g., IP3110ain the context of the present example) to make use of cache3125as a local cache without consuming fabric power.
Per-Shader Module Local Cache
According to one embodiment, cache banks are partitioned differently based on workload characteristics, for example, by determining the size of a local partition that will be optimal for a given application. In one embodiment, a GPU's last level cache (e.g., L2/L3 cache) can be reconfigured as per-sub-slice local caches, based on the workload demand. The global last-level cache is usually distributed among multiple banks in the middle of the die. Some banks are closer to a given sub-slice (a/k/a shader module (SM)), while most banks will be farther away in the die. When the cache is organized as a global resource, the average latency to the cache bank will be high due to the data being distributed across the banks. Based on the workload demands, each cache bank can be partitioned into a “global” portion and a “local” portion. The local portion serves as a local cache for the sub-slice(s) closest to that bank. As a result, the latency and power for a sub-slice to access its local bank will be low. If the local cache misses, a lookup in the global cache will be performed. For some Machine Learning (ML)/Deep Learning (DL) kernels, configuring a global/shared cache into many private/local caches (per-SM) is a much more power/performance-efficient solution. This approach is also applicable to die-stacked GPUs as illustrated inFIG.32, where the top dies contain the compute resources (sub-slices) and the bottom die has the last-level cache. The banks directly below a set of sub-slices will serve as the local cache banks for the top sub-slices (low latency). FIG.32is a block diagram illustrating a chiplet and base die stacked approach according to an embodiment. According to one embodiment, an administrator may specify configuration information to form an association among last-level cache banks (e.g., L2 cache banks3225a-n) of a base chiplet3220and individual sub-slices of a set of sub-slices contained in chiplets3210. The configuration information may be read when the graphics card is booted, and the GPU(s) may be configured accordingly. In the context of the present example, those last-level cache banks of a base chiplet3220directly below a set of sub-slices contained in the chiplets3210may serve as the local cache banks for the set of sub-slices so as to reduce access latency. Depending upon the particular implementation, the last-level cache banks may be divided into equal-sized partitions in which each tile gets one partition. Alternatively, the allocation of partitions may be asymmetric. In some embodiments, multiple (e.g., two) chiplets3210may share one set (partition) of the last-level cache banks and the remainder (e.g., two) chiplets3210may share another set (partition) of the last-level cache banks.
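The local/global bank organization described above can be illustrated with a behavioral sketch. The following C++ model assumes a simple address-hashed home bank for the global portion and a nearest-bank local portion per sub-slice; the partition sizes, the hash, and all names are assumptions made for this sketch rather than a description of an actual cache implementation.

#include <cstdint>
#include <iostream>
#include <optional>
#include <unordered_map>
#include <vector>

// Behavioral model only: each bank is split into a "local" portion for its nearest sub-slice
// and a "global" portion that participates in the distributed shared last-level cache.
struct CacheBank {
    std::unordered_map<uint64_t, uint64_t> local;  // low-latency portion for the adjacent sub-slice
    std::unordered_map<uint64_t, uint64_t> global; // slice of the distributed global cache
};

class PartitionedLLC {
public:
    explicit PartitionedLLC(size_t numBanks) : banks_(numBanks) {}

    // Lookup from a given sub-slice: try that sub-slice's local bank first, then fall back
    // to the globally distributed portion (address-hashed home bank).
    std::optional<uint64_t> lookup(size_t subSlice, uint64_t addr) const {
        const CacheBank& near = banks_[subSlice % banks_.size()];
        if (auto it = near.local.find(addr); it != near.local.end()) return it->second;
        const CacheBank& home = banks_[addr % banks_.size()];
        if (auto it = home.global.find(addr); it != home.global.end()) return it->second;
        return std::nullopt; // miss in both portions; the request would go to memory
    }

    void fillLocal(size_t subSlice, uint64_t addr, uint64_t data) {
        banks_[subSlice % banks_.size()].local[addr] = data;
    }
    void fillGlobal(uint64_t addr, uint64_t data) {
        banks_[addr % banks_.size()].global[addr] = data;
    }

private:
    std::vector<CacheBank> banks_;
};

int main() {
    PartitionedLLC llc(4);
    llc.fillLocal(1, 0x100, 42); // hot data kept in the bank nearest sub-slice 1
    llc.fillGlobal(0x200, 7);    // shared data placed in the distributed global portion
    std::cout << llc.lookup(1, 0x100).value_or(0) << " " << llc.lookup(3, 0x200).value_or(0) << "\n";
    return 0;
}

A miss in the local portion falls back to the distributed global portion, mirroring the lookup order described above; how much of each bank is given to the local portion would be set per workload.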
In some embodiments, memory channels (not shown) associated with the chiplets3210(e.g., one or multiple per chiplet) and other resources may be partitioned in a similar manner. Many of the methods are described in their most basic form, but processes can be added to or deleted from any of the methods and information can be added or subtracted from any of the described messages without departing from the basic scope of the present embodiments. It will be apparent to those skilled in the art that many further modifications and adaptations can be made. The particular embodiments are not provided to limit the concept but to illustrate it. The scope of the embodiments is not to be determined by the specific examples provided above but only by the claims below. If it is said that an element “A” is coupled to or with element “B,” element A may be directly coupled to element B or be indirectly coupled through, for example, element C. When the specification or claims state that a component, feature, structure, process, or characteristic A “causes” a component, feature, structure, process, or characteristic B, it means that “A” is at least a partial cause of “B” but that there may also be at least one other component, feature, structure, process, or characteristic that assists in causing “B.” If the specification indicates that a component, feature, structure, process, or characteristic “may”, “might”, or “could” be included, that particular component, feature, structure, process, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, this does not mean there is only one of the described elements. An embodiment is an implementation or example. Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of “an embodiment,” “one embodiment,” or “some embodiments” are not necessarily all referring to the same embodiments. It should be appreciated that in the foregoing description of exemplary embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various novel aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiments requires more features than are expressly recited in each claim. Rather, as the following claims reflect, novel aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims are hereby expressly incorporated into this description, with each claim standing on its own as a separate embodiment. The following clauses and/or examples pertain to further embodiments or examples. Specifics in the examples may be used anywhere in one or more embodiments. The various features of the different embodiments or examples may be variously combined with some features included and others excluded to suit a variety of different applications. 
Examples may include subject matter such as a method, means for performing acts of the method, at least one machine-readable medium including instructions that, when performed by a machine cause the machine to perform acts of the method, or of an apparatus or system for facilitating hybrid communication according to embodiments and examples described herein. Some embodiments pertain to Example 1 that includes a system comprising: a producer intellectual property (IP); a compute core; a streaming buffer logically interposed between the producer IP and the compute core; wherein the producer IP is operable to consume data from memory and output results to the streaming buffer; and wherein the compute core is operable to perform AI inference processing based on data consumed from the streaming buffer and output AI inference processing results to the memory. Example 2 includes the subject matter of Example 1, wherein the compute core comprises a central processing unit (CPU) or a graphics processing unit (GPU). Example 3 includes the subject matter of Examples 1-2, wherein the compute core comprises an AI-specific core. Example 4 includes the subject matter of Examples 1-3, wherein the AI-specific core comprises a shader core of a graphics processing unit (GPU). Example 5 includes the subject matter of Examples 1-4, wherein the streaming buffer comprises a local cache. Example 6 includes the subject matter of Examples 1-5, wherein the memory comprises dynamic random access memory. Example 7 includes the subject matter of Examples 1-5, wherein the memory comprises a system memory. Example 8 includes the subject matter of Examples 1-5, wherein the memory comprises a dedicated graphics memory Example 9 includes the subject matter of Examples 1-8, wherein the streaming buffer is operable to notify the producer IP when the streaming buffer contains less than a first predetermined or configurable threshold of data. Example 10 includes the subject matter of Examples 1-9, wherein the streaming buffer is operable to notify the compute core when the streaming buffer contains greater than a second predetermined or configurable threshold of data. Example 11 includes the subject matter of Examples 1-10, wherein the producer IP comprises a media IP and the compute core comprises an AI-specific core of a GPU. Example 12 includes the subject matter of Example 11, wherein the media IP and the AI-specific core exchange handshake signals, including a first handshake signal from the media IP to the AI-specific core indicative of availability of an analytical processing unit of data in the streaming buffer for the AI inference processing by the AI-specific core and a second handshake signal from the AI-specific core to the media IP indicative of the data having been read by the AI-specific core. Example 13 includes the subject matter of Examples 1-12, wherein the analytical processing unit comprises an image frame or a portion of an image frame. Example 14 includes the subject matter of Example 11, wherein the streaming buffer facilitates parallel processing by the media IP and the AI-specific core by implementing a double buffer including a first buffer designated by a write pointer to which a first analytical processing unit of data is written by the media IP and a second buffer designated by a read pointer from which a second analytical processing unit of data is read by the AI-specific core. 
Some embodiments pertain to Example 15 that includes a method for performing media analytics processing, the method comprising: reading, by a media intellectual property (IP), a unit of data from memory; and for each analytical processing unit of data within the unit of data upon which an artificial intelligence (AI)-specific core of a graphics processing unit (GPU) is configured to operate, facilitating parallel processing by the media IP and the AI-specific core by: performing, by the media IP, media processing on the analytical processing unit; responsive to a first signal, writing, by the media IP, a result of the media processing to a streaming buffer logically interposed between the media IP and the AI-specific core; notifying the AI-specific core regarding availability of the data in the streaming buffer via a second signal; responsive to the second signal, reading, by the AI-specific core, the data from the streaming buffer; notifying the media IP regarding consumption of the data by the AI-specific core via the first signal; performing, by the AI-specific core, media analytics processing on the data; and writing, by the AI-specific core, a result of the media analytics processing to the memory. Example 16 includes the subject matter of Example 15, wherein the streaming buffer comprises a cache. Example 17 includes the subject matter of Examples 15-16, wherein the memory comprises dynamic random access memory. Example 18 includes the subject matter of Examples 15-16, wherein the memory comprises a system memory of a computer system. Example 19 includes the subject matter of Examples 15-16, wherein the memory comprises a dedicated graphics memory. Example 20 includes the subject matter of Examples 15-19, wherein said notifying the AI-specific core regarding availability of the data in the streaming buffer is performed by the streaming buffer. Example 21 includes the subject matter of Examples 15-19, wherein said notifying the AI-specific core regarding availability of the data in the streaming buffer is performed by the media IP. Example 22 includes the subject matter of Examples 15-21, wherein said notifying the media IP regarding consumption of the data by the AI-specific core is performed by the streaming buffer. Example 23 includes the subject matter of Examples 15-21, wherein said notifying the media IP regarding consumption of the data by the AI-specific core is performed by the AI-specific core. Example 24 includes the subject matter of Examples 15-23, wherein the unit of data comprises an image frame and wherein the analytical processing unit comprises a portion of an image frame. Example 25 includes the subject matter of Examples 15-24, wherein the media processing comprises one or more of encoding, decoding, or transcoding media to, from, or between one or more media encoding formats. Example 26 includes the subject matter of Examples 15-25, wherein the media analytics processing comprises performing artificial intelligence (AI) inferences. 
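For illustration only, the handshake-based flow of Examples 12-15 can be simulated in software. The following Python sketch models the streaming buffer as a double buffer with per-slot signals: the producer (media IP) waits for the "data consumed" signal before writing a slot, and the consumer (AI-specific core) waits for the "data available" signal before reading it. The names, the use of threads, and the buffer size are illustrative assumptions and are not part of the claimed hardware.

```python
import threading

# Simulation of the double-buffered streaming buffer of Examples 12-15.
# Two slots alternate between the producer (media IP) and the consumer
# (AI-specific core); the semaphores play the role of the two handshake
# signals. All names here are illustrative.

NUM_UNITS = 8                 # analytical processing units (e.g., portions of an image frame)
slots = [None, None]          # the double buffer
consumed = [threading.Semaphore(1), threading.Semaphore(1)]   # "data has been read" signal, per slot
available = [threading.Semaphore(0), threading.Semaphore(0)]  # "data is available" signal, per slot

def media_ip():
    """Producer: performs media processing and writes each result into the streaming buffer."""
    for unit in range(NUM_UNITS):
        slot = unit % 2                          # write pointer alternates between the two slots
        consumed[slot].acquire()                 # wait until the consumer has read this slot
        slots[slot] = f"decoded-portion-{unit}"  # stand-in for a decoded frame portion
        available[slot].release()                # notify the consumer that data is ready

def ai_core():
    """Consumer: reads each unit from the streaming buffer and runs inference on it."""
    for unit in range(NUM_UNITS):
        slot = unit % 2                          # read pointer trails the write pointer
        available[slot].acquire()                # wait for the "data is available" signal
        data = slots[slot]
        consumed[slot].release()                 # hand the slot back to the producer
        print(f"AI inference on {data}")         # stand-in for the inference step

producer = threading.Thread(target=media_ip)
consumer = threading.Thread(target=ai_core)
producer.start(); consumer.start()
producer.join(); consumer.join()
```

Because the two slots alternate, the media IP can write the next analytical processing unit while the AI-specific core reads the previous one, which mirrors the parallel processing of Example 14; the low- and high-watermark notifications of Examples 9 and 10 would play a similar role when the buffer holds more than two entries.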
Some embodiments pertain to Example 27 that includes a power-saving method comprising: observing, by a graphics processing unit (GPU), a state of a plurality of intellectual property (IP) cores that have access to a common cache via a central fabric; and responsive to the observed state being indicative of performance of a standalone workload by a first IP core of the plurality of IP cores, treating the common cache as a local cache of the first IP core by powering off the central fabric and causing the first IP core to access the common cache via a low power access path between the first IP core and the common cache that is outside of the central fabric. Example 28 includes the subject matter of Example 27, wherein the first IP core comprises a media IP core and wherein the standalone workload comprises media decoding, media encoding, or media transcoding. Example 29 includes the subject matter of Examples 27-28, wherein the observed state comprises the first IP core being active and the other IP cores of the plurality of IP cores being inactive. Some embodiments pertain to Example 30 that includes a graphics processing unit (GPU) comprising: a plurality of cache banks; a plurality of shader modules coupled to the plurality of cache banks; and wherein each cache bank of the plurality of cache banks is reconfigurable to operate as part of a global last level cache or as a local cache for a particular shader module of the plurality of shader modules based on at least one of a workload demand on the particular shader module and a distance between the cache bank and the particular shader module. Example 31 includes the subject matter of Example 30, wherein the GPU comprises a die-stacked GPU. Example 32 includes the subject matter of Examples 30-31, wherein the plurality of cache banks comprise level 2 (L2) cache banks. Example 33 includes the subject matter of Examples 30-31, wherein the plurality of cache banks comprise level 3 (L3) cache banks. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
287,093
11861762
DETAILED DESCRIPTION This disclosure describes one or more embodiments of a class-specific object editing system that generates synthesized digital images utilizing class-specific generator neural networks. Specifically, in one or more embodiments, the class-specific object editing system generates (or otherwise obtains) a synthesized digital image including one or more identifiable objects. Additionally, in response to identifying one or more objects in a synthesized digital image, the class-specific object editing system selects class-specific generator neural networks corresponding to classes of objects identified in the synthesized digital image. The class-specific object editing system utilizes the selected class-specific generator neural networks to generate synthesized objects corresponding to the identified objects. The class-specific object editing system then replaces the identified objects in the synthesized digital image with the synthesized images from the class-specific generator neural networks. By replacing objects in a synthesized digital image with objects synthesized via class-specific generator neural networks, the class-specific object editing system improves the accuracy of synthesized digital images. As mentioned, in one or more embodiments, the class-specific object editing system generates a synthesized digital image. For instance, the class-specific object editing system utilizes an image synthesis neural network to generate a synthesized digital image including one or more objects (e.g., foreground objects). In one or more embodiments, the image synthesis neural network generates a conditional synthesized digital image based on at least one map indicating positions and/or locations of the one or more objects. To illustrate, the class-specific object editing system utilizes the image synthesis neural network to generate a synthesized digital image based on a semantic label map. In some embodiments, the class-specific object editing system also utilizes an edge map with the semantic label map to generate the synthesized digital image including one or more objects indicated by the semantic label map and the edge map. After generating or otherwise obtaining a synthesized digital image, in one or more embodiments, the class-specific object editing system determines objects and object classes in the synthesized digital image. To illustrate, the class-specific object editing system utilizes a semantic label map and/or an edge map associated with the synthesized digital image to determine one or more objects. The class-specific object editing system also determines classes of the objects identified in the synthesized digital image, such as by determining labels of object instances associated with the objects from the semantic label map. In alternative embodiments which lack a semantic label map, the class-specific object editing system utilizes an object detection neural network to detect the one or more objects and their locations within the synthesized digital image. For example, the class-specific object editing system utilizes one or more of the object detection neural networks described in U.S. patent application Ser. No. 16/388,115, “Robust Training of Large-Scale Object Detectors with Noisy Data,” filed on Apr. 8, 2019; U.S. Pat. No. 10,216,766, “Large-Scale Image Tagging Using Image-To-Topic Embedding,” filed on Mar. 20, 2017; or in U.S. patent application Ser. No. 15/921,492, “Detecting Objects Using A Weakly Supervised Model,” filed on Mar. 
14, 2018, the entire contents of the foregoing patent and applications are hereby incorporated by reference in their entirety. In still further embodiments, the class-specific object editing system detects the one or more objects and their locations within the synthesized digital image based on user input (e.g., receives user input indicating a bounding box containing an object and a label for the object). In connection with determining objects and object classes in a synthesized digital image, the class-specific object editing system also selects class-specific generator neural networks corresponding to the objects and object classes. Specifically, the class-specific object editing system selects class-specific generator neural networks trained to generate synthesized objects of specific classes corresponding to the identified object classes. Accordingly, the class-specific object editing system selects separate class-specific generator neural networks to synthesize different objects based on different classes of objects in a synthesized digital image, such as identifying a first class-specific generator neural network corresponding to a first object class and a second class-specific generator neural network corresponding to a second object class. Furthermore, in one or more embodiments, the class-specific object editing system generates synthesized objects utilizing selected class-specific generator neural networks. For example, in response to selecting a plurality of class-specific generator neural networks corresponding to a plurality of object classes in a synthesized digital image, the class-specific object editing system utilizes the selected class-specific generator neural networks to generate a plurality of different synthesized objects. To illustrate, the class-specific object editing system crops the synthesized digital image to a particular object and then utilizes the corresponding class-specific generator neural network to generate a synthesized object based on the cropped portion of the synthesized digital image. In additional embodiments, the class-specific object editing system also crops a semantic label map to an object label corresponding to the particular object and provides the cropped portion of the semantic label map to the class-specific generator neural network to generate the synthesized object. The class-specific object editing system thus utilizes information about an object and context information corresponding to the object from the cropped portion of the synthesized digital image to generate a new synthesized object. In one or more embodiments, the class-specific object editing system replaces one or more objects in a synthesized digital image with one or more synthesized objects. In particular, after generating a synthesized object utilizing a class-specific generator neural network, the class-specific object editing system replaces a corresponding object with the synthesized object at a particular location within the synthesized digital image. For example, the class-specific object editing system inserts the synthesized object into the particular location utilizing alpha blending. Because the class-specific object editing system utilizes context information to generate synthesized objects, the class-specific object editing system inserts the synthesized objects into the synthesized digital image so that they blend into the rest of the image. The disclosed class-specific object editing system provides a number of benefits over conventional systems. 
For example, the class-specific object editing system improves the accuracy of computing systems that generate synthesized digital images. In contrast to existing systems that utilize a single generator neural network to generate synthesized digital images, the class-specific object editing system utilizes a plurality of class-specific generator neural networks to generate and modify synthesized digital images. Specifically, conventional systems that utilize a single generator neural network tend to allocate resources toward generating larger content such as background content and neglect details of smaller objects in the foreground. By generating separate synthesized objects of different classes using separate class-specific generator neural networks, the class-specific object editing system generates synthesized digital images with accurate and improved details of individual objects. More specifically, the class-specific generator neural networks provide improved textural details and better shape integrity for a variety of object classes relative to conventional systems. Furthermore, the class-specific object editing system improves the flexibility of computing systems that generate synthesized digital images. In particular, as previously mentioned, conventional systems that rely on a single generator neural network with spatially-adaptive normalization are limited to lower resolution image synthesis. The class-specific object editing system, however, utilizes a modified generator neural network structure that generates higher quality images that are easily scaled to high resolutions. More specifically, the class-specific object editing system utilizes an encoder to extract hierarchical feature representations at a plurality of different resolutions to modulate the generator neural network. Additionally, the class-specific object editing system provides the hierarchical feature representations to a plurality of class-specific generator neural networks to provide accurate details for individual foreground objects at different resolutions. The class-specific object editing system also provides improved flexibility in generating objects in out-of-distribution/context scene images (e.g., by placing objects in locations where those objects are not typically found). In addition, the class-specific object editing system improves the efficiency of computing systems that train and implement generator neural networks for generating synthesized digital images. For example, conventional systems that utilize spatially-adaptive normalization to generate synthesized digital images can require significant resources and time to train generator neural networks. By utilizing an encoder to extract hierarchical feature representations in connection with generating a synthesized digital image (e.g., from a semantic label map) to modulate a generator neural network, the class-specific object editing system also results in a generator neural network that is less memory intensive and faster to train than the conventional generator neural networks. Turning now to the figures,FIG.1includes an embodiment of a system environment100in which a class-specific object editing system102(or "object editing system102") operates. In particular, the system environment100includes server device(s)104and a client device106in communication via a network108. Moreover, as shown, the server device(s)104include a digital image system110, which includes the class-specific object editing system102. 
Furthermore,FIG.1illustrates that the class-specific object editing system102includes class-specific generator neural networks112. Additionally, the client device106includes a digital image application114, which optionally includes the digital image system110, the class-specific object editing system102, and the class-specific generator neural networks112. As shown inFIG.1, the server device(s)104includes or hosts the digital image system110. Specifically, the digital image system110includes, or is part of, one or more systems that implement digital image processing and/or digital image generation. For example, the digital image system110provides tools for viewing, generating, editing, and/or otherwise interacting with digital images (e.g., via the digital image application114of the client device106). In one or more embodiments, the digital image system110processes digital content items including digital images and/or digital videos. To illustrate, the digital image system110utilizes neural networks to generate and/or modify synthesized digital images. In one or more embodiments, the digital image system110generates datasets of synthesized digital images or digital videos in connection with training neural networks or machine-learning models (e.g., segmentation neural networks, generator neural networks). In one or more additional embodiments, the digital image system110processes digital images in connection with one or more additional systems such as cloud-storage systems. In connection with generating or modifying digital images, the digital image system110includes the class-specific object editing system102to generate synthesized objects within digital images. In particular, the class-specific object editing system102utilizes the class-specific generator neural networks112to generate individual synthesized objects of a plurality of object classes to refine a synthesized digital image. For example, the digital image system110(or the class-specific object editing system102) generates a base (e.g., initial) synthesized digital image utilizing a conditional generator neural network. More specifically, the digital image system110generates a synthesized digital image from a semantic label map or other prior that indicates a structure or layout of foreground and/or background objects in the resulting image. In one or more embodiments, a synthesized digital image includes a digital image that is at least partially generated by a neural network. In particular, a synthesized digital image includes a digital image created from one or more priors indicating positions and classes of objects. For instance, a synthesized digital image is a digital image generated by a generator neural network based on a semantic label map. In one or more embodiments, a generator neural network further generates a synthesized digital image based on an edge map indicating edges of objects. According to some embodiments, a synthesized digital image includes a digital image representation of a real-world scene generated by a neural network. In one or more embodiments, a semantic label map includes a representation of labels for a plurality of objects within a scene. To illustrate, a semantic label map includes a plurality of values indicating object classes for a plurality of pixels in a digital image. Thus, a semantic label map provides information indicating positions and classes of a plurality of background and/or foreground objects within a digital image. 
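For illustration only, a semantic label map can be represented as a two-dimensional array of class indices from which object positions and classes are read directly. The class names, indices, and array values in the following Python sketch are hypothetical examples, not values defined by the disclosure.

```python
import numpy as np

# Toy semantic label map: each pixel stores a class index. The class names,
# indices, and values below are hypothetical examples.
CLASSES = {0: "background", 1: "bed", 2: "lamp"}

semantic_label_map = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 2],
    [0, 1, 1, 1, 0, 2],
    [0, 0, 0, 0, 0, 0],
])

# The position and class of an object can be read directly from the map,
# e.g., the pixel mask and bounding box of the class-1 ("bed") region.
bed_mask = semantic_label_map == 1
rows, cols = np.where(bed_mask)
bounding_box = (rows.min(), cols.min(), rows.max(), cols.max())
print(CLASSES[1], "bounding box:", bounding_box)
```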
In one or more embodiments, the class-specific object editing system102modifies a synthesized digital image by generating one or more synthesized objects to replace one or more objects from the synthesized digital image. Specifically, the class-specific object editing system102determines classes of objects in the synthesized digital image. Additionally, the class-specific object editing system102utilizes the class-specific generator neural networks112corresponding to the determined classes of objects to generate new, synthesized objects. The class-specific object editing system102also replaces the objects in the synthesized digital image with the corresponding synthesized objects. In one or more embodiments, a neural network includes a computer representation that is tunable based on inputs to approximate unknown functions. In particular, a neural network includes one or more layers (i.e., artificial neurons) that utilize algorithms to learn from, and make predictions on, known data by analyzing the known data to learn to generate outputs that reflect patterns and attributes of the known data. For example, a neural network makes high-level abstractions in data by generating data-driven predictions or decisions from the known input data. In some embodiments, a neural network includes, but is not limited to, a convolutional neural network, a recurrent neural network, a residual neural network, or an adversarial neural network. To illustrate, a neural network includes a generator neural network for generating synthesized digital images. In one or more embodiments, a generator neural network includes a generative adversarial network with one or more encoders or decoders including residual neural network layers, linear neural network layers, rectified linear unit neural network layers, and/or other neural network layers. In addition, a class-specific neural network includes a generator neural network trained to generate digital image content corresponding to a particular object class. Accordingly, generator neural networks described herein provide operations for generating synthesized digital images and/or portions of synthesized digital images. Furthermore, in one or more embodiments, an object includes a visible item with a definable boundary relative to other visible items in a scene. For example, an object includes an item in a foreground of a scene including, but not limited to, real-world items such as furniture, people, faces, clothing, buildings, vehicles, or the like. Additionally, in one or more embodiments, an object includes a portion of a larger object (i.e., a subcomponent of an object) such as a particular body part or a vehicle component. In some embodiments, a digital image includes a plurality of foreground objects presented according to a particular perspective such that one or more of the objects overlap one or more other objects in a scene. Additionally, as mentioned, each object in a digital image corresponds to an object class. In one or more embodiments, an object class includes a particular category of object. For instance, an object class includes a label or description indicating the category of the object from a plurality of possible categories. To illustrate, an object class includes, but is not limited to, a particular real-world item such as furniture, person, face, clothing item, building, vehicle, etc. 
In additional embodiments, an object class corresponds to a particular subcomponent of another object such as a particular body part (e.g., face or limb) or a particular clothing item. In one or more embodiments, the server device(s)104include a variety of computing devices, including those described below with reference toFIG.12. For example, the server device(s)104includes one or more servers for storing and processing data associated with synthesized digital images and synthesized objects. In some embodiments, the server device(s)104also include a plurality of computing devices in communication with each other, such as in a distributed storage environment. In some embodiments, the server device(s)104include a content server. The server device(s)104can also include an application server, a communication server, a web-hosting server, a networking server, a digital content campaign server, or a digital communication management server. In addition, as shown inFIG.1, the system environment100includes the client device106. In one or more embodiments, the client device106includes, but is not limited to, a mobile device (e.g., smartphone or tablet), a laptop, or a desktop, including those explained below with reference toFIG.12. Furthermore, the client device106can be operated by a user (e.g., a user included in, or associated with, the system environment100) to perform a variety of functions. In particular, the client device106performs functions such as, but not limited to, accessing, generating, viewing, modifying, and otherwise interacting with digital images or datasets of digital images via the digital image application114. The client device106also performs functions for generating, capturing, or accessing data to provide to the digital image system110and the class-specific object editing system102in connection with generating and modifying digital images. For example, the client device106communicates with the server device(s)104via the network108to provide digital images to the server device(s)104or receive digital images from the server device(s)104. AlthoughFIG.1illustrates the system environment100with a single client device106, the system environment100can include a different number of client devices. Additionally, as shown inFIG.1, the system environment100includes the network108. The network108enables communication between components of the system environment100. In one or more embodiments, the network108may include the Internet or World Wide Web. Additionally, the network108can include various types of networks that use various communication technology and protocols, such as a corporate intranet, a virtual private network (VPN), a local area network (LAN), a wireless local area network (WLAN), a cellular network, a wide area network (WAN), a metropolitan area network (MAN), or a combination of two or more such networks. Indeed, the server device(s)104and the client device106communicate via the network using one or more communication platforms and technologies suitable for transporting data and/or communication signals, including any known communication technologies, devices, media, and protocols supportive of data communications, examples of which are described with reference toFIG.12. 
AlthoughFIG.1illustrates the server device(s)104and the client device106communicating via the network108, in alternative embodiments, the various components of the class-specific object editing system102communicate and/or interact via other methods (e.g., the server device(s)104and the client device106can communicate directly). Furthermore, althoughFIG.1illustrates the class-specific object editing system102being implemented by a particular component and/or device within the system environment100, the class-specific object editing system102can be implemented, in whole or in part, by other computing devices and/or components in the system environment100(e.g., the client device106). Additionally, the server device(s)104and/or the client device106may access synthesized digital images from a third-party system via the network108. In particular, in some implementations, the class-specific object editing system102on the server device(s)104supports the class-specific object editing system102on the client device106. For instance, the class-specific object editing system102on the server device(s)104learns parameters for the class-specific generator neural networks112. The class-specific object editing system102then, via the server device(s)104, provides the class-specific generator neural networks112to the client device106. In other words, the client device106obtains (e.g., downloads) the class-specific generator neural networks112with the learned parameters from the server device(s)104. Once downloaded, the client device106can utilize the class-specific generator neural networks112to perform one or more image editing tasks independent from the server device(s)104. In alternative implementations, the class-specific object editing system102includes a web hosting application that allows the client device106to interact with content and services hosted on the server device(s)104. To illustrate, in one or more implementations, the client device106accesses a web page supported by the server device(s)104. The client device106provides input to the server device(s)104to perform an image editing task utilizing the class-specific object editing system102, and, in response, the class-specific object editing system102on the server device(s)104performs the task. The server device(s)104then provides the output or results of the image editing task to the client device106. In one or more embodiments, the class-specific object editing system102accurately, flexibly, and efficiently generates synthesized digital images. Specifically, the class-specific object editing system102replaces objects in synthesized digital images with synthesized objects having improved texture and shape details over the initial synthesized digital images.FIG.2illustrates that the class-specific object editing system102utilizes an object of a synthesized digital image to generate a new synthesized object. More specifically,FIG.2illustrates that the class-specific object editing system102utilizes a class-specific generator neural network to generate the synthesized object to replace the object of a synthesized digital image. As mentioned,FIG.2illustrates that the class-specific object editing system102utilizes a class-specific generator neural network200to replace an object of a synthesized digital image with a synthesized object. In one or more embodiments, the class-specific object editing system102first identifies an object202in a foreground of a digital image (e.g., a synthesized digital image generated by a generator neural network). 
For example,FIG.2illustrates that the object202includes a piece of furniture (e.g., a bed) in a scene that includes one or more additional objects in the foreground and/or background relative to the object202. In one or more additional embodiments, the class-specific object editing system102utilizes the class-specific generator neural network200to generate a synthesized object204from the object202. Additionally, the class-specific object editing system102utilizes the class-specific generator neural network200to process a portion206of the synthesized digital image including the object202. To illustrate, the class-specific object editing system102crops the synthesized digital image to the portion206of the synthesized digital image including the object202and context data for the object202. In one or more additional embodiments, the class-specific object editing system102also utilizes the class-specific generator neural network200to process a portion208of a semantic label map corresponding to the object202. In one or more embodiments, context data refers to visual information associated with, but not included in, an object within a digital image. For example, context data includes one or more portions of a digital image surrounding a particular object. To illustrate, the context data includes a plurality of pixels within a cropped portion of the digital image that includes the object and portions of one or more foreground objects or background objects from a scene (e.g., pixels in the portion206of the synthesized digital image around the object202). More specifically, the context data can include semantic information for one or more portions of a semantic label map or segmentation map (e.g., semantic information from the portion208of the semantic label map around the object202). Additionally, in one or more embodiments, the class-specific object editing system102crops the synthesized digital image to center objects to maintain consistent spatial alignment for generating synthesized objects utilizing generator neural networks. In one or more embodiments, in connection with cropping the synthesized digital image to the portion206, the class-specific object editing system102also masks out (e.g., excludes) the object from the portion206. Specifically, the class-specific object editing system102generates a digital mask for the pixels in the portion206of the synthesized digital image. The class-specific object editing system102then utilizes the digital mask to mask out the foreground region (e.g., the object) such as by assigning zero values to the pixels associated with the foreground region. Furthermore, the class-specific object editing system102assigns one values to the pixels associated with the background region (e.g., portions not part of the object) to include context data from the synthesized digital image in the cropped portion. In one or more alternative embodiments, the class-specific object editing system102blurs the foreground region associated with the object to retain low frequency information within the cropped portion. Additionally, asFIG.2illustrates, the class-specific object editing system102utilizes the class-specific generator neural network200to generate the synthesized object204based on the portion206of the synthesized digital image and the portion208of the semantic label map. In one or more embodiments, the class-specific generator neural network200includes an encoder210ato encode information about the object202from the synthesized digital image. 
Furthermore, in one or more embodiments, the class-specific generator neural network200includes a decoder210bto decode the encoded information about the object202and generate the synthesized object204corresponding to the object202. The architecture of the class-specific generator neural network200is described in greater detail below with reference toFIG.5. In one or more embodiments, the class-specific object editing system102generates the synthesized object204to insert into the synthesized digital image. For example, the class-specific object editing system102inserts the synthesized object204into the synthesized digital image at a location corresponding to the object202. To illustrate, the class-specific object editing system102utilizes alpha blending or other image processing technique to replace the object202with the synthesized object204. Additionally, by utilizing context data associated with the object202to generate the synthesized object204, the class-specific object editing system102more accurately blends the synthesized object204into the synthesized digital image with other objects in the foreground and/or background by gathering hints from the surrounding context of the target object and generating foreground pixels that appear consistent with the background. FIG.3illustrates an overview diagram of the class-specific object editing system102modifying a synthesized digital image via the use of one or more class-specific generator neural networks in accordance with content of the synthesized digital image. In particular,FIG.3illustrates that the class-specific object editing system102utilizes generator neural networks to generate and modify the synthesized digital image. In one or more embodiments, the class-specific object editing system102utilizes conditional generator neural networks to generate synthesized digital images based on prior information indicating positions and/or classes of one or more objects in the synthesized digital images. In one or more embodiments, asFIG.3illustrates, the class-specific object editing system102first utilizes an image synthesis neural network300to generate a synthesized digital image302. For instance, the image synthesis neural network300includes a conditional generator neural network that generates synthesized digital images based on one or more priors. To illustrate, the image synthesis neural network300includes a generative adversarial neural network to generate the synthesized digital image302based on data indicating one or more objects, one or more object classes, and object positions for generating the synthesized digital image302. More specifically, the class-specific object editing system102utilizes the image synthesis neural network300to generate the synthesized digital image302from a semantic label map304and an edge map306. According to one or more embodiments, the semantic label map304includes semantic information that indicates a position and class of one or more objects for generating the synthesized digital image302. In particular, the image synthesis neural network300utilizes labels of the semantic label map304to determine object classes corresponding to a plurality of pixels for generating the synthesized digital image302. For instance, the semantic label map304includes groups of pixels associated with a particular object class indicating a location and a category of an object. 
Additionally, in one or more embodiments, the image synthesis neural network300utilizes the edge map306including edges of objects in connection with the semantic label map304to generate the synthesized digital image302with improved accuracy over the semantic label map304alone. AlthoughFIG.3illustrates that the class-specific object editing system102utilizes the image synthesis neural network300to generate the synthesized digital image302from the semantic label map304and the edge map306, in other embodiments, the class-specific object editing system102generates the synthesized digital image302from another prior, such as another digital image (e.g., a photograph). As previously mentioned, the class-specific object editing system102generates the synthesized digital image302as an initial synthesized digital image. Specifically, the class-specific object editing system102utilizes the image synthesis neural network300to generate the synthesized digital image that covers an entire scene. For instance, the class-specific object editing system102utilizes the image synthesis neural network300to generate larger details such as for background object classes like landscapes, walls, floors, etc. In one or more embodiments, the class-specific object editing system102(or another system) trains the image synthesis neural network300to focus on the larger/more significant object classes. By focusing training of the image synthesis neural network300, however, the resulting synthesized digital image302may have reduced details for smaller objects or textures such as details on furniture, faces, or other objects. The architecture of the image synthesis neural network300is described in greater detail below with reference toFIG.5. In one or more embodiments, after generating the synthesized digital image302, the class-specific object editing system102improves the synthesized digital image302by modifying details of foreground objects in the synthesized digital image302. For example, the class-specific object editing system102determines one or more foreground objects in the synthesized digital image302and one or more object classes associated with the one or more foreground objects. To illustrate, the class-specific object editing system102identifies objects and object classes based on the semantic label map304. In addition, the class-specific object editing system102selects class-specific generator neural networks308corresponding to the identified foreground object(s) and object class(es). In one or more embodiments, the class-specific object editing system102creates and trains a plurality of class-specific generator neural networks for a plurality of different object classes. Furthermore, if the synthesized digital image302includes a first object of a first object class and a second object of a second object class, the class-specific object editing system102selects a first class-specific generator neural network for the first object class and a second class-specific generator neural network for the second object class. According to one or more embodiments, the class-specific object editing system102generates synthesized objects310utilizing the class-specific generator neural networks308. Specifically, the class-specific object editing system102utilizes a particular class-specific generator neural network to generate a synthesized object of a particular object class. 
To illustrate, the class-specific object editing system102utilizes a first class-specific generator neural network to generate a first synthesized object of the first object class. Additionally, the class-specific object editing system102utilizes a second class-specific generator neural network to generate a second synthesized object of the second object class. The class-specific object editing system102accordingly generates a plurality of synthesized objects utilizing corresponding class-specific generator neural networks according to the identified object classes in the synthesized digital image302. As illustrated inFIG.3, after generating the synthesized objects310, the class-specific object editing system102then generates a modified synthesized digital image312. For instance, the class-specific object editing system102replaces identified objects in the synthesized digital image302with the synthesized objects310. In one or more embodiments, the class-specific object editing system102determines positions of the objects in the synthesized digital image302. The class-specific object editing system102then inserts the synthesized objects310into the synthesized digital image302at the positions of the corresponding objects to generate the modified synthesized digital image312. According to some embodiments, the class-specific object editing system102inserts the synthesized objects310utilizing alpha blending to blend the synthesized objects310as foreground objects into the modified synthesized digital image312. FIG.4Aillustrates a diagram of the class-specific object editing system102generating a modified synthesized digital image including a plurality of synthesized objects. In particular,FIG.4Aillustrates that the class-specific object editing system102generates and inserts the synthesized objects into a synthesized digital image according to a particular order. For instance, the class-specific object editing system102inserts the synthesized objects in series (e.g., one at a time) to account for updated context data associated with each synthesized object. In one or more alternative embodiments, the class-specific object editing system102inserts synthesized objects into a synthesized digital image in parallel (e.g., at the same time). As illustrated inFIG.4A, in one or more embodiments, the class-specific object editing system102obtains a semantic label map400and an edge map402including information indicating objects and object classes for generating a synthetic digital image. According to some embodiments, the semantic label map400and the edge map402correspond to a real-world image that the class-specific object editing system102processes. To illustrate, the class-specific object editing system102obtains the semantic label map400and the edge map402from the real-world image for use in generating synthetic digital images based on the real-world image. Alternatively, the class-specific object editing system102generates the semantic label map or other object label map (or portion of a semantic label map) utilizing a neural network or other system. According to one or more embodiments, the class-specific object editing system102utilizes a base generator neural network404(“Base GNN”) to generate a base synthesized digital image406from the semantic label map400and the edge map402. The base generator neural network404is the same neural network as the image synthesis neural network300described above. 
For example, as mentioned, the base generator neural network404or image synthesis neural network300generates the base synthesized digital image406to synthesize details primarily associated with the foreground and/or background associated with larger object classes. Accordingly, the base synthesized digital image406can include fewer or less accurate details associated with some objects in the scene. To illustrate, if the scene is a bedroom scene, as illustrated inFIG.4A, the base synthesized digital image406includes a room with various objects (e.g., furniture) inserted into the room by the base generator neural network404. Because the base generator neural network404may not be trained for specific object classes, the resulting base synthesized digital image406can include less accurate details for the furniture in the room. In one or more embodiments, the class-specific object editing system102identifies the objects in the scene of the base synthesized digital image406. In particular, the class-specific object editing system102determines that the scene of the base synthesized digital image406includes a plurality of furniture objects. The class-specific object editing system102then selects a plurality of class-specific generator neural networks corresponding to each of the objects in the base synthesized digital image406. For example, the class-specific object editing system102selects a first generator neural network408acorresponding to a bed ("Bed GNN"), a second generator neural network408bcorresponding to a chest ("Chest GNN"), and a third generator neural network408ccorresponding to a lamp ("Lamp GNN"). In one or more embodiments, the class-specific object editing system102trains each generator neural network according to the corresponding object class (e.g., training the first generator neural network408aon a dataset of images including beds, the second generator neural network408bon a dataset of images including chests, the third generator neural network408con a dataset of images including lamps). In one or more embodiments, the class-specific object editing system102generates a first synthesized object410autilizing the first generator neural network408a. For instance, the class-specific object editing system102generates the first synthesized object410aincluding a synthesized bed corresponding to a bed from the base synthesized digital image406. To illustrate, the class-specific object editing system102utilizes the first generator neural network408ato generate the first synthesized object410afrom a cropped portion of the base synthesized digital image406corresponding to the first object (e.g., the bed). In connection with generating the first synthesized object410a, the first generator neural network408autilizes context data from the base synthesized digital image406surrounding the bed. As illustrated inFIG.4A, the class-specific object editing system102replaces the corresponding object in the base synthesized digital image406with the first synthesized object410a. In one or more embodiments, the class-specific object editing system102inserts the first synthesized object410ainto the base synthesized digital image406. For example, the class-specific object editing system102inserts the synthesized bed into the base synthesized digital image406to generate a first synthesized digital image412athat includes the first synthesized object410a. 
After generating the first synthesized digital image412awith the first synthesized object410a, the class-specific object editing system102then utilizes the second generator neural network408bto generate a second synthesized object410b. In particular, the class-specific object editing system102determines a second object (e.g., a chest) and context data for the second object from the first synthesized digital image412a. Because the class-specific object editing system102inserted the first synthesized object410ato generate the first synthesized digital image412a, the class-specific object editing system102determines context data for the second object based on the modifications due to inserting the first synthesized object410a. In one or more embodiments, the class-specific object editing system102generates a cropped image from the first synthesized digital image412afor the second object, which may have context data including the first synthesized object410a. Accordingly, the class-specific object editing system102utilizes the second generator neural network408bto generate the second synthesized object410bbased on context data that may be modified by the first synthesized object410a. As illustrated inFIG.4A, the class-specific object editing system102utilizes the second synthesized object410bto generate a second synthesized digital image412b. Specifically, the class-specific object editing system102replaces the second object (e.g., the chest) in the first synthesized digital image412awith the second synthesized object410b, resulting in the second synthesized digital image412b. In one or more embodiments, the class-specific object editing system102generates the second synthesized digital image412bby inserting the second synthesized object410binto the first synthesized digital image412aat the location corresponding to the second object utilizing the context data obtained from the first synthesized digital image412a. Additionally,FIG.4Aillustrates that the class-specific object editing system102utilizes the third generator neural network408cto generate a third synthesized object410cassociated with a third object (e.g., a lamp). In one or more embodiments, the class-specific object editing system102generates a cropped portion of the second synthesized digital image412bcorresponding to the third object. In some embodiments, the cropped portion of the second synthesized digital image412bincludes context data corresponding to the second synthesized object410b. In other embodiments, the second synthesized object410bis not included in the context data. The class-specific object editing system102utilizes the third generator neural network408cto generate the third synthesized object410c. In response to generating the third synthesized object410c, the class-specific object editing system102generates a third synthesized digital image412c. In particular, the class-specific object editing system102replaces the third object in the second synthesized digital image412bwith the third synthesized object410c. For example, the class-specific object editing system102inserts the third synthesized object410cat a location of the third object. Accordingly, the class-specific object editing system102generates the third synthesized digital image412cby inserting the third synthesized object410cat the location of the third object. 
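For illustration only, the sequential replacement walked through above for FIG.4Acan be expressed as a loop in which each crop is taken from the current, partially refined image so that later objects see the context produced by earlier insertions. The helper functions, the (class, box) object records, and the generator callables in the following Python sketch are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

# Sketch of the sequential replacement of FIG.4A. Arrays are assumed to be
# float images in [0, 1]; the helper names, the (class, box) object records,
# and the generator callables are hypothetical placeholders.

def crop(array, box):
    """Crop an H x W (x C) array to a (top, left, bottom, right) box."""
    top, left, bottom, right = box
    return array[top:bottom, left:right]

def alpha_blend(image, synthesized, alpha, box):
    """Insert a synthesized patch at `box` using alpha blending."""
    top, left, bottom, right = box
    out = image.copy()
    region = out[top:bottom, left:right]
    out[top:bottom, left:right] = alpha[..., None] * synthesized + (1.0 - alpha[..., None]) * region
    return out

def refine_in_series(base_image, label_map, objects, generators):
    """Replace each object with a class-specific synthesized object, one at a time,
    so that each crop's context reflects previously inserted objects."""
    image = base_image
    for object_class, box in objects:          # e.g., ("bed", ...), then ("chest", ...), then ("lamp", ...)
        generator = generators[object_class]   # class-specific generator for this object class
        image_crop = crop(image, box)          # context comes from the *current* image
        label_crop = crop(label_map, box)
        synthesized, alpha = generator(image_crop, label_crop)
        image = alpha_blend(image, synthesized, alpha, box)
    return image
```

Processing the objects in parallel instead would simply take every crop from the original base image rather than from the progressively updated one.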
By generating each synthesized object in series, the class-specific object editing system102incorporates context data corresponding to previously inserted synthesized objects when synthesizing subsequent objects. In particular, synthesized objects inserted into a digital image can affect the context data for other objects depending on the object locations and sizes. To illustrate, when cropping a digital image to a foreground object in the digital image, the cropped portion may include a bounding box with pixels corresponding to at least a portion of another foreground object. In one or more embodiments, the class-specific object editing system102determines a synthesis order of objects in a synthesized digital image based on object size, position, class, or another attribute. In one or more alternative embodiments, the class-specific object editing system102synthesizes objects for inserting into a digital image at the same time (or from the same digital image), rather than synthesizing a plurality of objects in sequence. In one or more embodiments, the class-specific object editing system102also dilates and softens boundaries of synthesized objects and object instance masks before applying alpha blending to insert the synthesized objects. In one or more additional embodiments, the class-specific object editing system102utilizes feature propagation for an object instance to ensure consistencies between inner portions of the object instance and outer portions (e.g., at boundaries) of the object instance. For example,FIG.4Billustrates a representation of an object instance414within a grid.FIG.4Cillustrates that the representation of the object instance includes inner features416and outer features418. As mentioned, the class-specific object editing system102utilizes feature propagation (e.g., at a decoder component of a generator neural network) to propagate the inner features416to the outer features418(e.g., to replace the outer features418with the inner features416). By propagating the inner features to the outer features418, the class-specific object editing system102reduces artifacts at the boundaries of the object instance, which improves visual consistencies between the object instance and a background when using alpha blending to insert the object instance into a digital image. In one or more embodiments, the class-specific object editing system102utilizes an image synthesis neural network in connection with a plurality of class-specific generator neural networks to generate a synthesized digital image.FIG.5illustrates an example architecture of a generator neural network to generate a base synthesized digital image.FIG.5further illustrates an architecture for each of a plurality of separate class-specific generator neural networks to generate individual synthesized objects for inserting into the base synthesized digital image. In one or more embodiments, as illustrated inFIG.5, a generator neural network includes an encoder502and a decoder504. As shown, the encoder502includes a plurality of components, and the decoder504also includes a plurality of components. According to one or more embodiments, the encoder502encodes information based on priors associated with a scene and outputs one or more signals (e.g., a latent code and a spatial feature tensor). Furthermore, the decoder504utilizes the signals generated by the encoder502to generate a synthesized digital image508. 
As illustrated inFIG.5, the encoder502includes a first encoder component510a("E2") to determine an initial representation based on the priors506. In one or more embodiments, the first encoder component510aincludes one or more neural network layers to convert the priors506into a feature vector or feature map of a fixed length or size by extracting feature sets based on the priors506. Additionally, the first encoder component510aincludes one or more neural network layers to downscale a resolution of the feature map to a first lowered resolution. FIG.5further illustrates that the encoder502includes a second encoder component510b("E2BU") and a third encoder component510c("E2TD"). According to one or more embodiments, the second encoder component510bfurther lowers a resolution of the feature map extracted from the priors506. In particular, the second encoder component510breceives the output of the first encoder component510aand then includes one or more neural network layers in a "bottom-up" configuration to reduce the resolution of the feature map to a predetermined resolution. In one or more embodiments, the second encoder component510bgenerates a plurality of feature maps with sequentially lowered resolutions (e.g., stepping a resolution down in several increments). Furthermore, the second encoder component510balso utilizes one or more neural network layers to generate a latent code based on a feature map with a lowered resolution. In one or more embodiments, the third encoder component510cof the encoder502utilizes a plurality of feature maps at a plurality of different resolutions to generate a spatial feature tensor ϕ′ based on the priors506. For instance, the third encoder component510cincludes a plurality of neural network layers in a "top-down" configuration for upsampling by aggregating a plurality of feature maps or feature sets at different resolutions (e.g., by merging features from E2TDwith the feature maps of the same spatial dimension from E2BU). The third encoder component510cthus incorporates information for generating the synthesized digital image508at a plurality of different resolutions to capture different levels of details. To illustrate, lower resolution features are semantically stronger and have more global information about all classes present in the priors506, while higher resolution features are more accurately aligned to the input layout. As illustrated inFIG.5, the decoder504includes a mapping component512ato transform a latent code z generated by the encoder502. For example, the mapping component512autilizes one or more neural network layers to modify the latent code while maintaining the same dimensionality. Additionally, the mapping component512atransforms the latent code to convert a normal distribution (or other distribution resulting from generating the latent code from the priors506) to a distribution that better matches a training dataset associated with training the decoder504. The class-specific object editing system102thus ensures that the decoder component512baccurately interprets the encoded data associated with the priors506. Additionally,FIG.5illustrates that the decoder504includes a decoder component512bto generate the synthesized digital image508. In one or more embodiments, the decoder component512bgenerates the synthesized digital image508from the spatial feature tensor generated by the encoder502. 
Furthermore, the decoder component512butilizes the modified latent code from the mapping component512ato generate the synthesized digital image508according to the modified distribution, thereby aligning the data in the spatial feature tensor to the training data associated with the generator neural network. In some embodiments, the decoder component512bgenerates the synthesized digital image508as a base synthesized digital image. According to one or more embodiments, the generator neural network also includes a feature cropping component514for use with class-specific generator neural networks. In particular, as previously indicated, the class-specific object editing system102synthesizes individual objects to generate accurate synthesized digital images. In one or more embodiments, the generator neural network utilizes the feature cropping component514to generate one or more cropped spatial feature tensors ϕ corresponding to one or more objects (e.g., class instance regions) based on labels or other object classes identified from the priors506. To illustrate, the feature cropping component514utilizes a fixed operation without learnable parameters to crop class instance regions from the spatial feature tensor generated by the third encoder component510c. After utilizing the feature cropping component514to generate cropped spatial feature tensors, the class-specific object editing system102utilizes class-specific decoders (e.g., as part of a plurality of class-specific generator neural networks) to generate synthesized objects. In particular, the class-specific object editing system102provides the cropped spatial feature tensors to the decoder component512bto generate synthesized objects of object classes corresponding to the particular class-specific generator neural networks. For instance, if the decoder504corresponds to a class-specific generator neural network trained for a particular object class (e.g., using a dataset including objects of the particular object class), the decoder504generates the synthesized digital image508as a synthesized object of the object class. Similarly, the class-specific object editing system102utilizes a plurality of different decoders corresponding to class-specific generator neural networks trained for a plurality of different object classes to generate synthesized objects of the different object classes. According to one or more embodiments, the class-specific object editing system102utilizes an architecture for a generator neural network to generate synthesized digital images as described in U.S. patent application Ser. No. 17/400,426 titled “GENERATING SYNTHESIZED DIGITAL IMAGES UTILIZING A MULTI-RESOLUTION GENERATOR NEURAL NETWORK”, filed Aug. 12, 2021, which is herein incorporated in its entirety. In one or more embodiments, the class-specific object editing system102utilizes one or more instances of a generator neural network to generate base synthesized digital images and synthesized objects to modify the base synthesized digital images. For example, a base generator neural network receives a segmentation map S (e.g., a semantic label map) and an instance edge map E to generate a base image Ibthat covers a scene. More specifically, Ib=Gb(cat(S, E)), where cat(·,·) is a channel-wise concatenation. Furthermore, Gbrepresents the base generator neural network including an encoder and decoder architecture, for example, as illustrated inFIG.5. 
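The feature cropping described above is a fixed operation without learnable parameters; a minimal sketch follows. The assumption that the spatial feature tensor sits at a known integer stride below the image resolution, and the channel counts of the placeholder maps, are illustrative choices rather than details from this description.

import torch

def crop_instance_features(spatial_tensor, bbox, stride):
    """Fixed (non-learnable) crop of a class-instance region from a spatial
    feature tensor.  bbox is (y0, x0, y1, x1) in image coordinates; stride is
    the downsampling factor between the image and the feature tensor."""
    y0, x0, y1, x1 = (v // stride for v in bbox)
    return spatial_tensor[:, :, y0:y1, x0:x1]

# Channel-wise concatenation of the segmentation map S and instance edge map E,
# mirroring Ib = Gb(cat(S, E)) described above (Gb itself is not sketched here).
S = torch.randn(1, 3, 64, 64)   # placeholder segmentation-map encoding
E = torch.randn(1, 1, 64, 64)   # placeholder instance edge map
priors = torch.cat([S, E], dim=1)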
The class-specific object editing system102utilizes a spatial feature tensor as input to the decoder to provide the generator neural network with guidance on the generated spatial structure. By sampling different latent codes z, the generator neural network generates different results given the same segmentation map. As mentioned, in one or more embodiments, the class-specific object editing system102utilizes a plurality of class-specific generator neural networks to improve the quality of smaller object classes. For instance, the class-specific object editing system102trains a plurality of class-specific generator neural networks to generate a plurality of synthesized objects (e.g., as inFIG.4). In one or more embodiments, the class-specific object editing system102utilizes context data associated with each of the object instances to improve the quality of the individual objects while also ensuring consistency in the orientation, color, or lighting among different objects. To provide context data around a target object instance to a class-specific generator neural network, the class-specific object editing system102determines a bounding box of the object instance from an instance map. In one or more embodiments, the class-specific object editing system102also enlarges the bounding box (e.g., 1.5 times or 2 times) to crop a real image Ireal_sceneand its segmentation map S. The class-specific object editing system102concatenates the cropped real image Ciand segmentation map Csto use as context C=cat(Ci, Cs) for the class-specific generator neural network Gcto generate a specific instance Ic=Gc(C). During training of the class-specific generator neural network Gc, the class-specific object editing system102crops Cifrom the real image Ireal_scene, rather than from the base image Ib. This provides a ground truth for supervising reconstruction of the context data and a hallucination of the foreground object, while the generated base image Ibmay include artifacts. In one or more embodiments, the class-specific object editing system102utilizes a feature cropping component within the class-specific generator neural network Gcto crop a spatial feature corresponding to the instance bounding box to obtain a spatial feature ϕ. Accordingly, the class-specific object editing system102generates the final synthesized object Ictightly within the instance bounding box, without additional context outside the instance bounding box. According to one or more embodiments, to force the generator neural network to use the context data C, the class-specific object editing system102applies a perceptual loss between the generated instance Icand the target instance Ireal_ins, which the class-specific object editing system crops directly from the real image Ireal_sceneusing the instance bounding box without enlarging the bounding box. Because background pixels in Ireal_insalready exist in C (i.e., Ci), the generator neural network automatically encodes the background region. To prevent the generator neural network from also automatically encoding the foreground region, the class-specific object editing system utilizes one of a plurality of methods. For instance, the class-specific object editing system102generates a digital mask and masks out the foreground region with zeroes. 
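A minimal sketch of assembling the context input C=cat(Ci, Cs) described above, using plain NumPy arrays. The 2x enlargement factor and the zero-masking of the foreground are illustrative choices (the description also mentions 1.5x enlargement, and blurring the foreground, discussed next, as an alternative to masking).

import numpy as np

def build_context(real_image, seg_map, bbox, enlarge=2.0):
    """Crop an enlarged region around the instance from the real image and its
    segmentation map, zero out the foreground pixels, and concatenate the two
    crops channel-wise to form the context C = cat(Ci, Cs)."""
    h, w = real_image.shape[:2]
    y0, x0, y1, x1 = bbox
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    hh, hw = (y1 - y0) * enlarge / 2.0, (x1 - x0) * enlarge / 2.0
    ey0, ex0 = max(0, int(cy - hh)), max(0, int(cx - hw))
    ey1, ex1 = min(h, int(cy + hh)), min(w, int(cx + hw))
    Ci = real_image[ey0:ey1, ex0:ex1].copy()
    Cs = seg_map[ey0:ey1, ex0:ex1].copy()
    # Mask the (un-enlarged) foreground region with zeros so the generator must
    # hallucinate the object from the surrounding context.
    Ci[y0 - ey0:y1 - ey0, x0 - ex0:x1 - ex0] = 0.0
    return np.concatenate([Ci, Cs], axis=-1)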
Alternatively, the class-specific object editing system102blurs the foreground region to retain the low frequency information such that Icroughly follows the input color theme with the use of the perceptual loss. The class-specific object editing system102thus trains the generator neural network to gather hints from the context data of the target instance and generate foreground pixels consistent with the background. In one or more embodiments, the class-specific object editing system102utilizes an adversarial loss, R1regularization, and path length regularization, collectively referred to as ℒgan. For the adversarial loss, the real distributions are {Ireal_scene} and {Ireal_ins} for the base generator neural network and class-specific generator neural network, respectively. The class-specific object editing system102also regularizes the encoder by applying KL-Divergence to the output of the encoder (e.g., the latent code z), thus forcing the latent code to follow a normal distribution to support multi-modal synthesis during inference, denoted ℒkl. The class-specific object editing system102utilizes the perceptual loss ℒperceptual=Σl∥Vl(Igen)−Vl(Ireal)∥1, where Vl(·) represents the output of the lthlayer of a pretrained convolutional neural network. Additionally, Igenis Iband Ic, and Irealis Ireal_sceneand Ireal_ins, in the base generator neural network and the class-specific generator neural network, respectively. Accordingly, the overall training loss is ℒ=ℒgan+λ1*ℒkl+λ2*ℒperceptual. In one or more embodiments, the loss weights and the frequency of regularization within ℒgan are predetermined values (e.g., 0.01 and 1 for λ1and λ2, respectively). In one or more embodiments, to composite instances generated by class-specific generator neural networks, the class-specific object editing system102creates an alpha mask of the instance using a ground-truth instance mask Ins, where Malpha(i, j)=1 if Ins(i, j)=target_instance_idx, and Malpha(i, j)=0 otherwise. Ins is a two-dimensional map with different values at each location, and each value is the index for a unique instance. The target_instance_idx is the index for the current target instance. The class-specific object editing system102then resizes and relocates the generated instance Icinto the correct position according to the Malphato obtain the relocated generated instance Ic_relocation. Additionally, to avoid potential small gaps due to quantization during resizing/relocating, the class-specific object editing system102dilates boundaries of both Malphaand Ic_relocation. The composition image Icompis Icomp=M′alpha×I′c_relocation+(1−M′alpha)×Ib, where M′alphaand I′c_relocationare dilated versions of Malphaand Ic_relocation. After completing composition for the first instance, the class-specific object editing system102uses Icompas the base image Ibfor the next instance. FIG.6Aillustrates a plurality of images comparing a plurality of base synthesized digital images of a particular scene (i.e., a bedroom scene with furniture) to a plurality of modified synthesized digital images. Specifically,FIG.6Aillustrates a first set of base synthesized digital images generated utilizing a base generator neural network. The first set of base synthesized digital images includes a first base synthesized digital image600generated based on a semantic label map602for the particular scene. Additionally,FIG.6Aillustrates a close-up view604of an object (e.g., a chest/dresser) within the scene of the base synthesized digital image600. 
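A minimal sketch of the instance compositing described above follows. SciPy's binary dilation and the dilation radius are illustrative choices, the generated instance is assumed to have already been resized to the bounding box, and only the mask is dilated here (the description dilates the relocated instance as well).

import numpy as np
from scipy.ndimage import binary_dilation

def composite_instance(base_image, instance_patch, instance_map, target_idx, bbox, dilate_px=3):
    """Icomp = M'alpha x I'c_relocation + (1 - M'alpha) x Ib, with the alpha
    mask taken from the ground-truth instance map and dilated at its boundary."""
    alpha = (instance_map == target_idx)                                   # Malpha
    alpha = binary_dilation(alpha, iterations=dilate_px).astype(np.float32)  # M'alpha
    # Relocate the generated instance into its bounding-box position
    # (instance_patch is assumed to already be resized to the bounding box).
    relocated = np.zeros_like(base_image)
    y0, x0, y1, x1 = bbox
    relocated[y0:y1, x0:x1] = instance_patch[: y1 - y0, : x1 - x0]
    comp = alpha[..., None] * relocated + (1.0 - alpha[..., None]) * base_image
    return comp  # used as the base image Ib for the next instance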
FIG.6Aalso illustrates a first set of modified synthesized digital images including a modified synthesized digital image606generated utilizing the base synthesized digital image600and a plurality of class-specific generator neural networks. Furthermore,FIG.6Aillustrates a composition semantic map608including a plurality of labels corresponding to a plurality of objects to replace from the base synthesized digital image600.FIG.6Aalso illustrates a close-up view610of a synthesized object to replace the object shown in the close-up view604of the base synthesized digital image600. As shown, the synthesized object in the modified synthesized digital image606has more accurate texture and shape details than the object replaced in the base synthesized digital image600. Similarly, the other modified synthesized digital images have improved object details over the base synthesized digital images. FIG.6Billustrates a plurality of images comparing a plurality of base synthesized digital images of an additional scene (i.e., a person against a blurred background) to a plurality of modified synthesized digital images. Specifically,FIG.6Billustrates a second set of base synthesized digital images generated utilizing a base generator neural network trained on a dataset including images similar to the additional scene. The second set of base synthesized digital images includes a base synthesized digital image612generated based on a semantic label map614for the scene. Additionally,FIG.6Billustrates a close-up view616of an object (e.g., a human face) within the scene of the base synthesized digital image612. FIG.6Balso illustrates a second set of modified synthesized digital images including a modified synthesized digital image618generated utilizing the base synthesized digital image612and a plurality of class-specific generator neural networks. Furthermore,FIG.6Billustrates a composition semantic map620including a plurality of labels corresponding to a plurality of objects to replace from the base synthesized digital image612.FIG.6Balso illustrates a close-up view622of a synthesized object to replace the object shown in the close-up view616of the base synthesized digital image612. The synthesized object in the modified synthesized digital image618has more accurate texture and shape details than the object replaced in the base synthesized digital image612. As illustrated, although the objects and scenes inFIGS.6A-6Bare different (e.g., different object classes), by utilizing a plurality of class-specific generator neural networks, the class-specific object editing system102provides significantly improved object details. FIG.7illustrates a plurality of synthesized digital images corresponding to a plurality of scenes. In particular,FIG.7illustrates comparisons of sets of base synthesized digital images, modified synthesized digital images with context data for training class-specific generator neural networks, and modified synthesized digital images without context data for training class-specific generator neural networks. To illustrate, a base synthesized digital image700includes synthesized foreground and background objects corresponding to a bedroom scene. The class-specific object editing system102generates the base synthesized digital image700utilizing a generator neural network with no feature cropping. FIG.7also illustrates a first modified synthesized digital image702generated utilizing a class-specific generator neural network with context data for a synthesized object. 
Furthermore,FIG.7illustrates a second modified synthesized digital image704generated utilizing the class-specific generator neural network without context data for a synthesized object. As shown, the synthesized object in the first modified synthesized digital image702is more consistent with the rest of the scene than the synthesized object in the second modified synthesized digital image704. The other modified synthesized digital images ofFIG.7that utilize context data for synthesized objects also provide more accurate details and better consistency (e.g., better lighting and orientation) than the other modified synthesized digital images without context data. FIG.8illustrates comparisons of synthesized digital images generated by a conventional system and synthesized digital images generated by the class-specific object editing system102. More specifically,FIG.8illustrates a semantic label map800for generating a synthesized digital image. To illustrate, the conventional system utilizes a generator neural network with spatially-adaptive normalization, as described by Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu in “Semantic image synthesis with spatially-adaptive normalization” in Conference on Computer Vision and Pattern Recognition (2019) (“SPADE”), to generate a first synthesized digital image802.FIG.8also illustrates a second synthesized digital image804that the class-specific object editing system102generates utilizing a class-specific generator neural network.FIG.8also shows that the class-specific object editing system102is more accurate than the conventional system in generating out-of-distribution synthesized objects (e.g., a vehicle on a sidewalk). FIG.9illustrates a plurality of synthesized digital images corresponding to a plurality of different scenes. In particular,FIG.9illustrates that the class-specific object editing system102generates a plurality of different synthesized digital images for the same scene. More specifically, as illustrated inFIG.9, the class-specific object editing system102generates a plurality of synthesized digital images with different versions of synthesized objects replacing a single object in a base synthesized digital image. For example,FIG.9illustrates a set of synthesized digital images900a-900eincluding a plurality of synthesized objects (e.g., beds) including different details generated by a class-specific generator neural network. To illustrate, the beds generated by the class-specific generator neural network include different instances for a single object class, resulting in a plurality of different bed covers with different patterns. Thus, in one or more embodiments, the class-specific object editing system102replaces a single object in a digital image without affecting other portions of the digital image. To illustrate, the class-specific object editing system102masks out an object instance to replace and provides the remaining image as context for the class-specific generator neural network. The class-specific object editing system102then generates a synthesized object and replaces the object instance with the synthesized object. According to an embodiment, experimental data includes quantitative and qualitative evaluations comparing results of a base generator neural network and a composition model that utilizes class-specific generator neural networks with conventional systems. For example, the experimental data includes comparisons based on a bedroom dataset, a full human body dataset, and a cityscape dataset. 
In particular, the bedroom dataset combines two datasets including images according to a “bedroom” category and a “hotel_room” category. Furthermore, the full human body dataset includes high resolution images of full human bodies with blurred backgrounds and annotated with 24 classes such as faces, upper clothes, left shoes, and right shoes. The cityscapes dataset includes street scene images. The experimental data uses the three datasets to train a base generator neural network and baselines. The base generator neural network provides some level of accuracy for large object classes in scenes (e.g., beds in a bedroom scene or large background categories like walls and floors). Additionally, the class-specific object editing system102trains class-specific generator neural networks on classes of objects that are typically small and not synthesized well by the base generator neural network and baselines. Due to the class-specific object editing system102utilizing separate generator neural networks for separate classes, the experimental data utilizes additional datasets as extra training data sources for generating bedrooms (e.g., furniture, indoor rooms) and cityscapes (e.g., cityscapes extra, pedestrians). Table 1 below summarizes the selected classes and training sources.

Scene        Classes                                   Training data sources
Bedroom      Bed, chair, table, chest, lamp, pillow    Bedroom + furniture; Bedroom + furniture + indoor rooms
Human        Shoes, face, upper clothes                Full human body dataset
Cityscapes   Car                                       Cityscapes
             Person                                    Cityscapes + cityscapes extra + pedestrians

The class-specific object editing system102trained the base generator neural networks to generate 512×512 resolution images for the bedroom and full human body datasets and 1024×512 images for the cityscapes dataset. Because the resolution of each class varies, the class-specific object editing system102trains the class-specific generator neural networks at 128×128 or 256×256 depending on the average size of each class. The class-specific object editing system102also trains all classes—except for the person category in cityscapes—with a blurred foreground region so that the generator neural network attempts to maintain the color tone of instances in a base image during inference time. Additionally, in one or more embodiments, the class-specific object editing system102uses masking, rather than blurring, for synthesizing persons in cityscapes. As mentioned, the experimental data indicates a comparison of the class-specific object editing system102and the base generator neural network against SPADE and two variants of SPADE—“LGGAN” as described by Hao Tang, Dan Xu, Yan Yan, Philip H. S. Torr, and Nicu Sebe in “Local class-specific and global image-level generative adversarial networks for semantic-guided scene generation” in Conference on Computer Vision and Pattern Recognition (2020); and “OASIS” as described by Vadim Sushko, Edgar Schonfeld, Dan Zhang, Juergen Gall, Bernt Schiele, and Anna Khoreva in “You only need adversarial supervision for semantic image synthesis” in International Conference on Learning Representations (2021). For a fair comparison, the experimental data includes the conventional systems trained at the higher resolution (with default parameters) and provided with an instance map. SPADE and OASIS resulted in significant memory usage (i.e., ˜16 GB per image to train 512×512 bedroom images), while the class-specific object editing system102used ˜4 GB per such image. 
LGGAN was incapable of fitting a single image on a 32 GB V100 GPU for the bedroom dataset due to the large number of parameters and separate convolutional layers for each class and resulted in slow training for the other datasets with fewer classes. Table 2 illustrates measurements of Frechet Inception Distance (“FID”) scores comparing the conventional systems with the class-specific object editing system102(“System 102”).

Datasets     SPADE    OASIS    LGGAN    System 102
Bedroom      44.38    39.21    N/A      33.17
Human        38.53    8.65     N/A      7.22
Cityscapes   59.68    50.90    61.46    47.07

As illustrated above, the class-specific object editing system102achieves lower FID scores than the other three systems. Additionally, in an experimental embodiment, generated synthesized images were shown to a plurality of people with a segmentation map and two generated images side-by-side. The comparison utilizes a two-alternative forced choice between the two images to determine which image looked more realistic. Table 3 below illustrates the human evaluation results indicating that people generally preferred the results of the system102over other systems. The class-specific object editing system102improves the results in the cityscape dataset by utilizing class-specific generators for smaller objects (e.g., cars and people).

Datasets     System 102 vs SPADE    System 102 vs OASIS    System 102 vs LGGAN
Bedroom      90.0%                  73.2%                  N/A
Human        82.4%                  63.2%                  N/A
Cityscapes   59.2%                  35.2% (83.6%)          62.0%

Additionally, the experimental data includes a comparison of qualitative results for SPADE, OASIS, and the base generator neural network of the class-specific object editing system102. The class-specific object editing system102generated images that looked more realistic. For example, the class-specific object editing system102generated bedrooms with bed sheets containing more textures and generated humans with clothes containing more wrinkles. Furthermore, OASIS generated images with visible boundary artifacts on human images. The class-specific object editing system102is also able to generate multiple images corresponding to the same segmentation map by sampling different latent codes z. The experimental data further utilizes per-class FID scores comparing the base generator neural network with the class-specific generator neural networks. In particular, the class-specific object editing system102crops each instance from an original base image and a composition image (e.g., an image including synthesized objects from class-specific generator neural networks) and resizes the cropped portions to the average crop size over all instances in the class. The experimental data also includes human evaluations of the images. Table 4 below includes per-class FID scores of the base generator neural network and the composition model of class-specific generator neural networks and the percentage of time users prefer the class-specific generator neural network over the base generator neural network.

Class        Chest     Chair     Pillow    Lamp     Table     Car      Person    Face     Shoe
FID (base)   142.87    166.12    125.03    86.65    126.21    44.45    98.99     15.12    33.12
FID (comp)   132.12    155.12    136.79    80.12    119.44    30.42    82.34     13.54    29.87
User         71%       70%       33%       62%      60%       94%      89%       84%      69%

Additionally, compositing pixels generated by the class-specific generator neural network on top of a base image generated by the base generator neural network provides improved results over the base image. Table 6 below also illustrates results of an ablation study comparing the impact of training class-specific generator neural networks with additional training data for cityscapes. 
           FID ↓                                                          User Study ↑
           I: Base    II: Composition w/o extra    III: Composition w/ extra    I vs. II    I vs. III
Car        44.45      36.71                        30.42                        23%/77%     6%/94%
Person     98.99      88.47                        82.34                        13%/87%     11%/89%

As shown, the class-specific generator neural networks performed better than the base generator neural network with and without using additional data. The class-specific weights and centrally aligned data thus provide an accuracy advantage over the base generator neural network alone. Using additional training data further improves the FID scores and user preference performance. Additionally, the experimental data indicates improved performance by providing context information C as input to the class-specific generator neural networks. For example, as previously indicated with respect toFIG.7, providing the context data to the class-specific generator neural networks causes the class-specific generator neural networks to generate objects that are consistent with the surrounding lighting condition, while generator neural networks without the context data produced inconsistent results. In particular, a lamp generator trained with context does not use blurred foreground information during training and inference, so the network relies on context to determine the lamp color. Additionally, without context data, the class-specific generator neural networks may result in incorrect inference of gender or skin color. FIG.10illustrates a detailed schematic diagram of an embodiment of the class-specific object editing system102described above. As shown, the class-specific object editing system102is implemented in a digital image system110on computing device(s)1000(e.g., a client device and/or server device as described inFIG.1, and as further described below in relation toFIG.12). Additionally, in one or more embodiments, the class-specific object editing system102includes, but is not limited to, an image generation manager1002, an object class manager1004, a class-specific model manager1006, an image modification manager1008, and a data storage manager1010. The class-specific object editing system102can be implemented on any number of computing devices. In one or more embodiments, the class-specific object editing system102is implemented in a distributed system of server devices for synthetic digital image generation. In alternative embodiments, the class-specific object editing system102is implemented within one or more additional systems. Alternatively, the class-specific object editing system102may be implemented on a single computing device such as a single client device. In one or more embodiments, each of the components of the class-specific object editing system102is in communication with other components using any suitable communication technologies. Additionally, in some embodiments, the components of the class-specific object editing system102are in communication with one or more other devices including other computing devices of a user, server devices (e.g., cloud storage devices), licensing servers, or other devices/systems. It will be recognized that although the components of the class-specific object editing system102are shown to be separate inFIG.10, any of the subcomponents may be combined into fewer components, such as into a single component, or divided into more components as may serve a particular implementation. 
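One possible, purely illustrative arrangement of these components is sketched below with hypothetical callables standing in for the managers listed above; the actual division of responsibilities is as described in the text rather than dictated by this sketch.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

BBox = Tuple[int, int, int, int]

@dataclass
class ClassSpecificObjectEditingSystemSketch:
    """Illustrative wiring of the managers described above; the callables stand
    in for the image generation, object class, class-specific model, and image
    modification managers (hypothetical API, not taken from this description)."""
    generate_base: Callable[..., object]                 # image generation manager
    find_instances: Callable[..., List[Tuple[str, BBox]]]  # object class manager
    generators: Dict[str, Callable]                      # class-specific model manager
    replace: Callable[..., object]                       # image modification manager
    cache: dict = field(default_factory=dict)            # data storage manager

    def synthesize(self, semantic_label_map, edge_map):
        image = self.generate_base(semantic_label_map, edge_map)
        for class_name, bbox in self.find_instances(semantic_label_map):
            generator = self.generators.get(class_name)
            if generator is not None:                    # only selected classes are replaced
                image = self.replace(image, generator, bbox)
        self.cache["last_result"] = image
        return image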
Furthermore, although the components ofFIG.10are described in connection with the class-specific object editing system102, in one or more embodiments, at least some of the components for performing operations in conjunction with the class-specific object editing system102described herein are implemented on other devices within the environment. In some embodiments, the components of the class-specific object editing system102include software, hardware, or both. For example, the components of the class-specific object editing system102include one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices (e.g., the computing device(s)1000). When executed by the one or more processors, the computer-executable instructions of the class-specific object editing system102can cause the computing device(s)1000to perform the operations described herein. Alternatively, the components of the class-specific object editing system102can include hardware, such as a special purpose processing device to perform a certain function or group of functions. Additionally, or alternatively, the components of the class-specific object editing system102can include a combination of computer-executable instructions and hardware. Furthermore, the components of the class-specific object editing system102performing the functions described herein with respect to the class-specific object editing system102may, for example, be implemented as part of a stand-alone application, as a module of an application, as a plug-in for applications, as a library function or functions that may be called by other applications, and/or as a cloud-computing model. Thus, the components of the class-specific object editing system102may be implemented as part of a stand-alone application on a personal computing device or a mobile device. Alternatively, or additionally, the components of the class-specific object editing system102may be implemented in any application that provides digital image modification, including, but not limited to ADOBE® PHOTOSHOP®, ADOBE® AFTER EFFECTS®, ADOBE® ILLUSTRATOR®, ADOBE® PHOTOSHOP® ELEMENTS, and ADOBE® CREATIVE CLOUD® software. “ADOBE,” “PHOTOSHOP,” “AFTER EFFECTS,” “ILLUSTRATOR,” and “CREATIVE CLOUD” are either registered trademarks or trademarks of Adobe Inc. in the United States and/or other countries. In one or more embodiments, the image generation manager1002provides generation and management of synthesized digital images. For example, the image generation manager1002manages one or more generator neural networks to generate synthesized digital images. To illustrate, the image generation manager1002utilizes a base generator neural network1003to generate base synthesized digital images from priors such as semantic label maps and/or edge maps. In one or more embodiments, the image generation manager1002generates synthesized digital images in connection with generating or augmenting one or more datasets for training generator neural networks or other machine-learning models. Additionally, the object class manager1004manages classes of objects for generating synthesized digital images. For instance, the object class manager1004utilizes a semantic label map or other segmentation map to determine a plurality of objects and object positions associated with a plurality of pixel locations for generating a synthesized digital image. 
In addition, the object class manager1004determines the object classes for objects in a semantic label map including foreground objects and background objects. The object class manager1004communicates with one or more other components of the class-specific object editing system102(e.g., the image generation manager1002and the class-specific model manager1006). Furthermore, the class-specific model manager1006selects class-specific generator neural networks1007for synthesizing digital images. In particular, the class-specific model manager1006communicates with the object class manager1004to determine object classes in a synthesized digital image. To illustrate, the class-specific model manager1006selects class-specific generator neural networks1007corresponding to object classes identified from a semantic label map. In one or more embodiments, the class-specific model manager1006also facilitates training and management of the class-specific generator neural networks1007. In one or more embodiments, the image modification manager1008modifies synthesized digital images utilizing synthesized objects. Specifically, the image modification manager1008obtains synthesized objects generated by the class-specific generator neural networks1007selected by the class-specific model manager1006. The image modification manager1008inserts the synthesized objects into synthesized digital images to replace corresponding objects. For instance, the image modification manager1008utilizes alpha blending to insert synthesized objects into synthesized digital images. The class-specific object editing system102also includes a data storage manager1010(that comprises a non-transitory computer memory/one or more memory devices) that stores and maintains data associated with processing digital images. For example, the data storage manager1010stores data associated with generating and modifying synthesized digital images and individual objects within synthesized digital images. To illustrate, the data storage manager1010stores information associated with semantic label maps, edge maps, synthesized digital images, synthesized objects, digital masks, and one or more generator neural networks. Turning now toFIG.11, this figure shows a flowchart of a series of acts1100of generating a modified synthesized digital image utilizing class-specific object editing systems for individual objects. WhileFIG.11illustrates acts according to one embodiment, alternative embodiments may omit, add to, reorder, and/or modify any of the acts shown inFIG.11. The acts ofFIG.11can be performed as part of a method. Alternatively, a non-transitory computer readable medium can comprise instructions, that when executed by one or more processors, cause a computing device to perform the acts ofFIG.11. In still further embodiments, a system can perform the acts ofFIG.11. As shown, the series of acts1100includes an act1102of generating a synthesized digital image including objects. For example, act1102involves generating a synthesized digital image comprising one or more objects by utilizing an image synthesis neural network. Act1102can involve generating the synthesized digital image based on a semantic label map. Additionally, act1102can involve generating the synthesized digital image based further on an edge map. The series of acts1100also includes an act1104of determining classes associated with the objects. For example, act1104involves determining one or more classes associated with the one or more objects of the synthesized digital image. 
Act1104can involve determining a first class associated with a first object and a second class associated with a second object, the first class being different than the second class. Alternatively, act1104can involve determining that a first object and a second object of the synthesized digital image share a class. For example, act1104can involve determining classes associated with a plurality of objects from a semantic label map. Additionally, the series of acts1100includes an act1106of selecting class-specific generator neural networks for the classes. For example, act1106involves selecting one or more class-specific generator neural networks based on the one or more classes associated with the one or more objects. Act1106can involve selecting a first class-specific generator neural network corresponding to the first class and a second class-specific generator neural network corresponding to the second class. Furthermore, the series of acts1100includes an act1108of replacing the objects in the synthesized digital image using the class-specific generator neural networks. For example, act1108involves replacing the one or more objects in the synthesized digital image by utilizing the one or more class-specific generator neural networks according to the one or more classes associated with the one or more objects. Act1108can involve generating a first synthesized object by utilizing the first class-specific generator neural network and a second synthesized object by utilizing the second class-specific generator neural network. Act1108can also involve replacing the first object with the first synthesized object within the synthesized digital image. Act1108can involve obtaining image context data for the second object based on the first synthesized object within the synthesized digital image. Act1108can also involve replacing the second object with the second synthesized object within the synthesized digital image according to the image context data for the second object. Act1108can involve cropping the synthesized digital image to a bounding box corresponding to an object of the one or more objects. Act1108can also involve generating a synthesized object by utilizing a class-specific generator neural network to replace the object within the bounding box. For example, act1108can involve utilizing the class-specific generator neural network based on image context data from image pixels proximate the object of the one or more objects within the bounding box. Act1108can further involve inserting the synthesized object into the synthesized digital image at a position of the object of the one or more objects within the synthesized digital image. Act1108can also involve inserting the one or more objects into the synthesized digital image utilizing alpha blending. Act1108can involve cropping a semantic label map utilized to generate the synthesized digital image to a region corresponding to the bounding box in the synthesized digital image. Additionally, act1108can involve generating a digital mask to mask the object out of the bounding box in the synthesized digital image. Act1108can then involve generating the synthesized object by utilizing the class-specific generator neural network based on the region of the semantic label map and the bounding box with the object masked out of the bounding box according to the digital mask. Act1108can alternatively involve blurring a region corresponding to the object within the bounding box. 
Furthermore, act1108can involve generating, utilizing a first class-specific generator neural network, a first synthesized object based on a first cropped portion of the synthesized digital image. Act1108can involve inserting the first synthesized object into the synthesized digital image. Act1108can involve generating, utilizing a second class-specific generator neural network, a second synthesized object based on a second cropped portion of the synthesized digital image, the second cropped portion comprising at least a portion of the first synthesized object. Alternatively, act1108can involve generating, utilizing a second class-specific generator neural network, a second synthesized object based on a second cropped portion of the synthesized digital image, the second cropped portion excluding the first synthesized object. Act1108can then involve inserting the second synthesized object into the synthesized digital image. Act1108can also involve extracting a plurality of feature sets corresponding to the first object at a plurality of different resolutions. Act1108can also involve determining a spatial feature tensor for the first object by aggregating the plurality of feature sets at the plurality of different resolutions. Act1108can then involve generating, utilizing the first class-specific generator neural network, the first synthesized object based on the spatial feature tensor. In one or more embodiments, act1108involves generating, utilizing an encoder of a class-specific generator neural network, a spatial feature tensor for an object of the one or more objects. Act1108can also involve generating, utilizing a decoder of the class-specific generator neural network, a synthesized portion of the synthesized digital image based on the spatial feature tensor and image context data from a region of the synthesized digital image surrounding the object of the one or more objects. Embodiments of the present disclosure may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present disclosure also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. In particular, one or more of the processes described herein may be implemented at least in part as instructions embodied in a non-transitory computer-readable medium and executable by one or more computing devices (e.g., any of the media content access devices described herein). In general, a processor (e.g., a microprocessor) receives instructions from a non-transitory computer-readable medium (e.g., a memory, etc.), and executes those instructions, thereby performing one or more processes, including one or more of the processes described herein. Computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are non-transitory computer-readable storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the disclosure can comprise at least two distinctly different kinds of computer-readable media: non-transitory computer-readable storage media (devices) and transmission media. 
Non-transitory computer-readable storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmissions media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media. Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to non-transitory computer-readable storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that non-transitory computer-readable storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media. Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general-purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. In some embodiments, computer-executable instructions are executed on a general-purpose computer to turn the general-purpose computer into a special purpose computer implementing elements of the disclosure. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims. Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like. 
The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices. Embodiments of the present disclosure can also be implemented in cloud computing environments. In this description, “cloud computing” is defined as a model for enabling on-demand network access to a shared pool of configurable computing resources. For example, cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources. The shared pool of configurable computing resources can be rapidly provisioned via virtualization and released with low management effort or service provider interaction, and then scaled accordingly. A cloud-computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud-computing model can also expose various service models, such as, for example, Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”). A cloud-computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth. In this description and in the claims, a “cloud-computing environment” is an environment in which cloud computing is employed. FIG.12illustrates a block diagram of exemplary computing device1200that may be configured to perform one or more of the processes described above. One will appreciate that one or more computing devices such as the computing device1200may implement the system(s) ofFIG.1. As shown byFIG.12, the computing device1200can comprise a processor1202, a memory1204, a storage device1206, an I/O interface1208, and a communication interface1210, which may be communicatively coupled by way of a communication infrastructure1212. In certain embodiments, the computing device1200can include fewer or more components than those shown inFIG.12. Components of the computing device1200shown inFIG.12will now be described in additional detail. In one or more embodiments, the processor1202includes hardware for executing instructions, such as those making up a computer program. As an example, and not by way of limitation, to execute instructions for dynamically modifying workflows, the processor1202may retrieve (or fetch) the instructions from an internal register, an internal cache, the memory1204, or the storage device1206and decode and execute them. The memory1204may be a volatile or non-volatile memory used for storing data, metadata, and programs for execution by the processor(s). The storage device1206includes storage, such as a hard disk, flash disk drive, or other digital storage device, for storing data or instructions for performing the methods described herein. The I/O interface1208allows a user to provide input to, receive output from, and otherwise transfer data to and receive data from computing device1200. The I/O interface1208may include a mouse, a keypad or a keyboard, a touch screen, a camera, an optical scanner, network interface, modem, other known I/O devices or a combination of such I/O interfaces. 
The I/O interface1208may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers. In certain embodiments, the I/O interface1208is configured to provide graphical data to a display for presentation to a user. The graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation. The communication interface1210can include hardware, software, or both. In any event, the communication interface1210can provide one or more interfaces for communication (such as, for example, packet-based communication) between the computing device1200and one or more other computing devices or networks. As an example, and not by way of limitation, the communication interface1210may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI. Additionally, the communication interface1210may facilitate communications with various types of wired or wireless networks. The communication interface1210may also facilitate communications using various communication protocols. The communication infrastructure1212may also include hardware, software, or both that couples components of the computing device1200to each other. For example, the communication interface1210may use one or more networks and/or protocols to enable a plurality of computing devices connected by a particular infrastructure to communicate with each other to perform one or more aspects of the processes described herein. To illustrate, the digital content campaign management process can allow a plurality of devices (e.g., a client device and server devices) to exchange information using various communication networks and protocols for sharing information such as electronic messages, user interaction information, engagement metrics, or campaign management resources. In the foregoing specification, the present disclosure has been described with reference to specific exemplary embodiments thereof. Various embodiments and aspects of the present disclosure(s) are described with reference to details discussed herein, and the accompanying drawings illustrate the various embodiments. The description above and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure. The present disclosure may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. For example, the methods described herein may be performed with less or more steps/acts or the steps/acts may be performed in differing orders. Additionally, the steps/acts described herein may be repeated or performed in parallel with one another or in parallel with different instances of the same or similar steps/acts. The scope of the present application is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
103,983
11861763
DETAILED DESCRIPTION Overview Digital content creators often use color pickers to select colors in the digital content creation process, such as selecting color to be applied to a digital image as part of an image editing process, selecting color to be applied to text, selecting color to be applied as text background, and so forth. Conventional color pickers, however, focus on selection of a single color, and do not consider how different color selections appear with one another when integrated into digital content. Consequently, utilizing conventional color pickers in the digital content creation process produces digital content having a visual appearance that is difficult, if not impossible, for many individuals to perceive. For instance, individuals often have difficulty reading text that does not contrast with its background. This difficulty is exacerbated if the individual suffers from a color vision deficiency that lowers the contrast between foreground and background even further. Providing a minimum luminance contrast ratio between foreground and background colors makes text more readable for individuals who are unable to perceive the full range of colors as well as individuals who are unable to perceive color. However, conventional color pickers offer no indication as to a resulting contrast ratio from different colors when presented together, much less identify colors that satisfy a contrast ratio threshold relative to a selected color. As such, conventional color pickers force content creators to compute a contrast ratio for each potential color pair before ultimately selecting a color pair to be used in their design. To address these issues, techniques for generating a contrast ratio color picker based on a selected color are described. In one example, a contrast ratio color picker system identifies a color selected as part of a digital content creation process, such as a color for stylizing a display of text, a color for digital graphics, and so forth. The contrast ratio color picker system identifies a relative luminance value of the selected color, as well as a contrast ratio relative to the selected color for use in the content creation process. In implementations, the contrast ratio color picker system automatically identifies a contrast ratio threshold based on a type of digital content to which the selected color is applied. For instance, text that is larger and has wider character strokes is easier to read by a viewing user at lower contrast than text that is smaller with narrower character strokes. Similarly, graphics that do not include legible text content are easier to identify and comprehend by a viewing user at an even lower contrast relative to small text. Accounting for this relative ease in readability and comprehension, the contrast ratio color picker system automatically designates an appropriate contrast ratio threshold for the digital content to which the selected color is applied. Given the relative luminance value and the contrast ratio threshold, the contrast ratio color picker system is configured to compute the contrast ratio between the selected color and every other color inside a color gamut to identify colors that fail to satisfy the contrast ratio threshold. However, because color gamuts include millions of different colors (e.g., there are 16,777,216 colors inside a 24-bit color gamut), the corresponding number of contrast ratio computations requires significant time and computational resources to complete. 
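As a concrete illustration of threshold selection by content type, the sketch below maps content categories to thresholds. The specific values (4.5:1 for small text and 3:1 for large text and graphics) follow commonly used WCAG guidance and are assumptions for illustration, not thresholds mandated by this description; the final line shows the scale of an exhaustive per-color comparison over a 24-bit gamut.

def contrast_threshold(content_type: str) -> float:
    """Pick a contrast ratio threshold based on the kind of content the
    selected color will be applied to (illustrative WCAG-style values)."""
    thresholds = {
        "small_text": 4.5,   # small text with narrow strokes needs the most contrast
        "large_text": 3.0,   # larger, wider-stroke text is readable at lower contrast
        "graphics": 3.0,     # non-text graphics are comprehensible at lower contrast
    }
    return thresholds[content_type]

# Exhaustively checking a selected color against every 24-bit RGB color would
# require one contrast-ratio computation per color:
print(2 ** 24)  # 16,777,216 computations, before accounting for display updates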
To mitigate the requisite computational resources, the contrast ratio color picker system selects a palette comprising a subset of colors from the color gamut, such as a plane of color representing a cross section of a color gamut geometry. The contrast ratio color picker system further identifies a resolution of a display device configured to output the color palette, and computes the contrast ratio between the selected color and each color in the color palette on a per-pixel basis, thereby performing only the minimum number of computations required by the display device resolution. Colors failing to satisfy the contrast ratio threshold are then masked, visually occluded, or otherwise obscured to generate a filtered palette that displays only colors satisfying the contrast ratio threshold. The filtered palette, consisting of colors satisfying the contrast ratio threshold, is then output as part of a contrast ratio color picker, which enables a designer to readily identify a range of colors that are useable together with the selected color in creating the digital content while maintaining the contrast ratio threshold for the particular type of digital content. The contrast ratio color picker system is further configured to output the contrast ratio color picker with parameter controls that enable a user to modify the selected color, the relative luminance value, the threshold contrast ratio, and/or parameters defining the color space palette used to generate the filtered palette, thereby offering increased color selection functionality relative to conventional color pickers. Further discussion of these and other examples is included in the following sections and shown in corresponding figures. Term Examples As used herein, the term “color” refers to a property that distinguishes among different kinds of light, defined in terms of human perception, and technically described as recognized by a viewer in terms of hue, luminance and saturation. Luminance refers to the brightness of the light, while hue refers to the visual property that distinguishes different colors from one another (e.g., red from pink, blue from purple, green from yellow, and so forth), and saturation refers to how pure a color is relative to its grey version. As used herein, the term “relative luminance” refers to the relative brightness of any point in a color space, normalized to zero for a darkest black in the color space and one for a lightest white in the color space. In certain color spaces where relative luminance is explicitly represented, such as the XYZ and xyY color spaces, relative luminance is represented by “Y.” As used herein, the term “contrast ratio” refers to a ratio of the relative luminances of foreground and background colors, such as text rendered on a background. Contrast ratio is represented as Y1/Y2, where Y1 represents the relative luminance of the lighter of the colors and Y2 is the relative luminance of the darker of the colors. In some implementations, contrast ratio is represented using an offset value to compensate for contrast ratios that occur when a relative luminance value is at or near zero, for ambient light effects, and so forth. As an example, an offset value of 0.05 results in representing contrast ratio as (Y1+0.05)/(Y2+0.05). Contrast ratios can range from 1 (commonly expressed as 1:1) to 21 (commonly expressed as 21:1). As used herein, the term “color gamut” refers to the full range of colors in a color space visible to the human eye. 
As used herein, the term “color space” refers to one or more standards that define color gamut constraints relative to display devices, which are limited in their ability to produce each and every color of the entire color gamut. Example color spaces include RGB, HSL, HSV, CMYK, LAB, XYZ, xyY, and so forth. The “RGB” color space specifies a color value with red, green, and blue parameters. Each parameter defines the intensity of the corresponding color as an integer between zero and 255. For example, rgb(0, 0, 255) is rendered as blue, because the blue parameter is set to its highest possible value while the red and green parameters are set to zero. The “HSL” color space specifies a color value with hue, saturation, and lightness parameters. Hue specifies a degree on the color wheel ranging from 0 to 360, where 0 corresponds to red, 120 corresponds to green, and 240 corresponds to blue. Saturation specifies a percentage value between 0% and 100%, inclusive, where 0% represents a shade of gray and 100% represents the full color. Lightness specifies a percentage value between 0% and 100%, inclusive, where 0% represents black and 100% represents white. The “HSV” color space is similar to HSL in its use of hue and saturation parameters, differing by using a value parameter instead of a lightness parameter, where the value parameter is a percentage value between 0% and 100%, inclusive, indicating how the color appears under light. The difference between HSL and HSV is that a color with maximum lightness in the HSL color space is pure white, while a color with a maximum value parameter in the HSV color space is analogous to shining a bright white light on a colored object (e.g., shining a bright white light on a red object causes the object to appear brighter and more intense, but still red, while shining a dim light on the red object causes the object to appear darker and less bright, but still red). In both HSL and HSV color spaces, colors of each hue are arranged in a radial slice around a central axis of neutral colors ranging from black at the bottom to white at the top. The “CMYK” color space specifies a color as a combination of cyan, magenta, yellow, and black parameters. Each parameter specifies a percentage value from 0% to 100%, inclusive, where 0% represents white and 100% represents the full corresponding color of cyan, magenta, yellow, or black. The “LAB” color space specifies a color value using three parameters (L, a, and b) relative to a three-axis system. The a-axis ranges from green to red, with the “a” parameter specifying a position on the a-axis. The b-axis ranges from blue to yellow, with the “b” parameter specifying a position on the b-axis. The L-axis ranges from black to white, with the “L” parameter specifying a position on the L-axis. The “XYZ” color space refers to a mapping of colors resulting from experiments conducted to identify all the colors visible to an average human, and is also referred to as the CIE 1931 XYZ color space. The XYZ color space represents a color value in terms of a three-axis system, where the Y-axis corresponds to relative luminance and the Y parameter specifies a position on the Y-axis. The X and Z axes represent how cones in the human eye respond to light waves of varying frequencies, quantified in terms of tristimulus values, and thus the XYZ color space provides a device-invariant representation of color.
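As a non-limiting illustration of how a single color is parameterized differently across the RGB, HSV, and HSL color spaces described above, the following sketch uses Python's standard colorsys module; colorsys covers only the RGB, HSV, and HLS conversions, so the CMYK, LAB, and XYZ conversions mentioned above are not shown here.

```python
import colorsys

# Pure blue, expressed as rgb(0, 0, 255); colorsys expects components in the range 0..1.
r, g, b = 0 / 255, 0 / 255, 255 / 255

h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"HSV: hue={h * 360:.0f} deg, saturation={s:.0%}, value={v:.0%}")      # 240 deg, 100%, 100%

h, l, s = colorsys.rgb_to_hls(r, g, b)  # note the hue-lightness-saturation ordering in colorsys
print(f"HSL: hue={h * 360:.0f} deg, saturation={s:.0%}, lightness={l:.0%}")  # 240 deg, 100%, 50%
```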
Because the human eye has three types of color sensors that respond to different ranges of wavelengths, plotting the XYZ color space results in a three-dimensional color gamut geometry. The “xyY” color space refers to a transformation of the XYZ color space to two-dimensional coordinates, where the “x” and “y” parameters specify the chromaticity of a color and the “Y” parameter specifies the relative luminance of the color. Via the mapping provided by the xyY color space, the X and Z tristimulus values in the XYZ color space can be calculated back from the chromaticity values and the relative luminance value. In the following discussion, an example environment is described that is configured to employ the techniques described herein. Example procedures are also described that are configured for performance in the example environment as well as other environments. Consequently, performance of the example procedures is not limited to the example environment and the example environment is not limited to performance of the example procedures. Example Environment FIG.1is an illustration of a digital medium environment100in an example implementation that is operable to employ techniques described herein. As used herein, the term “digital medium environment” refers to the various computing devices and resources utilized to implement the techniques described herein. The digital medium environment100includes a computing device102, which is configurable in a variety of manners. The computing device102, for instance, is configurable as a desktop computer, a laptop computer, a mobile device (e.g., assuming a handheld configuration such as a tablet or mobile phone as illustrated), and so forth. Thus, the computing device102ranges from full resource devices with substantial memory and processor resources (e.g., personal computers, game consoles) to low-resource devices with limited memory and/or processing resources (e.g., mobile devices). Additionally, although a single computing device102is shown, the computing device102is representative of a plurality of different devices, such as multiple servers utilized by a business to perform operations “over the cloud.” The computing device102includes a contrast ratio color picker system104. The contrast ratio color picker system104is implemented at least partially in hardware of the computing device102to process digital content106, which is illustrated as maintained in a storage device108of the computing device102, in order to identify a color selection made at a color selection component110and generate a contrast ratio color picker112based on the color selection. The digital content106is representative of any form of digital content configured for display by a computing device, such as text data, vector graphic data, raster image data (e.g., rasterized graphic data, bitmap image data, etc.), combinations thereof, and so forth. The contrast ratio color picker112includes a filtered palette114, which consists of a display of one or more colors that satisfy a contrast ratio threshold relative to the selected color indicated by the color selection component110. For instance, responsive to detecting a selection via the color selection component110of a color to be applied to the digital content106, the contrast ratio color picker system104is configured to output the filtered palette114as displaying only colors that satisfy a contrast ratio threshold for the digital content106. 
As an example, if the selected color is detected for application to text content displayed in the user interface118, the filtered palette114is generated to display colors that are useable for a background against which the text content is displayed. To ensure that a contrast ratio threshold is satisfied, colors failing to satisfy the contrast ratio threshold relative to the selection indicated at the color selection component110are masked and prevented from being selected via the filtered palette114. The contrast ratio color picker112is further configured to include at least one parameter control116, which is useable to modify a corresponding parameter that dictates output of the filtered palette114and which is displayed together with the color selection component110and the filtered palette114in a user interface118for the contrast ratio color picker system104. Example parameters modifiable by the at least one parameter control116include a relative luminance of the selected color, the contrast ratio threshold, a color space parameter for the filtered palette114, and so forth. Although illustrated as implemented locally at the computing device102, functionality of the contrast ratio color picker system104is implementable in whole or in part via functionality available via the network120, such as part of a web service or “in the cloud,” as described in further detail below with respect toFIG.8. The contrast ratio color picker system104is thus configured to generate a contrast ratio color picker112that readily informs a designer, during the content creation process, as to colors that are useable together to maintain an acceptable contrast ratio in digital content, and precludes the designer from selecting color combinations that fail to achieve an acceptable contrast ratio. In general, functionality, features, and concepts described in relation to the examples above and below are employable in the context of the example procedures described in this section. Further, functionality, features, and concepts described in relation to different figures and examples in this document are interchangeable among one another and are not limited to implementation in the context of a particular figure or procedure. Moreover, blocks associated with different representative procedures and corresponding figures herein are configured to be applied together and/or combined in different ways. Thus, individual functionality, features, and concepts described in relation to different example environments, devices, components, figures, and procedures herein are useable in any suitable combinations and are not limited to the combinations represented by the enumerated examples in this description. Color Picker Generation from a Selected Color and Contrast Ratio Threshold FIG.2depicts a system200in an example implementation showing operation of the contrast ratio color picker system104ofFIG.1in greater detail. To begin in this example, a color selection module202is employed by the contrast ratio color picker system104, which implements the color selection component110. The color selection component110is representative of functionality of the contrast ratio color picker system104to expose one or more colors of a color space for selection (e.g., via user input at a computing device implementing the contrast ratio color picker system104) and application to digital content106.
Although illustrated as implemented by the color selection module202, the color selection component110is alternatively implemented by a software application separate from the contrast ratio color picker system104, as described in further detail below with respect toFIG.8. For instance, in accordance with one or more implementations, the color selection component110is representative of native functionality provided by a word processing application (e.g., enabling selection of one or more colors to be applied to font or a font background), a digital graphics application (e.g., enabling selection of one or more colors to be applied to vector-based and/or raster-based digital content), combinations thereof, and so forth. The color selection component110is thus configurable in a variety of different manners to enable selection of a color for application to digital content106. In some implementations, the color selection component110includes controls that enable designation of a color space model displayed by the color selection component110, controls that enable designation of various parameters for specifying which subset of colors of the color space are currently displayed by the color selection component110, combinations thereof, and so forth. The color selection component110is thus representative of functionality implemented natively by the contrast ratio color picker system104as well as functionality offered by an application implemented separately from the contrast ratio color picker system104. The color selection module202is configured to monitor input detected at the color selection component110and identify a color selection based on the monitoring, such as a selection indicating a color to be applied to digital content106. Responsive to identifying the color selection via the color selection component110, the color selection module202generates input color space data204for the color selection. For instance, in an example implementation where the color selection component110is configured to display a subset of colors from an RGB color space, the input color space data204is generated to include corresponding values for the Red, Green, and Blue components identifying the color selection from the color selection component110. Similarly, in another example implementation where the color selection component110is configured for an HSL color space, the input color space data204includes values for Hue, Saturation, and Lightness components that identify the color selection from the color selection component110. As a further example, in an implementation where the color selection component110is configured for a CMYK color space, the input color space data204includes values for Cyan, Magenta, Yellow, and Black components that identify the color selection from the color selection component110. Thus, the input color space data204is representative of information that identifies a color selected from the color selection component110, where a format of the input color space data204is dependent on a color space for which the color selection component110is configured.
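One hypothetical way to represent the input color space data204is a small record pairing the source color space with its component values, as sketched below; the class name, field names, and example component values are illustrative assumptions and do not correspond to identifiers used by the described system.

```python
from dataclasses import dataclass

@dataclass
class InputColorSpaceData:
    """A color selection expressed in the color space of the color selection component."""
    color_space: str                # e.g., "RGB", "HSL", or "CMYK"
    components: dict[str, float]    # component values in that color space

# The same selection expressed for differently configured color selection components.
rgb_selection = InputColorSpaceData("RGB", {"red": 32, "green": 96, "blue": 160})
hsl_selection = InputColorSpaceData("HSL", {"hue": 210.0, "saturation": 0.67, "lightness": 0.38})
cmyk_selection = InputColorSpaceData("CMYK", {"cyan": 0.80, "magenta": 0.40, "yellow": 0.0, "black": 0.37})
```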
The color selection module202is configured to provide the input color space data204to a color identification module206, which is representative of functionality of the contrast ratio color picker system104to optionally convert the input color space data204to a color space of the contrast ratio color picker112, as represented by the target color space data208. For instance, in an example implementation where the color selection component110is configured for the RGB color space and the contrast ratio color picker112is configured for the HSV color space, the color identification module206is configured to convert the RGB values represented by the input color space data204to values for each of the Hue, Saturation, and Value components of the same color as represented in the HSV color space, and output the converted values as the target color space data208. The color identification module206is further representative of functionality of the contrast ratio color picker system104to determine a relative luminance value210for the color selected from the color selection component110. The color identification module206is configured to determine the relative luminance value210from either the input color space data204or the target color space data208, and a particular manner in which the relative luminance value210is determined depends on the color space corresponding to the input color space data204or the target color space data208used to determine the relative luminance value210. For instance, in an example implementation where the relative luminance value210is determined based on data identifying a selected color in the context of a color space where relative luminance is explicitly represented (e.g., the XYZ color space, the xyY color space, and so forth), the relative luminance value210is determined from the corresponding Y value as set forth in the input color space data204or the target color space data208. Alternatively, in an example implementation where the relative luminance value210is determined from a color space that does not explicitly represent relative luminance, the color identification module206is configured to compute the relative luminance value210from the input color space data204or the target color space data208. The color identification module206is configured to compute the relative luminance value210using any one or combination of relative luminance computation techniques. As an example, in an implementation where the input color space data204or target color space data208used to compute the relative luminance value210is expressed in the context of the RGB color space, the color identification module206computes the relative luminance value210according to Equation 1: Y = 0.2126R + 0.7152G + 0.0722B (Eq. 1) As set forth in Equation 1, Y represents the relative luminance value210and R, G, and B represent the Red, Green, and Blue color space values as set forth in one or more of the input color space data204or the target color space data208. Equation 1 thus reflects the significance of each color space attribute with respect to luminosity, where green light contributes the most to the intensity of a color as perceived by human viewers and blue light contributes the least to the perceived intensity. The color identification module206is then configured to provide the target color space data208and the relative luminance value210to a contrast ratio module212.
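A minimal Python sketch of Equation 1 follows; the normalization of components to the range of zero to one and the comment regarding gamma linearization are illustrative assumptions rather than requirements set forth above.

```python
def relative_luminance(r: float, g: float, b: float) -> float:
    """Relative luminance per Equation 1, for R, G, B components normalized to 0..1.

    Note: components stored with sRGB gamma encoding are commonly linearized before
    applying these coefficients; that step is omitted here for brevity.
    """
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Green contributes far more to perceived intensity than blue.
print(relative_luminance(0.0, 1.0, 0.0))  # 0.7152
print(relative_luminance(0.0, 0.0, 1.0))  # 0.0722
```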
The contrast ratio module212is representative of functionality of the contrast ratio color picker system104to generate a contrast ratio threshold214, which represents a value ranging from one to 21, inclusive. The contrast ratio threshold214defines a minimum contrast ratio for use in applying the color selected from the color selection component110against a different color (e.g., for use in applying the selected color to a foreground element against a different background color, or vice versa) in digital content106. In some implementations, the contrast ratio threshold214is specified manually via user input at a computing device implementing the contrast ratio color picker system104. Alternatively, in some implementations the contrast ratio threshold214is determined automatically by the contrast ratio color picker system104(e.g., independent of user input or intervention) based on a type of digital content106to which the color selected from the color selection component110is applied. For instance, the contrast ratio module212is configured to identify whether the color selection component110is used to apply the selected color to graphics (e.g., vector graphics, raster graphics, and the like), large text, or small text digital content106and automatically determine the contrast ratio threshold214based on the identified type of digital content106. As an example, the contrast ratio module212designates the contrast ratio threshold214as three (e.g., specifying a 3:1 contrast ratio) responsive to determining that the color selected from the color selection component110is to be applied to graphics digital content106. As another example, the contrast ratio module212designates the contrast ratio threshold214as 4.5 (e.g., specifying a 4.5:1 contrast ratio) responsive to determining that the color selection component110is used to select a color for large text digital content106. As yet another example, the contrast ratio module212designates the contrast ratio threshold214as seven (e.g., specifying a 7:1 contrast ratio) responsive to determining that the color selection component110is used to designate a color for small text digital content106. In accordance with one or more implementations, the contrast ratio module212is configured to differentiate between small text and large text digital content106based on characteristics of text digital content106to which a selected color is applied (e.g., as a font color for the text or as a color for a background against which the text is displayed). For instance, the contrast ratio module212is configured to classify digital content106as large text responsive to determining that the color selected from the color selection component110is to be applied to font that is bold and size 14-point (e.g., 18.667 pixels) or larger, as well as font that is size 18-point (e.g., 24 pixels) or larger independent of bolding. Conversely, the contrast ratio module212is configured to classify text digital content106that does not satisfy large text font size thresholds as small text. As shown in the example implementation300ofFIG.3, the contrast ratio module212is configured to automatically designate different contrast ratio thresholds214for a color selected from the color selection component110, based on a type of digital content106to which the selected color is to be applied. In the example implementation300, digital content302represents an example instance of large text digital content, with the characters “HARPER” displayed in a size 26-point, bold font.
Digital content304represents an example instance of small text digital content, with the characters “Henry” displayed in size 12-point font. Digital content306represents an example instance of graphics digital content. As evidenced by the example implementation300, text that is larger and has wider character strokes is easier to read by a viewing user at lower contrast. Similarly, graphics that do not include legible text content are easier to identify and comprehend by a viewing user at an even lower contrast relative to small text. Accounting for this relative ease in readability and comprehension, the contrast ratio module212automatically designates an appropriate contrast ratio threshold214for the digital content106to which a color selection from the color selection component110is to be applied. For instance, responsive to determining that the color selection represented by the input color space data204is to be applied to the characters, or the background against which the characters are disposed, in digital content302, the contrast ratio module212automatically designates the contrast ratio threshold308of 4.5 for use in generating the contrast ratio color picker112. Alternatively, responsive to determining that the selected color from the color selection component110is to be applied to the characters or the background of the digital content304, the contrast ratio module212automatically designates the contrast ratio threshold310of seven for use in generating the contrast ratio color picker112. Alternatively, responsive to determining that the color selection from the color selection component110is to be applied to a fill, line, or background of digital content306, the contrast ratio module212automatically designates the contrast ratio threshold312of three for use in generating the contrast ratio color picker112. In this manner, the contrast ratio thresholds214automatically determined by the contrast ratio module212are based on the type of digital content106to which a color selection is being applied. Although described herein in the example implementations of pre-configured thresholds of seven for small text, 4.5 for large text, and three for graphics, the contrast ratio threshold214automatically assigned by the contrast ratio module212is representative of any suitable contrast ratio value, which in implementations is specified by a developer of the contrast ratio color picker system104. Alternatively or additionally, the contrast ratio thresholds214designated based on a type of digital content106to which a selected color is applied are pre-specified (e.g., prior to presentation of the color selection component110) by a user of a computing device implementing the contrast ratio color picker system104. The contrast ratio threshold214is provided to a color filtration module216, which is representative of functionality of the contrast ratio color picker system104to identify a target color space palette218, which is representative of a subset of a color space's colors that includes the selected color represented by the target color space data208. The color filtration module216is further representative of functionality of the contrast ratio color picker system104to generate a filtered palette114. The filtered palette114is representative of an instance of the target color space palette218, excluding colors that are otherwise included in the target color space palette218but fail to satisfy the contrast ratio threshold214.
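A minimal sketch of the automatic threshold selection described above follows, using the example large-text classification rules (bold text of size 14-point or larger, or any text of size 18-point or larger) and the example thresholds of three, 4.5, and seven; the function names and the content-type argument are illustrative assumptions rather than identifiers of the described system.

```python
def is_large_text(font_size_pt: float, bold: bool) -> bool:
    """Classify text as large text per the example rules above."""
    return (bold and font_size_pt >= 14) or font_size_pt >= 18

def contrast_ratio_threshold(content_type: str, font_size_pt: float = 0.0, bold: bool = False) -> float:
    """Pick a contrast ratio threshold from the type of digital content being stylized."""
    if content_type == "graphics":
        return 3.0
    if is_large_text(font_size_pt, bold):
        return 4.5   # large text
    return 7.0       # small text

# The three examples of FIG. 3:
print(contrast_ratio_threshold("text", font_size_pt=26, bold=True))   # 4.5  (large text)
print(contrast_ratio_threshold("text", font_size_pt=12, bold=False))  # 7.0  (small text)
print(contrast_ratio_threshold("graphics"))                           # 3.0  (graphics content)
```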
As shown in the example implementation400ofFIG.4, the color filtration module216is configured to generate the target color space palette218based on the color selected from the color selection component110, as represented by the target color space data208, and generate the filtered palette114from the target color space palette218based on the contrast ratio threshold214. As such, in some implementations the target color space palette218comprises the color picking surface provided by the color selection component110. The example implementation400illustrates a scenario where the target color space to be represented by the contrast ratio color picker112is the HSV color space. The color gamut402represents a visualization of the HSV color space as a cylinder of colors, where colors of each hue are arranged in a radial plane around a central axis of neutral colors that ranges from black at the bottom of the color gamut402cylinder to white at the top. Given the vast number of colors included in a color space, the color filtration module216is configured to identify the radial plane of the color gamut402that includes the color represented by the target color space data208and output the identified radial plane as the target color space palette218. Although described herein and illustrated in the example implementation400in the context of the HSV color space, the color filtration module216is configured to generate the target color space palette218for any color space, such as by presenting a cross section of an RGB cube, a ring-triangle representing a cross section of an HSL bi-cone, a cross section of an xyY color gamut geometry, and so forth. Given the target color space palette218, the color filtration module216computes the contrast ratio between the target color space data208and the colors represented in the target color space palette218, and masks portions of the target color space palette218containing colors that fail to satisfy the contrast ratio threshold214. By generating the filtered palette114from the target color space palette218rather than the entire color gamut from which the target color space palette218was generated, the color filtration module216conserves computational resources otherwise required to compute the contrast ratio between the selected color and millions of colors in the entire gamut. Because the number of colors displayed by the target color space palette218depends on a resolution of a display device outputting the target color space palette218, such as a display device of a computing device implementing the contrast ratio color picker system104, the color filtration module216is configured to generate the filtered palette114by processing colors of the target color space palette218on a per-pixel basis. In implementations, the color filtration module216determines, for each pixel of the target color space palette218, whether a displayed color satisfies the contrast ratio threshold214relative to the color selected from the color selection component110. Pixels displaying colors that fail to satisfy the contrast ratio threshold214are masked from display in the filtered palette114. To compute the contrast ratio and mask pixels failing to satisfy the contrast ratio threshold214, the color filtration module216implements a WebGL fragment shader, which further conserves computational resources by processing only colors of the target color space palette218actually displayed by a display device outputting the target color space palette218.
In this manner, the color filtration module216avoids processing colors otherwise included in the target color space palette218, but not actually output for display, thereby avoiding unnecessary contrast ratio computations in generating the filtered palette114. The resulting filtered palette114thus includes unmasked pixels404of the target color space palette218representing colors that satisfy the contrast ratio threshold214and masked pixels406, representing colors that fail to satisfy the contrast ratio threshold214, relative to the color selected from the color selection component110. The filtered palette114is then provided to the color picker module220, which is representative of functionality of the contrast ratio color picker system104to output the contrast ratio color picker112for display at a computing device implementing the contrast ratio color picker system104. The color picker module220is configured to include the filtered palette114in the contrast ratio color picker112together with at least one parameter control116. The at least one parameter control116enables modification of a corresponding color space parameter that defines the subset of colors from a color space gamut that are included in the target color space palette218, and consequently the subset of colors satisfying the contrast ratio threshold214that result in the filtered palette114. Example color space parameters that are modifiable using the at least one parameter control116include the relative luminance value210, the contrast ratio threshold214, and parameters that are specific to the target color space. Examples of parameters that are specific to a target color space include: Hue, Saturation, and Lightness parameters for an HSL target color space; Red, Green, and Blue parameters for an RGB target color space; Hue, Saturation, and Value parameters for an HSV target color space; Cyan, Magenta, Yellow, and Black parameters for a CMYK target color space; and so forth. As shown in the example implementation500ofFIG.5, the at least one parameter control116is useable to cause real-time updates to the filtered palette114depicted by the contrast ratio color picker112, thereby enabling a user of the contrast ratio color picker system104to readily understand available colors for use in satisfying a contrast ratio threshold relative to a previously selected color. The example implementation500depicts the at least one parameter control116as configured for an HSV target color space, with parameter controls502,504, and506useable to adjust the filtered palette114for a selected color, as represented by selected color indicator508in the illustrated example. Parameter control502is representative of an instance of the parameter control116that is useable to specify the relative luminance value210of the selected color. For instance, the example implementation500illustrates the relative luminance value210for the selected color indicator508as being 0.14. The parameter control502enables a user of the contrast ratio color picker system104to adjust the relative luminance value210, and consequently the color indicated by the selected color indicator508. In this manner, parameter control502enables the user of the contrast ratio color picker system104to readily understand how the relative luminance of a selected color impacts the other colors that are displayable adjacent to the selected color while maintaining a minimum acceptable contrast ratio.
The parameter control504is representative of a target color space-specific parameter, which, in the example implementation500, enables a user of the contrast ratio color picker system104to adjust a “Value” parameter of the HSV target color space to be used in generating the filtered palette114. In the context of the HSV color space, the “Value” parameter corresponds to a cross section of the HSV color gamut cylinder to be used as the target color space palette218. For instance, the example implementation500illustrates the parameter control504as designating a “Value” parameter of one, which indicates that a top-most cross section of the HSV color gamut cylinder is to be used as the target color space palette218. Conversely, a “Value” parameter of zero instructs the color picker module220to utilize a bottom-most cross section of the HSV color gamut cylinder as the target color space palette218. In this manner, parameter control504further represents functionality of controlling a target color space-specific parameter for the contrast ratio color picker112that both enables intuitive navigation of the target color space's gamut geometry and mitigates computational resources required by the contrast ratio color picker system104to generate the filtered palette114, where the filtered palette114represents a subset of colors in the target color space palette218that satisfy the contrast ratio threshold214. The contrast ratio threshold214is further adjustable via the at least one parameter control116, as represented in the example implementation500by parameter control506designating a 3:1 contrast ratio threshold. Mitigating the computational resources otherwise required to generate the contrast ratio color picker112enables implementation of the contrast ratio color picker system104among computing device types having different processing capabilities. For instance, depicting the HSV color space using a 24-bit color depth display, as commonly implemented by computing device displays, requires depiction of 16,777,216 distinct colors. Consequently, computing whether each of the over 16 million distinct colors satisfies the contrast ratio threshold214relative to the color indicated by the selected color indicator508requires a prohibitive amount of computational resources and cannot be performed in real-time by many computing devices (e.g., mobile computing devices having limited processing resources). By enabling designation of a subset of colors to be included in the target color space palette218(e.g., by designating a single cross section of the HSV cylinder via input at the parameter control504), the contrast ratio color picker system104is configured to limit the number of colors processed in generating the filtered palette114from the target color space palette218. Because the number of colors in the target color space palette218displayed by a computing device implementing the contrast ratio color picker system104is dependent on a display device resolution (e.g., a display device supporting a high resolution will display additional colors of the same target color space palette218relative to a lower-resolution display device), the color picker module220only processes colors output for display in generating the filtered palette114.
For instance, in generating the filtered palette114, the color picker module220considers whether each pixel of the target color space palette218that is output for display satisfies the contrast ratio threshold214, and masks those pixels that fail to satisfy the contrast ratio threshold214. In this manner, the color picker module220further conserves computing resources required to generate the filtered palette114by processing only colors actually output for display, enabling real-time updates to the filtered palette114based on inputs received at the at least one parameter control116. Via this efficient use of computational resources, the contrast ratio color picker system104provides real-time updates to the filtered palette114based on adjustments to the input color space data204as enabled by example parameter control502, adjustments to the target color space palette218as enabled by example parameter control504, and adjustments to the contrast ratio threshold214as enabled by example parameter control506. Filtered palettes510-520represent example instances of a filtered palette114output based on input to the at least one parameter control116, for a color selected from the color selection component110having a relative luminance value210represented by parameter control502and a visual appearance represented by selected color indicator508. Filtered palette510, for instance, represents an instance of the filtered palette114resulting from designating an HSV color space “Value” parameter of 0.9 and a contrast ratio threshold214of three. Comparatively, filtered palette512and filtered palette514represent instances of maintaining the contrast ratio threshold214of three while modifying the “Value” parameter for the HSV color space, with filtered palette512representing a “Value” parameter of 0.75 and filtered palette514representing a “Value” parameter of 0.56. Filtered palettes510,512, and514thus visualize different colors available from different cross sections of the HSV color gamut cylinder that satisfy a 3:1 contrast ratio threshold relative to a color indicated by the selected color indicator508. As further examples, filtered palette516represents an example instance of the filtered palette114output for the color indicated by selected color indicator508responsive to input modifying the parameter control504to designate a “Value” parameter of one and input modifying the parameter control506to designate a contrast ratio threshold of 4.5. Filtered palette518represents an update to the filtered palette516resulting from input further modifying the parameter control504to designate a “Value” parameter of 0.84 while maintaining the 4.5:1 contrast ratio threshold. Filtered palette520represents an example instance of the filtered palette114indicating that no colors from the target color space palette218as constrained by the at least one parameter control116(e.g., by parameter control504) are available to satisfy the contrast ratio threshold214for the selected color. In this manner, the contrast ratio color picker system104depicts in real time the colors available for satisfying a contrast ratio threshold relative to a selected color and designated color space parameters, thereby enabling a digital content designer to readily understand contrast ratio implications prior to stylization of digital content.
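In the described implementations the per-pixel test is performed in a WebGL fragment shader; the following CPU-side Python sketch merely mirrors the same idea at a small fixed resolution for illustration, and the helper names, the 64-by-64 resolution, and the use of the colorsys module are assumptions rather than details of the described system.

```python
import colorsys

def relative_luminance(r: float, g: float, b: float) -> float:
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(y1: float, y2: float) -> float:
    lighter, darker = max(y1, y2), min(y1, y2)
    return (lighter + 0.05) / (darker + 0.05)

def filtered_hsv_palette(selected_rgb, value, threshold, width=64, height=64):
    """Cross section of the HSV cylinder at a fixed 'value', with failing pixels masked (None).

    Columns sweep hue (0..1 of the full 0-360 degree range) and rows sweep saturation (0..1),
    so the returned grid is one radial slice of the HSV gamut, evaluated per output pixel.
    """
    selected_y = relative_luminance(*selected_rgb)
    palette = []
    for row in range(height):
        saturation = row / (height - 1)
        palette_row = []
        for col in range(width):
            hue = col / (width - 1)
            r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
            passes = contrast_ratio(relative_luminance(r, g, b), selected_y) >= threshold
            palette_row.append((r, g, b) if passes else None)  # None marks a masked pixel
        palette.append(palette_row)
    return palette

# Dark selected color, 3:1 threshold, top-most cross section (value = 1.0).
palette = filtered_hsv_palette((0.1, 0.1, 0.2), value=1.0, threshold=3.0)
unmasked = sum(cell is not None for row in palette for cell in row)
print(f"{unmasked} of {len(palette) * len(palette[0])} palette pixels satisfy the threshold")
```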
Although depicted as slider controls in the example implementation500, the at least one parameter control116included in the contrast ratio color picker112is configurable using a variety of control mechanisms, such as a scroll wheel, a text input field, one or more selectable icons, combinations thereof, and so forth. As shown in the example implementation600ofFIG.6, the contrast ratio color picker112is useable to enable selection of a color from the filtered palette114and apply the color selected from the filtered palette114to digital content. In the illustrated implementation600, the contrast ratio color picker112includes a color selection component110, with user input selecting a color to be represented as input color space data204from the color selection component110via cursor602. The color selection indicated by cursor602, for instance, is detected by the color selection module202as designating a color of text in digital content604. Responsive to the color selection via the color selection component110, the contrast ratio color picker112is updated to display the filtered palette114, representing a subset of colors in a target color space palette218that satisfy a contrast ratio threshold214for the selected color. In accordance with one or more implementations, the contrast ratio threshold214used to generate the filtered palette114is automatically determined by the contrast ratio color picker system104based on a context of how the selected color is applied to the digital content604. For instance, in the example implementation600, the contrast ratio module212is configured to identify that the color selection indicated by cursor602is applied to the characters “JOSE” in digital content604. The contrast ratio module212, responsive to determining that the characters “JOSE” are formatted in a bold, 35-point font size, automatically determines that the color selection is applied to large text, and that a contrast ratio threshold214of 4.5:1 is to be used in generating the filtered palette114. As described above, the color filtration module216is configured to generate the filtered palette114according to the contrast ratio threshold214based on any suitable target color space palette218, such as a cross section of an HSV color gamut cylinder that includes the color selection indicated in the color selection component110. Adjustment of the target color space palette218used to generate the filtered palette114is further enabled via the at least one parameter control116, as described above with respect toFIG.5. In addition to depicting colors that satisfy the contrast ratio threshold214relative to the color selected from the color selection component110, the filtered palette114is configured to enable selection of a depicted color and application of the selected color to digital content. For instance, in the example implementation600, user input at the filtered palette114is illustrated via cursor606as selecting a color to be applied as a background relative to the text of digital content604, resulting in the visual appearance of digital content608. In this manner, the contrast ratio color picker112readily informs a designer, during the content creation process, as to colors that are useable together to maintain an acceptable contrast ratio in digital content, and precludes the designer from selecting color combinations that fail to achieve an acceptable contrast ratio.
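Tying these pieces together, the short sketch below performs the pairwise check that the filtered palette114spares a designer from repeating for every candidate color pair: it chooses a threshold from the text classification and tests a single foreground/background combination. The helper names and the example colors are illustrative assumptions, and the luminance and ratio helpers are restated so the snippet stands alone.

```python
def relative_luminance(r: float, g: float, b: float) -> float:
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb_a, rgb_b) -> float:
    ya, yb = relative_luminance(*rgb_a), relative_luminance(*rgb_b)
    lighter, darker = max(ya, yb), min(ya, yb)
    return (lighter + 0.05) / (darker + 0.05)

def passes(text_rgb, background_rgb, large_text: bool) -> bool:
    """Check one foreground/background pair against the example text thresholds."""
    threshold = 4.5 if large_text else 7.0
    return contrast_ratio(text_rgb, background_rgb) >= threshold

# Bold 35-point text (classified as large text): near-black text on a pale yellow background.
print(passes((0.05, 0.05, 0.05), (1.0, 0.95, 0.7), large_text=True))   # True
# The same colors reversed still pass, since the contrast ratio is symmetric.
print(passes((1.0, 0.95, 0.7), (0.05, 0.05, 0.05), large_text=True))   # True
```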
Having considered example systems and techniques for generating a contrast ratio color picker based on a selected color and generating digital content using the contrast ratio color picker, consider now example procedures to illustrate aspects of the techniques described herein. Example Procedures The following discussion describes techniques that are configured to be implemented utilizing the previously described systems and devices. Aspects of each of the procedures are configured for implementation in hardware, firmware, software, or a combination thereof. The procedures are shown as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. In portions of the following discussion, reference is made toFIGS.1-6. FIG.7depicts a procedure700in an example implementation of generating a contrast ratio color picker in accordance with the techniques described herein. In accordance with one or more implementations, procedure700is performed by the contrast ratio color picker system104to generate the contrast ratio color picker112. To do so, data defining a selected color is received (block702). The color selection module202, for instance, detects input selecting a color presented for display by the color selection component110. In some implementations, the color selection component110is implemented as native functionality of the contrast ratio color picker system104. Alternatively, the color selection component110is implemented by an application other than the contrast ratio color picker system104, such as by a word processing application, a graphics processing application, and so forth. A relative luminance value for the selected color is then determined (block704). The color identification module206, for instance, computes the relative luminance value210for the selected color using the input color space data204, which describes the selected color in the context of the color space implemented by the color selection component110. In other implementations, the color identification module206first converts the input color space data204to the target color space data208, which describes the selected color in the context of the color space implemented by the contrast ratio color picker112, and uses the target color space data208to compute the relative luminance value210. A contrast ratio color picker is then generated based on the relative luminance value of the selected color (block706). As part of generating the contrast ratio color picker, a contrast ratio threshold is ascertained (block708), a target color space palette is identified (block710), and a filtered palette is generated from the target color space palette by masking pixels in the target color space palette that fail to satisfy the contrast ratio threshold (block712). The contrast ratio module212, for instance, identifies a type of digital content106to which the input color space data204is selected for application. In example implementations, the contrast ratio module212is configured to differentiate as to whether the selected color is to be applied to large text digital content, small text digital content, or graphics digital content, and automatically determine a contrast ratio threshold214for the contrast ratio color picker112based on the type of digital content stylized by the selected color.
Alternatively or additionally, the contrast ratio threshold214is specified via input at a computing device implementing the contrast ratio color picker system104. The color filtration module216is configured to identify the target color space palette218in the context of any color space. For instance, in some implementations the color filtration module216identifies a palette of a color space implemented by the color selection component110that includes the selected color and designates the identified palette as the target color space palette218. Alternatively, the color filtration module216identifies a palette of a target color space to be implemented by the contrast ratio color picker112that includes the selected color and designates the identified palette as the target color space palette218. As another example, the color filtration module216selects a random cross section of a color gamut and designates the cross section as the target color space palette218. In some implementations, the color space used for the target color space palette218is designated as a default color space by the contrast ratio color picker system104, is designated via user input, combinations thereof, and so forth. The color filtration module216generates the filtered palette114by processing the target color space palette218on a per-pixel basis, based on resolution of a display device used to output the contrast ratio color picker112. For instance, the color filtration module216computes, for each pixel displaying the target color space palette218, whether the color displayed by the pixel satisfies the contrast ratio threshold214and implements a fragment shader to mask pixels of the target color space palette218failing to satisfy the contrast ratio threshold214. The pixel-masked instance of the target color space palette218is then output as the filtered palette114for the contrast ratio color picker112. In some implementations, the color picker module220generates the contrast ratio color picker112as including at least one parameter control116, which is representative of a control to adjust the relative luminance value210, a control to designate a new color selection, a control to adjust at least one color-space parameter defining the target color space palette218used to generate the filtered palette114, a control to adjust the contrast ratio threshold214, combinations thereof, and so forth. The contrast ratio color picker112is then output with the filtered palette114and the at least one parameter control116(block714). The contrast ratio color picker system104, for instance, outputs the contrast ratio color picker112at a display device associated with the computing device implementing the contrast ratio color picker system104. Input modifying at least one parameter control of the contrast ratio color picker is optionally received (block716), as indicated by the arrow proceeding from block714to block718while circumventing block716. Input modifying the at least one parameter control116that indicates a new color selection from the color selection component110, adjusts the relative luminance value210, designates a different contrast ratio threshold214, modifies a color space-specific parameter that designates the target color space palette218used in generating the filtered palette114, or combinations thereof, is received by the computing device implementing the contrast ratio color picker system104. 
Responsive to receiving input modifying the at least one parameter control116, operation returns to block706and the contrast ratio color picker112is updated in real-time to reflect modifications resulting from the received input. For example, the contrast ratio color picker112is updated by replacing the previously displayed filtered palette with an updated filtered palette generated for a new color selection, responsive to receiving input indicating the new color selection via the color selection component110. A selected color from the contrast ratio color picker is then applied to digital content (block718). The color selection module202, for instance, identifies input selecting a color from the filtered palette114, such as the example input represented by cursor606inFIG.6. The color selected from the filtered palette114is applied to digital content106, as represented by the illustrated example of implementation600applying the selected color to digital content604to achieve the visual appearance of digital content608. Having described example procedures in accordance with one or more implementations, consider now an example system and device to implement the various techniques described herein. Example System and Device FIG.8illustrates an example system800that includes an example computing device802, which is representative of one or more computing systems and/or devices that implement the various techniques described herein. This is illustrated through inclusion of the contrast ratio color picker system104. The computing device802is configured, for example, as a server of a service provider, a device associated with a client (e.g., a client device), an on-chip system, and/or any other suitable computing device or computing system. The example computing device802as illustrated includes a processing system804, one or more computer-readable media806, and one or more I/O interfaces808that are communicatively coupled, one to another. Although not shown, the computing device802is further configured to include a system bus or other data and command transfer system that couples the various components, one to another. A system bus includes any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. A variety of other examples are also contemplated, such as control and data lines. The processing system804is representative of functionality to perform one or more operations using hardware. Accordingly, the processing system804is illustrated as including hardware elements810that are configurable as processors, functional blocks, and so forth. For instance, the hardware elements810are implemented in hardware as an application specific integrated circuit or other logic device formed using one or more semiconductors. The hardware elements810are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors are alternatively or additionally comprised of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions are electronically executable instructions. The computer-readable storage media806is illustrated as including memory/storage812. The memory/storage812represents memory/storage capacity associated with one or more computer-readable media.
The memory/storage812is representative of volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). The memory/storage812is configured to include fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). In certain implementations, the computer-readable media806is configured in a variety of other ways as further described below. Input/output interface(s)808are representative of functionality to allow a user to enter commands and information to computing device802, and also allow information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., capacitive, or other sensors that are configured to detect physical touch), a camera (e.g., a device configured to employ visible or non-visible wavelengths such as infrared frequencies to recognize movement as gestures that do not involve touch), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, a tactile-response device, and so forth. Thus, the computing device802is representative of a variety of hardware configurations as further described below to support user interaction. Various techniques are described herein in the general context of software, hardware elements, or program modules. Generally, such modules include routines, programs, objects, elements, components, data structures, and so forth that perform particular tasks or implement particular abstract data types. The terms “module,” “functionality,” and “component” as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques are configured for implementation on a variety of commercial computing platforms having a variety of processors. An implementation of the described modules and techniques is stored on or transmitted across some form of computer-readable media. The computer-readable media include a variety of media that is accessible by the computing device802. By way of example, and not limitation, computer-readable media includes “computer-readable storage media” and “computer-readable signal media.” “Computer-readable storage media” refers to media and/or devices that enable persistent and/or non-transitory storage of information in contrast to mere signal transmission, carrier waves, or signals per se. Thus, computer-readable storage media refers to non-signal bearing media. The computer-readable storage media includes hardware such as volatile and non-volatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storage of information such as computer readable instructions, data structures, program modules, logic elements/circuits, or other data.
Examples of computer-readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage device, tangible media, or article of manufacture suitable to store the desired information for access by a computer. “Computer-readable signal media” refers to a signal-bearing medium that is configured to transmit instructions to the hardware of the computing device802, such as via a network. Signal media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as carrier waves, data signals, or other transport mechanism. Signal media also include any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. As previously described, hardware elements810and computer-readable media806are representative of modules, programmable device logic and/or fixed device logic implemented in a hardware form that is employed in some embodiments to implement at least some aspects of the techniques described herein, such as to perform one or more instructions. Hardware, in certain implementations, includes components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon or other hardware. In this context, hardware operates as a processing device that performs program tasks defined by instructions and/or logic embodied by the hardware as well as a hardware utilized to store instructions for execution, e.g., the computer-readable storage media described previously. Combinations of the foregoing are employed to implement various techniques described herein. Accordingly, software, hardware, or executable modules are implemented as one or more instructions and/or logic embodied on some form of computer-readable storage media and/or by one or more hardware elements810. The computing device802is configured to implement particular instructions and/or functions corresponding to the software and/or hardware modules. Accordingly, implementation of a module that is executable by the computing device802as software is achieved at least partially in hardware, e.g., through use of computer-readable storage media and/or hardware elements810of the processing system804. The instructions and/or functions are executable/operable by one or more articles of manufacture (for example, one or more computing devices802and/or processing systems804) to implement techniques, modules, and examples described herein. The techniques described herein are supported by various configurations of the computing device802and are not limited to the specific examples of the techniques described herein. This functionality is further configured to be implemented all or in part through use of a distributed system, such as over a “cloud”814via a platform816as described below. The cloud814includes and/or is representative of a platform816for resources818. 
The platform816abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud814. The resources818include applications and/or data that is utilized while computer processing is executed on servers that are remote from the computing device802. Resources818also include services provided over the Internet and/or through a subscriber network, such as a cellular or Wi-Fi network. The platform816is configured to abstract resources and functions to connect the computing device802with other computing devices. The platform816is further configured to abstract scaling of resources to provide a corresponding level of scale to encountered demand for the resources818that are implemented via the platform816. Accordingly, in an interconnected device embodiment, implementation of functionality described herein is configured for distribution throughout the system800. For example, in some configurations the functionality is implemented in part on the computing device802as well as via the platform816that abstracts the functionality of the cloud814. Conclusion Although the invention has been described in language specific to structural features and/or methodological acts, the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed invention.
66,608
11861764
DETAILED DESCRIPTION The present invention generally relates to methods and systems for three-dimensional (3D) depth reconstruction of vessels in two-dimensional (2D) medical images. Embodiments of the present invention are described herein to give a visual understanding of such methods and systems. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system. Further, it should be understood that while embodiments discussed herein may be discussed with respect to 3D depth reconstruction of vessels in 2D medical images, the present invention is not so limited. The present invention may be applied for 3D depth reconstruction of any tubular object of interest in 2D images of any type. FIG.1shows a 2D x-ray medical image100of a patient (or any other subject) depicting branches of blood vessels, including a region102of overlapping vessels. Medical image100may be acquired to facilitate a clinical examination of the patient, such as, e.g., an angiogram. To facilitate vessel detection and other imaging analysis tasks for such clinical examination, centerline tracing techniques may be applied to medical image100to extract a centerline representation of the branches of the blood vessels. Conventional centerline tracing techniques are not able to distinguish between a bifurcation of a branch and an overlapping of branches at region102. Accordingly, such conventional centerline tracing techniques may incorrectly interpret region102as a bifurcation of the branch of the vessel, thereby tracing a false shortcut path of the branch onto the overlapping branches. Advantageously, embodiments of the present invention apply a series of trained machine learning networks to generate a multi-channel depth image from a 2D medical image, thereby providing a better understanding of the 3D structure of the vessels in the 2D medical image, particularly at regions of overlapping branches, such as, e.g., region102. The multi-channel depth image may be used for centerline tracing or other imaging analysis tasks with improved results. FIG.2shows a high level workflow200for determining depth information of branches of blood vessels in a 2D medical image, in accordance with one or more embodiments. Workflow200may be performed by any suitable computing device, such as, e.g., computer502ofFIG.5. In workflow200, a 2D medical image202is received. Medical image202shows branches of blood vessels, which may include one or more overlapping blood vessels. Medical image202is shown in workflow200as an x-ray image; however, it should be understood that medical image202may be any 2D medical image of any suitable modality. A trained image to image network204receives medical image202as input for pixelwise prediction of overlapping branches of blood vessels in medical image202. Image to image network204outputs a branch overlap image channel206representing a probability mask for overlapping blood vessels, where each pixel in branch overlap image channel206is associated with a probability that the pixel depicts overlapping branches.
Pixels that have a high probability of overlapping blood vessels are highlighted in branch overlap image channel206. A trained fully convolutional neural network (FCNN)208receives patches of medical image202as input for pixelwise prediction of the orientation of blood vessels. For each patch, FCNN208outputs a set of scalars each corresponding to an orientation probability for a respective orientation of a plurality of orientations. The orientation probability for a respective orientation of a patch represents a probability that a pixel (e.g., the center pixel) of the patch depicts a branch oriented in the respective orientation. For each respective orientation, the scalars corresponding to the respective orientation are combined for each pixel in the medical image from which the patches are extracted, thereby forming branch orientation image channels210-A,210-B,210-C, and210-D (hereinafter referred to as branch orientation image channels210) each for a respective orientation. Each branch orientation image channel210for a respective orientation represents a probability mask for the orientation of the blood vessel, where each pixel in the branch orientation image channel is associated with a probability that the pixel depicts a branch oriented in the respective orientation. Pixels in each branch orientation image channel210that have a high orientation probability are highlighted. As shown in workflow200, the set of branch orientation image channels210comprises a branch orientation image channel210-A for a first diagonal orientation (e.g., the diagonal formed between a lower left corner to an upper right corner), a branch orientation image channel210-B for a horizontal orientation, a branch orientation image channel210-C for a vertical orientation, and a branch orientation image channel210-D for a second diagonal orientation (e.g., the diagonal formed between an upper left corner to a lower right corner). Image to image network204, branch overlap image channel206, FCNN208, and set of branch orientation image channels210are represented as intermediate representation218for simplicity, e.g., in describingFIG.4below. Branch overlap image channel206and set of branch orientation image channels210are concatenated to form concatenated image channels212. It should be understood that concatenated image channels212may additionally or alternatively include other image channels. For example, as shown inFIG.2, branch overlap image channel206and set of branch orientation image channels210may also be concatenated with medical image202, representing pixelwise intensity values, to form concatenated image channels212. In another example, branch overlap image channel206and set of branch orientation image channels210may be concatenated with a vesselness image channel representing a pixelwise probability that a pixel represents a vessel to form concatenated image channels212. Other types of image channels are also contemplated. A trained image to image network214receives concatenated image channels212as input for generating a multi-channel depth image216, where each depth image channel corresponds to a respective depth and highlights pixels of branches associated with (e.g., located at) the respective depth. Advantageously, multi-channel depth image216may be used for centerline tracing (or other imaging analysis tasks) to distinguish between vessel bifurcations and vessel overlaps to avoid shortcuts.
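To make the data flow of workflow200concrete, the following sketch wires up three stand-in networks in PyTorch. The layer choices, the number of depth bins, and the dense (rather than patchwise) orientation prediction are illustrative assumptions only; they are not the trained networks204,208, and214of this description.

```python
# Stand-ins for networks 204, 208 (approximated densely), and 214; untrained, for wiring only.
import torch
import torch.nn as nn

class TinyImage2Image(nn.Module):
    """Minimal image-to-image stand-in: per-pixel probabilities via two convolutions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

overlap_net = TinyImage2Image(in_ch=1, out_ch=1)      # branch overlap image channel 206
orientation_net = TinyImage2Image(in_ch=1, out_ch=4)  # four branch orientation image channels 210
depth_net = TinyImage2Image(in_ch=6, out_ch=8)        # multi-channel depth image 216 (8 depth bins assumed)

x = torch.rand(1, 1, 128, 128)                        # dummy 2D medical image 202
overlap = overlap_net(x)                              # pixelwise overlap probabilities
orientation = orientation_net(x)                      # pixelwise orientation probabilities
concat = torch.cat([x, overlap, orientation], dim=1)  # concatenated image channels 212
depth = depth_net(concat)                             # each channel: probability of a branch at that depth
print(depth.shape)                                    # torch.Size([1, 8, 128, 128])
```

The essential point is the channel bookkeeping: one overlap channel and four orientation channels are concatenated with the input image before the final depth network.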
It should be understood that while networks204,208, and214are shown in workflow200as image to image network204, FCNN208, and image to image network214, respectively, any suitable machine learning network, such as, e.g., a convolutional neural network (CNN) may be employed. For example, image to image networks204and214may be a FCNN or FCNN208may be an image to image network. FIG.3shows a method300for determining depth information of vessels in a 2D medical image, in accordance with one or more embodiments. Method300shows a detailed implementation for performing workflow200ofFIG.2, in accordance with one embodiment. Method300may be performed by any suitable computing device, such as, e.g., computer502ofFIG.5. At step302, a medical image is received. The medical image comprises branches of one or more vessels, and may include regions of overlapping branches. In one embodiment, the medical image is an angiogram x-ray image, however it should be understood that the medical image may be of any suitable modality, such as, e.g., magnetic resonance imaging (MRI), computed tomography (CT), ultrasound (US), single-photon emission computed tomography (SPECT), positron emission tomography (PET), etc. The medical image may be received from one or more medical imaging systems or by loading a previously stored medical image acquired using one or more medical imaging systems. At step304, a branch overlap image channel is generated representing a pixelwise probability that the branches overlap based on the medical image. The branch overlap image channel represents a probability mask with pixels corresponding to the medical image. Each pixel in the branch overlap image channel is associated with a probability that the pixel depicts overlapping blood vessels. The branch overlap image channel may be visualized by highlighting pixels based on their associated probability. In one embodiment, pixels in the branch overlap image channel having a high probability of overlapping blood vessels are highlighted. For example, an intensity of a pixel may be determined as being proportional to its associated probability such that a pixel associated with a probability of 0% is determined to have an intensity value of 0 while a pixel associated with a probability of 100% is determined to have an intensity value of 255. In another example, the branch overlap image channel represents a binary probability mask such that a pixel associated with a probability that satisfies a threshold may be highlighted (e.g., by setting its intensity value to 255), while a pixel associated with a probability that does not satisfy the threshold is not highlighted (e.g., by setting its intensity value to 0). In one embodiment, the branch overlap image channel is generated using a first trained machine learning network. In one embodiment, the first trained machine learning network is a trained image to image network. The trained image to image network is trained during a prior training stage using input/output pairs of training images. The trained image to image network includes an encoding network (or encoder) and a decoding network (or decoder). The encoding network has a series of layers that code or down sample the received medical image into a code whose size is substantially less than the size of the received medical image to thereby extract high level representations or features of the received medical image. 
The decoding network has a series of layers that will then decode the code to convert the high-level representations back to a pixel-level semantic representation to thereby generate the branch overlap image channel. All the intermediate information generated in the encoding network is shared with the decoding network so that no information is lost in the encoding process. It should be understood that the first trained machine learning network may be any suitable machine learning network, such as, e.g., any other convolutional neural network (e.g., FCNN), and is not limited to an image to image network. At step306, a set of branch orientation image channels each associated with a respective orientation of a plurality of orientations are generated based on the medical image. Each branch orientation image channel represents a pixelwise probability that the branches are orientated in the respective orientation. In one embodiment, the plurality of orientations comprise a vertical orientation, a horizontal orientation, a first diagonal orientation (e.g., the diagonal formed between a lower left corner to an upper right corner), and a second diagonal direction (e.g., the diagonal formed between an upper left corner to a lower right corner). Other orientations are also contemplated. Each branch orientation image channel of the set of branch orientation image channels represents a probability mask with pixels corresponding to the medical image. Each pixel in the branch orientation image channel is associated with a probability that a branch in the pixel is oriented in the respective orientation. In one embodiment, the set of branch orientation image channels is generated using a second trained machine learning network. In one embodiment, the second trained machine learning network is a trained FCNN. The trained FCNN is trained during a prior training stage using annotated training image patches. The trained FCNN receives a plurality of patches extracted from the medical image as the input and, for each patch, generates a set of scalars (each corresponding to a respective orientation) as the output representing a probability that the patch is oriented in the respective orientation. The FCNN includes an input layer, multiple convolutional layers, and an output layer. The connections between consecutive layers are defined by a set of convolutional kernel weights and biases. The input layer corresponds to image data of the input image (e.g., the extracted patches from the medical image). The output layer corresponds to the set of scalars. The plurality of patches may be extracted from the medical image using any suitable approach. In one embodiment, a uniform sampling distribution may be used to define evenly spaced sampling points. For example, a patch centered around each pixel in the medical image may be extracted. However, the present invention is not limited thereto and other possible sampling distributions may be used. The patches may be of any suitable size. For each patch, the FCNN outputs a set of scalars each corresponding to an orientation probability for a respective orientation. The orientation probability for a respective orientation of a patch represents a probability that the center pixel of the patch is oriented in the respective orientation. By applying the FCNN to patches centered around, e.g., each pixel in the medical image, an orientation probability for each respective orientation can be predicted for each pixel in the medical image. 
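As a rough illustration of the patch-based prediction just described, the sketch below extracts a patch around every pixel, scores each patch for four orientations with a small stand-in network, and reshapes the per-pixel scalars into orientation channels (the combination step is stated formally in the next paragraph). The patch size, the stand-in scorer, and the use of a sigmoid per orientation are assumptions for illustration.

```python
# Patchwise orientation scoring with a stand-in scorer; the real system uses the trained FCNN 208.
import torch
import torch.nn as nn
import torch.nn.functional as F

patch = 15                                 # assumed odd patch size (one center pixel per patch)
n_orient = 4                               # vertical, horizontal, and two diagonal orientations

patch_scorer = nn.Sequential(              # placeholder for the trained FCNN
    nn.Linear(patch * patch, 64), nn.ReLU(),
    nn.Linear(64, n_orient),
)

image = torch.rand(1, 1, 64, 64)                                  # dummy 2D medical image
patches = F.unfold(image, kernel_size=patch, padding=patch // 2)  # (1, patch*patch, H*W)
patches = patches.transpose(1, 2)                                 # one row per patch center pixel
scores = patch_scorer(patches)                                    # (1, H*W, 4) orientation scalars
probs = torch.sigmoid(scores)                                     # per-pixel orientation probabilities

h, w = image.shape[-2:]
orientation_channels = probs.transpose(1, 2).reshape(1, n_orient, h, w)  # channels 210-A..210-D
print(orientation_channels.shape)                                 # torch.Size([1, 4, 64, 64])
```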
For each respective orientation, the scalars corresponding to the respective orientation are combined for each pixel in the medical image from which the patches are extracted, thereby forming the branch orientation image channel for each respective orientation. Pixels in each of the branch orientation image channels may be highlighted based on their associated probability. In one embodiment, pixels in the branch orientation image channel having a high orientation probability are highlighted. For example, the pixels in the branch orientation image channel may be highlighted as discussed above with respect to the branch overlap image channel in step304. It should be understood that the second trained machine learning network may be any suitable machine learning network, such as, e.g., any other convolutional neural network (e.g., an image to image network), and is not limited to a FCNN. At step308, a multi-channel depth image is generated based on the branch overlap image channel and the set of branch orientation image channels. Each depth image channel of the multi-channel depth image comprises portions of the branches corresponding to a respective depth. In one embodiment, the multi-channel depth image is generated using a third trained machine learning network. In one embodiment, the third trained machine learning network is a trained image to image network. The trained image to image network is trained during a prior training stage using input/output pairs of training images. The branch overlap image channel and the set of branch orientation image channels are concatenated and the concatenated image channels are input into the trained image to image network. The trained image to image network outputs a plurality of depth image channels, each corresponding to a respective depth, forming the multi-channel depth image. Each depth image channel represents a probability mask with pixels corresponding to the medical image. Each pixel in the depth image channel is associated with a probability that the pixel depicts branches located at the respective depth. The depth image channels may be visualized by highlighting pixels based on their associated probability. In one embodiment, pixels in the depth image channel having a high probability of depicting branches located at the respective depth are highlighted. For example, the pixels in the depth image channel may be highlighted as discussed above with respect to the branch overlap image channel in step304. It should be understood that while the multi-channel depth image is described herein as being generated based on concatenated image channels comprising the branch overlap image channel and the set of branch orientation image channels at step308, the concatenated image channels may additionally or alternatively comprise any suitable image channel. In one example, the concatenated image channels may include the medical image received at step302, representing pixelwise intensity values. In another example, the concatenated image channels may include a vesselness image channel representing a pixelwise probability that a pixel represents a vessel. It should be understood that the third trained machine learning network may be any suitable machine learning network, such as, e.g., any other convolutional neural network (e.g., a FCNN), and is not limited to an image to image network. At step310, the multi-channel depth image is output.
The multi-channel depth image can be output by displaying the multi-channel depth image on a display device of a computer system, storing the multi-channel depth image on a memory or storage of a computer system, or by transmitting the multi-channel depth image to a remote computer system, e.g., for further processing. At step312, an imaging analysis task is performed based on the multi-channel depth image. In one embodiment, the imaging analysis task is centerline tracing of the branches of the vessels in the medical image. Other imaging analysis tasks are also contemplated. In accordance with one embodiment, workflow200ofFIG.2can be modified to leverage high level features previously coded by the encoding network of image to image network214during one or more prior analyses of medical images temporally acquired over a period of time. In particular, image to image network214ofFIG.2can be implemented with a long short-term memory (LSTM) network, which provides long term memory controlled by opening or closing an input gate, an output gate, and/or a forget gate. Advantageously, image to image network214implemented with an LSTM network enables high level features encoded by the encoding network (of image to image network214) to be stored and subsequently used by the decoding network (of image to image network214) to generate more accurate multi-channel depth images. FIG.4shows a high level workflow400for determining depth information of branches of vessels in 2D medical images by leveraging high level features previously coded during prior analyses of medical images temporally acquired over a period of time, in accordance with one or more embodiments. In workflow400, intermediate representations404-A,404-B, . . . ,404-N (collectively referred to as intermediate representations404) represent intermediate representation218shown inFIG.2. While intermediate representations404-A,404-B, . . . ,404-N and image to image long short-term memory (LSTM) networks408-A,408-B, . . . ,408-N (collectively referred to as image to image LSTM networks408) are functionally shown as separate instances in workflow400for ease of understanding to show temporal analysis of medical images402, it should be understood that the same intermediate representation404and the same image to image LSTM network408is applied for each of the medical images402(i.e., the same image to image network and FCNN with the same learned weights are used for each instance of intermediate representation404and the same image to image LSTM network with the same learned weights are used for each instance of image to image LSTM network408shown in workflow400). Similar to workflow200ofFIG.2, in workflow400, 2D medical images402-A,402-B, . . . ,402-N (collectively referred to as medical images402) are received, where N is any integer. Medical images402comprise branches of blood vessels, which may include overlapping branches. Medical images402may be of any suitable modality temporally acquired over a period of time. Medical images402are input into a respective intermediate representation404to generate respective concatenated image channels406-A,406-B, . . . ,406-N (collectively referred to as concatenated image channels406). Each of the concatenated image channels406include a branch overlap image channel and a set of branch orientation image channels. 
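One way to picture the temporal sharing of encoded features described above is to pass the bottleneck code of an encoder-decoder through a recurrent cell across the temporally acquired images. The sketch below is an assumption-laden simplification: it flattens the bottleneck and uses a standard nn.LSTM rather than a convolutional LSTM, and the layer sizes are arbitrary; it is not the architecture of image to image LSTM network408.

```python
# Simplified temporal encoder-LSTM-decoder over a sequence of concatenated image channels.
import torch
import torch.nn as nn

class TemporalImage2Image(nn.Module):
    def __init__(self, in_ch=6, out_ch=8, feat=8, size=32):
        super().__init__()
        self.encoder = nn.Sequential(                       # codes each frame to a small feature map
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat * size * size, feat * size * size, batch_first=True)
        self.decoder = nn.Sequential(                        # decodes the (memory-informed) code
            nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        self.feat, self.size = feat, size

    def forward(self, frames):                               # frames: (B, T, C, H, W)
        b, t, c, h, w = frames.shape
        codes = self.encoder(frames.reshape(b * t, c, h, w)) # per-frame high level features
        codes = codes.reshape(b, t, -1)
        memory, _ = self.lstm(codes)                         # carries features from prior frames
        memory = memory.reshape(b * t, self.feat, self.size, self.size)
        return self.decoder(memory).reshape(b, t, -1, h, w)  # per-frame multi-channel depth images

frames = torch.rand(1, 3, 6, 64, 64)                         # three temporally acquired channel stacks
print(TemporalImage2Image()(frames).shape)                   # torch.Size([1, 3, 8, 64, 64])
```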
In some embodiments, concatenated image channels406may also include the respective medical image402representing pixelwise intensity values. Workflow400modifies workflow200ofFIG.2by replacing image to image network214with image to image LSTM network408. Accordingly, concatenated image channels406are input into trained image to image LSTM network408to generate respective multi-channel depth images410-A,410-B, . . . ,410-N (collectively referred to as multi-channel depth images410). Image to image LSTM network408comprises an image to image network implemented with an LSTM network. The LSTM network enables the image to image network to store and subsequently use high level features previously coded by the encoding network during prior analyses to generate more accurate multi-channel depth images410, as represented by the connection between image to image LSTM network408-A,408-B, . . . ,408-N. Accordingly, image to image LSTM network408receives respective concatenated image channels406. The encoding network of the image to image LSTM networks408codes the received concatenated image channels406to a code representing high level representations or features of the received concatenated image channels406. The code is stored by the LSTM network for subsequent use by the decoding network of the image to image LSTM networks408. As such, the decoding network of the image to image LSTM networks408decodes the code generated by the encoding network from that respective concatenated image channel406and one or more codes stored by the LSTM network previously generated by the encoding network (if available). It should be understood that image to image LSTM network408may use any previously coded high level features generated by the encoding network and is not limited to the immediately prior coded high level features generated by the encoding network. For example, image to image LSTM network408-N may use the previously coded high level features from the instance of image to image LSTM network408-A and/or the instance of LSTM network408-B to generate multi-channel depth image410-N. It should be understood that while the exemplary embodiment of workflow400is shown using an image to image network implemented with an LSTM network, the present invention is not so limited. Any type of CNN (e.g., FCNN) implemented with any type of recurrent neural network (RNN) architecture, such as, e.g., a gated recurrent unit (GRU), may be used. Systems, apparatuses, and methods described herein may be implemented using digital circuitry, or using one or more computers using well-known computer processors, memory units, storage devices, computer software, and other components. Typically, a computer includes a processor for executing instructions and one or more memories for storing instructions and data. A computer may also include, or be coupled to, one or more mass storage devices, such as one or more magnetic disks, internal hard disks and removable disks, magneto-optical disks, optical disks, etc. Systems, apparatus, and methods described herein may be implemented using computers operating in a client-server relationship. Typically, in such a system, the client computers are located remotely from the server computer and interact via a network. The client-server relationship may be defined and controlled by computer programs running on the respective client and server computers. Systems, apparatus, and methods described herein may be implemented within a network-based cloud computing system.
In such a network-based cloud computing system, a server or another processor that is connected to a network communicates with one or more client computers via a network. A client computer may communicate with the server via a network browser application residing and operating on the client computer, for example. A client computer may store data on the server and access the data via the network. A client computer may transmit requests for data, or requests for online services, to the server via the network. The server may perform requested services and provide data to the client computer(s). The server may also transmit data adapted to cause a client computer to perform a specified function, e.g., to perform a calculation, to display specified data on a screen, etc. For example, the server may transmit a request adapted to cause a client computer to perform one or more of the steps or functions of the methods and workflows described herein, including one or more of the steps or functions ofFIGS.2-4. Certain steps or functions of the methods and workflows described herein, including one or more of the steps or functions ofFIGS.2-4, may be performed by a server or by another processor in a network-based cloud-computing system. Certain steps or functions of the methods and workflows described herein, including one or more of the steps ofFIGS.2-4, may be performed by a client computer in a network-based cloud computing system. The steps or functions of the methods and workflows described herein, including one or more of the steps ofFIGS.2-4, may be performed by a server and/or by a client computer in a network-based cloud computing system, in any combination. Systems, apparatus, and methods described herein may be implemented using a computer program product tangibly embodied in an information carrier, e.g., in a non-transitory machine-readable storage device, for execution by a programmable processor; and the method and workflow steps described herein, including one or more of the steps or functions ofFIGS.2-4, may be implemented using one or more computer programs that are executable by such a processor. A computer program is a set of computer program instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A high-level block diagram of an example computer502that may be used to implement systems, apparatus, and methods described herein is depicted inFIG.5. Computer502includes a processor504operatively coupled to a data storage device512and a memory510. Processor504controls the overall operation of computer502by executing computer program instructions that define such operations. The computer program instructions may be stored in data storage device512, or other computer readable medium, and loaded into memory510when execution of the computer program instructions is desired. Thus, the method and workflow steps or functions ofFIGS.2-4can be defined by the computer program instructions stored in memory510and/or data storage device512and controlled by processor504executing the computer program instructions. 
For example, the computer program instructions can be implemented as computer executable code programmed by one skilled in the art to perform the method and workflow steps or functions ofFIGS.2-4. Accordingly, by executing the computer program instructions, the processor504executes the method and workflow steps or functions ofFIGS.2-4. Computer502may also include one or more network interfaces506for communicating with other devices via a network. Computer502may also include one or more input/output devices508that enable user interaction with computer502(e.g., display, keyboard, mouse, speakers, buttons, etc.). Processor504may include both general and special purpose microprocessors, and may be the sole processor or one of multiple processors of computer502. Processor504may include one or more central processing units (CPUs), for example. Processor504, data storage device512, and/or memory510may include, be supplemented by, or incorporated in, one or more application-specific integrated circuits (ASICs) and/or one or more field programmable gate arrays (FPGAs). Data storage device512and memory510each include a tangible non-transitory computer readable storage medium. Data storage device512, and memory510, may each include high-speed random access memory, such as dynamic random access memory (DRAM), static random access memory (SRAM), double data rate synchronous dynamic random access memory (DDR RAM), or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices such as internal hard disks and removable disks, magneto-optical disk storage devices, optical disk storage devices, flash memory devices, semiconductor memory devices, such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (DVD-ROM) disks, or other non-volatile solid state storage devices. Input/output devices508may include peripherals, such as a printer, scanner, display screen, etc. For example, input/output devices508may include a display device such as a cathode ray tube (CRT) or liquid crystal display (LCD) monitor for displaying information to the user, a keyboard, and a pointing device such as a mouse or a trackball by which the user can provide input to computer502. Any or all of the systems and apparatus discussed herein may be implemented using one or more computers such as computer502. One skilled in the art will recognize that an implementation of an actual computer or computer system may have other structures and may contain other components as well, and thatFIG.5is a high level representation of some of the components of such a computer for illustrative purposes. The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
31,158
11861765
DETAILED DESCRIPTION OF EMBODIMENTS The following describes an example imaging system with a clipping-induced bias corrector configured to correct for clipping-induced bias introduced into the data by a mathematical logging operation of the data. The clipping-induced bias correction mitigates clipping-induced bias artifact in the reconstructed volumetric image data. FIG.1schematically illustrates an imaging system100, such as a computed tomography (CT) scanner. Suitable CT scanners include scanners configured for non-spectral and/or spectral imaging. The imaging system100includes a generally stationary gantry102and a rotating gantry104, which is rotatably supported by the stationary gantry102and rotates around an examination region106about a z-axis. The imaging system100further includes a radiation source108, such as an x-ray tube. The radiation source108is rotatably supported by the rotating gantry104, rotates with the rotating gantry104, and emits x-ray radiation that traverses the examination region106. The imaging system100further includes a one- or two-dimensional detector array110of rows of detector elements112. The detector array110is rotatably supported by the rotating gantry104along an angular arc opposite the radiation source108across the examination region106. The detector array110rotates in coordination with the radiation source108, detects x-ray radiation (i.e. x-ray photons) that traverses the examination region106, and generates intensity measurement electrical signals indicative of the detected x-ray radiation. A set of measurements for each acquisition interval is referred to herein as a view. The imaging system100further includes processing electronics114configured to process the electrical signals. In this example, the processing electronics114include an analog-to-digital (A/D) converter that digitizes the electrical signals. In one instance, the A/D converter is implemented as a current-to-frequency (I/F) converter that generates a train of pulses with a frequency proportional to an input electrical current signal. An example of such a converter is described in U.S. Pat. No. 6,671,345 B2, filed Nov. 7, 2001, and entitled “Data Acquisition for Computed Tomography,” which is incorporated herein by reference in its entirety. The A/D converter also takes a log of the digitized signals, producing attenuation line integrals (logged data). As discussed herein, the logging operation clips negative values, which shifts the mean (clipping-induced bias) of the measurements. The imaging system100further includes pre-processing circuitry116. The illustrated pre-processing circuitry116includes at least a clipping-induced bias corrector118and a calibration and/or correction (cal and/or cor) module120. As described in greater detail below, the clipping-induced bias corrector118is configured to correct for the clipping-induced bias introduced by the logging operation, producing corrected logged data. The calibration and/or correction module120is configured to perform calibrations and/or corrections for physical and/or component effects before and/or after the clipping-induced bias correction. Examples include air scan calibration, off-focal radiation correction, beam hardening correction, scatter correction, de-noising, and/or other known CT calibrations and/or corrections. The imaging system100further includes a reconstructor122configured to reconstruct the pre-processed logged data and generate volumetric image data. 
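The effect of the clipping on the mean can be illustrated numerically. The short simulation below is not part of the described system; the counts and noise levels are arbitrary assumptions chosen to make the effect visible. It draws low-count measurements with Poisson and Gaussian noise, clips non-positive values as the logging step must, and compares the resulting mean and line integral with the unclipped values.

```python
# Toy illustration of clipping-induced bias; counts and noise levels are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_mean = 2.0                                    # very low expected count along a long, attenuating path
n = 200_000

counts = rng.poisson(true_mean, n).astype(float)   # quantum (Poisson) noise
counts += rng.normal(0.0, 3.0, n)                  # electronic (Gaussian) noise drives some readings negative

unclipped_mean = counts.mean()                     # stays close to the true mean
clipped = np.clip(counts, 1e-3, None)              # the log cannot accept non-positive values
clipped_mean = clipped.mean()                      # shifted upward: the clipping-induced bias

print(f"true mean      : {true_mean:.3f}")
print(f"unclipped mean : {unclipped_mean:.3f}")
print(f"clipped mean   : {clipped_mean:.3f}  (bias = {clipped_mean - true_mean:+.3f})")
print(f"line integral  : {-np.log(clipped_mean):.3f} from clipped mean vs {-np.log(true_mean):.3f} ideal")
```

Because the clipped mean is too high, the logged line integral is too small, which is consistent with the dark shading along strongly attenuated paths discussed below.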
A subject support124, such as a couch, supports an object or subject in the examination region106. The subject support124is movable in coordination with performing an imaging procedure so as to guide the subject or object with respect to the examination region106for loading, scanning, and/or unloading the subject or object. An operator console126allows an operator to control an operation of the system100such as selecting a scanning protocol, a reconstruction algorithm, etc. The operator console126includes an input device(s) such as a mouse, keyboard, etc. and an output device(s) such as a display monitor. FIG.2schematically illustrates an example of the pre-processing circuitry116. The illustrated clipping-induced bias corrector118includes an unlogger202. The unlogger202is configured to unlog the logged data from the processing electronics114, producing clipped data. As discussed herein, the logging operation clips negative values, which are permanently lost, and the unlogging operation does not restore the lost (clipped) negative values. The illustrated clipping-induced bias corrector118further includes a mean estimator204. The mean estimator204is configured to estimate a mean value of the clipped data. In one instance, this is achieved by applying a filter (e.g., a 3-D smoothing filter) to the clipped data and determining a mean value of the smoothed data. In another instance, a deep learning algorithm is employed to estimate the mean value of the clipped data. Other approaches are also contemplated herein. The illustrated clipping-induced bias corrector118further includes a correction determiner206. The correction determiner206is configured to determine a correction for the clipping-induced bias based on the mean value of the clipped data and a predetermined correction function208.FIG.3illustrates an example of the correction function208. InFIG.3, a first (y-) axis302represents a mean of the unclipped data (i.e. the true mean) and a second (x-) axis304represents a mean of the measured data. A first plot306shows a relationship between the true mean of the unclipped data and a theoretical measurement of the mean of the unclipped data. The measurement is theoretical because the negative values lost during the logging operation are not recoverable. The first plot306shows a one-to-one relationship between the true mean and the theoretical measured mean. As shown, without the clipping, the theoretical measured mean is or is close to the true mean of the original data. A second plot308is a plot of the estimate of the mean value of the clipped data. From the second plot308, the relationship between the true mean of the unclipped data and the estimated mean of the clipped data is approximately one-to-one for higher mean values310. However, for lower mean values312, the estimate of the mean value of the clipped data falls off non-linearly. The second plot308is generated analytically, through Monte-Carlo simulations with Poisson and Gaussian random variables, or through calibration scans with known objects and tube currents, and/or otherwise. Returning toFIG.2, and with further reference toFIG.3, the correction determiner206determines a shift in the mean (i.e. the clipping-induced bias) as a difference between corresponding points of the first and second plots306and308. The correction determiner206generates a low-frequency correction based on the shift. 
In one instance, the correction is $-\log\!\left(1-\frac{\mathrm{bias}}{P_m}\right)$, where $\mathrm{bias}=P_m-P_T$, $P_m(d, r, v)$ represents the measured mean of the clipped data, $P_T(d, r, v)$ represents the true mean, and $p(d, r, v)$ represents the clipped data, for each detector element d, row r, and view v. The illustrated clipping-induced bias corrector118further includes an adder210. The adder210adds the logged data ($-\log(p)$) and the correction ($-\log\!\left(1-\frac{\mathrm{bias}}{P_m}\right)$) to produce corrected logged data ($\log_{\mathrm{corr}}=-\log(p)-\log\!\left(1-\frac{\mathrm{bias}}{P_m}\right)$). Note that the correction cannot simply be the bias at least because subtracting the bias from each point in the un-logged data would introduce additional bias for noisy signals close to zero because of the non-linearity of the logarithm operation. The calibration and/or correction module120performs calibrations and/or corrections to the corrected logged data, and the reconstructor122reconstructs the calibrated and/or corrected data. FIG.4shows an example of an image with clipping-induced bias artifact, which manifests as dark shading that is more predominant along longer paths, since more photons are attenuated and fewer photons reach the detector array110. FIG.5shows an example of an image reconstructed from the same measurements as the image inFIG.4, but with the clipping-induced bias removed via the correction described herein. Relative toFIG.4, the dark shading is removed and/or reduced in the image ofFIG.5. FIG.6schematically illustrates a variation of the pre-processing circuitry116described in connection withFIG.2. In this variation, the calibration and/or correction module120performs calibrations and/or corrections to the uncorrected logged data, which is then corrected for clipping-induced bias, as described herein, and reconstructed to generate volumetric image data. FIG.7schematically illustrates another variation of the pre-processing circuitry116described in connection withFIG.2. In this variation, a first set of calibrations and/or corrections1201is performed to the uncorrected logged data, and a second set of calibrations and/or corrections1202is performed to the corrected logged data. Generally, this variation represents a combination ofFIGS.2and6. FIG.8schematically illustrates another variation of the pre-processing circuitry116described in connection withFIG.2. In this variation, a first set of calibrations and/or corrections1201is performed to the uncorrected logged data and the partially calibrated and/or corrected data is conveyed to the unlogger202, which processes the partially calibrated and/or corrected data as described herein. A second set of calibrations and/or corrections1202is performed to the partially calibrated and/or corrected data and conveyed to the adder210. In one instance, the first set1201does not include denoising, and the second set1202includes denoising. An example of suitable denoising is described in U.S. Pat. No. 9,031,299 B2, filed Apr. 17, 2013, and entitled “Low Dose CT Denoising,” which is incorporated herein by reference in its entirety. FIG.9schematically illustrates another variation of the pre-processing circuitry116described in connection withFIG.2.
In this variation, the first set of calibrations and/or corrections1201is performed to the uncorrected logged data and the partially calibrated and/or corrected data is conveyed to the unlogger202, the second set of calibrations and/or corrections1202is performed to the partially calibrated and/or corrected data and conveyed to the adder210, and a third set of calibrations and/or corrections1203is performed to corrected logged data. FIG.10illustrates an example method in accordance with an embodiment(s) described herein. It is to be appreciated that the ordering of the below acts is not limiting, and other ordering is contemplated herein, such as other serial processing and/or parallel processing. At1002, a scan is performed, producing intensity measurements. At1004, the intensity measurements are logged, creating logged data, which include a clipping-induced bias, which shifts a mean value of the measurements. At1006, the logged data is corrected for the clipping-induced bias, as described herein and/or otherwise. Calibrations and/or corrections for physical and/or component effects can be performed before and/or after the clipping-induced bias correction. At1008, the corrected logged clipped data is reconstructed to generate volumetric image data. The above may be implemented by way of computer readable instructions, encoded or embedded on computer readable storage medium (which excludes transitory medium), which, when executed by a computer processor(s) (e.g., central processing unit (CPU), microprocessor, etc.), cause the processor(s) to carry out acts described herein. Additionally, or alternatively, at least one of the computer readable instructions is carried by a signal, carrier wave or other transitory medium, which is not computer readable storage medium. While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
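A compact sketch tying together the correction described in connection with FIGS. 2, 3, and 10 is given below: unlog, estimate the local mean by smoothing, invert a precomputed measured-mean-versus-true-mean curve to obtain the bias, and add the correction to the logged data. The smoothing kernel, the Monte-Carlo sampling of the calibration curve, and the toy data are assumptions for illustration, not the described implementation.

```python
# Structural sketch only: unlogger 202 -> mean estimator 204 -> correction determiner 206 -> adder 210.
import numpy as np
from scipy.ndimage import uniform_filter

def build_bias_curve(true_means, sigma_e=3.0, n=100_000, seed=0):
    """Monte-Carlo stand-in for plot 308: measured (clipped) mean for each true mean."""
    rng = np.random.default_rng(seed)
    measured = [np.clip(rng.poisson(m, n) + rng.normal(0.0, sigma_e, n), 1e-3, None).mean()
                for m in true_means]
    return np.asarray(measured)

true_means = np.linspace(0.1, 50.0, 60)
measured_means = build_bias_curve(true_means)         # monotone, so invertible by interpolation

def correct_logged(logged, kernel=5):
    p = np.exp(-logged)                                # unlog back to (clipped) intensities
    p_m = uniform_filter(p, size=kernel)               # estimate the local mean by smoothing
    p_t = np.interp(p_m, measured_means, true_means)   # look up the corresponding true mean
    bias = p_m - p_t
    return logged - np.log(1.0 - bias / p_m)           # -log(p) - log(1 - bias/Pm)

rng = np.random.default_rng(1)
raw = rng.poisson(3.0, (64, 64)) + rng.normal(0.0, 3.0, (64, 64))
logged = -np.log(np.clip(raw, 1e-3, None))             # logged, clipped toy "measurements"
corrected = correct_logged(logged)
print(float(logged.mean()), float(corrected.mean()))   # correction increases the line integrals
```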
13,165
11861766
DETAILED DESCRIPTION The terms “a” or “an”, as used herein, are defined as one or more than one. The term “plurality”, as used herein, is defined as two or more than two. The term “another”, as used herein, is defined as at least a second or more. The terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language). Reference throughout this document to “one embodiment”, “certain embodiments”, “an embodiment”, “an implementation”, “an example” or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation. The exemplary embodiments are described in the context of methods having certain steps. However, the methods and compositions operate effectively with additional steps and steps in different orders that are not inconsistent with the exemplary embodiments. Thus, the present disclosure is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein and as limited only by the appended claims. Furthermore, where a range of values is provided, it is to be understood that each intervening value between an upper and lower limit of the range—and any other stated or intervening value in that stated range—is encompassed within the disclosure. Where the stated range includes upper and lower limits, ranges excluding either of those limits are also included. Unless expressly stated, the terms used herein are intended to have the plain and ordinary meaning as understood by those of ordinary skill in the art. Any definitions are intended to aid the reader in understanding the present disclosure, but are not intended to vary or otherwise limit the meaning of such terms unless specifically indicated. In the case of patients that suffer from movement disorders, or neurologic conditions that affect movement, medical imaging can be problematic. For these patients, who may suffer from ataxia, dystonia, Huntington's disease, Parkinson's disease, Tourette syndrome, and tremors, among others, and for patients who are generally restless, the physical stillness required of many imaging modalities is all but impossible. As a result, images resulting therefrom often exhibit greatly reduced diagnostic quality, as the ability to account for patient motion is insufficient or time-intensive enough that it is impractical. To this end, motion from a patient, or other object, during a magnetic resonance (MR) scan, for instance, can introduce artifacts in reconstructed images (e.g., blurring, ghosting, signal loss, etc.), which may lead to misdiagnosis or repetitive imaging in an effort to mitigate motion errors. While certain motion can be accounted for, to an extent, certain patients, such as those described above who suffer neurologic conditions, cannot control limb movements sufficiently to allow for motionless imaging, thereby introducing a sporadic motion factor into the MR scan task. As background, it should be appreciated that MRI systems do not acquire data directly in image-space, but rather, in the frequency or Fourier space.
Motion artifacts can materialize in a scan due to myriad factors including, other than patient motion, the image structure, type of motion, MR pulse sequence settings, and k-space acquisition strategy. The center of k-space contains low spatial frequency information correlated to objects with large, low contrast features and smooth intensity variations, whereas the periphery of k-space contains high spatial frequency information correlated to edges, details, and sharp transitions. A majority of biological samples show very local spectral density in k-space centered around k=0. The kx and ky axes of k-space correspond to the horizontal (x-) and vertical (y-) axes of a two-dimensional (2D) image. The k-axes, however, represent spatial frequencies in the x- and y-directions rather than positions. For a three-dimensional (3D) image volume, the kz axis is also sampled, corresponding to a slice dimension of the image volume. Since the object in k-space is described by global planar waves, each point in k-space contains spatial frequency and phase information about every pixel in the final image. Conversely, each pixel in the image maps to every point in k-space. Simple reconstruction using an inverse FFT (iFFT) assumes the object has remained stationary during the time the k-space data were sampled. Therefore, errors from object motion have a pronounced effect on the final reconstructed image because a change in a single sample in k-space can affect the entire image. Since scans can take minutes in order to acquire the data necessary for image reconstruction, attempts have been made to accelerate the imaging speed as well as to detect and correct for motion in images, as will be described herein. Several approaches to avoiding or correcting motion artifacts in MRI have been previously implemented. These approaches can be generally defined as prospective motion correction methods and retrospective motion correction methods. Prospective motion correction methods can include continuous, or semi-continuous, measurement of patient motion in order to track a position of the patient over time and update acquisition parameters in anticipation of patient motion. Optical cameras deploying structured light and/or fiducial markers can be used. While prospective motion correction offers high accuracy and high temporal resolution, such approaches often require special hardware and calibration, making them expensive and difficult to consistently and accurately implement. Retrospective motion correction methods can include machine learning-based methods and non-machine learning-based methods. Typically, these techniques are based on radial acquisition methods, which force slow acquisition and result in limited contrast. Machine learning-based methods can include a combination of physics-based models and machine learning networks to solve for motion using data consistency measures. In this way, the machine learning network can provide a jump start on finding solutions to the set of motion parameters. However, the primary drawback of machine learning-based approaches to motion correction is the large parameter space to be solved. In most cases, this approach requires simultaneous solving of sets of motion parameters for each view, or shot, of the imaging space. For in-plane affine transformations, including two translations and one rotation, the number of independent parameters to be solved for can easily be approximately 100, accounting for motion parameters and the number of shots.
Thus, the size and complexity of this non-convex problem makes the solution slow and possibly unstable. Accordingly, in an embodiment of the present disclosure, a method for the reduction of the parameter space to be solved is described. By solving for the motion parameter set one shot at a time, the size of the problem to be solved can be reduced from approximately 100 to 3. In an embodiment, this allows machine learning-based methods to be implemented. In an embodiment of the present disclosure, the method includes, first, generating an initial ‘clean image’ comprising a subset of the total shots of an MR scan. The initial ‘clean image’, which reflects a minimal motion state of the patient, can be used in order to jump start future convergence calculations when new datasets are considered. The new datasets may be data from a single shot of k-space, the data from the single shot of k-space having comparable differences from the ‘clean image’ and thus allowing the datasets to be compared. In an embodiment, image quality (IQ) changes incrementally with small changes in acceleration factor (e.g., acceleration factors, R, of 2.6 and 2.3 create very similar images). As new data is added to the intermediate image, the acceleration factor is slightly reduced. Described differently, it can be assumed, in an example, that a first K seconds of an MRI acquisition are motion-free. For instance, it may be that a patient can maintain stillness for 10-30 seconds of the MRI acquisition. Data from a first M shots, corresponding to the first K seconds (e.g. 10-30 seconds), can be used to generate an initial ‘clean image’. Of course, it can be appreciated that M can be easily adjusted as an operator parameter and based on assessment of ‘motion risk’ of the patient over time. As indicated, the M shots of k-space data can be used to reconstruct an image. Though not of a high quality, the image reconstructed from under-sampled k-space data is an intermediate image that can be used to establish a baseline for estimating motion parameters of a subsequent intermediate image and a final image. Using the intermediate image reconstructed from the M shots, a subsequent shot of data, or M+1 shot, can be considered and motion parameters thereof can be calculated using a motion estimation method. If the calculated motion parameters are too large, or the data between M shots and M+1 shot is deemed inconsistent beyond a threshold data consistency value, among other comparisons, data from the M+1 shot can be discarded. Otherwise, the data from the M+1 shot can be added to the data from the M shots and the motion parameters estimated for M+1 can be added to a vector of motion parameters describing the final image. After considering the data from the M+1 shot, and incorporating the data from the M+1 shot, as appropriate, an M+2 shot can be considered in the same way as the M+1 shot was considered and the above-described process can be repeated. The motion parameters within the vector of motion parameters can be used to generate a subsequent intermediate image as well as a final image, if no additional shots are to be considered. In view of the above, it can be appreciated that creating an initial image using all shots of the k-space can lead to significant image artifacts, a result of including shots with significant motion (as shown inFIG.7). On the other hand, creating an image with only one or a few motion free shots is likely insufficient, as an image reconstructed from such little data is likely to be of low quality.
Thus, as described herein, an intermediate image reconstructed from M shots will have greatly reduced motion when compared with an image generated from all shots of the k-space and the image will be derived from enough data such that it serves as a sufficient ‘clean’ image for estimating motion in future shots. In an embodiment, minimal motion shots of the k-space may have zero motion. In another embodiment, minimal motion shots of the k-space may only have a small amount of motion when compared with other shots of the k-space. In this case, it can be understood that the minimal motion shots of the k-space have relatively zero motion. To this end, “minimal motion” can be defined as shots of the k-space, within a k-space dataset, having the least amount of motion within the k-space dataset. In an example, the shots identified as being ‘minimal motion’ may have elevated levels of motion, considered absolutely, but may have ‘minimal motion’ when considered relative to other shots of the k-space dataset. Of course, ‘minimal motion’ further depends upon scan condition and individual patient behavior. As described below with reference toFIG.3AthroughFIG.3D, ‘minimal motion’ could be determined by comparisons to threshold values or by evaluation of chronological acquisition of the MR scan. In any event, it can be appreciated that the number of shots of the k-space selected to have ‘minimal motion’ should be sufficiently large to have enough data such that the first intermediate image constructed therefrom has adequate image quality to enable other methods to proceed. In an embodiment, the initial ‘clean image’, or intermediate image, can be a reconstruction of the data from the identified minimal motion shots of the k-space and the vector of motion parameters. In an example, the reconstruction can be done by an accelerated image reconstruction method such as compressed sensing and parallel imaging. In another example, the reconstruction can be performed at one or more resolutions in order to expedite the method. The one or more resolutions may be achieved by eliminating data within shots of the k-space, as appropriate. Of course, it can be appreciated that machine learning-based reconstruction methods may also be used for speed. In an embodiment, a final image, or the image generated after having considered and accounted for, or discarded, all shots of the k-space, can be reconstructed based on the vector of motion parameters and according to any type of reconstruction method. In an example, the reconstruction method may be the same as was used for the intermediate images or may be another method that provides higher quality images. In another example, the reconstruction may be performed at varying resolutions. For instance, the resolution of the final image reconstruction may be higher than the resolution of the intermediate image reconstructions, the lower resolution of the intermediate image reconstructions allowing for accelerated motion correction from shot to shot. According to an embodiment, the methods of the present disclosure are not limited to using a first M shots, acquired chronologically, in order to generate an initial image estimate. The initial image estimate may instead be based on a minimization of a motion metric calculated for each shot within the k-space. Accordingly, the initial image estimate may be based on M shots of N total shots of the k-space having lowest motion, and the M shots may be acquired at any time within the MR scan.
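A minimal sketch of the shot selection and initial ‘clean image’ reconstruction described above is given below, assuming an interleaved Cartesian acquisition. The surrogate motion score and the zero-filled iFFT reconstruction are placeholders; the description permits any k-space/image-space quantitative metric and any accelerated reconstruction (e.g., CS/PI) in their place.

```python
# Shot selection sketch; the motion score and zero-filled reconstruction are placeholders.
import numpy as np

def split_into_shots(ny, n_shots):
    """Interleaved Cartesian shots: shot s acquires PE lines s, s + n_shots, ..."""
    return [np.arange(s, ny, n_shots) for s in range(n_shots)]

def motion_score(kspace, lines):
    """Assumed surrogate metric: relative high-frequency energy along the shot's PE lines.
    Any k-space/image-space quantitative metric could be substituted here."""
    data = kspace[lines]
    return np.sum(np.abs(np.diff(data, axis=1)) ** 2) / (np.sum(np.abs(data) ** 2) + 1e-12)

def zero_filled_recon(kspace, line_sets):
    masked = np.zeros_like(kspace)
    idx = np.concatenate(line_sets)
    masked[idx] = kspace[idx]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(masked)))

rng = np.random.default_rng(0)
kspace = rng.normal(size=(128, 128)) + 1j * rng.normal(size=(128, 128))  # stand-in acquired k-space
n_shots, M = 16, 5                                                       # e.g., M chosen for R of roughly 3
shots = split_into_shots(kspace.shape[0], n_shots)

scores = [motion_score(kspace, lines) for lines in shots]
minimal = np.argsort(scores)[:M]                         # the M shots with least (relative) motion
clean_image = zero_filled_recon(kspace, [shots[i] for i in minimal])
print(clean_image.shape, sorted(int(i) for i in minimal))
```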
According to an embodiment, methods of the present disclosure can be implemented as described below. First, a number of M shots needed to form an initial intermediate image can be calculated. The number of M shots can be based on a number of shots and an acceleration factor. In an example, the number of M shots can be selected in order to provide an acceleration factor, R, of between 3 and 4. Next, the N total shots of the k-space may be considered chronologically and the M shots may be selected therefrom. Alternatively, N total shots of the k-space can be sorted according to motion (or a motion score calculated for each shot). The M shots having the lowest motion can be selected to generate the initial intermediate image. The M shots with least motion can be selected using a k-space/image-space quantitative metric that can be used as a surrogate measure of motion. Motion parameters of a vector of motion parameters describing a final image can be updated based on this minimal motion reference frame. In an embodiment, for shots M+1 to N total shots, motion within each shot can be estimated using a prior intermediate image. For instance, motion parameters of a 9th shot of k-space can be estimated based on an intermediate image generated according to the first 8 evaluated shots of k-space. The motion parameters of the 9th shot of k-space can then be included within the vector of motion parameters and used during intermediate image reconstruction and final image reconstruction to account for motion artifacts. In an embodiment, final image estimation can be performed using data from the first section and subsequent sections deemed suitable, and the vector of motion parameters, using any desired reconstruction method. Turning now to the Figures, the above-described methods will be generally described with respect to the flow diagram ofFIG.1.FIG.1describes method100, an incremental motion correction method for medical imaging modalities, writ large, and MRI, in particular. As indicated, in order to perform incremental motion correction, a first ‘clean image’ estimation needs to be generated on the basis of one or more shots of k-space data determined to be of minimal motion. Accordingly, at sub process110of method100, an intermediate image can be estimated from a first section of k-space. The first section of k-space can be one or more shots, M, of k-space data from N total shots of the acquired k-space data of the MR scan. As will be described later, the first section of k-space can be selected according to acquisition time and/or quality of the underlying shot data. The intermediate image of the first section of k-space can be reconstructed according to a reconstruction method. In a non-limiting example, the reconstruction method can be an accelerated image reconstruction method such as compressed sensing (CS) and parallel imaging (PI). In another example, the reconstruction method can be a machine learning-based reconstruction method that enhances speed in intermediate image reconstruction. 
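As a rough sketch of the shot-selection step just described, the snippet below picks M shots either chronologically or from per-shot motion scores. The relation M ≈ N/R used to size the initial subset is an assumption that follows from uniformly sized shots; it is not stated explicitly in the text, and the names are illustrative.

```python
import numpy as np

def choose_initial_shots(motion_scores, n_total, r_target=3.5, chronological=True):
    """Select the M shots used to form the initial intermediate image.
    Assumes uniformly sized shots, so using M of N shots gives an effective
    acceleration factor of roughly N / M; an r_target of 3-4 mirrors the text."""
    m = max(1, int(round(n_total / r_target)))     # number of shots in the first section
    if chronological:
        return list(range(m))                      # assume the first K seconds are motion-free
    order = np.argsort(motion_scores)              # rank shots by motion score (low = still)
    return sorted(int(i) for i in order[:m])       # M lowest-motion shots, acquired at any time

# e.g. choose_initial_shots(np.random.rand(12), 12, chronological=False)
```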
According to an exemplary embodiment, the intermediate image can be estimated by:

$$\hat{X}_M = \min_X \left\| A_M F S \hat{T}_{all} X - y_M \right\|_2^2 \tag{1}$$

where yMis the k-space data from first M shots selected as having minimal motion, AMis a sampling matrix for phase-encoding (PE) lines acquired in the first M shots, X represents the reconstructed image, S is the sensitivity maps of the receiver coils, F is the Fourier Transform operator, {circumflex over (T)}allis the estimated motion parameters for the shots of the N total shots of the k-space that are included in a final image, and {circumflex over (X)}M, which is to be solved for, is the image estimated from the first M shots. Determination of which shots of the N total shots are included in the final image will be described later. The estimation of Equation (1) can be performed by, for instance, conjugate gradient-based sensitivity encoding (CG-SENSE) or another accelerated image reconstruction method. Based on the initial M shots of the intermediate image, it can be appreciated that a vector comprising the motion parameters of a possible N total shots of the k-space included in the final image can be described as:

$$\hat{T}_{all} = [0, 0, \ldots, \hat{T}_f] \tag{2}$$

In an embodiment, motion parameters of the N total shots of the final image can include, in two-dimensional image-space, two translational components and one rotational component. In other embodiments, wherein additional dimensions of data are used (i.e. three-dimensional space), the motion parameters may be defined as having additional translational components and/or additional rotational components. Moreover, the motion parameters, as defined herein, should not be considered limiting, as any definition of motion parameters sufficient to allow motion correction within subsequent k-space data section could be implemented herein. Having reconstructed an intermediate image at sub process110, method100can proceed to sub process120where data from a second section of k-space can be evaluated for motion using the estimated intermediate image as a minimal motion reference. The second section of k-space may be one or more shots of k-space. In an example, the second section of k-space can be a next shot i of k-space data. Accordingly, motion parameters of the second section of k-space data can be estimated as

$$\hat{T}_i = \min_T \left\| A_i F S T \hat{X}_M - y_i \right\|_2^2 \tag{3}$$

where yiis the k-space data for the second section of k-space, or shot i, Aiis the sampling matrix for PE lines acquired in shot i, T represents the matrix of motion parameters, and {circumflex over (T)}i, which is to be solved for, is the estimated motion parameters for shot i (i.e., the second section of k-space). The estimation of Equation (3) can be performed by, for instance, Levenberg-Marquardt or another method such as Newton's. At sub process130of method100, the motion parameters estimated for the second section of k-space can be evaluated to determine if the second section of k-space should be added to the first section of k-space data that defines a final image. Moreover, the evaluation determines whether the estimated motion parameters should be included within the vector of motion parameters of the N total shots of the k-space, or {circumflex over (T)}all. {circumflex over (T)}allmay be stored in data buffer145and may be accessible to sub process110of method100and step150of method100. 
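A simplified numerical sketch of Equation (1) is shown below, using a plain conjugate-gradient solve of the normal equations as a stripped-down stand-in for CG-SENSE. It assumes a single receiver coil (S reduced to the identity) and translation-only motion, so that each shot's motion is a k-space phase ramp; the full model in the specification also includes coil sensitivity maps and rotations, and the function names are illustrative.

```python
import numpy as np

def _phase(shape, dy, dx):
    """k-space linear phase ramp equivalent to an image-space shift of (dy, dx) pixels."""
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2j * np.pi * (ky * dy + kx * dx))

def solve_eq1(y, masks, shifts, shape, iters=30):
    """Least-squares estimate of the intermediate image (Eq. (1)) from the accepted
    shots via conjugate gradients on the normal equations A^H A x = A^H y."""
    def A(x):                                      # forward model: translate per shot, FFT, sample
        k = np.fft.fft2(x, norm="ortho")
        return sum(m * (k * _phase(shape, *s)) for m, s in zip(masks, shifts))

    def AH(v):                                     # adjoint of the forward model
        return sum(np.fft.ifft2(np.conj(_phase(shape, *s)) * (m * v), norm="ortho")
                   for m, s in zip(masks, shifts))

    x = np.zeros(shape, dtype=complex)
    r = AH(y)                                      # initial residual (x = 0)
    p, rs_old = r.copy(), float(np.vdot(r, r).real)
    for _ in range(iters):
        q = AH(A(p))
        alpha = rs_old / (float(np.vdot(p, q).real) + 1e-30)
        x, r = x + alpha * p, r - alpha * q
        rs_new = float(np.vdot(r, r).real)
        p, rs_old = r + (rs_new / (rs_old + 1e-30)) * p, rs_new
    return x
```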
In determining the value of the second section of k-space, and as will be described with reference toFIG.6, a data consistency metric may be a motion score calculated based on the estimation of the motion parameters of the second section of k-space. A value of the data consistency metric may be compared to an acceptability threshold and the data from the second section of k-space can be combined or rejected, as appropriate. In an embodiment, and with the motion parameters for shot i estimated, {circumflex over (T)}ican be added to the vector of motion parameters ({circumflex over (T)}all) so that it can be used during subsequent intermediate image reconstruction at sub process110of method100and final image reconstruction at step150of method100. Assuming the data of the second section of k-space is determined to be acceptable at sub process130of method100, a combined dataset of k-space data may include the “motion corrected” data from the second section of k-space and the data from the first section of k-space. Thus, at step140of method100, an evaluation can be made to determine if additional shots of the k-space should be considered. The evaluation can be (1) a determination of whether the number of evaluated shots is equal to the total number of shots of the k-space or, (2) a determination that no remaining shots of a ranked N total shots, the ranked N total shots having been ranked according to a motion score, will improve the quality of the final image. If it is determined at step140of method100that additional shots of the k-space should be evaluated, method100returns to sub process110and repeats with reconstruction of an intermediate image including the first section of k-space and the second section of k-space. A third section of k-space may then be considered. Alternatively, if it is determined at step140of method100that no additional shots of the ranked N total shots would improve the quality of the image, method100proceeds to step150wherein a final image of the combined data of the k-space is generated according to a final reconstruction method and using the vector of motion parameters stored in the data buffer145. This may be the case, for instance, when, as is described with reference toFIG.3B, the number of evaluated shots equals the N total shots of the k-space. In another instance, this may be the case if, as described inFIG.3C, a next ranked shot of the N total shots is determined to not be able to improve the quality of the final image. In an embodiment, the final reconstruction method may be the same as the iterative reconstruction method selected above or may be a different, higher quality reconstruction method to provide sufficiently diagnostic quality images. To this end, the iterative reconstruction methods and the final reconstruction methods may be performed at different resolutions and according to different techniques, based on constraints and goals at each step. The method described with reference to FIG.1will now be described with reference to the images ofFIG.2.FIG.2provides illustrations of a motion-corrupted image201, an intermediate image including a first M shots of N total shots202, an intermediate image including M+1 shots of the N total shots203, an intermediate image including M+2 shots of the N total shots204, and an intermediate image including M+3 shots of the N total shots205. As additional shots are added, the image estimation via CG-SENSE is improved and acceleration factors are reduced. 
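The acceptance test at sub process 130 and the bookkeeping of the motion-parameter vector can be sketched as follows; the 5% relative tolerance is only an example of the kinds of thresholds the text lists, and the function and argument names are assumptions.

```python
def shot_is_acceptable(residual_norm, accepted_norms, rel_tol=0.05):
    """Keep a shot only if its data consistency error is within rel_tol of the best
    (smallest) error recorded for previously accepted shots."""
    best = min(accepted_norms) if accepted_norms else residual_norm
    return residual_norm <= best * (1.0 + rel_tol)

def update_motion_vector(t_all, shot_index, t_i, accepted):
    """Record the shot's estimated parameters in the vector of motion parameters
    (two translations and one rotation in the 2D case) only when the shot is kept."""
    if accepted:
        t_all[shot_index] = t_i
    return t_all
```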
For instance, the intermediate image including the first M shots of the N total shots202has an acceleration factor of 4, the intermediate image including M+1 shots of the N total shots203has an acceleration factor of 3, the intermediate image including M+2 shots of the N total shots204has an acceleration factor of 2.4, and the intermediate image including M+3 shots of the N total shots205has an acceleration factor of 2. Moreover, the intermediate images are improved with the addition of additional, “motion-corrected” data from successive shots of the N total shots. For instance, estimated motion parameters for the intermediate image including M+3 shots of the N total shots205may be, within a PE translation vector, {circumflex over (T)}all,PE=[0, 0, 0, −1.2960, −2.7992, −3.2990] millimeters. This can be compared with true motion parameter values for the intermediate image including M+3 shots of the N total shots205, which include, described in the form of a PE translation vector, Tall,PE=[0, 0, 0, −1.3, −2.8, −3.3] millimeters. Similar vectors may be determined for rotation and for readout (RO) translation, the remaining motion parameters of the present disclosure, as described above. Turning now toFIG.3A, sub process110of method100will be further described. In order to estimate an intermediate image from a first section of k-space, it is necessary to first select a subset of the k-space as the first section of k-space. Accordingly, at sub process311of sub process110, a subset of the k-space is selected as the first section of k-space. The selection can be based on calculated data consistency metrics, such as motion scores, and the like, or based on a chronological assessment of an MR scan. Notably, the first section of k-space is selected in order to minimize motion and develop a minimal motion reference as an intermediate image. Once the first section of k-space is selected at sub process311of sub process110, a reconstruction of an intermediate image based on the first section of k-space can be performed at step319of sub process110. This reconstruction can then be used at sub process120of method100to estimate motion parameters of a second section of k-space. Different implementations of sub process311of sub process110will now be described with reference toFIG.3BthroughFIG.3D. First, with reference toFIG.3B, the first section of k-space can be selected according to a chronological ordering of the data. In other words, sub process311of sub process110can be performed under the assumption that a patient is still, or having minimal motion, for at least a first portion of a length of a MR scan. Thus, at step312of sub process311, N total shots acquired during a MR scan can be ordered according to the time in which they were acquired. Shots acquired first, as shown inFIG.4A, will be assumed to have minimal motion, as it is likely for the patient to remain still for a first period of time of the exam, corresponding to the first M shots of the N total shots. Therefore, at step313of sub process311, the first M shots may be selected as the first section of k-space and may correspond to a subset of the ordered N total shots of the MR scan. The first section may be one or more shots of the k-space acquired during the MR scan and may correspond to, in an example, a first 20 seconds of acquisition during the MR scan. 
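The quoted acceleration factors follow the usual relation R = (fully sampled phase-encoding lines) / (acquired lines). The tiny sketch below reproduces the sequence 4, 3, 2.4, 2 under the assumption of 12 line-groups' worth of full sampling and one group per shot; those counts are illustrative, not taken from the text.

```python
def acceleration_factor(full_pe_lines, lines_per_shot, shots_used):
    """R = fully sampled PE lines / acquired PE lines for the current intermediate image."""
    return full_pe_lines / (lines_per_shot * shots_used)

# [acceleration_factor(12, 1, m) for m in (3, 4, 5, 6)]  ->  [4.0, 3.0, 2.4, 2.0]
```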
In an embodiment, the first section of k-space may include one or more shots of the k-space determined to minimize a metric used as a direct measure of, or surrogate of, motion. To this end, a motion score may be calculated for each of N total shots of the k-space acquired during a MR scan. Then, the N total shots of the k-space may be evaluated, chronologically, according to respective motion scores. In this way, the first M shots of the N total shots may be the first L seconds of the MR scan wherein respective motion scores of the shots are within a predefined percentage (e.g. 1%, 2%, 3%, 5%, 7.5%, 10%, etc.) of a minimum motion score of the N total shots of the MR scan. In still another instance, the first M shots of the N total shots may be J seconds of the MR scan wherein respective motion scores of the shots have an average motion score within a predetermined percentage (e.g. 1%, 2%, 3%, 5%, 7.5%, 10%, etc.) of a minimal motion score. Of course, in any of the above instances, a quantity of the first M shots of the total N shots is variable according to motion of the patient during the MR scan. In an embodiment, additional navigator data, generated by additional RF-pulses (e.g. spin echo or gradient echo), may be acquired with each shot. The navigator data can be used to determine respective motion scores for each shot of the N total shots of the k-space. In an embodiment, motion scores may be determined for each shot of the N total shots of the k-space by evaluating image gradient entropy of image-space transforms of each shot of the N total shots of the k-space. The image-space transforms may be low resolution transforms, in an example, so that a rough motion evaluation may be rapidly determined. In another embodiment, k-space-entropy may be used to determine motion scores for each shot of the N total shots of the k-space. Of course, with reference toFIG.3C, the first section of the k-space may be selected in other ways. For instance, with reference also toFIG.4B, sub process311of sub process110may allow for selection of, as the first section of k-space, M shots of N total shots of the k-space that are determined to minimize a motion metric used as a direct measure of, or surrogate of, motion. To this end, at step314of sub process311, a motion score, similar to those described above, may be calculated for each of N total shots of the k-space acquired during a MR scan. Then, at step315of sub process311, the N total shots of the k-space may be ranked according to respective motion scores calculated at step314of sub process311. A number of methods for generating motion scores for the N total shots of the k-space may be deployed. In an embodiment, additional navigator data, generated by additional RF-pulses (e.g. spin echo or gradient echo), may be acquired with each shot. The navigator data can be used to determine respective motion scores for each shot of the N total shots of the k-space. In an embodiment, motion scores may be determined for each shot of the N total shots of the k-space by evaluating image gradient entropy of image-space transforms of each shot of the N total shots of the k-space. The image-space transforms may be low resolution transforms, in an example, so that a rough motion evaluation may be rapidly determined. In another embodiment, k-space-entropy may be used to determine motion scores for each shot of the N total shots of the k-space. 
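One way to realize the image-gradient-entropy motion score mentioned above is sketched below; the zero-filled inverse FFT and the specific entropy formula are implementation choices, not requirements of the specification.

```python
import numpy as np

def gradient_entropy_score(shot_kspace):
    """Motion score for one shot: entropy of the image-gradient magnitude of the
    (zero-filled, possibly low-resolution) image-space transform of the shot.
    Sharper, less motion-corrupted shots concentrate gradient energy and score lower."""
    img = np.abs(np.fft.ifft2(shot_kspace))
    gy, gx = np.gradient(img)
    g = np.hypot(gx, gy).ravel()
    p = g / (g.sum() + 1e-12)          # normalize gradient magnitudes to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```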
At step316of sub process311, the ranked N total shots of the k-space can be evaluated and M shots having lowest respective motion scores can be selected as the first section of k-space. The M shots may be, in an example, one or more shots of k-space. In an embodiment, the M shots of the first section of k-space may be shots of the k-space having respective motion scores that fall within a given deviation from a lowest respective motion score of the N total shots. Of course, other metrics and constraints may be used to define a shot of k-space without deviating from the spirit of the present disclosure. In an embodiment, the M shots of the N total shots may be P ranked shots of the N total shots having respective motion scores within a predefined percentage (e.g. 1%, 2%, 3%, 5%, 7.5%, 10%, etc.) of a highest ranked (i.e., minimum motion score) shot of the N total shots of the MR scan. In still another instance, the M shots of the N total shots may be shots of the MR scan wherein respective motion scores of the shots have an average motion score within a predetermined percentage (e.g. 1%, 2%, 3%, 5%, 7.5%, 10%, etc.) of a highest ranked (i.e. minimum motion score) shot of the N total shots of the MR scan. Of course, in any of the above instances, a quantity of the M shots of the total N shots is variable according to motion of the patient during the MR scan. Alternatively, and with reference now toFIG.3D, a first section of k-space may be selected according to, as a data consistency metric, a data consistency error value. The data consistency error value may be calculated for each of the N total shots based on estimated motion parameters of each of the N total shots. Respective data consistency error values can then be evaluated and those having minimal data consistency error values, or having achieved data consistency error values within predefined ranges (i.e. <1%, <2%, <3%, <5%, <7.5%, <10%, etc.) of a minimal data consistency error value, can be selected, at step318of sub process311, as the M shots of N total shots of k-space that are to be used as the first section of k-space. In an embodiment, the data consistency error value reflects, for each shot of the N total shots, a difference between acquired shot data and data predicted by a forward model from an estimated intermediate image and motion. In other words, the data consistency error value can be equal to an error value from Equation (3) used to estimate motion for each shot of the N total shots. In an embodiment, data consistency error values can be implemented within a multi-resolution reconstruction method. For example, at a first resolution level, each of the N total shots may be “motion corrected” and a data consistency error value may be recorded. Subsequently, at a next resolution level, the M shots with lowest data consistency error values, calculated at the first resolution level, may be used to form the intermediate image and perform incremental correction, as described herein. In another embodiment, estimation at the first resolution level may be repeated with an intermediate image reconstructed from M shots having minimal data consistency error values following “motion correction” at the first resolution level. The descriptions ofFIG.3BandFIG.3Cwill now be further described with reference to the illustrations ofFIG.4AandFIG.4B. 
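Selection of the M lowest-motion shots from the ranked scores can be sketched as below. The 1.5% band around the minimum score matches the kind of deviation discussed here and in the FIG. 4B example, but the exact tolerance and the fallback rule are assumptions.

```python
import numpy as np

def select_low_motion_shots(scores, max_rel_dev=0.015, min_shots=1):
    """Return, as the first section of k-space, every shot whose motion score lies
    within max_rel_dev of the minimum score, falling back to the best-ranked shots
    if that band is too narrow."""
    scores = np.asarray(scores, dtype=float)
    band = scores.min() * (1.0 + max_rel_dev)
    keep = np.flatnonzero(scores <= band)
    if keep.size < min_shots:
        keep = np.argsort(scores)[:min_shots]
    return sorted(int(i) for i in keep)
```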
First, with respect to sub process311ofFIG.3B,FIG.4Aprovides a schematic describing selection of M shots to be used as a first section of k-space, according to an exemplary embodiment of the present disclosure. It can be appreciated that a MR scan, or a MR image dataset, can include a k-space406that comprises N total shots of k-space data. The k-space data of each of the N total shots, described inFIG.4Aas406a-406g, are time-dependent signals acquired at different spatial frequencies in k-space. A Fourier transform, which may be a 2D Fourier transform, of the N total shots can be computed in order to produce corresponding grayscale images408a-408g. As shown inFIG.4A, the dashed block indicates selection of a first 3 shots of chronologically-ordered N total shots, where N is 7. In proceeding with step312and step313of sub process311, it can be appreciated that the patient may be able to remain still for a first 3 shots of k-space data, but that a remainder of the k-space data, or 4 shots, will be corrupted, to at least an extent, by patient motion. In an example, the first 3 shots of k-space data may have average motion scores within 1% of a shot of k-space data having a minimal motion score. Thus, as described with reference toFIG.3B, a chronological ordering and selection of M shots of N total shots of k-space data, as the first section of k-space data, can be performed. Second, with respect to sub process311ofFIG.3C,FIG.4Bprovides a schematic describing selection of M shots to be used as a first section of k-space, according to an exemplary embodiment of the present disclosure. It can be appreciated that a MR scan, or a MR image dataset, can include a k-space406that comprises N total shots of k-space data. The k-space data of each of the N total shots, described inFIG.4Bas406a-406g, are time-dependent signals acquired at different spatial frequencies in k-space. A Fourier transform, which may be a 2D Fourier transform, of the N total shots can be computed in order to produce corresponding grayscale images408a-408g. As can be appreciated fromFIG.4B, and assuming the grayscale images408a-408gare ordered chronologically from left to right, a first M shots of the N total shots of the k-space may not have minimal motion, as is desired for the first section of k-space. Accordingly, as in sub process311ofFIG.3C, a motion score can be calculated for each shot of the N total shots of the k-space, the N total shots can be ranked, accordingly (not shown), and M shots can be selected as the first section of k-space. InFIG.4B, the selected M shots are indicated by the dashed blocks surrounding406cand408c,406dand408d, and406fand408f, which are ranked as having lowest motion scores. In an example, the lowest motion scores may be defined as D shots having motion scores within 1.5% of a shot of the k-space determined to have a minimal motion score. From the illustrations, it can be appreciated that the selected M shots that comprise the first section of k-space do not need to be acquired within a specific time window of the MR scan, but can be any shots from the N total shots that satisfy the motion score requirements. Turning now toFIG.5, a description of sub process120of method100will be provided. First, the intermediate image estimated according to the first section of k-space selected at sub process120of method100can be obtained at step521of sub process120. Subsequently, the motion parameters for a second section of k-space can be estimated at step522of sub process120. 
In an embodiment, the second section of k-space can be one or more shots of the N total shots of the k-space of the acquired MR scan. In an example, the second section of k-space is a subsequent shot of the N total shots of the k-space. As described above, the motion parameters of the second section of k-space data can be estimated at step522of sub process120as

$$\hat{T}_i = \min_T \left\| A_i F S T \hat{X}_M - y_i \right\|_2^2 \tag{3}$$

where yiis the k-space data for the second section of k-space, or shot i, Aiis the sampling matrix for PE lines acquired in shot i, T is the matrix of motion parameters, and {circumflex over (T)}i, which is to be solved for, is the estimated motion parameters for shot i (i.e., the second section of k-space). The estimation of Equation (3) can be performed by, for instance, Levenberg-Marquardt or another method such as Newton's. Turning now toFIG.6, and having estimated motion parameters for the data from the second section of k-space at sub process120of method100, combining data from the first section of k-space and the data from the second section of k-space will be described in view of sub process130of method100. At a high-level, sub process130of method100evaluates whether motion present in the second section of k-space is above a level considered to be beneficial to a final image that includes data therein. Thus, at step631of sub process130, a data consistency metric value may be calculated for the second section of k-space. In an embodiment, and as in view ofFIG.3D, the data consistency metric may be a data consistency error value such as the $\ell_2$-norm of the difference between the acquired data and the data projected by the forward model (i.e., Equation (3)) that includes the estimated motion values of the second section of k-space. At step632of sub process130, the calculated data consistency metric value may be compared to an acceptability threshold. In an embodiment, the threshold of acceptability may be a predefined percentage (e.g. <1%, <2%, <3%, <4%, <5%, <7.5%, <10%, etc.) of deviation from a shot, or a section, of k-space having a minimal data consistency metric value. In an example, and in view ofFIG.3D, the threshold of acceptability may be a predefined percentage (e.g. <1%, <2%, <3%, <4%, <5%, <7.5%, <10%, etc.), or other statistic, defining a level of error within the forward model (i.e., Equation (3)). In other words, if it is determined that the motion within the second section of k-space is sufficient to render the final image as having poorer quality, then the data from the second section of k-space should be discarded. Accordingly, if it is determined the data from the second section of k-space does not satisfy the threshold of acceptability at step632of sub process130, the second section of k-space can be discarded at step633of sub process130and a subsequent section of k-space can be considered again at sub process110of method100, if available. Alternatively, if it is determined the data of the second section of k-space does satisfy the acceptability threshold at step632of sub process130, the data of the second section of k-space can be included with the first section of k-space and the vector of motion parameters can be updated at step634of sub process130. In other words, when the second section of k-space is acceptable, {circumflex over (T)}all(i)={circumflex over (T)}i, wherein {circumflex over (T)}iincludes the estimated motion parameters of the second section of k-space. 
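A per-shot fit in the spirit of Equation (3) can be sketched with a Levenberg-Marquardt solver as below. The sketch is single-coil, applies the rotation in image space with scipy.ndimage for brevity, and measures the residual only on the shot's sampled lines; it is not the specification's implementation, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.optimize import least_squares

def estimate_shot_motion(x_m, shot_mask, y_i):
    """Estimate [dy, dx, theta] for one shot by minimizing || A_i F T(x_m) - y_i ||_2."""
    ky = np.fft.fftfreq(x_m.shape[0])[:, None]
    kx = np.fft.fftfreq(x_m.shape[1])[None, :]

    def residual(params):
        dy, dx, theta = params
        moved = (rotate(x_m.real, np.degrees(theta), reshape=False, order=1)
                 + 1j * rotate(x_m.imag, np.degrees(theta), reshape=False, order=1))
        k = np.fft.fft2(moved, norm="ortho") * np.exp(-2j * np.pi * (ky * dy + kx * dx))
        diff = (k - y_i)[shot_mask > 0]            # compare only the lines sampled by this shot
        return np.concatenate([diff.real, diff.imag])

    fit = least_squares(residual, x0=np.zeros(3), method="lm")   # Levenberg-Marquardt
    return fit.x
```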
In either outcome, the result of sub process130of method100can be passed to step140of method100and a determination can be made of whether additional sections of k-space should be considered. In the case of a chronological ordering of k-space data, it may be that additional shots of the k-space should be evaluated at sub process110of method100. Similarly, in the case of a motion-based ranking of the k-space data, it may be that additional shots of the k-space may improve quality of the final image and should be evaluated at sub process110of method100. Ultimately, when it is determined at step140of method100that no additional sections of k-space data can improve the quality of a final image, method100proceeds to step150and a final reconstructed image can be generated. FIG.7provides illustrations that demonstrate the functionality of method100when applied to simulated high motion cases. For instance, using a true image755as a reference, it can be appreciated that an estimate without motion correction756, an estimate with simultaneous correction of all shots757, and an estimate with incremental correction of N total shots758, as described in the present disclosure, provide distinctly different outcomes. Moreover, it can be appreciated that the methods of the present disclosure generate a final reconstruction image758that most closely resembles the true image755. FIG.8illustrates an example embodiment of a medical-imaging system860within which method100of the present disclosure can be implemented. The medical-imaging system860includes at least one scanning device862, one or more image-generation devices864, each of which is a specially-configured computing device (e.g., a specially-configured desktop computer, a specially-configured laptop computer, a specially-configured server), and a display device866. The scanning device862is configured to acquire scan data by scanning a region (e.g., area, volume, slice) of an object (e.g., a patient). The scanning modality may be, for example, magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), X-ray radiography, and ultrasonography. The one or more image-generation devices864obtain scan data from the scanning device862and generate an image of the region of the object based on the scan data. To generate the image, for example during intermediate image generation or during final image reconstruction, the one or more image-generation devices864may perform a reconstruction process on the scan data. Examples of reconstruction processes include GRAPPA, CG-SENSE, SENSE, ARC, SPIRiT, and LORAKS. In an embodiment, after the one or more image-generation devices864generate the image, the one or more image-generation devices864send the image to the display device864, which displays the image. In another embodiment, and further to the above, the one or more image-generation devices864may generate two images from the same scan data. The one or more image-generation devices864may use different reconstruction processes to generate the two images from the same scan data, and one image may have a lower resolution than the other image. Additionally, the one or more image-generation devices864may generate an image. Referring now toFIG.9, a non-limiting example of a magnetic resonance imaging (MRI) system970is shown. The MRI system970depicted inFIG.9includes a gantry971(shown in a schematic cross-section) and various related system components972interfaced therewith. At least the gantry971is typically located in a shielded room. 
The MRI system geometry depicted inFIG.9includes a substantially coaxial cylindrical arrangement of the static field B0magnet973, a Gx, Gy, and Gz gradient coil set974, and a large whole-body RF coil (WBC) assembly975. Along a horizontal axis of this cylindrical array of elements is an imaging volume976shown as substantially encompassing the head of a patient977supported by a patient table978. One or more smaller array RF coils979can be more closely coupled to the patient's head (referred to herein, for example, as “scanned object” or “object”) in imaging volume976. As those in the art will appreciate, compared to the WBC (whole-body coil), relatively small coils and/or arrays, such as surface coils or the like, are often customized for particular body parts (e.g., arms, shoulders, elbows, wrists, knees, legs, chest, spine, etc.). Such smaller RF coils are referred to herein as array coils (AC) or phased-array coils (PAC). These can include at least one coil configured to transmit RF signals into the imaging volume, and a plurality of receiver coils configured to receive RF signals from an object, such as the patient's head, in the imaging volume976. The MRI system970includes a MRI system controller983that has input/output ports connected to a display980, a keyboard981, and a printer982. As will be appreciated, the display980can be of the touch-screen variety so that it provides control inputs as well. A mouse or other I/O device(s) can also be provided. The MRI system controller983interfaces with a MRI sequence controller984, which, in turn, controls the Gx, Gy, and Gz gradient coil drivers985, as well as the RF transmitter986, and the transmit/receive switch987(if the same RF coil is used for both transmission and reception). The MRI sequence controller984includes suitable program code structure988for implementing MRI imaging (also known as nuclear magnetic resonance, or NMR, imaging) techniques including parallel imaging. MRI sequence controller984can be configured for MR imaging with or without parallel imaging. Moreover, the MRI sequence controller984can facilitate one or more preparation scan (pre-scan) sequences, and a scan sequence to obtain a main scan magnetic resonance (MR) image (referred to as a diagnostic image). MR data from pre-scans can be used, for example, to determine sensitivity maps for RF coils975and/or979(sometimes referred to as coil sensitivity maps or spatial sensitivity maps), and to determine unfolding maps for parallel imaging. The MRI system components972include an RF receiver989providing input to data processor990so as to create processed image data, which is sent to display980. The MRI data processor990is also configured to access previously generated MR data, images, and/or maps, such as, for example, coil sensitivity maps, parallel image unfolding maps, distortion maps and/or system configuration parameters991, and MRI image reconstruction program code structures992and993. In one embodiment, the MRI data processor990includes processing circuitry. The processing circuitry can include devices such as an application-specific integrated circuit (ASIC), configurable logic devices (e.g., simple programmable logic devices (SPLDs), complex programmable logic devices (CPLDs), and field programmable gate arrays (FPGAs), and other circuit components that are arranged to perform the functions recited in the present disclosure. 
The processor990executes one or more sequences of one or more instructions, such as method100described herein, contained in the program code structures992and993. Alternatively, the instructions can be read from another computer-readable medium, such as a hard disk or a removable media drive. One or more processors in a multi-processing arrangement can also be employed to execute the sequences of instructions contained in the program code structures992and993. In alternative embodiments, hard-wired circuitry can be used in place of or in combination with software instructions. Thus, the disclosed embodiments are not limited to any specific combination of hardware circuitry and software. Additionally, the term “computer-readable medium” as used herein refers to any non-transitory medium that participates in providing instructions to the processor990for execution. A computer readable medium can take many forms, including, but not limited to, non-volatile media or volatile media. Non-volatile media includes, for example, optical, magnetic disks, and magneto-optical disks, or a removable media drive. Volatile media includes dynamic memory. Also illustrated inFIG.9, and as referenced above, is a generalized depiction of an MRI system program storage (memory)993, where stored program code structures are stored in non-transitory computer-readable storage media accessible to the various data processing components of the MRI system970. As those in the art will appreciate, the program store993can be segmented and directly connected, at least in part, to different ones of the system972processing computers having most immediate need for such stored program code structures in their normal operation (i.e., rather than being commonly stored and connected directly to the MRI system controller983). Additionally, the MRI system970as depicted inFIG.9can be utilized to practice exemplary embodiments described herein below. The system components can be divided into different logical collections of “boxes” and typically comprise numerous digital signal processors (DSP), microprocessors and special purpose processing circuits (e.g., for fast A/D conversions, fast Fourier transforming, array processing, etc.). Each of those processors is typically a clocked “state machine” wherein the physical data processing circuits progress from one physical state to another upon the occurrence of each clock cycle (or predetermined number of clock cycles). Furthermore, not only does the physical state of the processing circuits (e.g., CPUs, registers, buffers, arithmetic units, etc.) progressively change from one clock cycle to another during the course of operation, the physical state of associated data storage media (e.g., bit storage sites in magnetic storage media) is transformed from one state to another during operation of such a system. For example, at the conclusion of an image reconstruction process and/or sometimes an image reconstruction map (e.g., coil sensitivity map, unfolding map, ghosting map, a distortion map etc.) generation process, an array of computer-readable accessible data value storage sites in physical storage media will be transformed from some prior state (e.g., all uniform “zero” values or all “one” values) to a new state wherein the physical states at the physical sites of such an array vary between minimum and maximum values to represent real world physical events and conditions (e.g., the internal physical structures of a patient over an imaging volume space). 
As those in the art will appreciate, such arrays of stored data values represent and also constitute a physical structure, as does a particular structure of computer control program codes that, when sequentially loaded into instruction registers and executed by one or more CPUs of the MRI system970, causes a particular sequence of operational states to occur and be transitioned through within the MRI system970. Obviously, numerous modifications and variations are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein. Embodiments of the present disclosure may also be as set forth in the following parentheticals. (1) An apparatus for incremental motion correction in magnetic resonance imaging, the apparatus comprising processing circuitry configured to estimate an intermediate image from a first section of k-space, the first section of the k-space corresponding to acquisition time points within a magnetic resonance scan of a subject, the corresponding acquisition time points within the magnetic resonance scan being associated with shots of the k-space determined to have minimal motion, estimate motion parameters of a second section of the k-space using the estimated intermediate image, combine data from the first section of the k-space with data from the second section of the k-space according to the estimated motion parameters, and reconstruct the combined data of the k-space to generate a final image. (2) The apparatus according to (1), wherein the processing circuitry is further configured to order N shots of the k-space of the magnetic resonance scan chronologically, and select, as the first section of the k-space, a first M shots of the ordered N shots of the k-space of the magnetic resonance scan. (3) The apparatus according to either (1) or (2), wherein the processing circuitry is further configured to calculate a motion score for each of N shots of the k-space of the magnetic resonance scan, rank the N shots of the k-space of the magnetic resonance scan according to the calculated motion score for each of the N shots of the k-space, the ranked N shots being acquired over a full time period of the magnetic resonance scan, and select, as the first section of the k-space, a section of the k-space that includes a highest ranked shot of the N shots of the k-space and at least one other of the ranked N shots. (4) The apparatus according to any one of (1) to (3), wherein the processing circuitry is further configured to update a vector of motion parameters to include the estimated motion parameters of the second section of the k-space, the vector of motion parameters including motion parameters corresponding to the first section of the k-space. (5) The apparatus according to any one of (1) to (4), wherein the processing circuitry is further configured to reconstruct the combined data of the k-space to generate the final image based on the updated vector of motion parameters. (6) The apparatus according to any one of (1) to (5), wherein the vector of motion parameters includes, for each combined section of the k-space, two translational values and one rotational value. 
(7) The apparatus according to any one of (1) to (6), wherein the processing circuitry is further configured to combine the data from the first section of the k-space with the data from the second section of the k-space according to the estimated motion parameters by calculating a value of a data consistency metric for the second section of the k-space, and discarding, when a comparison indicates the calculated value of the data consistency metric is below a threshold of acceptability, the data from the second section of the k-space. (8) The apparatus according to any one of (1) to (7), wherein the processing circuitry is further configured to calculate a data consistency error value for each of N shots of the k-space of the magnetic resonance scan, and select the first section of the k-space based on the data consistency error values calculated for each of the N shots of the k-space of the magnetic resonance scan. (9) A method for incremental motion correction in magnetic resonance imaging, comprising estimating, by processing circuitry, an intermediate image from a first section of k-space, the first section of the k-space corresponding to acquisition time points within a magnetic resonance scan of a subject, the corresponding acquisition time points within the magnetic resonance scan being associated with shots of the k-space determined to have minimal motion, estimating, by the processing circuitry, motion parameters of a second section of the k-space using the estimated intermediate image, combining, by the processing circuitry, data from the first section of the k-space with data from the second section of the k-space according to the estimated motion parameters, and reconstructing, by the processing circuitry, the combined data of the k-space to generate a final image. (10) The method according to (9), further comprising ordering, by the processing circuitry, N shots of the magnetic resonance scan chronologically, and selecting, by the processing circuitry and as the first section of the k-space, a first M shots of the ordered N shots of the magnetic resonance scan. (11) The method according to either (9) or (10), further comprising calculating, by the processing circuitry, a motion score for each of N shots of the magnetic resonance scan, ranking, by the processing circuitry, the N shots of the magnetic resonance scan according to the calculated motion score for each of the N shots, the ranked N shots being acquired over a full time period of the magnetic resonance scan, and selecting, by the processing circuitry and as the first section of the k-space, a section of the k-space that includes a highest ranked shot of the N shots of the k-space and at least one other of the ranked N shots. (12) The method according to any one of (9) to (11), further comprising updating, by the processing circuitry, a vector of motion parameters to include the estimated motion parameters of the second section of the k-space, the vector of motion parameters including motion parameters corresponding to the first section of the k-space. (13) The method according to any one of (9) to (12), wherein the reconstructing the combined data of the k-space to generate the final image is based on the updated vector of motion parameters. (14) The method according to any one of (9) to (13), wherein the updated vector of motion parameters includes, for each combined section of the k-space, two translational values and one rotational value. 
(15) The method according to any one of (9) to (14), wherein the combining the data from the first section of the k-space with the data from the second section of the k-space according to the estimated motion parameters includes calculating, by the processing circuitry, a value of a data consistency metric for the second section of the k-space, and discarding, by the processing circuitry and when a comparison indicates the calculated value of the data consistency metric is below a threshold of acceptability, the data from the second section of the k-space. (16) A non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform a method for incremental motion correction in magnetic resonance imaging, comprising estimating an intermediate image from a first section of k-space, the first section of the k-space corresponding to acquisition time points within a magnetic resonance scan of a subject, the corresponding acquisition time points within the magnetic resonance scan being associated with shots of the k-space determined to have minimal motion, estimating motion parameters of a second section of the k-space using the estimated intermediate image, combining data from the first section of the k-space with data from the second section of the k-space according to the estimated motion parameters, and reconstructing the combined data of the k-space to generate a final image. (17) The non-transitory computer-readable storage medium according to (16), further comprising ordering N shots of the magnetic resonance scan chronologically, and selecting, as the first section of the k-space, a first M shots of the ordered N shots of the magnetic resonance scan. (18) The non-transitory computer-readable storage medium according to either (16) or (17), further comprising calculating a motion score for each of N shots of the magnetic resonance scan, ranking the N shots of the magnetic resonance scan according to the calculated motion score for each of the N shots, the ranked N shots being acquired over a full time period of the magnetic resonance scan, and selecting, as the first section of the k-space, a section of the k-space that includes a highest ranked shot of the N shots of the k-space and at least one other of the ranked N shots. (19) The non-transitory computer-readable storage medium according to any one of (16) to (18), further comprising updating a vector of motion parameters to include the estimated motion parameters of the second section of the k-space, the vector of motion parameters including motion parameters corresponding to the first section of the k-space. (20) The non-transitory computer-readable storage medium according to any one of (16) to (19), wherein the combining the data from the first section of the k-space with the data from the second section of the k-space according to the estimated motion parameters includes calculating a value of a data consistency metric for the second section of the k-space, and discarding, when the comparison indicates the calculated value of the data consistency metric is below a threshold of acceptability, the data from the second section of the k-space. Thus, the foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. 
Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.
62,402
11861767
DETAILED DESCRIPTION Implementations are described herein according to the following outline:1.0. General Overview2.0. Operating Environment2.1. Host Devices2.2. Client Devices2.3. Client Device Applications2.4. Data Server System2.5 Cloud-Based System Overview2.6 Searching Externally-Archived Data2.6.1. ERP Process Features2.7. Data Ingestion2.7.1. Input2.7.2. Parsing2.7.3. Indexing2.8. Query Processing2.9. Pipelined Search Language2.10. Field Extraction2.11. Example Search Screen2.12. Data Modeling2.13. Acceleration Techniques2.13.1. Aggregation Technique2.13.2. Keyword Index2.13.3. High Performance Analytics Store2.13.3.1 Extracting Event Data Using Posting Values2.13.4. Accelerating Report Generation2.14. Security Features2.15. Data Center Monitoring3.0. Streaming Data Visualizations3.1. Systems for Streaming Data Visualizations3.2. Techniques for Streaming Data Visualizations 1.0. GENERAL OVERVIEW Modern data centers and other computing environments can comprise anywhere from a few host computer systems to thousands of systems configured to process data, service requests from remote clients, and perform numerous other computational tasks. During operation, various components within these computing environments often generate significant volumes of machine data. Machine data is any data produced by a machine or component in an information technology (IT) environment and that reflects activity in the IT environment. For example, machine data can be raw machine data that is generated by various components in IT environments, such as servers, sensors, routers, mobile devices, Internet of Things (IoT) devices, etc. Machine data can include system logs, network packet data, sensor data, application program data, error logs, stack traces, system performance data, etc. In general, machine data can also include performance data, diagnostic information, and many other types of data that can be analyzed to diagnose performance problems, monitor user interactions, and to derive other insights. A number of tools are available to analyze machine data. In order to reduce the size of the potentially vast amount of machine data that may be generated, many of these tools typically pre-process the data based on anticipated data-analysis needs. For example, pre-specified data items may be extracted from the machine data and stored in a database to facilitate efficient retrieval and analysis of those data items at search time. However, the rest of the machine data typically is not saved and is discarded during pre-processing. As storage capacity becomes progressively cheaper and more plentiful, there are fewer incentives to discard these portions of machine data and many reasons to retain more of the data. This plentiful storage capacity is presently making it feasible to store massive quantities of minimally processed machine data for later retrieval and analysis. In general, storing minimally processed machine data and performing analysis operations at search time can provide greater flexibility because it enables an analyst to search all of the machine data, instead of searching only a pre-specified set of data items. This may enable an analyst to investigate different aspects of the machine data that previously were unavailable for analysis. However, analyzing and searching massive quantities of machine data presents a number of challenges. 
For example, a data center, servers, or network appliances may generate many different types and formats of machine data (e.g., system logs, network packet data (e.g., wire data, etc.), sensor data, application program data, error logs, stack traces, system performance data, operating system data, virtualization data, etc.) from thousands of different components, which can collectively be very time-consuming to analyze. In another example, mobile devices may generate large amounts of information relating to data accesses, application performance, operating system performance, network performance, etc. There can be millions of mobile devices that report these types of information. These challenges can be addressed by using an event-based data intake and query system, such as the SPLUNK® ENTERPRISE system developed by Splunk Inc. of San Francisco, California. The SPLUNK® ENTERPRISE system is the leading platform for providing real-time operational intelligence that enables organizations to collect, index, and search machine data from various websites, applications, servers, networks, and mobile devices that power their businesses. The data intake and query system is particularly useful for analyzing data which is commonly found in system log files, network data, and other data input sources. Although many of the techniques described herein are explained with reference to a data intake and query system similar to the SPLUNK® ENTERPRISE system, these techniques are also applicable to other types of data systems. In the data intake and query system, machine data are collected and stored as “events”. An event comprises a portion of machine data and is associated with a specific point in time. The portion of machine data may reflect activity in an IT environment and may be produced by a component of that IT environment, where the events may be searched to provide insight into the IT environment, thereby improving the performance of components in the IT environment. Events may be derived from “time series data,” where the time series data comprises a sequence of data points (e.g., performance measurements from a computer system, etc.) that are associated with successive points in time. In general, each event has a portion of machine data that is associated with a timestamp that is derived from the portion of machine data in the event. A timestamp of an event may be determined through interpolation between temporally proximate events having known timestamps or may be determined based on other configurable rules for associating timestamps with events. In some instances, machine data can have a predefined format, where data items with specific data formats are stored at predefined locations in the data. For example, the machine data may include data associated with fields in a database table. In other instances, machine data may not have a predefined format (e.g., may not be at fixed, predefined locations), but may have repeatable (e.g., non-random) patterns. This means that some machine data can comprise various data items of different data types that may be stored at different locations within the data. For example, when the data source is an operating system log, an event can include one or more lines from the operating system log containing machine data that includes different types of performance and diagnostic information associated with a specific point in time (e.g., a timestamp). 
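As a concrete illustration of the event structure described above, the following sketch models an event as raw machine data plus a timestamp and metadata. The class and attribute names are illustrative and are not the data intake and query system's actual internal representation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Event:
    """A portion of raw machine data tied to a specific point in time; field values
    are not stored here but are extracted later, at search time."""
    raw: str                                       # the portion of machine data, kept unmodified
    timestamp: float                               # derived from the machine data (epoch seconds)
    metadata: Dict[str, str] = field(default_factory=dict)   # e.g. host, source, sourcetype

# e.g. Event(raw='127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /a.gif HTTP/1.0" 200 2326',
#            timestamp=971211336.0, metadata={"sourcetype": "access_combined"})
```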
Examples of components which may generate machine data from which events can be derived include, but are not limited to, web servers, application servers, databases, firewalls, routers, operating systems, and software applications that execute on computer systems, mobile devices, sensors, Internet of Things (IoT) devices, etc. The machine data generated by such data sources can include, for example and without limitation, server log files, activity log files, configuration files, messages, network packet data, performance measurements, sensor measurements, etc. The data intake and query system uses a flexible schema to specify how to extract information from events. A flexible schema may be developed and redefined as needed. Note that a flexible schema may be applied to events “on the fly,” when it is needed (e.g., at search time, index time, ingestion time, etc.). When the schema is not applied to events until search time, the schema may be referred to as a “late-binding schema.” During operation, the data intake and query system receives machine data from any type and number of sources (e.g., one or more system logs, streams of network packet data, sensor data, application program data, error logs, stack traces, system performance data, etc.). The system parses the machine data to produce events each having a portion of machine data associated with a timestamp. The system stores the events in a data store. The system enables users to run queries against the stored events to, for example, retrieve events that meet criteria specified in a query, such as criteria indicating certain keywords or having specific values in defined fields. As used herein, the term “field” refers to a location in the machine data of an event containing one or more values for a specific data item. A field may be referenced by a field name associated with the field. As will be described in more detail herein, a field is defined by an extraction rule (e.g., a regular expression) that derives one or more values or a sub-portion of text from the portion of machine data in each event to produce a value for the field for that event. The set of values produced are semantically-related (such as IP address), even though the machine data in each event may be in different formats (e.g., semantically-related values may be in different positions in the events derived from different sources). As described above, the system stores the events in a data store. The events stored in the data store are field-searchable, where field-searchable herein refers to the ability to search the machine data (e.g., the raw machine data) of an event based on a field specified in search criteria. For example, a search having criteria that specifies a field name “UserID” may cause the system to field-search the machine data of events to identify events that have the field name “UserID.” In another example, a search having criteria that specifies a field name “UserID” with a corresponding field value “12345” may cause the system to field-search the machine data of events to identify events having that field-value pair (e.g., field name “UserID” with a corresponding field value of “12345”). Events are field-searchable using one or more configuration files associated with the events. Each configuration file includes one or more field names, where each field name is associated with a corresponding extraction rule and a set of events to which that extraction rule applies. 
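A compact sketch of the extraction-rule machinery just described is given below: a rule pairs a field name with a regular expression and the sourcetype it applies to, and the value is extracted from the raw event text only when a search asks for it. The dictionary keys, the named capture group, and the sourcetype-only applicability test are illustrative simplifications, not the system's actual configuration syntax.

```python
import re
from typing import Dict, List, Optional

def extract_field(event: Dict, rules: List[Dict], field_name: str) -> Optional[str]:
    """Apply the first extraction rule for field_name that applies to this event's
    sourcetype; the field value is derived from the raw machine data at search time."""
    for rule in rules:
        if rule["field"] != field_name or rule["sourcetype"] != event["sourcetype"]:
            continue                                   # rule does not apply to this event
        m = re.search(rule["regex"], event["raw"])     # regex with a named group 'value'
        return m.group("value") if m else None
    return None

# e.g. rules = [{"field": "UserID", "sourcetype": "app_log", "regex": r"user=(?P<value>\d+)"}]
#      extract_field({"sourcetype": "app_log", "raw": "user=12345 action=login"}, rules, "UserID")
#      -> '12345'
```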
The set of events to which an extraction rule applies may be identified by metadata associated with the set of events. For example, an extraction rule may apply to a set of events that are each associated with a particular host, source, or source type. When events are to be searched based on a particular field name specified in a search, the system uses one or more configuration files to determine whether there is an extraction rule for that particular field name that applies to each event that falls within the criteria of the search. If so, the event is considered as part of the search results (and additional processing may be performed on that event based on criteria specified in the search). If not, the next event is similarly analyzed, and so on. As noted above, the data intake and query system utilizes a late-binding schema while performing queries on events. One aspect of a late-binding schema is applying extraction rules to events to extract values for specific fields during search time. More specifically, the extraction rule for a field can include one or more instructions that specify how to extract a value for the field from an event. An extraction rule can generally include any type of instruction for extracting values from events. In some cases, an extraction rule comprises a regular expression, where a sequence of characters form a search pattern. An extraction rule comprising a regular expression is referred to herein as a regex rule. The system applies a regex rule to an event to extract values for a field associated with the regex rule, where the values are extracted by searching the event for the sequence of characters defined in the regex rule. In the data intake and query system, a field extractor may be configured to automatically generate extraction rules for certain fields in the events when the events are being created, indexed, or stored, or possibly at a later time. Alternatively, a user may manually define extraction rules for fields using a variety of techniques. In contrast to a conventional schema for a database system, a late-binding schema is not defined at data ingestion time. Instead, the late-binding schema can be developed on an ongoing basis until the time a query is actually executed. This means that extraction rules for the fields specified in a query may be provided in the query itself, or may be located during execution of the query. Hence, as a user learns more about the data in the events, the user can continue to refine the late-binding schema by adding new fields, deleting fields, or modifying the field extraction rules for use the next time the schema is used by the system. Because the data intake and query system maintains the underlying machine data and uses a late-binding schema for searching the machine data, it enables a user to continue investigating and learn valuable insights about the machine data. In some implementations, a common field name may be used to reference two or more fields containing equivalent and/or similar data items, even though the fields may be associated with different types of events that possibly have different data formats and different extraction rules. By enabling a common field name to be used to identify equivalent and/or similar fields from different types of events generated by disparate data sources, the system facilitates use of a “common information model” (CIM) across the disparate data sources (further discussed with respect toFIG.7A). 2.0. 
OPERATING ENVIRONMENT FIG.1is a block diagram of an example networked computer environment100, in accordance with example implementations. Those skilled in the art would understand thatFIG.1represents one example of a networked computer system and other implementations may use different arrangements. The networked computer system100comprises one or more computing devices. These one or more computing devices comprise any combination of hardware and software configured to implement the various logical components described herein. For example, the one or more computing devices may include one or more memories that store instructions for implementing the various components described herein, one or more hardware processors configured to execute the instructions stored in the one or more memories, and various data repositories in the one or more memories for storing data structures utilized and manipulated by the various components. In some implementations, one or more client devices102are coupled to one or more host devices106and a data intake and query system108via one or more networks104. Networks104broadly represent one or more LANs, WANs, cellular networks (e.g., LTE, HSPA, 3G, and other cellular technologies), and/or networks using any of wired, wireless, terrestrial microwave, or satellite links, and may include the public Internet.2.1. HOST DEVICES In the illustrated implementation, a system100includes one or more host devices106. Host devices106may broadly include any number of computers, virtual machine instances, and/or data centers that are configured to host or execute one or more instances of host applications114. In general, a host device106may be involved, directly or indirectly, in processing requests received from client devices102. Each host device106may comprise, for example, one or more of a network device, a web server, an application server, a database server, etc. A collection of host devices106may be configured to implement a network-based service. For example, a provider of a network-based service may configure one or more host devices106and host applications114(e.g., one or more web servers, application servers, database servers, etc.) to collectively implement the network-based application. In general, client devices102communicate with one or more host applications114to exchange information. The communication between a client device102and a host application114may, for example, be based on the Hypertext Transfer Protocol (HTTP) or any other network protocol. Content delivered from the host application114to a client device102may include, for example, HTML documents, media content, etc. The communication between a client device102and host application114may include sending various requests and receiving data packets. For example, in general, a client device102or application running on a client device may initiate communication with a host application114by making a request for a specific resource (e.g., based on an HTTP request), and the application server may respond with the requested content stored in one or more response packets. In the illustrated implementation, one or more of host applications114may generate various types of performance data during operation, including event logs, network data, sensor data, and other types of machine data. For example, a host application114comprising a web server may generate one or more web server logs in which details of interactions between the web server and any number of client devices102is recorded. 
As another example, a host device106comprising a router may generate one or more router logs that record information related to network traffic managed by the router. As yet another example, a host application114comprising a database server may generate one or more logs that record information related to requests sent from other host applications114(e.g., web servers or application servers) for data managed by the database server. 2.2. Client Devices Client devices102ofFIG.1represent any computing device capable of interacting with one or more host devices106via a network104. Examples of client devices102may include, without limitation, smartphones, tablet computers, handheld computers, wearable devices, laptop computers, desktop computers, servers, portable media players, gaming devices, and so forth. In general, a client device102can provide access to different content, for instance, content provided by one or more host devices106, etc. Each client device102may comprise one or more client applications110, described in more detail in a separate section hereinafter. 2.3. Client Device Applications In some implementations, each client device102may host or execute one or more client applications110that are capable of interacting with one or more host devices106via one or more networks104. For instance, a client application110may be or comprise a web browser that a user may use to navigate to one or more websites or other resources provided by one or more host devices106. As another example, a client application110may comprise a mobile application or “app.” For example, an operator of a network-based service hosted by one or more host devices106may make available one or more mobile apps that enable users of client devices102to access various resources of the network-based service. As yet another example, client applications110may include background processes that perform various operations without direct interaction from a user. A client application110may include a “plug-in” or “extension” to another application, such as a web browser plug-in or extension. In some implementations, a client application110may include a monitoring component112. At a high level, the monitoring component112comprises a software component or other logic that facilitates generating performance data related to a client device's operating state, including monitoring network traffic sent and received from the client device and collecting other device and/or application-specific information. Monitoring component112may be an integrated component of a client application110, a plug-in, an extension, or any other type of add-on component. Monitoring component112may also be a stand-alone process. In some implementations, a monitoring component112may be created when a client application110is developed, for example, by an application developer using a software development kit (SDK). The SDK may include custom monitoring code that can be incorporated into the code implementing a client application110. When the code is converted to an executable application, the custom code implementing the monitoring functionality can become part of the application itself. In some implementations, an SDK or other code for implementing the monitoring functionality may be offered by a provider of a data intake and query system, such as a system108. 
In such cases, the provider of the system108can implement the custom code so that performance data generated by the monitoring functionality is sent to the system108to facilitate analysis of the performance data by a developer of the client application or other users. In some implementations, the custom monitoring code may be incorporated into the code of a client application110in a number of different ways, such as the insertion of one or more lines in the client application code that call or otherwise invoke the monitoring component112. As such, a developer of a client application110can add one or more lines of code into the client application110to trigger the monitoring component112at desired points during execution of the application. Code that triggers the monitoring component may be referred to as a monitor trigger. For instance, a monitor trigger may be included at or near the beginning of the executable code of the client application110such that the monitoring component112is initiated or triggered as the application is launched, or included at other points in the code that correspond to various actions of the client application, such as sending a network request or displaying a particular interface. In some implementations, the monitoring component112may monitor one or more aspects of network traffic sent and/or received by a client application110. For example, the monitoring component112may be configured to monitor data packets transmitted to and/or from one or more host applications114. Incoming and/or outgoing data packets can be read or examined to identify network data contained within the packets, for example, and other aspects of data packets can be analyzed to determine a number of network performance statistics. Monitoring network traffic may enable information to be gathered particular to the network performance associated with a client application110or set of applications. In some implementations, network performance data refers to any type of data that indicates information about the network and/or network performance. Network performance data may include, for instance, a URL requested, a connection type (e.g., HTTP, HTTPS, etc.), a connection start time, a connection end time, an HTTP status code, request length, response length, request headers, response headers, connection status (e.g., completion, response time(s), failure, etc.), and the like. Upon obtaining network performance data indicating performance of the network, the network performance data can be transmitted to a data intake and query system108for analysis. Upon developing a client application110that incorporates a monitoring component112, the client application110can be distributed to client devices102. Applications generally can be distributed to client devices102in any manner, or they can be pre-loaded. In some cases, the application may be distributed to a client device102via an application marketplace or other application distribution system. For instance, an application marketplace or other application distribution system might distribute the application to a client device based on a request from the client device to download the application. Examples of functionality that enables monitoring performance of a client device are described in U.S. patent application Ser. No. 14/524,748, entitled “UTILIZING PACKET HEADERS TO MONITOR NETWORK TRAFFIC IN ASSOCIATION WITH A CLIENT DEVICE”, filed on 27 Oct. 2014, and which is hereby incorporated by reference in its entirety for all purposes. 
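The following minimal Python sketch (illustrative only; it is not the SDK or monitoring code referenced above, and the record fields are assumptions) shows how a monitoring hook might capture network performance data for a single request and queue it for later transmission to a data intake and query system:

```python
# Minimal sketch (illustrative; not the SDK referenced above): a monitoring
# hook that records network performance data for one request and queues it
# for transmission to a data intake and query system.
import time

PENDING_RECORDS = []  # stand-in for a buffer that is later sent over the network

def monitored_request(url: str, send) -> bytes:
    start = time.time()
    status, body = send(url)          # `send` is the application's own transport call
    elapsed_ms = (time.time() - start) * 1000.0
    PENDING_RECORDS.append({
        "url": url,
        "connection_type": "HTTPS" if url.startswith("https") else "HTTP",
        "http_status": status,
        "response_length": len(body),
        "response_time_ms": round(elapsed_ms, 2),
    })
    return body

# Usage with a fake transport, so the sketch runs without network access.
fake_send = lambda url: (200, b"<html>ok</html>")
monitored_request("https://example.com/api/items", fake_send)
print(PENDING_RECORDS[0])
```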
In some implementations, the monitoring component112may also monitor and collect performance data related to one or more aspects of the operational state of a client application110and/or client device102. For example, a monitoring component112may be configured to collect device performance information by monitoring one or more client device operations, or by making calls to an operating system and/or one or more other applications executing on a client device102for performance information. Device performance information may include, for instance, a current wireless signal strength of the device, a current connection type and network carrier, current memory performance information, a geographic location of the device, a device orientation, and any other information related to the operational state of the client device. In some implementations, the monitoring component112may also monitor and collect other device profile information including, for example, a type of client device, a manufacturer and model of the device, versions of various software applications installed on the device, and so forth. In general, a monitoring component112may be configured to generate performance data in response to a monitor trigger in the code of a client application110or other triggering application event, as described above, and to store the performance data in one or more data records. Each data record, for example, may include a collection of field-value pairs, each field-value pair storing a particular item of performance data in association with a field for the item. For example, a data record generated by a monitoring component112may include a "networkLatency" field (not shown inFIG.1) in which a value is stored. This field indicates a network latency measurement associated with one or more network requests. The data record may include a "state" field to store a value indicating a state of a network connection, and so forth for any number of aspects of collected performance data. 2.4. Data Server System FIG.2is a block diagram of an example data intake and query system108, in accordance with example implementations. System108includes one or more forwarders204that receive data from a variety of input data sources202, and one or more indexers206that process and store the data in one or more data stores208. These forwarders204and indexers206can comprise separate computer systems, or may alternatively comprise separate processes executing on one or more computer systems. Each data source202broadly represents a distinct source of data that can be consumed by system108. Examples of data sources202include, without limitation, data files, directories of files, data sent over a network, event logs, registries, etc. During operation, the forwarders204identify which indexers206receive data collected from a data source202and forward the data to the appropriate indexers. Forwarders204can also perform operations on the data before forwarding, including removing extraneous data, detecting timestamps in the data, parsing data, indexing data, routing data based on criteria relating to the data being routed, and/or performing other data transformations. In some implementations, a forwarder204may comprise a service accessible to client devices102and host devices106via a network104. For example, one type of forwarder204may be capable of consuming vast amounts of real-time data from a potentially large number of client devices102and/or host devices106.
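As a brief illustration of the data records of field-value pairs described above (the "networkLatency" and "state" field names come from the description; the record structure is otherwise assumed), a sketch in Python might look like:

```python
# Minimal sketch: a performance data record expressed as field-value pairs,
# using the "networkLatency" and "state" field names mentioned above purely
# for illustration.
def make_performance_record(latency_ms: float, connection_state: str, **extra) -> dict:
    record = {"networkLatency": latency_ms, "state": connection_state}
    record.update(extra)  # any number of additional collected measurements
    return record

print(make_performance_record(42.7, "completed", carrier="ExampleCell", signal_dbm=-71))
```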
The forwarder204may, for example, comprise a computing device which implements multiple data pipelines or "queues" to handle forwarding of network data to indexers206. A forwarder204may also perform many of the functions that are performed by an indexer. For example, a forwarder204may perform keyword extractions on raw data or parse raw data to create events. A forwarder204may generate time stamps for events. Additionally or alternatively, a forwarder204may perform routing of events to indexers206. Data store208may contain events derived from machine data from a variety of sources all pertaining to the same component in an IT environment, and this data may be produced by the machine in question or by other components in the IT environment. 2.5. Cloud-Based System Overview The example data intake and query system108described in reference toFIG.2comprises several system components, including one or more forwarders, indexers, and search heads. In some environments, a user of a data intake and query system108may install and configure, on computing devices owned and operated by the user, one or more software applications that implement some or all of these system components. For example, a user may install a software application on server computers owned by the user and configure each server to operate as one or more of a forwarder, an indexer, a search head, etc. This arrangement generally may be referred to as an "on-premises" solution. That is, the system108is installed and operates on computing devices directly controlled by the user of the system. Some users may prefer an on-premises solution because it may provide a greater level of control over the configuration of certain aspects of the system (e.g., security, privacy, standards, controls, etc.). However, other users may instead prefer an arrangement in which the user is not directly responsible for providing and managing the computing devices upon which various components of system108operate. In one implementation, to provide an alternative to an entirely on-premises environment for system108, one or more of the components of a data intake and query system instead may be provided as a cloud-based service. In this context, a cloud-based service refers to a service hosted by one or more computing resources that are accessible to end users over a network, for example, by using a web browser or other application on a client device to interface with the remote computing resources. For example, a service provider may provide a cloud-based data intake and query system by managing computing resources configured to implement various aspects of the system (e.g., forwarders, indexers, search heads, etc.) and by providing access to the system to end users via a network. Typically, a user may pay a subscription or other fee to use such a service. Each subscribing user of the cloud-based service may be provided with an account that enables the user to configure a customized cloud-based system based on the user's preferences. FIG.3illustrates a block diagram of an example cloud-based data intake and query system. Similar to the system ofFIG.2, the networked computer system300includes input data sources202and forwarders204. These input data sources and forwarders may be in a subscriber's private computing environment. Alternatively, they might be directly managed by the service provider as part of the cloud service.
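The forwarder behaviors described above (parsing raw data into events, generating timestamps, and routing events to indexers) might be sketched as follows; the hash-based routing policy and the queue structures are assumptions made for illustration:

```python
# Minimal sketch of the forwarder behaviors described above: parse raw data
# into events, attach a timestamp, and route each event to an indexer. The
# hash-based routing policy is an assumption made for illustration.
import time
from collections import defaultdict

INDEXER_QUEUES = defaultdict(list)   # stand-ins for indexers 206
NUM_INDEXERS = 3

def forward(raw_stream: str, source: str) -> None:
    for line in raw_stream.splitlines():
        if not line.strip():
            continue  # drop extraneous blank lines before forwarding
        event = {"_raw": line, "source": source, "_time": time.time()}
        indexer_id = hash(source) % NUM_INDEXERS   # route by source
        INDEXER_QUEUES[indexer_id].append(event)

forward("line one\n\nline two\n", source="/var/log/app.log")
print({k: len(v) for k, v in INDEXER_QUEUES.items()})
```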
In the example system300, one or more forwarders204and client devices302are coupled to a cloud-based data intake and query system306via one or more networks304. Network304broadly represents one or more LANs, WANs, cellular networks, intranetworks, internetworks, etc., using any of wired, wireless, terrestrial microwave, satellite links, etc., and may include the public Internet, and is used by client devices302and forwarders204to access the system306. Similar to the system ofFIG.2, each of the forwarders204may be configured to receive data from an input source and to forward the data to other components of the system306for further processing. In some implementations, a cloud-based data intake and query system306may comprise a plurality of system instances308. In general, each system instance308may include one or more computing resources managed by a provider of the cloud-based system306made available to a particular subscriber. The computing resources comprising a system instance308may, for example, include one or more servers or other devices configured to implement one or more forwarders, indexers, search heads, and other components of a data intake and query system, similar to system108. As indicated above, a subscriber may use a web browser or other application of a client device302to access a web portal or other interface that enables the subscriber to configure an instance308. Providing a data intake and query system as described in reference to system108as a cloud-based service presents a number of challenges. Each of the components of a system108(e.g., forwarders, indexers, and search heads) may at times refer to various configuration files stored locally at each component. These configuration files typically may involve some level of user configuration to accommodate particular types of data a user desires to analyze and to account for other user preferences. However, in a cloud-based service context, users typically may not have direct access to the underlying computing resources implementing the various system components (e.g., the computing resources comprising each system instance308) and may desire to make such configurations indirectly, for example, using one or more web-based interfaces. Thus, the techniques and systems described herein for providing user interfaces that enable a user to configure source type definitions are applicable to both on-premises and cloud-based service contexts, or some combination thereof (e.g., a hybrid system where both an on-premises environment, such as SPLUNK® ENTERPRISE, and a cloud-based environment, such as SPLUNK CLOUD™, are centrally visible). 2.6. Searching Externally-Archived Data FIG.4shows a block diagram of an example of a data intake and query system108that provides transparent search facilities for data systems that are external to the data intake and query system. Such facilities are available in the Splunk® Analytics for Hadoop® system provided by Splunk Inc. of San Francisco, California. Splunk® Analytics for Hadoop® represents an analytics platform that enables business and IT teams to rapidly explore, analyze, and visualize data in Hadoop® and NoSQL data stores. The search head210of the data intake and query system receives search requests from one or more client devices404over network connections420. As discussed above, the data intake and query system108may reside in an enterprise location, in the cloud, etc.FIG.4illustrates that multiple client devices404a,404b, . . .
,404nmay communicate with the data intake and query system108. The client devices404may communicate with the data intake and query system using a variety of connections. For example, one client device inFIG.4is illustrated as communicating over an Internet (Web) protocol, another client device is illustrated as communicating via a command line interface, and another client device is illustrated as communicating via a software developer kit (SDK). The search head210analyzes the received search request to identify request parameters. If a search request received from one of the client devices404references an index maintained by the data intake and query system, then the search head210connects to one or more indexers206of the data intake and query system for the index referenced in the request parameters. That is, if the request parameters of the search request reference an index, then the search head accesses the data in the index via the indexer. The data intake and query system108may include one or more indexers206, depending on system access resources and requirements. As described further below, the indexers206retrieve data from their respective local data stores208as specified in the search request. The indexers and their respective data stores can comprise one or more storage devices and typically reside on the same system, though they may be connected via a local network connection. If the request parameters of the received search request reference an external data collection, which is not accessible to the indexers206or under the management of the data intake and query system, then the search head210can access the external data collection through an External Result Provider (ERP) process410. An external data collection may be referred to as a “virtual index” (plural, “virtual indices”). An ERP process provides an interface through which the search head210may access virtual indices. Thus, a search reference to an index of the system relates to a locally stored and managed data collection. In contrast, a search reference to a virtual index relates to an externally stored and managed data collection, which the search head may access through one or more ERP processes410,412.FIG.4shows two ERP processes410,412that connect to respective remote (external) virtual indices, which are indicated as a Hadoop or another system414(e.g., Amazon S3, Amazon EMR, other Hadoop® Compatible File Systems (HCFS), etc.) and a relational database management system (RDBMS)416. Other virtual indices may include other file organizations and protocols, such as Structured Query Language (SQL) and the like. The ellipses between the ERP processes410,412indicate optional additional ERP processes of the data intake and query system108. An ERP process may be a computer process that is initiated or spawned by the search head210and is executed by the search data intake and query system108. Alternatively or additionally, an ERP process may be a process spawned by the search head210on the same or different host system as the search head210resides. The search head210may spawn a single ERP process in response to multiple virtual indices referenced in a search request, or the search head may spawn different ERP processes for different virtual indices. Generally, virtual indices that share common data configurations or protocols may share ERP processes. For example, all search query references to a Hadoop file system may be processed by the same ERP process, if the ERP process is suitably configured. 
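A minimal sketch of the dispatch decision described above, in which references to a local index are handled by the indexers while references to a virtual index are handled through an ERP process, is shown below; all index and process names are hypothetical:

```python
# Minimal sketch (all names hypothetical): a search head deciding whether a
# referenced index is local (handled by indexers) or a virtual index
# (handled through an ERP process for the external system).
LOCAL_INDEXES = {"main", "security"}
VIRTUAL_INDEXES = {"hadoop_archive": "erp_hadoop", "orders_db": "erp_rdbms"}

def dispatch(index_name: str) -> str:
    if index_name in LOCAL_INDEXES:
        return f"search indexers' local data stores for index '{index_name}'"
    if index_name in VIRTUAL_INDEXES:
        erp = VIRTUAL_INDEXES[index_name]
        return f"spawn or reuse ERP process '{erp}' to query virtual index '{index_name}'"
    raise ValueError(f"unknown index: {index_name}")

print(dispatch("main"))
print(dispatch("hadoop_archive"))
```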
Likewise, all search query references to a SQL database may be processed by the same ERP process. In addition, the search head may provide a common ERP process for common external data source types (e.g., a common vendor may utilize a common ERP process, even if the vendor includes different data storage system types, such as Hadoop and SQL). Common indexing schemes also may be handled by common ERP processes, such as flat text files or Weblog files. The search head210determines the number of ERP processes to be initiated via the use of configuration parameters that are included in a search request message. Generally, there is a one-to-many relationship between an external results provider "family" and ERP processes. There is also a one-to-many relationship between an ERP process and corresponding virtual indices that are referred to in a search request. For example, using RDBMS, assume two independent instances of such a system by one vendor, such as one RDBMS for production and another RDBMS used for development. In such a situation, it is likely preferable (but optional) to use two ERP processes to maintain the independent operation as between production and development data. Both of the ERPs, however, will belong to the same family, because the two RDBMS system types are from the same vendor. The ERP processes410,412receive a search request from the search head210. The search head may optimize the received search request for execution at the respective external virtual index. Alternatively, the ERP process may receive a search request as a result of analysis performed by the search head or by a different system process. The ERP processes410,412can communicate with the search head210via conventional input/output routines (e.g., standard in/standard out, etc.). In this way, the ERP process receives the search request from a client device such that the search request may be efficiently executed at the corresponding external virtual index. The ERP processes410,412may be implemented as a process of the data intake and query system. Each ERP process may be provided by the data intake and query system, or may be provided by process or application providers who are independent of the data intake and query system. Each respective ERP process may include an interface application installed at a computer of the external result provider that ensures proper communication between the search support system and the external result provider. The ERP processes410,412generate appropriate search requests in the protocol and syntax of the respective virtual indices414,416, each of which corresponds to the search request received by the search head210. Upon receiving search results from their corresponding virtual indices, the respective ERP process passes the result to the search head210, which may return or display the results or a processed set of results based on the returned results to the respective client device. Client devices404may communicate with the data intake and query system108through a network interface420, e.g., one or more LANs, WANs, cellular networks, intranetworks, and/or internetworks using any of wired, wireless, terrestrial microwave, satellite links, etc., and may include the public Internet. The analytics platform utilizing the External Result Provider process is described in more detail in U.S. Pat. No. 8,738,629, entitled "External Result Provided Process For Retrieving Data Stored Using A Different Configuration Or Protocol", issued on 27 May 2014, U.S. Pat. No.
8,738,587, entitled "PROCESSING A SYSTEM SEARCH REQUEST BY RETRIEVING RESULTS FROM BOTH A NATIVE INDEX AND A VIRTUAL INDEX", issued on 25 Jul. 2013, U.S. patent application Ser. No. 14/266,832, entitled "PROCESSING A SYSTEM SEARCH REQUEST ACROSS DISPARATE DATA COLLECTION SYSTEMS", filed on 1 May 2014, and U.S. Pat. No. 9,514,189, entitled "PROCESSING A SYSTEM SEARCH REQUEST INCLUDING EXTERNAL DATA SOURCES", issued on 6 Dec. 2016, each of which is hereby incorporated by reference in its entirety for all purposes. 2.6.1. ERP Process Features The ERP processes described above may include two operation modes: a streaming mode and a reporting mode. The ERP processes can operate in streaming mode only, in reporting mode only, or in both modes simultaneously. Operating in both modes simultaneously is referred to as mixed mode operation. In a mixed mode operation, the ERP at some point can stop providing the search head with streaming results and only provide reporting results thereafter, or the search head at some point may start ignoring streaming results it has been using and only use reporting results thereafter. The streaming mode returns search results in real time, with minimal processing, in response to the search request. The reporting mode provides results of a search request with processing of the search results prior to providing them to the requesting search head, which in turn provides results to the requesting client device. ERP operation with such multiple modes provides greater performance flexibility with regard to report time, search latency, and resource utilization. In a mixed mode operation, both streaming mode and reporting mode are operating simultaneously. The streaming mode results (e.g., the machine data obtained from the external data source) are provided to the search head, which can then process the results data (e.g., break the machine data into events, timestamp it, filter it, etc.) and integrate the results data with the results data from other external data sources, and/or from data stores of the search head. The search head performs such processing and can immediately start returning interim (streaming mode) results to the user at the requesting client device; simultaneously, the search head is waiting for the ERP process to process the data it is retrieving from the external data source as a result of the concurrently executing reporting mode. In some instances, the ERP process initially operates in a mixed mode, such that the streaming mode operates to enable the ERP quickly to return interim results (e.g., some of the machine data or unprocessed data necessary to respond to a search request) to the search head, enabling the search head to process the interim results and begin providing to the client or search requester interim results that are responsive to the query. Meanwhile, in this mixed mode, the ERP also operates concurrently in reporting mode, processing portions of machine data in a manner responsive to the search query. Upon determining that it has results from the reporting mode available to return to the search head, the ERP may halt processing in the mixed mode at that time (or some later time) by stopping the return of data in streaming mode to the search head and switching to reporting mode only. The ERP at this point starts sending interim results in reporting mode to the search head, which in turn may then present this processed data responsive to the search request to the client or search requester.
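Mixed mode operation might be sketched as follows. This is a deliberate simplification, using a single generator rather than truly concurrent streaming and reporting paths, and the switchover point is an assumed parameter:

```python
# Minimal sketch of mixed-mode operation, simplified to a single generator
# rather than truly concurrent streaming and reporting threads: interim
# (streaming) results are handed back immediately, and once the reporting
# computation is ready the output is cut over to processed (reporting) results.
def mixed_mode_results(raw_records, report_ready_after=3):
    for i, record in enumerate(raw_records):
        if i < report_ready_after:
            yield ("streaming", record)          # minimal processing, low latency
        else:
            break
    # Reporting mode: processed/aggregated results over all records.
    count_by_status = {}
    for record in raw_records:
        count_by_status[record["status"]] = count_by_status.get(record["status"], 0) + 1
    yield ("reporting", count_by_status)

records = [{"status": s} for s in (200, 200, 500, 200, 404, 200)]
for kind, payload in mixed_mode_results(records):
    print(kind, payload)
```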
Typically the search head switches from using results from the ERP's streaming mode of operation to results from the ERP's reporting mode of operation when the higher bandwidth results from the reporting mode outstrip the amount of data processed by the search head in the streaming mode of ERP operation. A reporting mode may have a higher bandwidth because the ERP does not have to spend time transferring data to the search head for processing all the machine data. In addition, the ERP may optionally direct another processor to do the processing. The streaming mode of operation does not need to be stopped to gain the higher bandwidth benefits of a reporting mode; the search head could simply stop using the streaming mode results—and start using the reporting mode results—when the bandwidth of the reporting mode has caught up with or exceeded the amount of bandwidth provided by the streaming mode. Thus, a variety of triggers and ways to accomplish a search head's switch from using streaming mode results to using reporting mode results may be appreciated by one skilled in the art. The reporting mode can involve the ERP process (or an external system) performing event breaking, time stamping, filtering of events to match the search query request, and calculating statistics on the results. The user can request particular types of data, such as if the search query itself involves types of events, or the search request may ask for statistics on data, such as on events that meet the search request. In either case, the search head understands the query language used in the received query request, which may be a proprietary language. One exemplary query language is Splunk Processing Language (SPL) developed by the assignee of the application, Splunk Inc. The search head typically understands how to use that language to obtain data from the indexers, which store data in a format used by the SPLUNK® Enterprise system. The ERP processes support the search head, as the search head is not ordinarily configured to understand the format in which data is stored in external data sources such as Hadoop or SQL data systems. Rather, the ERP process performs that translation from the query submitted in the search support system's native format (e.g., SPL if SPLUNK® ENTERPRISE is used as the search support system) to a search query request format that will be accepted by the corresponding external data system. The external data system typically stores data in a different format from that of the search support system's native index format, and it utilizes a different query language (e.g., SQL or MapReduce, rather than SPL or the like). As noted, the ERP process can operate in the streaming mode alone. After the ERP process has performed the translation of the query request and received raw results from the streaming mode, the search head can integrate the returned data with any data obtained from local data sources (e.g., native to the search support system), other external data sources, and other ERP processes (if such operations were required to satisfy the terms of the search query). An advantage of mixed mode operation is that, in addition to streaming mode, the ERP process is also executing concurrently in reporting mode. Thus, the ERP process (rather than the search head) is processing query results (e.g., performing event breaking, timestamping, filtering, possibly calculating statistics if required to be responsive to the search query request, etc.). 
It should be apparent to those skilled in the art that additional time is needed for the ERP process to perform the processing in such a configuration. Therefore, the streaming mode will allow the search head to start returning interim results to the user at the client device before the ERP process can complete sufficient processing to start returning any search results. The switchover between streaming and reporting mode happens when the ERP process determines that the switchover is appropriate, such as when the ERP process determines it can begin returning meaningful results from its reporting mode. The operation described above illustrates the source of operational latency: streaming mode has low latency (immediate results) and usually has relatively low bandwidth (fewer results can be returned per unit of time). In contrast, the concurrently running reporting mode has relatively high latency (it has to perform a lot more processing before returning any results) and usually has relatively high bandwidth (more results can be processed per unit of time). For example, when the ERP process does begin returning report results, it returns more processed results than in the streaming mode, because, e.g., statistics only need to be calculated to be responsive to the search request. That is, the ERP process doesn't have to take time to first return machine data to the search head. As noted, the ERP process could be configured to operate in streaming mode alone and return just the machine data for the search head to process in a way that is responsive to the search request. Alternatively, the ERP process can be configured to operate in the reporting mode only. Also, the ERP process can be configured to operate in streaming mode and reporting mode concurrently, as described, with the ERP process stopping the transmission of streaming results to the search head when the concurrently running reporting mode has caught up and started providing results. The reporting mode does not require the processing of all machine data that is responsive to the search query request before the ERP process starts returning results; rather, the reporting mode usually performs processing of chunks of events and returns the processing results to the search head for each chunk. For example, an ERP process can be configured to merely return the contents of a search result file verbatim, with little or no processing of results. That way, the search head performs all processing (such as parsing byte streams into events, filtering, etc.). The ERP process can be configured to perform additional intelligence, such as analyzing the search request and handling all the computation that a native search indexer process would otherwise perform. In this way, the configured ERP process provides greater flexibility in features while operating according to desired preferences, such as response latency and resource requirements. 2.7. Data Ingestion FIG.5Ais a flow chart of an example method that illustrates how indexers process, index, and store data received from forwarders, in accordance with example implementations. The data flow illustrated inFIG.5Ais provided for illustrative purposes only; those skilled in the art would understand that one or more of the steps of the processes illustrated inFIG.5Amay be removed or that the ordering of the steps may be changed. 
Furthermore, for the purposes of illustrating a clear example, one or more particular system components are described in the context of performing various operations during each of the data flow stages. For example, a forwarder is described as receiving and processing machine data during an input phase; an indexer is described as parsing and indexing machine data during parsing and indexing phases; and a search head is described as performing a search query during a search phase. However, other system arrangements and distributions of the processing steps across system components may be used. 2.7.1. Input At block502, a forwarder receives data from an input source, such as a data source202shown inFIG.2. A forwarder initially may receive the data as a raw data stream generated by the input source. For example, a forwarder may receive a data stream from a log file generated by an application server, from a stream of network data from a network device, or from any other source of data. In some implementations, a forwarder receives the raw data and may segment the data stream into “blocks”, possibly of a uniform data size, to facilitate subsequent processing steps. At block504, a forwarder or other system component annotates each block generated from the raw data with one or more metadata fields. These metadata fields may, for example, provide information related to the data block as a whole and may apply to each event that is subsequently derived from the data in the data block. For example, the metadata fields may include separate fields specifying each of a host, a source, and a source type related to the data block. A host field may contain a value identifying a host name or IP address of a device that generated the data. A source field may contain a value identifying a source of the data, such as a pathname of a file or a protocol and port related to received network data. A source type field may contain a value specifying a particular source type label for the data. Additional metadata fields may also be included during the input phase, such as a character encoding of the data, if known, and possibly other values that provide information relevant to later processing steps. In some implementations, a forwarder forwards the annotated data blocks to another system component (typically an indexer) for further processing. The data intake and query system allows forwarding of data from one data intake and query instance to another, or even to a third-party system. The data intake and query system can employ different types of forwarders in a configuration. In some implementations, a forwarder may contain the essential components needed to forward data. A forwarder can gather data from a variety of inputs and forward the data to an indexer for indexing and searching. A forwarder can also tag metadata (e.g., source, source type, host, etc.). In some implementations, a forwarder has the capabilities of the aforementioned forwarder as well as additional capabilities. The forwarder can parse data before forwarding the data (e.g., can associate a time stamp with a portion of data and create an event, etc.) and can route data based on criteria such as source or type of event. The forwarder can also index data locally while forwarding the data to another indexer. 2.7.2. Parsing At block506, an indexer receives data blocks from a forwarder and parses the data to organize the data into events. 
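A minimal sketch of the input phase at blocks 502 and 504, in which a raw stream is segmented into blocks and each block is annotated with host, source, and source type metadata, is given below; the fixed block size and the function names are assumptions for illustration:

```python
# Minimal sketch of the input phase described above: segment a raw data
# stream into blocks and annotate each block with host, source, and source
# type metadata. The fixed block size is an assumption for illustration.
def ingest(raw_stream: str, host: str, source: str, sourcetype: str, block_size: int = 4096):
    annotated_blocks = []
    for offset in range(0, len(raw_stream), block_size):
        block = raw_stream[offset:offset + block_size]
        annotated_blocks.append({
            "data": block,
            "meta": {"host": host, "source": source, "sourcetype": sourcetype},
        })
    return annotated_blocks   # a forwarder would send these to an indexer

blocks = ingest("a" * 10000, host="web01", source="/var/log/access.log", sourcetype="access_combined")
print(len(blocks), blocks[0]["meta"])
```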
In some implementations, to organize the data into events, an indexer may determine a source type associated with each data block (e.g., by extracting a source type label from the metadata fields associated with the data block, etc.) and refer to a source type configuration corresponding to the identified source type. The source type definition may include one or more properties that indicate to the indexer to automatically determine the boundaries within the received data that indicate the portions of machine data for events. In general, these properties may include regular expression-based rules or delimiter rules where, for example, event boundaries may be indicated by predefined characters or character strings. These predefined characters may include punctuation marks or other special characters including, for example, carriage returns, tabs, spaces, line breaks, etc. If a source type for the data is unknown to the indexer, an indexer may infer a source type for the data by examining the structure of the data. Then, the indexer can apply an inferred source type definition to the data to create the events. At block508, the indexer determines a timestamp for each event. Similar to the process for parsing machine data, an indexer may again refer to a source type definition associated with the data to locate one or more properties that indicate instructions for determining a timestamp for each event. The properties may, for example, instruct an indexer to extract a time value from a portion of data for the event, to interpolate time values based on timestamps associated with temporally proximate events, to create a timestamp based on a time the portion of machine data was received or generated, to use the timestamp of a previous event, or use any other rules for determining timestamps. At block510, the indexer associates with each event one or more metadata fields including a field containing the timestamp determined for the event. In some implementations, a timestamp may be included in the metadata fields. These metadata fields may include any number of "default fields" that are associated with all events, and may also include one or more custom fields as defined by a user. Similar to the metadata fields associated with the data blocks at block504, the default metadata fields associated with each event may include a host, source, and source type field including or in addition to a field storing the timestamp. At block512, an indexer may optionally apply one or more transformations to data included in the events created at block506. For example, such transformations can include removing a portion of an event (e.g., a portion used to define event boundaries, extraneous characters from the event, other extraneous text, etc.), masking a portion of an event (e.g., masking a credit card number), removing redundant portions of an event, etc. The transformations applied to events may, for example, be specified in one or more configuration files and referenced by one or more source type definitions. FIG.5Cillustrates an example of how machine data can be stored in a data store in accordance with various disclosed implementations.
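The parsing steps at blocks 506 through 512 might be sketched as follows, with a line-break boundary rule, a timestamp extracted from each event, and a masking transformation; the regular expressions shown are simplified stand-ins for the properties of a source type definition:

```python
# Minimal sketch of the parsing phase: split a data block into events using a
# line-break boundary rule, extract a timestamp from each event, and apply a
# masking transformation. The rules here are simplified stand-ins for a
# source type definition.
import re
from datetime import datetime, timezone

TIMESTAMP_RE = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z")
CARD_RE = re.compile(r"\b\d{16}\b")

def parse_block(block: str, meta: dict) -> list:
    events = []
    for raw in block.splitlines():                    # boundary rule: line breaks
        if not raw.strip():
            continue
        ts = TIMESTAMP_RE.search(raw)
        when = (datetime.strptime(ts.group(), "%Y-%m-%dT%H:%M:%SZ")
                .replace(tzinfo=timezone.utc).timestamp()) if ts else None
        raw = CARD_RE.sub("XXXXXXXXXXXXXXXX", raw)    # transformation: mask card numbers
        events.append({"_raw": raw, "_time": when, **meta})
    return events

block = "2023-04-01T10:15:02Z pay card=4111111111111111 ok\n2023-04-01T10:15:05Z refund ok\n"
print(parse_block(block, {"host": "web01", "source": "pay.log", "sourcetype": "payments"}))
```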
In other implementations, machine data can be stored in a flat file in a corresponding bucket with an associated index file, such as a time series index or "TSIDX." As such, the depiction of machine data and associated metadata as rows and columns in the table ofFIG.5Cis merely illustrative and is not intended to limit the data format in which the machine data and metadata is stored in various implementations described herein. In one particular implementation, machine data can be stored in a compressed or encrypted format. In such implementations, the machine data can be stored with or be associated with data that describes the compression or encryption scheme with which the machine data is stored. The information about the compression or encryption scheme can be used to decompress or decrypt the machine data, and any metadata with which it is stored, at search time. As mentioned above, certain metadata, e.g., host536, source537, source type538and timestamps535can be generated for each event, and associated with a corresponding portion of machine data539when storing the event data in a data store, e.g., data store208. Any of the metadata can be extracted from the corresponding machine data, or supplied or defined by an entity, such as a user or computer system. The metadata fields can become part of or stored with the event. Note that while the time-stamp metadata field can be extracted from the raw data of each event, the values for the other metadata fields may be determined by the indexer based on information it receives pertaining to the source of the data separate from the machine data. While certain default or user-defined metadata fields can be extracted from the machine data for indexing purposes, all the machine data within an event can be maintained in its original condition. As such, in implementations in which the portion of machine data included in an event is unprocessed or otherwise unaltered, it is referred to herein as a portion of raw machine data. In other implementations, the portion of machine data in an event can be processed or otherwise altered. As such, unless certain information needs to be removed for some reason (e.g., extraneous information, confidential information), all the raw machine data contained in an event can be preserved and saved in its original form. Accordingly, the data store in which the event records are stored is sometimes referred to as a "raw record data store." The raw record data store contains a record of the raw event data tagged with the various default fields. InFIG.5C, the first three rows of the table represent events531,532, and533and are related to a server access log that records requests from multiple clients processed by a server, as indicated by entry of "access.log" in the source column537. In the example shown inFIG.5C, each of the events531-534is associated with a discrete request made from a client device. The raw machine data generated by the server and extracted from a server access log can include the IP address of the client540, the user id of the person requesting the document541, the time the server finished processing the request542, the request line from the client543, the status code returned by the server to the client545, the size of the object returned to the client (in this case, the gif file requested by the client)546and the time spent to serve the request in microseconds544.
As seen inFIG.5C, all the raw machine data retrieved from the server access log is retained and stored as part of the corresponding events531,532, and533in the data store. Event534is associated with an entry in a server error log, as indicated by "error.log" in the source column537, that records errors that the server encountered when processing a client request. Similar to the events related to the server access log, all the raw machine data in the error log file pertaining to event534can be preserved and stored as part of the event534. Saving minimally processed or unprocessed machine data in a data store associated with metadata fields in the manner similar to that shown inFIG.5Cis advantageous because it allows search of all the machine data at search time instead of searching only previously specified and identified fields or field-value pairs. As mentioned above, because data structures used by various implementations of the present disclosure maintain the underlying raw machine data and use a late-binding schema for searching the raw machine data, it enables a user to continue investigating and learn valuable insights about the raw data. In other words, the user is not compelled to know about all the fields of information that will be needed at data ingestion time. As a user learns more about the data in the events, the user can continue to refine the late-binding schema by defining new extraction rules, or modifying or deleting existing extraction rules used by the system. 2.7.3. Indexing At blocks514and516, an indexer can optionally generate a keyword index to facilitate fast keyword searching for events. To build a keyword index, at block514, the indexer identifies a set of keywords in each event. At block516, the indexer includes the identified keywords in an index, which associates each stored keyword with reference pointers to events containing that keyword (or to locations within events where that keyword is located, other location identifiers, etc.). When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. In some implementations, the keyword index may include entries for field name-value pairs found in events, where a field name-value pair can include a pair of keywords connected by a symbol, such as an equals sign or colon. This way, events containing these field name-value pairs can be quickly located. In some implementations, fields can automatically be generated for some or all of the field names of the field name-value pairs at the time of indexing. For example, if the string "dest=10.0.1.2" is found in an event, a field named "dest" may be created for the event, and assigned a value of "10.0.1.2". At block518, the indexer stores the events with an associated timestamp in a data store208. Timestamps enable a user to search for events based on a time range. In some implementations, the stored events are organized into "buckets," where each bucket stores events associated with a specific time range based on the timestamps associated with each event. This improves time-based searching, as well as allows for events with recent timestamps, which may have a higher likelihood of being accessed, to be stored in a faster memory to facilitate faster retrieval. For example, buckets containing the most recent events can be stored in flash memory rather than on a hard disk.
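A minimal sketch of the optional keyword index generated at blocks 514 and 516, which maps each keyword to references to the events containing it and derives fields from name-value keywords such as "dest=10.0.1.2," is shown below; the data structures are assumptions for illustration:

```python
# Minimal sketch of building a keyword index as described at blocks 514 and
# 516: each keyword maps to references (identifiers) of the events containing
# it, and "name=value" keywords additionally yield an indexed field.
from collections import defaultdict

def build_keyword_index(events):
    keyword_index = defaultdict(set)
    for event in events:
        for token in event["_raw"].split():
            keyword_index[token].add(event["id"])              # keyword -> event references
            if "=" in token:
                name, value = token.split("=", 1)
                event.setdefault("fields", {})[name] = value   # e.g. dest=10.0.1.2
    return keyword_index

events = [
    {"id": 1, "_raw": "action=allowed dest=10.0.1.2"},
    {"id": 2, "_raw": "action=blocked dest=10.0.1.7"},
]
index = build_keyword_index(events)
print(sorted(index["dest=10.0.1.2"]), events[0]["fields"]["dest"])
```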
In some implementations, each bucket may be associated with an identifier, a time range, and a size constraint. Each indexer206may be responsible for storing and searching a subset of the events contained in a corresponding data store208. By distributing events among the indexers and data stores, the indexers can analyze events for a query in parallel. For example, using map-reduce techniques, each indexer returns partial responses for a subset of events to a search head that combines the results to produce an answer for the query. By storing events in buckets for specific time ranges, an indexer may further optimize the data retrieval process by searching buckets corresponding to time ranges that are relevant to a query. In some implementations, each indexer has a home directory and a cold directory. The home directory of an indexer stores hot buckets and warm buckets, and the cold directory of an indexer stores cold buckets. A hot bucket is a bucket that is capable of receiving and storing events. A warm bucket is a bucket that can no longer receive events for storage but has not yet been moved to the cold directory. A cold bucket is a bucket that can no longer receive events and may be a bucket that was previously stored in the home directory. The home directory may be stored in faster memory, such as flash memory, as events may be actively written to the home directory, and the home directory may typically store events that are more frequently searched and thus are accessed more frequently. The cold directory may be stored in slower and/or larger memory, such as a hard disk, as events are no longer being written to the cold directory, and the cold directory may typically store events that are not as frequently searched and thus are accessed less frequently. In some implementations, an indexer may also have a quarantine bucket that contains events having potentially inaccurate information, such as an incorrect time stamp associated with the event or a time stamp that appears to be an unreasonable time stamp for the corresponding event. The quarantine bucket may have events from any time range; as such, the quarantine bucket may always be searched at search time. Additionally, an indexer may store old, archived data in a frozen bucket that is not capable of being searched at search time. In some implementations, a frozen bucket may be stored in slower and/or larger memory, such as a hard disk, and may be stored in offline and/or remote storage. Moreover, events and buckets can also be replicated across different indexers and data stores to facilitate high availability and disaster recovery as described in U.S. Pat. No. 9,130,971, entitled “Site-Based Search Affinity”, issued on 8 Sep. 2015, and in U.S. patent Ser. No. 14/266,817, entitled “Multi-Site Clustering”, issued on 1 Sep. 2015, each of which is hereby incorporated by reference in its entirety for all purposes. FIG.5Bis a block diagram of an example data store501that includes a directory for each index (or partition) that contains a portion of data managed by an indexer.FIG.5Bfurther illustrates details of an implementation of an inverted index507B and an event reference array515associated with inverted index507B. The data store501can correspond to a data store208that stores events managed by an indexer206or can correspond to a different data store associated with an indexer206. 
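Time-range pruning across buckets, as described above, might be sketched as follows; the bucket names, time ranges, and tier labels are illustrative only:

```python
# Minimal sketch of time-range pruning across buckets: only buckets whose
# time range overlaps the query's range need to be searched. Bucket layout
# and temperatures (hot/warm/cold) are simplified for illustration.
BUCKETS = [
    {"name": "hot_1",  "start": 1_700_000_000, "end": 1_700_003_600, "tier": "hot"},
    {"name": "warm_1", "start": 1_699_990_000, "end": 1_700_000_000, "tier": "warm"},
    {"name": "cold_1", "start": 1_699_000_000, "end": 1_699_990_000, "tier": "cold"},
]

def buckets_for_query(query_start: int, query_end: int):
    # A bucket is relevant when its time range overlaps the query's range.
    return [b["name"] for b in BUCKETS if b["start"] <= query_end and b["end"] >= query_start]

print(buckets_for_query(1_699_995_000, 1_700_001_000))  # ['hot_1', 'warm_1']
```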
In the illustrated implementation, the data store501includes a _main directory503associated with a _main index and a _test directory505associated with a _test index. However, the data store501can include fewer or more directories. In some implementations, multiple indexes can share a single directory or all indexes can share a common directory. Additionally, although illustrated as a single data store501, it will be understood that the data store501can be implemented as multiple data stores storing different portions of the information shown inFIG.5B. For example, a single index or partition can span multiple directories or multiple data stores, and can be indexed or searched by multiple corresponding indexers. In the illustrated implementation ofFIG.5B, the index-specific directories503and505include inverted indexes507A,507B and509A,509B, respectively. The inverted indexes507A . . .507B, and509A . . .509B can be keyword indexes or field-value pair indexes described herein and can include less or more information than depicted inFIG.5B. In some implementations, the inverted indexes507A . . .507B, and509A . . .509B can each correspond to a distinct time-series bucket that is managed by the indexer206and that contains events corresponding to the relevant index (e.g., _main index, _test index). As such, each inverted index can correspond to a particular range of time for an index. Additional files, such as high performance indexes for each time-series bucket of an index, can also be stored in the same directory as the inverted indexes507A . . .507B, and509A . . .509B. In some implementations, the inverted indexes507A . . .507B, and509A . . .509B can correspond to multiple time-series buckets, or the inverted indexes507A . . .507B, and509A . . .509B can correspond to a single time-series bucket. Each inverted index507A . . .507B, and509A . . .509B can include one or more entries, such as keyword (or token) entries or field-value pair entries. Furthermore, in certain implementations, the inverted indexes507A . . .507B, and509A . . .509B can include additional information, such as a time range523associated with the inverted index or an index identifier525identifying the index associated with the inverted index507A . . .507B, and509A . . .509B. However, each inverted index507A . . .507B, and509A . . .509B can include less or more information than depicted. Token entries, such as token entries511illustrated in inverted index507B, can include a token511A (e.g., "error," "itemID," etc.) and event references511B indicative of events that include the token. For example, for the token "error," the corresponding token entry includes the token "error" and an event reference, or unique identifier, for each event stored in the corresponding time-series bucket that includes the token "error." In the illustrated implementation ofFIG.5B, the error token entry includes the identifiers 3, 5, 6, 8, 11, and 12 corresponding to events managed by the indexer206and associated with the index _main503that are located in the time-series bucket associated with the inverted index507B. In some cases, some token entries can be default entries, automatically determined entries, or user specified entries. In some implementations, the indexer206can identify each word or string in an event as a distinct token and generate a token entry for it. In some cases, the indexer206can identify the beginning and ending of tokens based on punctuation, spaces, etc., as described in greater detail herein.
In certain cases, the indexer206can rely on user input or a configuration file to identify tokens for token entries511, etc. It will be understood that any combination of token entries can be included as a default, automatically determined, or included based on user-specified criteria. Similarly, field-value pair entries, such as field-value pair entries513shown in inverted index507B, can include a field-value pair513A and event references513B indicative of events that include a field value that corresponds to the field-value pair. For example, for a field-value pair sourcetype::sendmail, a field-value pair entry would include the field-value pair sourcetype::sendmail and a unique identifier, or event reference, for each event stored in the corresponding time-series bucket that includes a sendmail sourcetype. In some cases, the field-value pair entries513can be default entries, automatically determined entries, or user specified entries. As a non-limiting example, the field-value pair entries for the fields host, source, and sourcetype can be included in the inverted indexes507A . . .507B, and509A . . .509B as a default. As such, all of the inverted indexes507A . . .507B, and509A . . .509B can include field-value pair entries for the fields host, source, and sourcetype. As yet another non-limiting example, the field-value pair entries for the IP_address field can be user specified and may only appear in the inverted index507B based on user-specified criteria. As another non-limiting example, as the indexer indexes the events, it can automatically identify field-value pairs and create field-value pair entries. For example, based on the indexer's review of events, it can identify IP_address as a field in each event and add the IP_address field-value pair entries to the inverted index507B. It will be understood that any combination of field-value pair entries can be included as a default, automatically determined, or included based on user-specified criteria. Each unique identifier517, or event reference, can correspond to a unique event located in the time-series bucket. However, the same event reference can be located in multiple entries. For example, if an event has a sourcetype splunkd, host www1, and token "warning," then the unique identifier for the event will appear in the field-value pair entries sourcetype::splunkd and host::www1, as well as the token entry "warning." With reference to the illustrated implementation ofFIG.5Band the event that corresponds to the event reference 3, the event reference 3 is found in the field-value pair entries513host::hostA, source::sourceB, sourcetype::sourcetypeA, and IP_address::91.205.189.15, indicating that the event corresponding to the event reference is from hostA, sourceB, of sourcetypeA, and includes 91.205.189.15 in the event data. For some fields, the unique identifier is located in only one field-value pair entry for a particular field. For example, the inverted index may include four sourcetype field-value pair entries corresponding to four different sourcetypes of the events stored in a bucket (e.g., sourcetypes: sendmail, splunkd, web_access, and web_service). Within those four sourcetype field-value pair entries, an identifier for a particular event may appear in only one of the field-value pair entries.
With continued reference to the example illustrated implementation ofFIG.5B, since the event reference 7 appears in the field-value pair entry sourcetype::sourcetypeA, then it does not appear in the other field-value pair entries for the sourcetype field, including sourcetype::sourcetypeB, sourcetype::sourcetypeC, and sourcetype::sourcetypeD. The event references517can be used to locate the events in the corresponding bucket. For example, the inverted index can include, or be associated with, an event reference array515. The event reference array515can include an array entry517for each event reference in the inverted index507B. Each array entry517can include location information519of the event corresponding to the unique identifier (non-limiting example: seek address of the event), a timestamp521associated with the event, or additional information regarding the event associated with the event reference, etc. For each token entry511or field-value pair entry513, the event reference501B or unique identifiers can be listed in chronological order or the value of the event reference can be assigned based on chronological data, such as a timestamp associated with the event referenced by the event reference. For example, the event reference 1 in the illustrated implementation ofFIG.5Bcan correspond to the first-in-time event for the bucket, and the event reference 12 can correspond to the last-in-time event for the bucket. However, the event references can be listed in any order, such as reverse chronological order, ascending order, descending order, or some other order, etc. Further, the entries can be sorted. For example, the entries can be sorted alphabetically (collectively or within a particular group), by entry origin (e.g., default, automatically generated, user-specified, etc.), by entry type (e.g., field-value pair entry, token entry, etc.), or chronologically by when added to the inverted index, etc. In the illustrated implementation ofFIG.5B, the entries are sorted first by entry type and then alphabetically. As a non-limiting example of how the inverted indexes507A . . .507B, and509A . . .509B can be used during a data categorization request command, the indexers can receive filter criteria indicating data that is to be categorized and categorization criteria indicating how the data is to be categorized. Example filter criteria can include, but is not limited to, indexes (or partitions), hosts, sources, sourcetypes, time ranges, field identifier, keywords, etc. Using the filter criteria, the indexer identifies relevant inverted indexes to be searched. For example, if the filter criteria includes a set of partitions, the indexer can identify the inverted indexes stored in the directory corresponding to the particular partition as relevant inverted indexes. Other means can be used to identify inverted indexes associated with a partition of interest. For example, in some implementations, the indexer can review an entry in the inverted indexes, such as an index-value pair entry513to determine if a particular inverted index is relevant. If the filter criteria does not identify any partition, then the indexer can identify all inverted indexes managed by the indexer as relevant inverted indexes. Similarly, if the filter criteria includes a time range, the indexer can identify inverted indexes corresponding to buckets that satisfy at least a portion of the time range as relevant inverted indexes. 
For example, if the time range is the last hour, then the indexer can identify all inverted indexes that correspond to buckets storing events associated with timestamps within the last hour as relevant inverted indexes. When used in combination, an index filter criterion specifying one or more partitions and a time range filter criterion specifying a particular time range can be used to identify a subset of inverted indexes within a particular directory (or otherwise associated with a particular partition) as relevant inverted indexes. As such, the indexer can focus the processing on only a subset of the total number of inverted indexes that the indexer manages. Once the relevant inverted indexes are identified, the indexer can review them using any additional filter criteria to identify events that satisfy the filter criteria. In some cases, using the known location of the directory in which the relevant inverted indexes are located, the indexer can determine that any events identified using the relevant inverted indexes satisfy an index filter criterion. For example, if the filter criteria includes a partition main, then the indexer can determine that any events identified using inverted indexes within the partition main directory (or otherwise associated with the partition main) satisfy the index filter criterion. Furthermore, based on the time range associated with each inverted index, the indexer can determine that any events identified using a particular inverted index satisfy a time range filter criterion. For example, if a time range filter criterion is for the last hour and a particular inverted index corresponds to events within a time range of 50 minutes ago to 35 minutes ago, the indexer can determine that any events identified using the particular inverted index satisfy the time range filter criterion. Conversely, if the particular inverted index corresponds to events within a time range of 59 minutes ago to 62 minutes ago, the indexer can determine that some events identified using the particular inverted index may not satisfy the time range filter criterion. Using the inverted indexes, the indexer can identify event references (and therefore events) that satisfy the filter criteria. For example, if the token "error" is a filter criterion, the indexer can track all event references within the token entry "error." Similarly, the indexer can identify other event references located in other token entries or field-value pair entries that match the filter criteria. The system can identify event references located in all of the entries identified by the filter criteria. For example, if the filter criteria include the token "error" and field-value pair sourcetype::web_ui, the indexer can track the event references found in both the token entry "error" and the field-value pair entry sourcetype::web_ui. As mentioned previously, in some cases, such as when multiple values are identified for a particular filter criterion (e.g., multiple sources for a source filter criterion), the system can identify event references located in at least one of the entries corresponding to the multiple values and in all other entries identified by the filter criteria. The indexer can determine that the events associated with the identified event references satisfy the filter criteria. In some cases, the indexer can further consult a timestamp associated with the event reference to determine whether an event satisfies the filter criteria.
For example, if an inverted index corresponds to a time range that is partially outside of a time range filter criterion, then the indexer can consult a timestamp associated with the event reference to determine whether the corresponding event satisfies the time range criterion. In some implementations, to identify events that satisfy a time range, the indexer can review an array, such as the event reference array1614that identifies the time associated with the events. Furthermore, as mentioned above, using the known location of the directory in which the relevant inverted indexes are located (or other index identifier), the indexer can determine that any events identified using the relevant inverted indexes satisfy the index filter criterion. In some cases, based on the filter criteria, the indexer reviews an extraction rule. In certain implementations, if the filter criteria includes a field name that does not correspond to a field-value pair entry in an inverted index, the indexer can review an extraction rule, which may be located in a configuration file, to identify a field that corresponds to a field-value pair entry in the inverted index. For example, if the filter criteria includes a field name "sessionID" and the indexer determines that at least one relevant inverted index does not include a field-value pair entry corresponding to the field name sessionID, the indexer can review an extraction rule that identifies how the sessionID field is to be extracted from a particular host, source, or sourcetype (implicitly identifying the particular host, source, or sourcetype that includes a sessionID field). The indexer can replace the field name "sessionID" in the filter criteria with the identified host, source, or sourcetype. In some cases, the field name "sessionID" may be associated with multiple hosts, sources, or sourcetypes, in which case all identified hosts, sources, and sourcetypes can be added as filter criteria. In some cases, the identified host, source, or sourcetype can replace or be appended to a filter criterion, or be excluded. For example, if the filter criteria includes a criterion for source S1 and the "sessionID" field is found in source S2, the source S2 can replace S1 in the filter criteria, be appended such that the filter criteria includes source S1 and source S2, or be excluded based on the presence of the filter criterion source S1. If the identified host, source, or sourcetype is included in the filter criteria, the indexer can then identify a field-value pair entry in the inverted index that includes a field value corresponding to the identity of the particular host, source, or sourcetype identified using the extraction rule. Once the events that satisfy the filter criteria are identified, the system, such as the indexer206, can categorize the results based on the categorization criteria. The categorization criteria can include categories for grouping the results, such as any combination of partition, source, sourcetype, or host, or other categories or fields as desired. The indexer can use the categorization criteria to identify categorization criteria-value pairs or categorization criteria values by which to categorize or group the results.
The categorization criteria-value pairs can correspond to one or more field-value pair entries stored in a relevant inverted index, one or more index-value pairs based on a directory in which the inverted index is located or an entry in the inverted index (or other means by which an inverted index can be associated with a partition), or other criteria-value pair that identifies a general category and a particular value for that category. The categorization criteria values can correspond to the value portion of the categorization criteria-value pair. As mentioned, in some cases, the categorization criteria-value pairs can correspond to one or more field-value pair entries stored in the relevant inverted indexes. For example, the categorization criteria-value pairs can correspond to field-value pair entries of host, source, and sourcetype (or other field-value pair entry as desired). For instance, if there are ten different hosts, four different sources, and five different sourcetypes for an inverted index, then the inverted index can include ten host field-value pair entries, four source field-value pair entries, and five sourcetype field-value pair entries. The indexer can use the nineteen distinct field-value pair entries as categorization criteria-value pairs to group the results. Specifically, the indexer can identify the location of the event references associated with the events that satisfy the filter criteria within the field-value pairs, and group the event references based on their location. As such, the indexer can identify the particular field value associated with the event corresponding to the event reference. For example, if the categorization criteria include host and sourcetype, the host field-value pair entries and sourcetype field-value pair entries can be used as categorization criteria-value pairs to identify the specific host and sourcetype associated with the events that satisfy the filter criteria. In addition, as mentioned, categorization criteria-value pairs can correspond to data other than the field-value pair entries in the relevant inverted indexes. For example, if partition or index is used as a categorization criterion, the inverted indexes may not include partition field-value pair entries. Rather, the indexer can identify the categorization criteria-value pair associated with the partition based on the directory in which an inverted index is located, information in the inverted index, or other information that associates the inverted index with the partition, etc. As such, a variety of methods can be used to identify the categorization criteria-value pairs from the categorization criteria. Accordingly, based on the categorization criteria (and categorization criteria-value pairs), the indexer can generate groupings based on the events that satisfy the filter criteria. As a non-limiting example, if the categorization criteria includes a partition and sourcetype, then the groupings can correspond to events that are associated with each unique combination of partition and sourcetype. For instance, if there are three different partitions and two different sourcetypes associated with the identified events, then six different groups can be formed, each with a unique partition value-sourcetype value combination.
Similarly, if the categorization criteria includes partition, sourcetype, and host, and there are two different partitions, three sourcetypes, and five hosts associated with the identified events, then the indexer can generate up to thirty groups for the results that satisfy the filter criteria. Each group can be associated with a unique combination of categorization criteria-value pairs (e.g., unique combinations of partition value, sourcetype value, and host value). In addition, the indexer can count the number of events associated with each group based on the number of events that meet the unique combination of categorization criteria for a particular group (or match the categorization criteria-value pairs for the particular group). With continued reference to the example above, the indexer can count the number of events that meet the unique combination of partition, sourcetype, and host for a particular group. Each indexer communicates the groupings to the search head. The search head can aggregate the groupings from the indexers and provide the groupings for display. In some cases, the groups are displayed based on at least one of the host, source, sourcetype, or partition associated with the groupings. In some implementations, the search head can further display the groups based on display criteria, such as a display order or a sort order as described in greater detail above. As a non-limiting example and with reference toFIG.5B, consider a request received by an indexer206that includes the following filter criteria: keyword=error, partition=_main, time range=3/1/17 16:22:00.000-16:28:00.000, sourcetype=sourcetypeC, host=hostB, and the following categorization criteria: source. Based on the above criteria, the indexer206identifies _main directory503and can ignore _test directory505and any other partition-specific directories. The indexer determines that inverted index507B is a relevant inverted index based on its location within the _main directory503and the time range associated with it. For the sake of simplicity in this example, the indexer206determines that no other inverted indexes in the _main directory503, such as inverted index507A, satisfy the time range criterion. Having identified the relevant inverted index507B, the indexer reviews the token entries511and the field-value pair entries513to identify event references, or events, that satisfy all of the filter criteria. With respect to the token entries511, the indexer can review the error token entry and identify event references 3, 5, 6, 8, 11, 12, indicating that the term "error" is found in the corresponding events. Similarly, the indexer can identify event references 4, 5, 6, 8, 9, 10, 11 in the field-value pair entry sourcetype::sourcetypeC and event references 2, 5, 6, 8, 10, 11 in the field-value pair entry host::hostB. As the filter criteria did not include a source or an IP_address field-value pair, the indexer can ignore those field-value pair entries. In addition to identifying event references found in at least one token entry or field-value pair entry (e.g., event references 3, 4, 5, 6, 8, 9, 10, 11, 12), the indexer can identify events (and corresponding event references) that satisfy the time range criterion using the event reference array1614(e.g., event references 2, 3, 4, 5, 6, 7, 8, 9, 10). Using the information obtained from the inverted index507B (including the event reference array515), the indexer206can identify the event references that satisfy all of the filter criteria (e.g., event references 5, 6, 8).
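The filtering example just described reduces to a few set intersections. In the Python sketch below, the token and field-value pair entries are copied from the example above, and the time range criterion is collapsed into a precomputed set of event references (standing in for a lookup in the event reference array); all of this is illustrative rather than the actual on-disk structures.
token_entries = {"error": {3, 5, 6, 8, 11, 12}}
field_value_entries = {
    "sourcetype::sourcetypeC": {4, 5, 6, 8, 9, 10, 11},
    "host::hostB": {2, 5, 6, 8, 10, 11},
}
# Event references whose timestamps fall within the requested time range,
# as would be determined from the event reference array.
in_time_range = {2, 3, 4, 5, 6, 7, 8, 9, 10}

def satisfying_references(tokens, field_values):
    sets = [token_entries[t] for t in tokens]
    sets += [field_value_entries[fv] for fv in field_values]
    sets.append(in_time_range)
    return set.intersection(*sets)

print(satisfying_references(["error"], ["sourcetype::sourcetypeC", "host::hostB"]))
# {8, 5, 6} -- the event references 5, 6, and 8 from the example above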
Having identified the events (and event references) that satisfy all of the filter criteria, the indexer206can group the event references using the received categorization criteria (source). In doing so, the indexer can determine that event references 5 and 6 are located in the field-value pair entry source::sourceD (or have matching categorization criteria-value pairs) and event reference 8 is located in the field-value pair entry source::sourceC. Accordingly, the indexer can generate a sourceC group having a count of one corresponding to reference 8 and a sourceD group having a count of two corresponding to references 5 and 6. This information can be communicated to the search head. In turn, the search head can aggregate the results from the various indexers and display the groupings. As mentioned above, in some implementations, the groupings can be displayed based at least in part on the categorization criteria, including at least one of host, source, sourcetype, or partition. It will be understood that a change to any of the filter criteria or categorization criteria can result in different groupings. As one non-limiting example, a request received by an indexer206that includes the following filter criteria: partition=_main, time range=3/1/17 16:21:20.000-16:28:17.000, and the following categorization criteria: host, source, sourcetype would result in the indexer identifying event references 1-12 as satisfying the filter criteria. The indexer would then generate up to 24 groupings corresponding to the 24 different combinations of the categorization criteria-value pairs, including host (hostA, hostB), source (sourceA, sourceB, sourceC, sourceD), and sourcetype (sourcetypeA, sourcetypeB, sourcetypeC). However, as there are only twelve event identifiers in the illustrated implementation and some fall into the same grouping, the indexer generates eight groups and counts as follows:
Group 1 (hostA, sourceA, sourcetypeA): 1 (event reference 7)
Group 2 (hostA, sourceA, sourcetypeB): 2 (event references 1, 12)
Group 3 (hostA, sourceA, sourcetypeC): 1 (event reference 4)
Group 4 (hostA, sourceB, sourcetypeA): 1 (event reference 3)
Group 5 (hostA, sourceB, sourcetypeC): 1 (event reference 9)
Group 6 (hostB, sourceC, sourcetypeA): 1 (event reference 2)
Group 7 (hostB, sourceC, sourcetypeC): 2 (event references 8, 11)
Group 8 (hostB, sourceD, sourcetypeC): 3 (event references 5, 6, 10)
As noted, each group has a unique combination of categorization criteria-value pairs or categorization criteria values. The indexer communicates the groups to the search head for aggregation with results received from other indexers. In communicating the groups to the search head, the indexer can include the categorization criteria-value pairs for each group and the count. In some implementations, the indexer can include more or less information. For example, the indexer can include the event references associated with each group and other identifying information, such as the indexer or inverted index used to identify the groups.
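A corresponding sketch of the categorization step: given the event references that satisfied the filter criteria and a per-reference lookup of field values (invented here to match the example), count the events for each unique combination of categorization criteria values.
from collections import Counter

events_by_reference = {
    5: {"host": "hostB", "source": "sourceD", "sourcetype": "sourcetypeC"},
    6: {"host": "hostB", "source": "sourceD", "sourcetype": "sourcetypeC"},
    8: {"host": "hostB", "source": "sourceC", "sourcetype": "sourcetypeC"},
}

def categorize(references, categorization_criteria):
    groups = Counter()
    for ref in references:
        key = tuple(events_by_reference[ref][c] for c in categorization_criteria)
        groups[key] += 1          # one count per matching event reference
    return groups

print(categorize({5, 6, 8}, ["source"]))
# Counter({('sourceD',): 2, ('sourceC',): 1}) -- the sourceD and sourceC groups above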
As another non-limiting example, a request received by an indexer206that includes the following filter criteria: partition=_main, time range=3/1/17 16:21:20.000-16:28:17.000, source=sourceA, sourceD, and keyword=itemID, and the following categorization criteria: host, source, sourcetype would result in the indexer identifying event references 4, 7, and 10 as satisfying the filter criteria, and generating the following groups:
Group 1 (hostA, sourceA, sourcetypeC): 1 (event reference 4)
Group 2 (hostA, sourceA, sourcetypeA): 1 (event reference 7)
Group 3 (hostB, sourceD, sourcetypeC): 1 (event reference 10)
The indexer communicates the groups to the search head for aggregation with results received from other indexers. As will be understood, there are myriad ways of filtering and categorizing the events and event references. For example, the indexer can review multiple inverted indexes associated with a partition or review the inverted indexes of multiple partitions, and categorize the data using any one or any combination of partition, host, source, sourcetype, or other category, as desired. Further, if a user interacts with a particular group, the indexer can provide additional information regarding the group. For example, the indexer can perform a targeted search or sampling of the events that satisfy the filter criteria and the categorization criteria for the selected group, also referred to as the filter criteria corresponding to the group or filter criteria associated with the group. In some cases, to provide the additional information, the indexer relies on the inverted index. For example, the indexer can identify the event references associated with the events that satisfy the filter criteria and the categorization criteria for the selected group and then use the event reference array515to access some or all of the identified events. In some cases, the categorization criteria values or categorization criteria-value pairs associated with the group become part of the filter criteria for the review. With reference toFIG.5B, for instance, suppose a group is displayed with a count of six corresponding to event references 4, 5, 6, 8, 10, 11 (i.e., event references 4, 5, 6, 8, 10, 11 satisfy the filter criteria and are associated with matching categorization criteria values or categorization criteria-value pairs) and a user interacts with the group (e.g., selecting the group, clicking on the group, etc.). In response, the search head communicates with the indexer to provide additional information regarding the group. In some implementations, the indexer identifies the event references associated with the group using the filter criteria and the categorization criteria for the group (e.g., categorization criteria values or categorization criteria-value pairs unique to the group). Together, the filter criteria and the categorization criteria for the group can be referred to as the filter criteria associated with the group. Using the filter criteria associated with the group, the indexer identifies event references 4, 5, 6, 8, 10, 11. Based on sampling criteria, discussed in greater detail above, the indexer can determine that it will analyze a sample of the events associated with the event references 4, 5, 6, 8, 10, 11. For example, the sample can include analyzing event data associated with the event references 5, 8, 10. In some implementations, the indexer can use the event reference array1616to access the event data associated with the event references 5, 8, 10.
Once accessed, the indexer can compile the relevant information and provide it to the search head for aggregation with results from other indexers. By identifying events and sampling event data using the inverted indexes, the indexer can reduce the amount of actual data that is analyzed and the number of events that are accessed in order to generate the summary of the group and provide a response in less time.
2.8. Query Processing
FIG.6Ais a flow diagram of an example method that illustrates how a search head and indexers perform a search query, in accordance with example implementations. At block602, a search head receives a search query from a client. At block604, the search head analyzes the search query to determine what portion(s) of the query can be delegated to indexers and what portions of the query can be executed locally by the search head. At block606, the search head distributes the determined portions of the query to the appropriate indexers. In some implementations, a search head cluster may take the place of an independent search head where each search head in the search head cluster coordinates with peer search heads in the search head cluster to schedule jobs, replicate search results, update configurations, fulfill search requests, etc. In some implementations, the search head (or each search head) communicates with a master node (also known as a cluster master, not shown inFIG.2) that provides the search head with a list of indexers to which the search head can distribute the determined portions of the query. The master node maintains a list of active indexers and can also designate which indexers may have responsibility for responding to queries over certain sets of events. A search head may communicate with the master node before the search head distributes queries to indexers to discover the addresses of active indexers. At block608, the indexers to which the query was distributed search data stores associated with them for events that are responsive to the query. To determine which events are responsive to the query, the indexer searches for events that match the criteria specified in the query. These criteria can include matching keywords or specific values for certain fields. The searching operations at block608may use the late-binding schema to extract values for specified fields from events at the time the query is processed. In some implementations, one or more rules for extracting field values may be specified as part of a source type definition in a configuration file. The indexers may then either send the relevant events back to the search head, or use the events to determine a partial result, and send the partial result back to the search head. At block610, the search head combines the partial results and/or events received from the indexers to produce a final result for the query. In some examples, the results of the query are indicative of performance or security of the IT environment and may help improve the performance of components in the IT environment. This final result may comprise different types of data depending on what the query requested. For example, the results can include a listing of matching events returned by the query, or some type of visualization of the data from the returned events. In another example, the final result can include one or more calculated values derived from the matching events. The results generated by the system108can be returned to a client using different techniques.
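A minimal sketch of the distribute-and-combine flow of blocks602-610, assuming a count-style query: each indexer computes a partial result over its own subset of events and the search head combines the partial results into a final result. The event data and the counting query are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

indexer_data = [
    [{"status": "404"}, {"status": "200"}],   # events held by indexer 1
    [{"status": "404"}, {"status": "404"}],   # events held by indexer 2
]

def indexer_search(events, fieldname, value):
    # Partial result computed locally by one indexer (block608).
    return sum(1 for event in events if event.get(fieldname) == value)

def search_head_query(fieldname, value):
    # Distribute the delegated portion of the query to the indexers (block606)
    # and combine their partial results into a final result (block610).
    with ThreadPoolExecutor() as pool:
        partials = pool.map(lambda events: indexer_search(events, fieldname, value),
                            indexer_data)
    return sum(partials)

print(search_head_query("status", "404"))     # 3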
For example, one technique streams results or relevant events back to a client in real-time as they are identified. Another technique waits to report the results to the client until a complete set of results (which may include a set of relevant events or a result based on relevant events) is ready to return to the client. Yet another technique streams interim results or relevant events back to the client in real-time until a complete set of results is ready, and then returns the complete set of results to the client. In another technique, certain results are stored as "search jobs" and the client may retrieve the results by referring to the search jobs. The search head can also perform various operations to make the search more efficient. For example, before the search head begins execution of a query, the search head can determine a time range for the query and a set of common keywords that all matching events include. The search head may then use these parameters to query the indexers to obtain a superset of the eventual results. Then, during a filtering stage, the search head can perform field-extraction operations on the superset to produce a reduced set of search results. This speeds up queries, which may be particularly helpful for queries that are performed on a periodic basis.
2.9. Pipelined Search Language
Various implementations of the present disclosure can be implemented using, or in conjunction with, a pipelined command language. A pipelined command language is a language in which a set of inputs or data is operated on by a first command in a sequence of commands, and then by subsequent commands in the order they are arranged in the sequence. Such commands can include any type of functionality for operating on data, such as retrieving, searching, filtering, aggregating, processing, transmitting, and the like. As described herein, a query can thus be formulated in a pipelined command language and include any number of ordered or unordered commands for operating on data. Splunk Processing Language (SPL) is an example of a pipelined command language in which a set of inputs or data is operated on by any number of commands in a particular sequence. A sequence of commands, or command sequence, can be formulated such that the order in which the commands are arranged defines the order in which the commands are applied to a set of data or the results of an earlier executed command. For example, a first command in a command sequence can operate to search or filter for specific data in a particular set of data. The results of the first command can then be passed to another command listed later in the command sequence for further processing. In various implementations, a query can be formulated as a command sequence defined in a command line of a search UI. In some implementations, a query can be formulated as a sequence of SPL commands. Some or all of the SPL commands in the sequence of SPL commands can be separated from one another by a pipe symbol "|". In such implementations, a set of data, such as a set of events, can be operated on by a first SPL command in the sequence, and then a subsequent SPL command following a pipe symbol "|" after the first SPL command operates on the results produced by the first SPL command or other set of data, and so on for any additional SPL commands in the sequence. As such, a query formulated using SPL comprises a series of consecutive commands that are delimited by pipe "|" characters.
The pipe character indicates to the system that the output or result of one command (to the left of the pipe) should be used as the input for one of the subsequent commands (to the right of the pipe). This enables formulation of queries defined by a pipeline of sequenced commands that refines or enhances the data at each step along the pipeline until the desired results are attained. Accordingly, various implementations described herein can be implemented with Splunk Processing Language (SPL) used in conjunction with the SPLUNK® ENTERPRISE system. While a query can be formulated in many ways, a query can start with a search command and one or more corresponding search terms at the beginning of the pipeline. Such search terms can include any combination of keywords, phrases, times, dates, Boolean expressions, fieldname-field value pairs, etc. that specify which results should be obtained from an index. The results can then be passed as inputs into subsequent commands in a sequence of commands by using, for example, a pipe character. The subsequent commands in a sequence can include directives for additional processing of the results once it has been obtained from one or more indexes. For example, commands may be used to filter unwanted information out of the results, extract more information, evaluate field values, calculate statistics, reorder the results, create an alert, create summary of the results, or perform some type of aggregation function. In some implementations, the summary can include a graph, chart, metric, or other visualization of the data. An aggregation function can include analysis or calculations to return an aggregate value, such as an average value, a sum, a maximum value, a root mean square, statistical values, and the like. Due to its flexible nature, use of a pipelined command language in various implementations is advantageous because it can perform “filtering” as well as “processing” functions. In other words, a single query can include a search command and search term expressions, as well as data-analysis expressions. For example, a command at the beginning of a query can perform a “filtering” step by retrieving a set of data based on a condition (e.g., records associated with server response times of less than 1 microsecond). The results of the filtering step can then be passed to a subsequent command in the pipeline that performs a “processing” step (e.g. calculation of an aggregate value related to the filtered events such as the average response time of servers with response times of less than 1 microsecond). Furthermore, the search command can allow events to be filtered by keyword as well as field value criteria. For example, a search command can filter out all events containing the word “warning” or filter out all events where a field value associated with a field “clientip” is “10.0.1.2.” The results obtained or generated in response to a command in a query can be considered a set of results data. The set of results data can be passed from one command to another in any data format. In one implementation, the set of result data can be in the form of a dynamically created table. Each command in a particular query can redefine the shape of the table. In some implementations, an event retrieved from an index in response to a query can be considered a row with a column for each field value. Columns contain basic information about the data and also may contain data that has been dynamically extracted at search time. 
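As a rough Python analogue of the pipelined behavior described above, the sketch below chains a filtering stage, a "top"-style aggregation, and a column-removal stage over a handful of invented events; the command implementations only approximate the commands they are named after and are not the pipelined command language itself.
from collections import Counter

events = [
    {"sourcetype": "syslog", "_raw": "ERROR disk full", "user": "alice"},
    {"sourcetype": "syslog", "_raw": "ERROR timeout", "user": "bob"},
    {"sourcetype": "syslog", "_raw": "INFO ok", "user": "bob"},
    {"sourcetype": "access", "_raw": "ERROR 500", "user": "carol"},
]

def search(evts, sourcetype, keyword):      # filtering stage
    return [e for e in evts if e["sourcetype"] == sourcetype and keyword in e["_raw"]]

def top(rows, fieldname):                   # processing stage: top values of a field
    counts = Counter(r[fieldname] for r in rows)
    total = sum(counts.values())
    return [{fieldname: v, "count": c, "percent": 100.0 * c / total}
            for v, c in counts.most_common()]

def remove_field(rows, column):             # final stage: drop a column
    return [{k: v for k, v in r.items() if k != column} for r in rows]

# Roughly: search sourcetype=syslog ERROR | top user | fields - percent
result = remove_field(top(search(events, "syslog", "ERROR"), "user"), "percent")
print(result)   # [{'user': 'alice', 'count': 1}, {'user': 'bob', 'count': 1}]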
FIG.6Bprovides a visual representation of the manner in which a pipelined command language or query operates in accordance with the disclosed implementations. The query630can be inputted by the user into a search. The query comprises a search, the results of which are piped to two commands (namely, command 1 and command 2) that follow the search step. Disk622represents the event data in the raw record data store. When a user query is processed, a search step will precede other queries in the pipeline in order to generate a set of events at block640. For example, the query can comprise search terms "sourcetype=syslog ERROR" at the front of the pipeline as shown inFIG.6B. Intermediate results table624shows fewer rows because it represents the subset of events retrieved from the index that matched the search terms "sourcetype=syslog ERROR" from search command630. By way of further example, instead of a search step, the set of events at the head of the pipeline may be generated by a call to a pre-existing inverted index (as will be explained later). At block642, the set of events generated in the first part of the query may be piped to a query that searches the set of events for field-value pairs or for keywords. For example, the second intermediate results table626shows fewer columns, representing the result of the top command, "top user," which may summarize the events into a list of the top 10 users and may display the user, count, and percentage. Finally, at block644, the results of the prior stage can be pipelined to another stage where further filtering or processing of the data can be performed, e.g., preparing the data for display purposes, filtering the data based on a condition, performing a mathematical calculation with the data, etc. As shown inFIG.6B, the "fields - percent" part of command630removes the column that shows the percentage, thereby leaving a final results table628without a percentage column. In different implementations, other query languages, such as the Structured Query Language ("SQL"), can be used to create a query.
2.10. Field Extraction
The search head210allows users to search and visualize events generated from machine data received from homogeneous data sources. The search head210also allows users to search and visualize events generated from machine data received from heterogeneous data sources. The search head210includes various mechanisms, which may additionally reside in an indexer206, for processing a query. A query language may be used to create a query, such as any suitable pipelined query language. For example, Splunk Processing Language (SPL) can be utilized to make a query. SPL is a pipelined search language in which a set of inputs is operated on by a first command in a command line, and then a subsequent command following the pipe symbol "|" operates on the results produced by the first command, and so on for additional commands. Other query languages, such as the Structured Query Language ("SQL"), can be used to create a query. In response to receiving the search query, search head210uses extraction rules to extract values for fields in the events being searched. The search head210obtains extraction rules that specify how to extract a value for fields from an event. Extraction rules can comprise regex rules that specify how to extract values for the fields corresponding to the extraction rules.
In addition to specifying how to extract field values, the extraction rules may also include instructions for deriving a field value by performing a function on a character string or value retrieved by the extraction rule. For example, an extraction rule may truncate a character string or convert the character string into a different data format. In some cases, the query itself can specify one or more extraction rules. The search head210can apply the extraction rules to events that it receives from indexers206. Indexers206may apply the extraction rules to events in an associated data store208. Extraction rules can be applied to all the events in a data store or to a subset of the events that have been filtered based on some criteria (e.g., event time stamp values, etc.). Extraction rules can be used to extract one or more values for a field from events by parsing the portions of machine data in the events and examining the data for one or more patterns of characters, numbers, delimiters, etc., that indicate where the field begins and, optionally, ends. FIG.7Ais a diagram of an example scenario where a common customer identifier is found among log data received from three disparate data sources, in accordance with example implementations. In this example, a user submits an order for merchandise using a vendor's shopping application program701running on the user's system. In this example, the order was not delivered to the vendor's server due to a resource exception at the destination server that is detected by the middleware code702. The user then sends a message to the customer support server703to complain about the order failing to complete. The three systems701,702, and703are disparate systems that do not have a common logging format. The order application701sends log data704to the data intake and query system in one format, the middleware code702sends error log data705in a second format, and the support server703sends log data706in a third format. Using the log data received at one or more indexers206from the three systems, the vendor can uniquely obtain an insight into user activity, user experience, and system behavior. The search head210allows the vendor's administrator to search the log data from the three systems that one or more indexers206are responsible for searching, thereby obtaining correlated information, such as the order number and corresponding customer ID number of the person placing the order. The system also allows the administrator to see a visualization of related events via a user interface. The administrator can query the search head210for customer ID field value matches across the log data from the three systems that are stored at the one or more indexers206. The customer ID field value exists in the data gathered from the three systems, but the customer ID field value may be located in different areas of the data given differences in the architecture of the systems. There is a semantic relationship between the customer ID field values generated by the three systems. The search head210requests events from the one or more indexers206to gather relevant events from the three systems. The search head210then applies extraction rules to the events in order to extract field values that it can correlate. The search head may apply a different extraction rule to each set of events from each system when the event format differs among systems. 
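The correlation just described can be sketched in a few lines of Python. The log formats, regular expressions, and sample lines below are invented to illustrate applying a different extraction rule per source and then grouping events by the extracted customer ID; they are not the actual formats of the three systems.
import re
from collections import defaultdict

extraction_rules = {
    "order_app":  r"order number=\d+ customer id=(?P<customer_id>\d+)",
    "middleware": r"resource exception .* cust_id:(?P<customer_id>\d+)",
    "support":    r"ticket .* customer=(?P<customer_id>\d+)",
}
events = [
    ("order_app",  "order number=1234 customer id=7001 submitted"),
    ("middleware", "resource exception at dest server cust_id:7001"),
    ("support",    "ticket 88 opened customer=7001 order failed"),
]

by_customer = defaultdict(list)
for source, raw in events:
    # Apply the extraction rule that matches this event's source.
    match = re.search(extraction_rules[source], raw)
    if match:
        by_customer[match.group("customer_id")].append((source, raw))

print(len(by_customer["7001"]))   # 3 correlated events across the three systems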
In this example, the user interface can display to the administrator the events corresponding to the common customer ID field values707,708, and709, thereby providing the administrator with insight into a customer's experience. Note that query results can be returned to a client, a search head, or any other system component for further processing. In general, query results may include a set of one or more events, a set of one or more values obtained from the events, a subset of the values, statistics calculated based on the values, a report containing the values, a visualization (e.g., a graph or chart) generated from the values, and the like. The search system enables users to run queries against the stored data to retrieve events that meet criteria specified in a query, such as containing certain keywords or having specific values in defined fields.FIG.7Billustrates the manner in which keyword searches and field searches are processed in accordance with disclosed implementations. If a user inputs a search query into search bar1401that includes only keywords (also known as "tokens"), e.g., the keyword "error" or "warning", the query search engine of the data intake and query system searches for those keywords directly in the event data722stored in the raw record data store. Note that whileFIG.7Bonly illustrates four events, the raw record data store (corresponding to data store208inFIG.2) may contain records for millions of events. As disclosed above, an indexer can optionally generate a keyword index to facilitate fast keyword searching for event data. The indexer includes the identified keywords in an index, which associates each stored keyword with reference pointers to events containing that keyword (or to locations within events where that keyword is located, other location identifiers, etc.). When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. For example, if the keyword "HTTP" was indexed by the indexer at index time, and the user searches for the keyword "HTTP", events713to715will be identified based on the results returned from the keyword index. As noted above, the index contains reference pointers to the events containing the keyword, which allows for efficient retrieval of the relevant events from the raw record data store. If a user searches for a keyword that has not been indexed by the indexer, the data intake and query system would nevertheless be able to retrieve the events by searching the event data for the keyword in the raw record data store directly as shown inFIG.7B. For example, if a user searches for the keyword "frank", and the name "frank" has not been indexed at index time, the data intake and query system will search the event data directly and return the first event713. Note that whether the keyword has been indexed at index time or not, in both cases the raw data with the events712is accessed from the raw record data store to service the keyword search. In the case where the keyword has been indexed, the index will contain a reference pointer that will allow for a more efficient retrieval of the event data from the data store. If the keyword has not been indexed, the search engine will need to search through all the records in the data store to service the search. In most cases, however, in addition to keywords, a user's search will also include fields.
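The indexed-versus-unindexed keyword lookup described above can be sketched as follows; the log lines and the keyword index contents are invented stand-ins for the events and index ofFIG.7B.
raw_record_data_store = {
    713: "127.0.0.1 - frank [10/Oct/2000] GET /apache.gif HTTP/1.0 200",
    714: "127.0.0.1 - bob   [10/Oct/2000] GET /index.html HTTP/1.0 200",
    715: "127.0.0.1 - carol [10/Oct/2000] POST /login HTTP/1.0 404",
    719: "sendmail: clientip=10.2.1.35 relay rejected",
}
keyword_index = {"HTTP": [713, 714, 715]}   # "frank" was not indexed at index time

def keyword_search(keyword):
    if keyword in keyword_index:
        # Reference pointers allow efficient retrieval from the data store.
        return [raw_record_data_store[ref] for ref in keyword_index[keyword]]
    # Otherwise every record must be scanned to service the query.
    return [raw for raw in raw_record_data_store.values() if keyword in raw]

print(len(keyword_search("HTTP")))    # 3 events found via the keyword index
print(keyword_search("frank")[0])     # found by scanning the raw data directly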
The term “field” refers to a location in the event data containing one or more values for a specific data item. Often, a field is a value with a fixed, delimited position on a line, or a name and value pair, where there is a single value to each field name. A field can also be multivalued, that is, it can appear more than once in an event and have a different value for each appearance, e.g., email address fields. Fields are searchable by the field name or field name-value pairs. Some examples of fields are “clientip” for IP addresses accessing a web server, or the “From” and “To” fields in email addresses. By way of further example, consider the search, “status=404”. This search query finds events with “status” fields that have a value of “404.” When the search is run, the search engine does not look for events with any other “status” value. It also does not look for events containing other fields that share “404” as a value. As a result, the search returns a set of results that are more focused than if “404” had been used in the search string as part of a keyword search. Note also that fields can appear in events as “key=value” pairs such as “user_name=Bob.” But in most cases, field values appear in fixed, delimited positions without identifying keys. For example, the data store may contain events where the “user_name” value always appears by itself after the timestamp as illustrated by the following string: “November 15 09:33:22 johnmedlock.” The data intake and query system advantageously allows for search time field extraction. In other words, fields can be extracted from the event data at search time using late-binding schema as opposed to at data ingestion time, which was a major limitation of the prior art systems. In response to receiving the search query, search head210uses extraction rules to extract values for the fields associated with a field or fields in the event data being searched. The search head210obtains extraction rules that specify how to extract a value for certain fields from an event. Extraction rules can comprise regex rules that specify how to extract values for the relevant fields. In addition to specifying how to extract field values, the extraction rules may also include instructions for deriving a field value by performing a function on a character string or value retrieved by the extraction rule. For example, a transformation rule may truncate a character string, or convert the character string into a different data format. In some cases, the query itself can specify one or more extraction rules. FIG.7Billustrates the manner in which configuration files may be used to configure custom fields at search time in accordance with the disclosed implementations. In response to receiving a search query, the data intake and query system determines if the query references a “field.” For example, a query may request a list of events where the “clientip” field equals “127.0.0.1.” If the query itself does not specify an extraction rule and if the field is not a metadata field, e.g., time, host, source, source type, etc., then in order to determine an extraction rule, the search engine may, in one or more implementations, need to locate configuration file712during the execution of the search as shown inFIG.7B. Configuration file712may contain extraction rules for all the various fields that are not metadata fields, e.g., the “clientip” field. The extraction rules may be inserted into the configuration file in a variety of ways. 
In some implementations, the extraction rules can comprise regular expression rules that are manually entered by the user. Regular expressions match patterns of characters in text and are used for extracting custom fields in text. In one or more implementations, as noted above, a field extractor may be configured to automatically generate extraction rules for certain field values in the events when the events are being created, indexed, or stored, or possibly at a later time. In one implementation, a user may be able to dynamically create custom fields by highlighting portions of a sample event that should be extracted as fields using a graphical user interface. The system would then generate a regular expression that extracts those fields from similar events and store the regular expression as an extraction rule for the associated field in the configuration file712. In some implementations, the indexers may automatically discover certain custom fields at index time and the regular expressions for those fields will be automatically generated at index time and stored as part of extraction rules in configuration file712. For example, fields that appear in the event data as "key=value" pairs may be automatically extracted as part of an automatic field discovery process. Note that there may be several other ways of adding field definitions to configuration files in addition to the methods discussed herein. The search head210can apply the extraction rules derived from configuration file1402to event data that it receives from indexers206. Indexers206may apply the extraction rules from the configuration file to events in an associated data store208. Extraction rules can be applied to all the events in a data store, or to a subset of the events that have been filtered based on some criteria (e.g., event time stamp values, etc.). Extraction rules can be used to extract one or more values for a field from events by parsing the event data and examining the event data for one or more patterns of characters, numbers, delimiters, etc., that indicate where the field begins and, optionally, ends. In one or more implementations, the extraction rule in configuration file712will also need to define the type or set of events that the rule applies to. Because the raw record data store will contain events from multiple heterogeneous sources, multiple events may contain the same fields in different locations because of discrepancies in the format of the data generated by the various sources. Furthermore, certain events may not contain a particular field at all. For example, event719also contains a "clientip" field; however, the "clientip" field is in a different format from events713-715. To address the discrepancies in the format and content of the different types of events, the configuration file will also need to specify the set of events that an extraction rule applies to, e.g., extraction rule716specifies a rule for filtering by the type of event and contains a regular expression for parsing out the field value. Accordingly, each extraction rule will pertain to only a particular type of event. If a particular field, e.g., "clientip," occurs in multiple types of events, each of those types of events would need its own corresponding extraction rule in the configuration file712and each of the extraction rules would comprise a different regular expression to parse out the associated field value.
The most common way to categorize events is by source type because events generated by a particular source can have the same format. The field extraction rules stored in configuration file712perform search-time field extractions. For example, for a query that requests a list of events with source type “access_combined” where the “clientip” field equals “127.0.0.1,” the query search engine would first locate the configuration file712to retrieve extraction rule716that would allow it to extract values associated with the “clientip” field from the event data720where the source type is “access_combined.” After the “clientip” field has been extracted from all the events comprising the “clientip” field where the source type is “access_combined,” the query search engine can then execute the field criteria by performing the compare operation to filter out the events where the “clientip” field equals “127.0.0.1.” In the example shown inFIG.7B, events713-715would be returned in response to the user query. In this manner, the search engine can service queries containing field criteria in addition to queries containing keyword criteria (as explained above). The configuration file can be created during indexing. It may either be manually created by the user or automatically generated with certain predetermined field extraction rules. As discussed above, the events may be distributed across several indexers, wherein each indexer may be responsible for storing and searching a subset of the events contained in a corresponding data store. In a distributed indexer system, each indexer would need to maintain a local copy of the configuration file that is synchronized periodically across the various indexers. The ability to add schema to the configuration file at search time results in increased efficiency. A user can create new fields at search time and simply add field definitions to the configuration file. As a user learns more about the data in the events, the user can continue to refine the late-binding schema by adding new fields, deleting fields, or modifying the field extraction rules in the configuration file for use the next time the schema is used by the system. Because the data intake and query system maintains the underlying raw data and uses late-binding schema for searching the raw data, it enables a user to continue investigating and learn valuable insights about the raw data long after data ingestion time. The ability to add multiple field definitions to the configuration file at search time also results in increased flexibility. For example, multiple field definitions can be added to the configuration file to capture the same field across events generated by different source types. This allows the data intake and query system to search and correlate data across heterogeneous sources flexibly and efficiently. Further, by providing the field definitions for the queried fields at search time, the configuration file712allows the record data store712to be field searchable. In other words, the raw record data store712can be searched using keywords as well as fields, wherein the fields are searchable name/value pairings that distinguish one event from another and can be defined in configuration file1402using extraction rules. In comparison to a search containing field names, a keyword search does not need the configuration file and can search the event data directly as shown inFIG.7B.
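The following simplified sketch, offered only as an illustration with assumed rule definitions and event text, shows how a field criterion such as "clientip" equals "127.0.0.1" could be serviced by first selecting the extraction rule scoped to the "access_combined" source type, extracting the field at search time, and then performing the compare operation; the count at the end previews the kind of further processing discussed in the next paragraph:

import re

# Hypothetical configuration file contents: the rule is scoped to one source type,
# mirroring the idea that each extraction rule pertains to a particular type of event.
CONFIG = [
    {"sourcetype": "access_combined",
     "field": "clientip",
     "regex": re.compile(r"^(?P<clientip>\d{1,3}(?:\.\d{1,3}){3})")},
]

EVENTS = [  # illustrative events, not the events of FIG. 7B
    {"sourcetype": "access_combined", "raw": '127.0.0.1 - frank "GET /a.gif" 200 2326'},
    {"sourcetype": "access_combined", "raw": '10.0.0.5 - carlos "GET /b.gif" 200 2900'},
    {"sourcetype": "syslog", "raw": "client 127.0.0.1 connection refused"},
]

def search_by_field(events, sourcetype, field, value):
    """Locate the scoped rule, extract the field at search time, and apply the compare operation."""
    rules = [r for r in CONFIG if r["sourcetype"] == sourcetype and r["field"] == field]
    results = []
    for event in events:
        if event["sourcetype"] != sourcetype:
            continue
        for rule in rules:
            match = rule["regex"].search(event["raw"])
            if match and match.group(field) == value:
                results.append(event)
    return results

matches = search_by_field(EVENTS, "access_combined", "clientip", "127.0.0.1")
print(len(matches))  # a simple aggregate (count) piped from the compare step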
It should also be noted that any events filtered out by performing a search-time field extraction using a configuration file can be further processed by directing the results of the filtering step to a processing step using a pipelined search language. Using the prior example, a user could pipeline the results of the compare step to an aggregate function by asking the query search engine to count the number of events where the “clientip” field equals “127.0.0.1.” 2.11. Example Search Screen FIG.8Ais an interface diagram of an example user interface for a search screen800, in accordance with example implementations. Search screen800includes a search bar802that accepts user input in the form of a search string. It also includes a time range picker812that enables the user to specify a time range for the search. For historical searches (e.g., searches based on a particular historical time range), the user can select a specific time range, or alternatively a relative time range, such as “today,” “yesterday” or “last week.” For real-time searches (e.g., searches whose results are based on data received in real-time), the user can select the size of a preceding time window to search for real-time events. Search screen800also initially may display a “data summary” dialog as is illustrated inFIG.8Bthat enables the user to select different sources for the events, such as by selecting specific hosts and log files. After the search is executed, the search screen800inFIG.8Acan display the results through search results tabs804, wherein search results tabs804includes: an “events tab” that may display various information about events returned by the search; a “statistics tab” that may display statistics about the search results; and a “visualization tab” that may display various visualizations of the search results. The events tab illustrated inFIG.8Amay display a timeline graph805that graphically illustrates the number of events that occurred in one-hour intervals over the selected time range. The events tab also may display an events list808that enables a user to view the machine data in each of the returned events. The events tab additionally may display a sidebar that is an interactive field picker806. The field picker806may be displayed to a user in response to the search being executed and allows the user to further analyze the search results based on the fields in the events of the search results. The field picker806includes field names that reference fields present in the events in the search results. The field picker may display any Selected Fields820that a user has pre-selected for display (e.g., host, source, sourcetype) and may also display any Interesting Fields822that the system determines may be interesting to the user based on pre-specified criteria (e.g., action, bytes, categoryid, clientip, date_hour, date_mday, date_minute, etc.). The field picker also provides an option to display field names for all the fields present in the events of the search results using the All Fields control824. Each field name in the field picker806has a value type identifier to the left of the field name, such as value type identifier826. A value type identifier identifies the type of value for the respective field, such as an “a” for fields that include literal values or a “#” for fields that include numerical values. Each field name in the field picker also has a unique value count to the right of the field name, such as unique value count828. 
The unique value count indicates the number of unique values for the respective field in the events of the search results. Each field name is selectable to view the events in the search results that have the field referenced by that field name. For example, a user can select the “host” field name, and the events shown in the events list808will be updated with events in the search results that have the field that is reference by the field name “host.” 2.12. Data Models A data model is a hierarchically structured search-time mapping of semantic knowledge about one or more datasets. It encodes the domain knowledge used to build a variety of specialized searches of those datasets. Those searches, in turn, can be used to generate reports. A data model is composed of one or more “objects” (or “data model objects”) that define or otherwise correspond to a specific set of data. An object is defined by constraints and attributes. An object's constraints are search criteria that define the set of events to be operated on by running a search having that search criteria at the time the data model is selected. An object's attributes are the set of fields to be exposed for operating on that set of events generated by the search criteria. Objects in data models can be arranged hierarchically in parent/child relationships. Each child object represents a subset of the dataset covered by its parent object. The top-level objects in data models are collectively referred to as “root objects.” Child objects have inheritance. Child objects inherit constraints and attributes from their parent objects and may have additional constraints and attributes of their own. Child objects provide a way of filtering events from parent objects. Because a child object may provide an additional constraint in addition to the constraints it has inherited from its parent object, the dataset it represents may be a subset of the dataset that its parent represents. For example, a first data model object may define a broad set of data pertaining to e-mail activity generally, and another data model object may define specific datasets within the broad dataset, such as a subset of the e-mail data pertaining specifically to e-mails sent. For example, a user can simply select an “e-mail activity” data model object to access a dataset relating to e-mails generally (e.g., sent or received), or select an “e-mails sent” data model object (or data sub-model object) to access a dataset relating to e-mails sent. Because a data model object is defined by its constraints (e.g., a set of search criteria) and attributes (e.g., a set of fields), a data model object can be used to quickly search data to identify a set of events and to identify a set of fields to be associated with the set of events. For example, an “e-mails sent” data model object may specify a search for events relating to e-mails that have been sent, and specify a set of fields that are associated with the events. Thus, a user can retrieve and use the “e-mails sent” data model object to quickly search source data for events relating to sent e-mails, and may be provided with a listing of the set of fields relevant to the events in a user interface screen. Examples of data models can include electronic mail, authentication, databases, intrusion detection, malware, application state, alerts, compute inventory, network sessions, network traffic, performance, audits, updates, vulnerabilities, etc. 
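A minimal sketch of how an object's constraints and attributes, and the inheritance of both by child objects, could be represented is shown below; it follows the e-mail example above, and all class and field names are assumptions rather than the system's implementation:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataModelObject:
    """An object defined by constraints (search criteria) and attributes (exposed fields)."""
    name: str
    constraints: list = field(default_factory=list)
    attributes: list = field(default_factory=list)
    parent: Optional["DataModelObject"] = None

    def effective_constraints(self):
        # A child inherits its parent's constraints and may add constraints of its own.
        inherited = self.parent.effective_constraints() if self.parent else []
        return inherited + self.constraints

    def effective_attributes(self):
        inherited = self.parent.effective_attributes() if self.parent else []
        return inherited + self.attributes

# Root object covering e-mail activity generally, and a child object for e-mails sent.
email_activity = DataModelObject("e-mail activity",
                                 constraints=['sourcetype="mail_log"'],
                                 attributes=["sender", "recipient"])
emails_sent = DataModelObject("e-mails sent",
                              constraints=['direction="outbound"'],
                              attributes=["delivery_status"],
                              parent=email_activity)

print(emails_sent.effective_constraints())  # the inherited constraint plus the child's own
print(emails_sent.effective_attributes())   # inherited fields plus the child's additional field

Because the child carries an additional constraint, the dataset it selects is a subset of the dataset selected by its parent.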
Data models and their objects can be designed by knowledge managers in an organization, and they can enable downstream users to quickly focus on a specific set of data. A user iteratively applies a model development tool (not shown inFIG.8A) to prepare a query that defines a subset of events and assigns an object name to that subset. A child subset is created by further limiting a query that generated a parent subset. Data definitions in associated schemas can be taken from the common information model (CIM) or can be devised for a particular schema and optionally added to the CIM. Child objects inherit fields from parents and can include fields not present in parents. A model developer can select fewer extraction rules than are available for the sources returned by the query that defines events belonging to a model. Selecting a limited set of extraction rules can be a tool for simplifying and focusing the data model, while allowing a user flexibility to explore the data subset. Development of a data model is further explained in U.S. Pat. Nos. 8,788,525 and 8,788,526, both entitled “DATA MODEL FOR MACHINE DATA FOR SEMANTIC SEARCH”, both issued on 22 Jul. 2014, U.S. Pat. No. 8,983,994, entitled “GENERATION OF A DATA MODEL FOR SEARCHING MACHINE DATA”, issued on 17 March, 2015, U.S. Pat. No. 9,128,980, entitled “GENERATION OF A DATA MODEL APPLIED TO QUERIES”, issued on 8 Sep. 2015, and U.S. Pat. No. 9,589,012, entitled “GENERATION OF A DATA MODEL APPLIED TO OBJECT QUERIES”, issued on 7 Mar. 2017, each of which is hereby incorporated by reference in its entirety for all purposes. A data model can also include reports. One or more report formats can be associated with a particular data model and be made available to run against the data model. A user can use child objects to design reports with object datasets that already have extraneous data pre-filtered out. In some implementations, the data intake and query system108provides the user with the ability to produce reports (e.g., a table, chart, visualization, etc.) without having to enter SPL, SQL, or other query language terms into a search screen. Data models are used as the basis for the search feature. Data models may be selected in a report generation interface. The report generator supports drag-and-drop organization of fields to be summarized in a report. When a model is selected, the fields with available extraction rules are made available for use in the report. The user may refine and/or filter search results to produce more precise reports. The user may select some fields for organizing the report and select other fields for providing detail according to the report organization. For example, “region” and “salesperson” are fields used for organizing the report and sales data can be summarized (subtotaled and totaled) within this organization. The report generator allows the user to specify one or more fields within events and apply statistical analysis on values extracted from the specified one or more fields. The report generator may aggregate search results across sets of events and generate statistics based on aggregated search results. Building reports using the report generation interface is further explained in U.S. patent application Ser. No. 14/503,335, entitled “GENERATING REPORTS FROM UNSTRUCTURED DATA”, filed on 30 Sep. 2014, and which is hereby incorporated by reference in its entirety for all purposes. Data visualizations also can be generated in a variety of formats, by reference to the data model. 
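As a further illustrative sketch (the object, field names, and sales figures are assumptions), a report generator driven by a data model object could filter events using the object's constraints and then subtotal a column value split by a row field, in the spirit of the "region" and "salesperson" example above:

from collections import defaultdict

# Hypothetical data model object: constraints select the events, attributes are the exposed fields.
SALES_OBJECT = {
    "constraints": {"sourcetype": "sales"},
    "attributes": ["region", "salesperson", "amount"],
}

EVENTS = [
    {"sourcetype": "sales", "region": "east", "salesperson": "kim", "amount": 100},
    {"sourcetype": "sales", "region": "east", "salesperson": "lee", "amount": 250},
    {"sourcetype": "sales", "region": "west", "salesperson": "ana", "amount": 175},
    {"sourcetype": "web", "clientip": "127.0.0.1"},  # excluded by the object's constraints
]

def generate_report(events, model, split_row, column_value):
    """Filter events by the object's constraints, then subtotal column_value per split_row."""
    report = defaultdict(int)
    for event in events:
        if all(event.get(k) == v for k, v in model["constraints"].items()):
            report[event[split_row]] += event[column_value]
    return dict(report)

print(generate_report(EVENTS, SALES_OBJECT, split_row="region", column_value="amount"))
# -> {'east': 350, 'west': 175}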
Reports, data visualizations, and data model objects can be saved and associated with the data model for future use. The data model object may be used to perform searches of other data. FIGS.9-15are interface diagrams of example report generation user interfaces, in accordance with example implementations. The report generation process may be driven by a predefined data model object, such as a data model object defined and/or saved via a reporting application or a data model object obtained from another source. A user can load a saved data model object using a report editor. For example, the initial search query and fields used to drive the report editor may be obtained from a data model object. The data model object that is used to drive a report generation process may define a search and a set of fields. Upon loading of the data model object, the report generation process may enable a user to use the fields (e.g., the fields defined by the data model object) to define criteria for a report (e.g., filters, split rows/columns, aggregates, etc.) and the search may be used to identify events (e.g., to identify events responsive to the search) used to generate the report. That is, for example, if a data model object is selected to drive a report editor, the graphical user interface of the report editor may enable a user to define reporting criteria for the report using the fields associated with the selected data model object, and the events used to generate the report may be constrained to the events that match, or otherwise satisfy, the search constraints of the selected data model object. The selection of a data model object for use in driving a report generation may be facilitated by a data model object selection interface.FIG.9illustrates an example interactive data model selection graphical user interface900of a report editor that may display a listing of available data models901. The user may select one of the data models902. FIG.10illustrates an example data model object selection graphical user interface1000that may display available data objects1001for the selected data object model902. The user may select one of the displayed data model objects1002for use in driving the report generation process. Once a data model object is selected by the user, a user interface screen1100shown inFIG.11Amay display an interactive listing of automatic field identification options1101based on the selected data model object. For example, a user may select one of the three illustrated options (e.g., the “All Fields” option1102, the “Selected Fields” option1103, or the “Coverage” option (e.g., fields with at least a specified % of coverage)1104). If the user selects the “All Fields” option1102, all of the fields identified from the events that were returned in response to an initial search query may be selected. That is, for example, all of the fields of the identified data model object fields may be selected. If the user selects the “Selected Fields” option1103, only the fields from the fields of the identified data model object fields that are selected by the user may be used. If the user selects the “Coverage” option1104, only the fields of the identified data model object fields meeting a specified coverage criteria may be selected. A percent coverage may refer to the percentage of events returned by the initial search query that a given field appears in. 
Thus, for example, if an object dataset includes 10,000 events returned in response to an initial search query, and the “avg_age” field appears in854of those 10,000 events, then the “avg_age” field would have a coverage of 8.54% for that object dataset. If, for example, the user selects the “Coverage” option and specifies a coverage value of 2%, only fields having a coverage value equal to or greater than 2% may be selected. The number of fields corresponding to each selectable option may be displayed in association with each option. For example, “97” displayed next to the “All Fields” option1102indicates that 97 fields will be selected if the “All Fields” option is selected. The “3” displayed next to the “Selected Fields” option1103indicates that 3 of the 97 fields will be selected if the “Selected Fields” option is selected. The “49” displayed next to the “Coverage” option1104indicates that 49 of the 97 fields (e.g., the 49 fields having a coverage of 2% or greater) will be selected if the “Coverage” option is selected. The number of fields corresponding to the “Coverage” option may be dynamically updated based on the specified percent of coverage. FIG.11Billustrates an example graphical user interface screen1105displaying the reporting application's “Report Editor” page. The screen may display interactive elements for defining various elements of a report. For example, the page includes a “Filters” element1106, a “Split Rows” element1107, a “Split Columns” element1108, and a “Column Values” element1109. The page may include a list of search results1111. In this example, the Split Rows element1107is expanded, revealing a listing of fields1110that can be used to define additional criteria (e.g., reporting criteria). The listing of fields1110may correspond to the selected fields. That is, the listing of fields1110may list only the fields previously selected, either automatically and/or manually by a user.FIG.11Cillustrates a formatting dialogue1112that may be displayed upon selecting a field from the listing of fields1110. The dialogue can be used to format the display of the results of the selection (e.g., label the column for the selected field to be displayed as “component”). FIG.11Dillustrates an example graphical user interface screen1105including a table of results1113based on the selected criteria including splitting the rows by the “component” field. A column1114having an associated count for each component listed in the table may be displayed that indicates an aggregate count of the number of times that the particular field-value pair (e.g., the value in a row for a particular field, such as the value “BucketMover” for the field “component”) occurs in the set of events responsive to the initial search query. FIG.12illustrates an example graphical user interface screen1200that allows the user to filter search results and to perform statistical analysis on values extracted from specific fields in the set of events. In this example, the top ten product names ranked by price are selected as a filter1201that causes the display of the ten most popular products sorted by price. Each row is displayed by product name and price1202. This results in each product displayed in a column labeled “product name” along with an associated price in a column labeled “price”1206. Statistical analysis of other fields in the events associated with the ten most popular products have been specified as column values1203. 
A count of the number of successful purchases for each product is displayed in column1204. These statistics may be produced by filtering the search results by the product name, finding all occurrences of a successful purchase in a field within the events and generating a total of the number of occurrences. A sum of the total sales is displayed in column1205, which is a result of the multiplication of the price and the number of successful purchases for each product. The reporting application allows the user to create graphical visualizations of the statistics generated for a report. For example,FIG.13illustrates an example graphical user interface1300that may display a set of components and associated statistics1301. The reporting application allows the user to select a visualization of the statistics in a graph (e.g., bar chart, scatter plot, area chart, line chart, pie chart, radial gauge, marker gauge, filler gauge, etc.), where the format of the graph may be selected using the user interface controls1302along the left panel of the user interface1300.FIG.14illustrates an example of a bar chart visualization1400of an aspect of the statistical data1301.FIG.15illustrates a scatter plot visualization1500of an aspect of the statistical data1301. 2.13. Acceleration Technique The above-described system provides significant flexibility by enabling a user to analyze massive quantities of minimally-processed data “on the fly” at search time using a late-binding schema, instead of storing pre-specified portions of the data in a database at ingestion time. This flexibility enables a user to see valuable insights, correlate data, and perform subsequent queries to examine interesting aspects of the data that may not have been apparent at ingestion time. However, performing extraction and analysis operations at search time can involve a large amount of data and require a large number of computational operations, which can cause delays in processing the queries. Advantageously, the data intake and query system also employs a number of unique acceleration techniques that have been developed to speed up analysis operations performed at search time. These techniques include: (1) performing search operations in parallel across multiple indexers; (2) using a keyword index; (3) using a high performance analytics store; and (4) accelerating the process of generating reports. These novel techniques are described in more detail below. 2.13.1. Aggregation Technique To facilitate faster query processing, a query can be structured such that multiple indexers perform the query in parallel, while aggregation of search results from the multiple indexers is performed locally at the search head. For example,FIG.16is an example search query received from a client and executed by search peers, in accordance with example implementations.FIG.16illustrates how a search query1602received from a client at a search head210can split into two phases, including: (1) subtasks1604(e.g., data retrieval or simple filtering) that may be performed in parallel by indexers206for execution, and (2) a search results aggregation operation1606to be executed by the search head when the results are ultimately collected from the indexers. During operation, upon receiving search query1602, a search head210determines that a portion of the operations involved with the search query may be performed locally by the search head. 
The search head modifies search query1602by substituting “stats” (create aggregate statistics over results sets received from the indexers at the search head) with “prestats” (create statistics by the indexer from local results set) to produce search query1604, and then distributes search query1604to distributed indexers, which are also referred to as “search peers” or “peer indexers.” Note that search queries may generally specify search criteria or operations to be performed on events that meet the search criteria. Search queries may also specify field names, as well as search criteria for the values in the fields or operations to be performed on the values in the fields. Moreover, the search head may distribute the full search query to the search peers as illustrated inFIG.6A, or may alternatively distribute a modified version (e.g., a more restricted version) of the search query to the search peers. In this example, the indexers are responsible for producing the results and sending them to the search head. After the indexers return the results to the search head, the search head aggregates the received results1606to form a single search result set. By executing the query in this manner, the system effectively distributes the computational operations across the indexers while minimizing data transfers. 2.13.2. Keyword Index As described above with reference to the flow charts inFIG.5AandFIG.6A, data intake and query system108can construct and maintain one or more keyword indices to quickly identify events containing specific keywords. This technique can greatly speed up the processing of queries involving specific keywords. As mentioned above, to build a keyword index, an indexer first identifies a set of keywords. Then, the indexer includes the identified keywords in an index, which associates each stored keyword with references to events containing that keyword, or to locations within events where that keyword is located. When an indexer subsequently receives a keyword-based query, the indexer can access the keyword index to quickly identify events containing the keyword. 2.13.3. High Performance Analytics Store To speed up certain types of queries, some implementations of system108create a high performance analytics store, which is referred to as a “summarization table,” that contains entries for specific field-value pairs. Each of these entries keeps track of instances of a specific value in a specific field in the events and includes references to events containing the specific value in the specific field. For example, an example entry in a summarization table can keep track of occurrences of the value “94107” in a “ZIP code” field of a set of events and the entry includes references to all of the events that contain the value “94107” in the ZIP code field. This optimization technique enables the system to quickly process queries that seek to determine how many events have a particular value for a particular field. To this end, the system can examine the entry in the summarization table to count instances of the specific value in the field without having to go through the individual events or perform data extractions at search time. Also, if the system needs to process all events that have a specific field-value combination, the system can use the references in the summarization table entry to directly access the events to extract further information without having to search all of the events to find the specific field-value combination at search time. 
In some implementations, the system maintains a separate summarization table for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific summarization table includes entries for specific field-value combinations that occur in events in the specific bucket. Alternatively, the system can maintain a separate summarization table for each indexer. The indexer-specific summarization table includes entries for the events in a data store that are managed by the specific indexer. Indexer-specific summarization tables may also be bucket-specific. The summarization table can be populated by running a periodic query that scans a set of events to find instances of a specific field-value combination, or alternatively instances of all field-value combinations for a specific field. A periodic query can be initiated by a user, or can be scheduled to occur automatically at specific time intervals. A periodic query can also be automatically launched in response to a query that asks for a specific field-value combination. In some cases, when the summarization tables may not cover all of the events that are relevant to a query, the system can use the summarization tables to obtain partial results for the events that are covered by summarization tables, but may also have to search through other events that are not covered by the summarization tables to produce additional results. These additional results can then be combined with the partial results to produce a final set of results for the query. The summarization table and associated techniques are described in more detail in U.S. Pat. No. 8,682,925, entitled “Distributed High Performance Analytics Store”, issued on 25 Mar. 2014, U.S. Pat. No. 9,128,985, entitled “SUPPLEMENTING A HIGH PERFORMANCE ANALYTICS STORE WITH EVALUATION OF INDIVIDUAL EVENTS TO RESPOND TO AN EVENT QUERY”, issued on 8 Sep. 2015, and U.S. patent application Ser. No. 14/815,973, entitled “GENERATING AND STORING SUMMARIZATION TABLES FOR SETS OF SEARCHABLE EVENTS”, filed on 1 Aug. 2015, each of which is hereby incorporated by reference in its entirety for all purposes. To speed up certain types of queries, e.g., frequently encountered queries or computationally intensive queries, some implementations of system108create a high performance analytics store, which is referred to as a “summarization table,” (also referred to as a “lexicon” or “inverted index”) that contains entries for specific field-value pairs. Each of these entries keeps track of instances of a specific value in a specific field in the event data and includes references to events containing the specific value in the specific field. For example, an example entry in an inverted index can keep track of occurrences of the value “94107” in a “ZIP code” field of a set of events and the entry includes references to all of the events that contain the value “94107” in the ZIP code field. Creating the inverted index data structure avoids needing to incur the computational overhead each time a statistical query needs to be run on a frequently encountered field-value pair. In order to expedite queries, in most implementations, the search engine will employ the inverted index separate from the raw record data store to generate responses to the received queries. 
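The following simplified sketch, with assumed data, illustrates how summarization table entries keyed by a field-value pair could be built and then used to answer a count query without scanning the individual events, following the ZIP code example above:

from collections import defaultdict

def build_summarization_table(events, field_name):
    """Entries keyed by (field, value), each holding references to events containing that value."""
    table = defaultdict(list)
    for reference, event in enumerate(events):
        if field_name in event:
            table[(field_name, event[field_name])].append(reference)
    return table

# Illustrative events; here the reference values are simply list positions.
events = [{"zip": "94107"}, {"zip": "94107"}, {"zip": "10001"}, {"status": "404"}]
table = build_summarization_table(events, "zip")

# "How many events have ZIP code 94107?" is answered from the entry alone.
print(len(table[("zip", "94107")]))  # -> 2
print(table[("zip", "94107")])       # references that can be used to access the events directly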
Note that the term “summarization table” or “inverted index” as used herein is a data structure that may be generated by an indexer that includes at least field names and field values that have been extracted and/or indexed from event records. An inverted index may also include reference values that point to the location(s) in the field searchable data store where the event records that include the field may be found. Also, an inverted index may be stored using well-known compression techniques to reduce its storage size. Further, note that the term “reference value” (also referred to as a “posting value”) as used herein is a value that references the location of a source record in the field searchable data store. In some implementations, the reference value may include additional information about each record, such as timestamps, record size, meta-data, or the like. Each reference value may be a unique identifier which may be used to access the event data directly in the field searchable data store. In some implementations, the reference values may be ordered based on each event record's timestamp. For example, if numbers are used as identifiers, they may be sorted so event records having a later timestamp always have a lower valued identifier than event records with an earlier timestamp, or vice-versa. Reference values are often included in inverted indexes for retrieving and/or identifying event records. In one or more implementations, an inverted index is generated in response to a user-initiated collection query. The term “collection query” as used herein refers to queries that include commands that generate summarization information and inverted indexes (or summarization tables) from event records stored in the field searchable data store. Note that a collection query is a special type of query that can be user-generated and is used to create an inverted index. A collection query is not the same as a query that is used to call up or invoke a pre-existing inverted index. In one or more implementation, a query can comprise an initial step that calls up a pre-generated inverted index on which further filtering and processing can be performed. For example, referring back toFIG.13, a set of events generated at block1320by either using a “collection” query to create a new inverted index or by calling up a pre-generated inverted index. A query with several pipelined steps will start with a pre-generated index to accelerate the query. FIG.7Cillustrates the manner in which an inverted index is created and used in accordance with the disclosed implementations. As shown inFIG.7C, an inverted index722can be created in response to a user-initiated collection query using the event data723stored in the raw record data store. For example, a non-limiting example of a collection query may include “collect clientip=127.0.0.1” which may result in an inverted index722being generated from the event data723as shown inFIG.7C. Each entry in inverted index722includes an event reference value that references the location of a source record in the field searchable data store. The reference value may be used to access the original event record directly from the field searchable data store. In one or more implementations, if one or more of the queries is a collection query, the responsive indexers may generate summarization information based on the fields of the event records located in the field searchable data store. 
In at least one of the various implementations, one or more of the fields used in the summarization information may be listed in the collection query and/or they may be determined based on terms included in the collection query. For example, a collection query may include an explicit list of fields to summarize. Or, in at least one of the various implementations, a collection query may include terms or expressions that explicitly define the fields, e.g., using regex rules. InFIG.7C, prior to running the collection query that generates the inverted index722, the field name “clientip” may need to be defined in a configuration file by specifying the “access_combined” source type and a regular expression rule to parse out the client IP address. Alternatively, the collection query may contain an explicit definition for the field name “clientip” which may obviate the need to reference the configuration file at search time. In one or more implementations, collection queries may be saved and scheduled to run periodically. These scheduled collection queries may periodically update the summarization information corresponding to the query. For example, if the collection query that generates inverted index722is scheduled to run periodically, one or more indexers would periodically search through the relevant buckets to update inverted index722with event data for any new events with the “clientip” value of “127.0.0.1.” In some implementations, the inverted indexes that include fields, values, and reference value (e.g., inverted index722) for event records may be included in the summarization information provided to the user. In other implementations, a user may not be interested in specific fields and values contained in the inverted index, but may need to perform a statistical query on the data in the inverted index. For example, referencing the example ofFIG.7Crather than viewing the fields within summarization table722, a user may want to generate a count of all client requests from IP address “127.0.0.1.” In this case, the search engine would simply return a result of “4” rather than including details about the inverted index722in the information provided to the user. The pipelined search language, e.g., SPL of the SPLUNK® ENTERPRISE system can be used to pipe the contents of an inverted index to a statistical query using the “stats” command for example. A “stats” query refers to queries that generate result sets that may produce aggregate and statistical results from event records, e.g., average, mean, max, min, rms, etc. Where sufficient information is available in an inverted index, a “stats” query may generate their result sets rapidly from the summarization information available in the inverted index rather than directly scanning event records. For example, the contents of inverted index722can be pipelined to a stats query, e.g., a “count” function that counts the number of entries in the inverted index and returns a value of “4.” In this way, inverted indexes may enable various stats queries to be performed absent scanning or search the event records. Accordingly, this optimization technique enables the system to quickly process queries that seek to determine how many events have a particular value for a particular field. To this end, the system can examine the entry in the inverted index to count instances of the specific value in the field without having to go through the individual events or perform data extractions at search time. 
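A scheduled collection query of the kind described above might incrementally add references for newly arrived events and then serve a "stats"-style count from the summarization information alone; the following sketch assumes hypothetical helper names and data:

def update_inverted_index(index, events, field, value, start_reference=0):
    """Append references for any new events whose field matches the collected value."""
    for reference in range(start_reference, len(events)):
        if events[reference].get(field) == value:
            index.setdefault((field, value), []).append(reference)
    return len(events)  # high-water mark to pass as start_reference on the next scheduled run

events = [{"clientip": "127.0.0.1"}, {"clientip": "10.0.0.5"}]
index = {}
cursor = update_inverted_index(index, events, "clientip", "127.0.0.1")

events.append({"clientip": "127.0.0.1"})  # a new event arrives before the next scheduled run
cursor = update_inverted_index(index, events, "clientip", "127.0.0.1", cursor)

# A "stats"-style count served from the summarization information rather than the raw events.
print(len(index[("clientip", "127.0.0.1")]))  # -> 2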
In some implementations, the system maintains a separate inverted index for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific inverted index includes entries for specific field-value combinations that occur in events in the specific bucket. Alternatively, the system can maintain a separate inverted index for each indexer. The indexer-specific inverted index includes entries for the events in a data store that are managed by the specific indexer. Indexer-specific inverted indexes may also be bucket-specific. In at least one or more implementations, if one or more of the queries is a stats query, each indexer may generate a partial result set from previously generated summarization information. The partial result sets may be returned to the search head that received the query and combined into a single result set for the query. As mentioned above, the inverted index can be populated by running a periodic query that scans a set of events to find instances of a specific field-value combination, or alternatively instances of all field-value combinations for a specific field. A periodic query can be initiated by a user, or can be scheduled to occur automatically at specific time intervals. A periodic query can also be automatically launched in response to a query that asks for a specific field-value combination. In some implementations, if summarization information is absent from an indexer that includes responsive event records, further actions may be taken, such as: the summarization information may be generated on the fly, warnings may be provided to the user, the collection query operation may be halted, the absence of summarization information may be ignored, or the like, or a combination thereof. In one or more implementations, an inverted index may be set up to update continually. For example, the query may ask for the inverted index to update its result periodically, e.g., every hour. In such instances, the inverted index may be a dynamic data structure that is regularly updated to include information regarding incoming events. In some cases, e.g., where a query is executed before an inverted index updates, when the inverted index may not cover all of the events that are relevant to a query, the system can use the inverted index to obtain partial results for the events that are covered by the inverted index, but may also have to search through other events that are not covered by the inverted index to produce additional results on the fly. In other words, an indexer would need to search through event data on the data store to supplement the partial results. These additional results can then be combined with the partial results to produce a final set of results for the query. Note that in typical instances where an inverted index is not completely up to date, the number of events that an indexer would need to search through to supplement the results from the inverted index would be relatively small. In other words, the search to get the most recent results can be quick and efficient because only a small number of event records will be searched through to supplement the information from the inverted index. The inverted index and associated techniques are described in more detail in U.S. Pat. No. 8,682,925, entitled “Distributed High Performance Analytics Store”, issued on 25 Mar. 2014, U.S. Pat. No.
9,128,985, entitled “SUPPLEMENTING A HIGH PERFORMANCE ANALYTICS STORE WITH EVALUATION OF INDIVIDUAL EVENTS TO RESPOND TO AN EVENT QUERY”, issued on 8 Sep. 2015, and U.S. patent application Ser. No. 14/815,973, entitled “GENERATING AND STORING SUMMARIZATION TABLES FOR SETS OF SEARCHABLE EVENTS”, filed on 1 Aug. 2015, each of which is hereby incorporated by reference in its entirety. 2.13.3.1 Extracting Event Data Using Posting Values In one or more implementations, if the system needs to process all events that have a specific field-value combination, the system can use the references in the inverted index entry to directly access the events to extract further information without having to search all of the events to find the specific field-value combination at search time. In other words, the system can use the reference values to locate the associated event data in the field searchable data store and extract further information from those events, e.g., extract further field values from the events for purposes of filtering or processing or both. The information extracted from the event data using the reference values can be directed for further filtering or processing in a query using the pipelined search language. The pipelined search language will, in one implementation, include syntax that can direct the initial filtering step in a query to an inverted index. In one implementation, a user would include syntax in the query that explicitly directs the initial searching or filtering step to the inverted index. Referencing the example inFIG.7C, if the user determines that she needs the user id fields associated with the client requests from IP address “127.0.0.1,” instead of incurring the computational overhead of performing a brand new search or regenerating the inverted index with an additional field, the user can generate a query that explicitly directs or pipes the contents of the already generated inverted index722to another filtering step requesting the user ids for the entries in inverted index722where the server response time is greater than “0.0900” microseconds. The search engine would use the reference values stored in inverted index722to retrieve the event data from the field searchable data store, filter the results based on the “response time” field values and, further, extract the user id field from the resulting event data to return to the user. In the present instance, the user ids “frank” and “carlos” would be returned to the user in the generated results. In one implementation, the same methodology can be used to pipe the contents of the inverted index to a processing step. In other words, the user is able to use the inverted index to efficiently and quickly perform aggregate functions on field values that were not part of the initially generated inverted index. For example, a user may want to determine an average object size (size of the requested gif) requested by clients from IP address “127.0.0.1.” In this case, the search engine would again use the reference values stored in inverted index722to retrieve the event data from the field searchable data store and, further, extract the object size field values from the associated events731,732,733and734. Once the corresponding object sizes have been extracted (i.e.,2326,2900,2920, and5000), the average can be computed and returned to the user.
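The filtering and processing steps just described can be sketched as follows; the reference values731-734 and the object sizes are taken from the passage above, while the response times, any user ids other than "frank" and "carlos", and the data structures themselves are illustrative assumptions:

# Hypothetical field searchable data store keyed by reference value, plus an inverted index
# whose entry for clientip=127.0.0.1 lists the posting (reference) values of matching events.
DATA_STORE = {
    731: {"clientip": "127.0.0.1", "user_id": "frank", "response_time": 0.0914, "object_size": 2326},
    732: {"clientip": "127.0.0.1", "user_id": "bob", "response_time": 0.0500, "object_size": 2900},
    733: {"clientip": "127.0.0.1", "user_id": "carlos", "response_time": 0.0989, "object_size": 2920},
    734: {"clientip": "127.0.0.1", "user_id": "carlos", "response_time": 0.0857, "object_size": 5000},
}
INVERTED_INDEX = {("clientip", "127.0.0.1"): [731, 732, 733, 734]}

references = INVERTED_INDEX[("clientip", "127.0.0.1")]

# Further filtering step: dereference the posting values, filter on a field that is not in the
# index, and extract another field (the user id) from the retrieved events.
slow_requests = [DATA_STORE[r] for r in references if DATA_STORE[r]["response_time"] > 0.0900]
print(sorted({event["user_id"] for event in slow_requests}))  # -> ['carlos', 'frank']

# Further processing step: aggregate a field that was not part of the initially generated index.
object_sizes = [DATA_STORE[r]["object_size"] for r in references]
print(sum(object_sizes) / len(object_sizes))  # average object size for these requests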
In one implementation, instead of explicitly invoking the inverted index in a user-generated query, e.g., by the use of special commands or syntax, the SPLUNK® ENTERPRISE system can be configured to automatically determine if any prior-generated inverted index can be used to expedite a user query. For example, the user's query may request the average object size (size of the requested gif) requested by clients from IP address “127.0.0.1.” without any reference to or use of inverted index722. The search engine, in this case, would automatically determine that an inverted index722already exists in the system that could expedite this query. In one implementation, prior to running any search comprising a field-value pair, for example, a search engine may search though all the existing inverted indexes to determine if a pre-generated inverted index could be used to expedite the search comprising the field-value pair. Accordingly, the search engine would automatically use the pre-generated inverted index, e.g., index722to generate the results without any user-involvement that directs the use of the index. Using the reference values in an inverted index to be able to directly access the event data in the field searchable data store and extract further information from the associated event data for further filtering and processing is highly advantageous because it avoids incurring the computation overhead of regenerating the inverted index with additional fields or performing a new search. The data intake and query system includes one or more forwarders that receive raw machine data from a variety of input data sources, and one or more indexers that process and store the data in one or more data stores. By distributing events among the indexers and data stores, the indexers can analyze events for a query in parallel. In one or more implementations, a multiple indexer implementation of the search system would maintain a separate and respective inverted index for each of the above-described time-specific buckets that stores events for a specific time range. A bucket-specific inverted index includes entries for specific field-value combinations that occur in events in the specific bucket. As explained above, a search head would be able to correlate and synthesize data from across the various buckets and indexers. This feature advantageously expedites searches because instead of performing a computationally intensive search in a centrally located inverted index that catalogues all the relevant events, an indexer is able to directly search an inverted index stored in a bucket associated with the time-range specified in the query. This allows the search to be performed in parallel across the various indexers. Further, if the query requests further filtering or processing to be conducted on the event data referenced by the locally stored bucket-specific inverted index, the indexer is able to simply access the event records stored in the associated bucket for further filtering and processing instead of needing to access a central repository of event records, which would dramatically add to the computational overhead. In one implementation, there may be multiple buckets associated with the time-range specified in a query. If the query is directed to an inverted index, or if the search engine automatically determines that using an inverted index would expedite the processing of the query, the indexers will search through each of the inverted indexes associated with the buckets for the specified time-range. 
This feature allows the High Performance Analytics Store to be scaled easily. In certain instances, where a query is executed before a bucket-specific inverted index updates, when the bucket-specific inverted index may not cover all of the events that are relevant to a query, the system can use the bucket-specific inverted index to obtain partial results for the events that are covered by bucket-specific inverted index, but may also have to search through the event data in the bucket associated with the bucket-specific inverted index to produce additional results on the fly. In other words, an indexer would need to search through event data stored in the bucket (that was not yet processed by the indexer for the corresponding inverted index) to supplement the partial results from the bucket-specific inverted index. FIG.7Dpresents a flowchart illustrating how an inverted index in a pipelined search query can be used to determine a set of event data that can be further limited by filtering or processing in accordance with the disclosed implementations. At block742, a query is received by a data intake and query system. In some implementations, the query can be received as a user generated query entered into a search bar of a graphical user search interface. The search interface also includes a time range control element that enables specification of a time range for the query. At block744, an inverted index is retrieved. Note, that the inverted index can be retrieved in response to an explicit user search command inputted as part of the user generated query. Alternatively, the search engine can be configured to automatically use an inverted index if it determines that using the inverted index would expedite the servicing of the user generated query. Each of the entries in an inverted index keeps track of instances of a specific value in a specific field in the event data and includes references to events containing the specific value in the specific field. In order to expedite queries, in most implementations, the search engine will employ the inverted index separate from the raw record data store to generate responses to the received queries. At block746, the query engine determines if the query contains further filtering and processing steps. If the query contains no further commands, then, in one implementation, summarization information can be provided to the user at block754. If, however, the query does contain further filtering and processing commands, then at block750, the query engine determines if the commands relate to further filtering or processing of the data extracted as part of the inverted index or whether the commands are directed to using the inverted index as an initial filtering step to further filter and process event data referenced by the entries in the inverted index. If the query can be completed using data already in the generated inverted index, then the further filtering or processing steps, e.g., a “count” number of records function, “average” number of records per hour etc. are performed and the results are provided to the user at block752. If, however, the query references fields that are not extracted in the inverted index, then the indexers will access event data pointed to by the reference values in the inverted index to retrieve any further information required at block756. Subsequently, any further filtering or processing steps are performed on the fields extracted directly from the event data and the results are provided to the user at step758. 2.13.4. 
Accelerating Report Generation In some implementations, a data server system such as the data intake and query system can accelerate the process of periodically generating updated reports based on query results. To accelerate this process, a summarization engine automatically examines the query to determine whether generation of updated reports can be accelerated by creating intermediate summaries. If reports can be accelerated, the summarization engine periodically generates a summary covering data obtained during a latest non-overlapping time period. For example, where the query seeks events meeting specified criteria, a summary for the time period includes only events within the time period that meet the specified criteria. Similarly, if the query seeks statistics calculated from the events, such as the number of events that match the specified criteria, then the summary for the time period includes the number of events in the period that match the specified criteria. In addition to the creation of the summaries, the summarization engine schedules the periodic updating of the report associated with the query. During each scheduled report update, the query engine determines whether intermediate summaries have been generated covering portions of the time period covered by the report update. If so, then the report is generated based on the information contained in the summaries. Also, if additional event data has been received and has not yet been summarized, and is required to generate the complete report, the query can be run on these additional events. Then, the results returned by this query on the additional events, along with the partial results obtained from the intermediate summaries, can be combined to generate the updated report. This process is repeated each time the report is updated. Alternatively, if the system stores events in buckets covering specific time ranges, then the summaries can be generated on a bucket-by-bucket basis. Note that producing intermediate summaries can save the work involved in re-running the query for previous time periods, so advantageously only the newer events need to be processed while generating an updated report. These report acceleration techniques are described in more detail in U.S. Pat. No. 8,589,403, entitled “Compressed Journaling In Event Tracking Files For Metadata Recovery And Replication”, issued on 19 Nov. 2013, U.S. Pat. No. 8,412,696, entitled “Real Time Searching And Reporting”, issued on 2 Apr. 2013, and U.S. Pat. Nos. 8,589,375 and 8,589,432, both also entitled “REAL TIME SEARCHING AND REPORTING”, both issued on 19 Nov. 2013, each of which is hereby incorporated by reference in its entirety for all purposes. 2.14. Security Features The data intake and query system provides various schemas, dashboards, and visualizations that simplify developers' tasks to create applications with additional capabilities. One such application is an enterprise security application, such as SPLUNK® ENTERPRISE SECURITY, which performs monitoring and alerting operations and includes analytics to facilitate identifying both known and unknown security threats based on large volumes of data stored by the data intake and query system. The enterprise security application provides the security practitioner with visibility into security-relevant threats found in the enterprise infrastructure by capturing, monitoring, and reporting on data from enterprise security devices, systems, and applications.
Through the use of the data intake and query system searching and reporting capabilities, the enterprise security application provides a top-down and bottom-up view of an organization's security posture. The enterprise security application leverages the data intake and query system search-time normalization techniques, saved searches, and correlation searches to provide visibility into security-relevant threats and activity and generate notable events for tracking. The enterprise security application enables the security practitioner to investigate and explore the data to find new or unknown threats that do not follow signature-based patterns. Conventional Security Information and Event Management (SIEM) systems lack the infrastructure to effectively store and analyze large volumes of security-related data. Traditional SIEM systems typically use fixed schemas to extract data from pre-defined security-related fields at data ingestion time and store the extracted data in a relational database. This traditional data extraction process (and associated reduction in data size) that occurs at data ingestion time inevitably hampers future incident investigations that may need original data to determine the root cause of a security issue, or to detect the onset of an impending security threat. In contrast, the enterprise security application system stores large volumes of minimally-processed security-related data at ingestion time for later retrieval and analysis at search time when a live security threat is being investigated. To facilitate this data retrieval process, the enterprise security application provides pre-specified schemas for extracting relevant values from the different types of security-related events and enables a user to define such schemas. The enterprise security application can process many types of security-related information. In general, this security-related information can include any information that can be used to identify security threats. For example, the security-related information can include network-related information, such as IP addresses, domain names, asset identifiers, network traffic volume, uniform resource locator strings, and source addresses. The process of detecting security threats for network-related information is further described in U.S. Pat. No. 8,826,434, entitled “SECURITY THREAT DETECTION BASED ON INDICATIONS IN BIG DATA OF ACCESS TO NEWLY REGISTERED DOMAINS”, issued on 2 Sep. 2014, U.S. Pat. No. 9,215,240, entitled “INVESTIGATIVE AND DYNAMIC DETECTION OF POTENTIAL SECURITY-THREAT INDICATORS FROM EVENTS IN BIG DATA”, issued on 15 Dec. 2015, U.S. Pat. No. 9,173,801, entitled “GRAPHIC DISPLAY OF SECURITY THREATS BASED ON INDICATIONS OF ACCESS TO NEWLY REGISTERED DOMAINS”, issued on 3 Nov. 2015, U.S. Pat. No. 9,248,068, entitled “SECURITY THREAT DETECTION OF NEWLY REGISTERED DOMAINS”, issued on 2 Feb. 2016, U.S. Pat. No. 9,426,172, entitled “SECURITY THREAT DETECTION USING DOMAIN NAME ACCESSES”, issued on 23 Aug. 2016, and U.S. Pat. No. 9,432,396, entitled “SECURITY THREAT DETECTION USING DOMAIN NAME REGISTRATIONS”, issued on 30 Aug. 2016, each of which is hereby incorporated by reference in its entirety for all purposes. Security-related information can also include malware infection data and system configuration information, as well as access control information, such as login/logout information and access failure notifications. 
The security-related information can originate from various sources within a data center, such as hosts, virtual machines, storage devices and sensors. The security-related information can also originate from various sources in a network, such as routers, switches, email servers, proxy servers, gateways, firewalls and intrusion-detection systems. During operation, the enterprise security application facilitates detecting “notable events” that are likely to indicate a security threat. A notable event represents one or more anomalous incidents, the occurrence of which can be identified based on one or more events (e.g., time stamped portions of raw machine data) fulfilling pre-specified and/or dynamically-determined (e.g., based on machine-learning) criteria defined for that notable event. Examples of notable events include the repeated occurrence of an abnormal spike in network usage over a period of time, a single occurrence of unauthorized access to a system, a host communicating with a server on a known threat list, and the like. These notable events can be detected in a number of ways, such as: (1) a user can notice a correlation in events and can manually identify that a corresponding group of one or more events amounts to a notable event; or (2) a user can define a “correlation search” specifying criteria for a notable event, and every time one or more events satisfy the criteria, the application can indicate that the one or more events correspond to a notable event; and the like. A user can alternatively select a pre-defined correlation search provided by the application. Note that correlation searches can be run continuously or at regular intervals (e.g., every hour) to search for notable events. Upon detection, notable events can be stored in a dedicated “notable events index,” which can be subsequently accessed to generate various visualizations containing security-related information. Also, alerts can be generated to notify system operators when important notable events are discovered. The enterprise security application provides various visualizations to aid in discovering security threats, such as a “key indicators view” that enables a user to view security metrics, such as counts of different types of notable events. For example,FIG.17Aillustrates an example key indicators view1700that comprises a dashboard, which can display a value1701, for various security-related metrics, such as malware infections1702. It can also display a change in a metric value1703, which indicates that the number of malware infections increased by 63 during the preceding interval. Key indicators view1700additionally displays a histogram panel1704that displays a histogram of notable events organized by urgency values, and a histogram of notable events organized by time intervals. This key indicators view is described in further detail in pending U.S. patent application Ser. No. 13/956,338, entitled “Key Indicators View”, filed on 31 Jul. 2013, and which is hereby incorporated by reference in its entirety for all purposes. These visualizations can also include an “incident review dashboard” that enables a user to view and act on “notable events.” These notable events can include: (1) a single event of high importance, such as any activity from a known web attacker; or (2) multiple events that collectively warrant review, such as a large number of authentication failures on a host followed by a successful authentication.
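The second example above, a burst of authentication failures on a host followed by a successful authentication, lends itself to a compact illustration. The following Python sketch is illustrative only and is not the enterprise security application's implementation; the NotableEvent class, run_correlation_search function, and the rule itself are hypothetical, and a real correlation search would be expressed in the system's search processing language and run against indexed events.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class NotableEvent:
    rule_name: str
    urgency: str
    contributing_events: List[Dict[str, str]] = field(default_factory=list)

def run_correlation_search(events: List[Dict[str, str]],
                           criteria: Callable[[List[Dict[str, str]]], bool],
                           rule_name: str,
                           urgency: str,
                           notable_index: List[NotableEvent]) -> None:
    # When the group of events satisfies the rule's criteria, record a
    # notable event in the dedicated notable-events index and raise an alert.
    if criteria(events):
        notable_index.append(NotableEvent(rule_name, urgency, events))
        print(f"ALERT: {rule_name} (urgency={urgency})")

def brute_force_then_success(events: List[Dict[str, str]]) -> bool:
    # Hypothetical rule: at least five authentication failures on a host
    # followed by a successful authentication in the same window.
    failures = sum(1 for e in events if e["action"] == "auth_failure")
    succeeded = any(e["action"] == "auth_success" for e in events)
    return failures >= 5 and succeeded

notable_index: List[NotableEvent] = []
window = [{"action": "auth_failure", "host": "10.1.1.7"}] * 6
window.append({"action": "auth_success", "host": "10.1.1.7"})
run_correlation_search(window, brute_force_then_success,
                       "Excessive failed logins followed by success", "high",
                       notable_index)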
For example,FIG.17Billustrates an example incident review dashboard1710that includes a set of incident attribute fields1711that, for example, enables a user to specify a time range field1712for the displayed events. It also includes a timeline1713that graphically illustrates the number of incidents that occurred in time intervals over the selected time range. It additionally displays an events list1714that enables a user to view a list of all of the notable events that match the criteria in the incident attributes fields1711. To facilitate identifying patterns among the notable events, each notable event can be associated with an urgency value (e.g., low, medium, high, critical), which is indicated in the incident review dashboard. The urgency value for a detected event can be determined based on the severity of the event and the priority of the system component associated with the event.

2.15. Data Center Monitoring

As mentioned above, the data intake and query platform provides various features that simplify the developers' task to create various applications. One such application is a virtual machine monitoring application, such as SPLUNK® APP FOR VMWARE® that provides operational visibility into granular performance metrics, logs, tasks and events, and topology from hosts, virtual machines and virtual centers. It empowers administrators with an accurate real-time picture of the health of the environment, proactively identifying performance and capacity bottlenecks. Conventional data-center-monitoring systems lack the infrastructure to effectively store and analyze large volumes of machine-generated data, such as performance information and log data obtained from the data center. In conventional data-center-monitoring systems, machine-generated data is typically pre-processed prior to being stored, for example, by extracting pre-specified data items and storing them in a database to facilitate subsequent retrieval and analysis at search time. However, the rest of the data is discarded during pre-processing and is not saved. In contrast, the virtual machine monitoring application stores large volumes of minimally processed machine data, such as performance information and log data, at ingestion time for later retrieval and analysis at search time when a live performance issue is being investigated. In addition to data obtained from various log files, this performance-related information can include values for performance metrics obtained through an application programming interface (API) provided as part of the vSphere Hypervisor™ system distributed by VMware, Inc. of Palo Alto, California. For example, these performance metrics can include: (1) CPU-related performance metrics; (2) disk-related performance metrics; (3) memory-related performance metrics; (4) network-related performance metrics; (5) energy-usage statistics; (6) data-traffic-related performance metrics; (7) overall system availability performance metrics; (8) cluster-related performance metrics; and (9) virtual machine performance statistics. Such performance metrics are described in U.S. patent application Ser. No. 14/167,316, entitled “Correlation For User-Selected Time Ranges Of Values For Performance Metrics Of Components In An Information-Technology Environment With Log Data From That Information-Technology Environment”, filed on 29 Jan. 2014, and which is hereby incorporated by reference in its entirety for all purposes.
To facilitate retrieving information of interest from performance data and log files, the virtual machine monitoring application provides pre-specified schemas for extracting relevant values from different types of performance-related events, and also enables a user to define such schemas. The virtual machine monitoring application additionally provides various visualizations to facilitate detecting and diagnosing the root cause of performance problems. For example, one such visualization is a “proactive monitoring tree” that enables a user to easily view and understand relationships among various factors that affect the performance of a hierarchically structured computing system. This proactive monitoring tree enables a user to easily navigate the hierarchy by selectively expanding nodes representing various entities (e.g., virtual centers or computing clusters) to view performance information for lower-level nodes associated with lower-level entities (e.g., virtual machines or host systems). Example node-expansion operations are illustrated inFIG.17C, wherein nodes1733and1734are selectively expanded. Note that nodes1731-1739can be displayed using different patterns or colors to represent different performance states, such as a critical state, a warning state, a normal state or an unknown/offline state. The ease of navigation provided by selective expansion in combination with the associated performance-state information enables a user to quickly diagnose the root cause of a performance problem. The proactive monitoring tree is described in further detail in U.S. Pat. No. 9,185,007, entitled “PROACTIVE MONITORING TREE WITH SEVERITY STATE SORTING”, issued on 10 Nov. 2015, and U.S. Pat. No. 9,426,045, also entitled “PROACTIVE MONITORING TREE WITH SEVERITY STATE SORTING”, issued on 23 Aug. 2016, each of which is hereby incorporated by reference in its entirety for all purposes. The virtual machine monitoring application also provides a user interface that enables a user to select a specific time range and then view heterogeneous data comprising events, log data, and associated performance metrics for the selected time range. For example, the screen illustrated inFIG.17Ddisplays a listing of recent “tasks and events” and a listing of recent “log entries” for a selected time range above a performance-metric graph for “average CPU core utilization” for the selected time range. Note that a user is able to operate pull-down menus1742to selectively display different performance metric graphs for the selected time range. This enables the user to correlate trends in the performance-metric graph with corresponding event and log data to quickly determine the root cause of a performance problem. This user interface is described in more detail in U.S. patent application Ser. No. 14/167,316, entitled “Correlation For User-Selected Time Ranges Of Values For Performance Metrics Of Components In An Information-Technology Environment With Log Data From That Information-Technology Environment”, filed on 29 Jan. 2014, and which is hereby incorporated by reference in its entirety for all purposes.

3.0. Streaming Data Visualizations

Conventional approaches to viewing and analyzing data event information include reviewing logs of data events for multiple machines and/or sub-systems. One problem with such approaches is that the multiple machines and/or sub-systems can produce a large number of logs, each of which can record a very large number of data events.
Reviewing these logs can be very cumbersome and labor-intensive. Accordingly, in various implementations disclosed herein, a stream visualization of streams of data events (which may also be referred to below as “streaming data visualization” or “streaming data event visualization”) can be implemented to provide a visualization of streams of data events over time. The stream visualization is generated based on event stream data for one or more data paths included in a data structure. The event stream data for a data path can specify respective counts of data events between entities in the data path over multiple time periods. The stream visualization includes representations of events streaming between visualizations of entities in the data path. Such techniques are described below in further detail in conjunction withFIGS.18-22.

3.1. Systems for Streaming Data Visualizations

FIG.18illustrates a more detailed view of the example networked computer environment100ofFIG.1, in accordance with example implementations. As shown, a networked computer environment1800may include, without limitation, a data intake and query system108and a client device404that communicate with one another via one or more networks420. The data intake and query system108and the client device404may function substantially the same as described in conjunction withFIGS.1and4, except as further described herein. Examples of client devices404may include, without limitation, smartphones, tablet computers, handheld computers, wearable devices, laptop computers, desktop computers, servers, portable media players, gaming devices, and so forth. The client device404may include, without limitation, a processor1802, a storage1804, an input/output (I/O) device interface1806, a network interface1808, an interconnect1810, and a system memory1812. In general, the processor1802may retrieve and execute programming instructions stored in system memory1812. The processor1802may be any technically feasible form of processing device configured to process data and execute program code. The processor1802could be, for example, a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), and so forth. The processor1802stores and retrieves application data residing in system memory1812. The processor1802is included to be representative of a single processing device, multiple processing devices, a single processing device having one or more processing cores, and the like. In operation, the processor1802is the master processor of the client device404, controlling and coordinating operations of other system components. The system memory1812stores software application programs and data for use by the processor1802. The processor1802executes software application programs stored within the system memory1812and optionally an operating system. In particular, the processor1802executes software and then performs one or more of the functions and operations set forth in the present application. The storage1804may be a disk drive storage device. Although shown as a single unit, the storage1804may be a combination of fixed and/or removable storage devices, such as fixed disc drives, floppy disc drives, tape drives, removable memory cards, optical storage, network attached storage (NAS), or a storage area network (SAN).
The processor1802communicates with other computing devices and systems via the network interface1808, where the network interface1808is configured to transmit and receive data via one or more communications networks420. The interconnect1810facilitates transmission, such as transmission of programming instructions and application data, between the processor1802, the input/output (I/O) devices interface1806, the storage1804, the network interface1808, and the system memory1812. The I/O devices interface1806is configured to receive input data from user I/O devices and sensors, and/or transmit output data to user I/O devices. These I/O devices include, without limitation, sensor(s)1820, input device(s)1822, and one or more display devices1824. The display device1824generally represents any technically feasible means for displaying an image. For example, the display device may be a liquid crystal display (LCD) display, organic light emitting diode (OLED) display, or digital light processing (DLP) display. In various implementations, the display device1824includes a stereoscopic head-mounted display (e.g., a stereoscopic display in a virtual reality headset). In some other implementations, the display device1824includes a display on a portable device (e.g., a display on a smartphone or tablet computer), and the display may be a touch-sensitive display. The sensor(s)1820acquire data from the physical environment of client device404. For example, the sensor(s)1820may include, without limitation, a gyroscope, a motion sensor, an accelerometer, and an altimeter. These sensors1820may be used by the client device404to determine various parameters associated with a user (e.g., an orientation of the user, movement of the user, a height of the user) and/or gestures performed by the user. The input device(s)1822may include, without limitation, one or more buttons, a keyboard, a touch-sensitive display, a touch-sensitive pad, a mouse or other pointing device, a joystick, a virtual reality controller, and/or a control pad. The input device(s)1822facilitate manipulation of an extended reality environment and/or objects within the extended reality environment by a user. The I/O devices interface1806may also include an audio output unit configured to generate an electrical audio output signal. The additional user I/O devices may further include a speaker configured to generate an acoustic output in response to the electrical audio output signal, and/or a microphone configured to acquire audio signals (i.e., capture sound waves) from the environment and/or the user. The system memory1812may include, without limitation, a visualization application1814and a database1816. The processor1802executes the visualization application1814to perform one or more of the techniques disclosed herein, to communicate with the data intake and query system108, and to store data in and retrieve data from the database1816. In various implementations, the visualization application1814processes data structures that include data event information (e.g., event stream data), generates streaming data visualizations based on the data structures, and outputs the streaming data visualizations to the display device1824. In some implementations, the database1816may include user profiles and configurations. The user profiles and configurations may store information on users and their roles.
Who a user is and what role the user has may affect what data may be accessed by or shown to the user within an extended reality environment (e.g., what data events may be included in a streaming data visualization). In some implementations, the visualization application1814can generate extended reality environments, including any number of virtual objects (e.g., streaming data visualizations) in the extended reality environment, and can output visual content, and optionally also audio and/or haptic content, corresponding to the extended reality environments to the display device1824and other output devices coupled to the client device404(e.g., a speaker, a haptic device). The visualization application1814may receive inputs (e.g., user inputs from the input devices1822, sensor data from the sensors1820) and generate extended reality environments based at least on the received inputs. In various implementations, the display device1824and the sensor(s)1820are integrated into a headset (e.g., a virtual reality headset) coupled to the client device404via the I/O devices interface1806. The virtual reality headset may further include one or more audio speakers and/or a microphone integrated into the headset. In some implementations, the client device404is a virtual reality headset that includes at least the sensor(s)1820and the display device1824and is coupled to one or more input devices1822, as well as including the other components of the client device404as shown. The visualization application1814may generate extended reality environments that are configured for display via a headset-integrated display device1824(virtual reality (VR) environments), and output visual content corresponding to such extended reality environments to the headset-integrated display device1824. The display device1824may be a stereoscopic display device, which may be integrated into a virtual reality headset worn by a user. In some implementations, the stereoscopic display device1824may be configured to display an immersive VR environment to the user. In various implementations, the data intake and query system108includes a data event module1832, data event query(s)1834, data event log(s)1836, and event data structure(s)1838. The data event module1832can be implemented in hardware and/or software. The data event module1832queries the data event logs1836according to the data event queries1834and generates the event data structures1838based on the results of the data event queries1834. In some implementations, the data event module1832is implemented at a data stream processor at the data intake and query system108. The data event queries1834are queries that can be executed on the data event logs1836. A data event query1834specifies certain data event information to be extracted from data event logs1836. For example, a data event query1834can specify that certain data (e.g., records of data events in a certain data path and/or involving certain machines, records of data events of a certain type, or any combination thereof) be extracted, and can optionally specify the one or more data event logs1836on which the data event query1834is to be executed. When the data event module1832executes the data event queries1834, the data event module1832searches for the specified data in the data event logs1836and extracts the specified data from the data event logs1836. The data event query1834can be created and/or configured by a user via a user interface (e.g., an interface provided by the data intake and query system108).
In some implementations, the data event query1834is written in a search processing language. The data event logs1836are logs of data events on one or more data paths between one or more machines, assets, and/or the like, for which the data intake and query system108is gathering data. As used herein, a data event in a data path can be any transmission, or a response to a transmission, between one entity (e.g., a machine, an asset, a group of machines or assets, etc.) and another entity. In some implementations, a data path can be defined as a path of data transmission or data flow from an origin entity to a destination entity (origin to destination of the transmission, source to sink of a piece of data being transmitted), including any branching paths along the flow from the origin to the destination, and optionally including any relevant intermediate entities (e.g., forwarding or relaying machines) along the path or flow of data transmission. Thus, a data path includes a set of entities that includes at least an origin and a destination, and optionally further includes one or more intermediate entities. In some implementations, a data path is further defined to include events of a specific type or status between the origin and the destination. A collection of data events occurring on a data path over time can be referred to as an “event stream” or a “data event stream.” The transmission can be, for example and without limitation, a request for certain data, a request to perform a certain operation, or a handshaking request. The response to the transmission can be, for example and without limitation, data responsive to the request, a response indicating that the request cannot be fulfilled, or an acknowledgement. The data event logs1836can be generated and stored by the data intake and query system108as the data intake and query system108collects data from various machines, assets, and other entities. In some implementations, a data event query1834extracts, for each occurrence of a data event on a data path that is recorded in a data event log1836, a timestamp of the occurrence, entities associated with the data event (e.g., the data source, the data sink, the machine that sent the transmission in the data event, the machine that received the transmission in the data event, any intermediate machine involved in the transmission), a type or status of the data event (e.g., a type of the transmission, such as whether the transmission is a web page request, a cache request, a404page-not-found response, a response with the requested page, a cache hit response with the requested cache data, a cache miss response, etc.), and optionally a host entity (e.g., a machine that recorded the data event in a data event log1836). The data event module1832processes the data extracted by the data event queries1834to analyze the extracted data. In various implementations, the data event module1832analyzes the extracted data to identify information associated with data paths, including for example, timestamps of data event occurrences, entities associated with the data events, and types or statuses of data events (e.g., transmissions and responses, particular types of transmissions and responses). The data event module1832generates event stream data based on the identified information. 
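As a rough illustration of the extraction step just described, the following Python sketch filters raw log records and pulls out the per-occurrence fields named above (timestamp, origin, destination, type or status, and host). The record layout, the field names, and the DataEventRecord and extract_data_events helpers are assumptions made for this example; they are not the format of the data event logs1836or of the data event queries1834.

from dataclasses import dataclass
from typing import Dict, Iterable, Iterator, Optional

@dataclass
class DataEventRecord:
    timestamp: str     # when the data event occurred
    origin: str        # entity that sent the transmission or response
    destination: str   # entity that received it
    status: str        # e.g., "request", "cache_hit", "404"
    host: str          # machine that recorded the event

def extract_data_events(log_records: Iterable[Dict[str, str]],
                        wanted_status: Optional[str] = None) -> Iterator[DataEventRecord]:
    # Walk the log and yield only the records the (hypothetical) query asks for.
    for r in log_records:
        if wanted_status is not None and r.get("status") != wanted_status:
            continue
        yield DataEventRecord(r["time"], r["from"], r["to"], r["status"], r["host"])

log = [
    {"time": "2020-08-12T09:11:02Z", "from": "0.0.0.0", "to": "22.6.77.1",
     "status": "request", "host": "22.6.77.1"},
    {"time": "2020-08-12T09:11:03Z", "from": "22.6.77.1", "to": "0.0.0.0",
     "status": "cache_hit", "host": "22.6.77.1"},
]
for record in extract_data_events(log, wanted_status="request"):
    print(record)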
In some implementations, event stream data for a given data path specifies an origin entity (e.g., the data source, the entity sending the transmission or response in the data event), a destination entity (e.g., the data sink, the entity receiving the transmission or response in the data event) and a type or status of data event. The designation of origin entity and destination entity implies a direction of the data path (i.e., the direction of data travel on the data path). In some implementations, the event stream data for the data path further includes an identification of the data event log1836(e.g., a name of the log file) that is the source of the extracted data event information on which the event stream data is based. In some implementations, information identifying the origin entity, destination entity, type/status of data event, and the log file can be included in the event stream data as a concatenation of delineated (e.g., comma or semicolon separated) strings holding the information. In some implementations, the data event module1832can group individual entities and aggregate event stream data for the group of entities into aggregated event stream data. For example, for data paths involving different user devices as the origin entity and a common destination entity, the different user devices can be grouped together as a grouped origin entity, thereby combining the data paths involving the different user devices into one data path. As another example, different destination entities performing the same function in data paths can be grouped together as well. In some implementations, the data event module1832can map a specific data event type or status to a high-level data event type or status classification for inclusion in the event stream data. For example, various types of request transmissions (e.g., web page requests, cache requests, database requests) can be mapped to a “request” type or status. As another example, various types of successful responses (e.g., response with the requested web page, a cache hit response with the requested data, a database hit response with the requested data) can be mapped to a “success” status. As a further example, various types of failure transmissions (e.g., a404page-not-found response, a cache miss response, a database miss response) can be mapped to a “failure” status. Event stream data would indicate the data event type or status associated with a data path using the high-level data event type classifications (e.g., “request,” “success,” “failure,” etc.) instead of the particular data event types (e.g., web page request, cache request, cache hit response, cache miss response, 404 response, etc.). Additionally, the data event module1832also processes the extracted data to determine counts of data events occurring on a data path per time period in a sequence of time periods. For example, for a given data path from an origin entity to a destination entity, the data event module1832can determine, from the extracted data, per-period counts of events of one or more different types in the event stream on the data path (e.g., per-minute counts over a 12-hour window, per-hour counts over a one-week window, continuous per-minute counts in real-time). The data event module1832generates one or more event data structures1838that include the event stream data, and the event stream data includes the per-period counts described above.
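A minimal sketch of the two processing steps described above, mapping specific event types to high-level classifications and counting events per data path per time period, might look as follows. The mapping table, the field names, and the sixty-second period are assumptions chosen for the example, not the module's actual configuration.

from collections import Counter
from typing import Dict, List, Tuple

# Assumed mapping from specific transmission/response types to the
# high-level classifications described above.
STATUS_CLASS = {
    "web_page_request": "request", "cache_request": "request",
    "page_response": "success", "cache_hit": "success",
    "404": "failure", "cache_miss": "failure",
}

def classify(status: str) -> str:
    return STATUS_CLASS.get(status, status)

def per_period_counts(records: List[dict],
                      period_seconds: int = 60) -> Dict[Tuple[int, str, str, str], int]:
    # Count events keyed by (period start, origin, destination, classification).
    counts: Counter = Counter()
    for r in records:
        period_start = (int(r["epoch"]) // period_seconds) * period_seconds
        counts[(period_start, r["from"], r["to"], classify(r["status"]))] += 1
    return dict(counts)

records = [
    {"epoch": 1597223460, "from": "0.0.0.0", "to": "22.6.77.1", "status": "cache_request"},
    {"epoch": 1597223475, "from": "0.0.0.0", "to": "22.6.77.1", "status": "web_page_request"},
    {"epoch": 1597223502, "from": "22.6.77.1", "to": "0.0.0.0", "status": "404"},
]
print(per_period_counts(records))
# Two "request" events and one "failure" event land in the same one-minute period.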
In some implementations, the event data structure1838is a table, where each row corresponds to a time period. Each column in the table can correspond to a certain type of event on a data path. Accordingly, for a given type of event on a data path, the cells in the corresponding column hold the counts of data events of that type on that data path in the indicated time periods. In some implementations, the data event logs1836are updated in real-time, and the data event module1832can execute the data event queries1834on the data event logs1836periodically, upon receiving a trigger, and/or in real-time to extract new and/or updated data from the data event logs1836. The data event module1832can process the extracted new and/or updated data to update the event data structures1838with new and/or updated event stream data (e.g., counts of events in newly elapsed time periods, event stream data in new data paths, etc.). The data event module1832can transmit the event data structures1838to the client device404periodically, upon update, and/or on-demand (e.g., upon request by the visualization application1814). The visualization application1814reads the event stream data included in the event data structures1838to identify data paths to be visualized, and the entities and counts of data events associated with the data paths. The visualization application1814then proceeds to generate a streaming data visualization that includes visualizations (e.g., representations) of the identified entities and visualizations (e.g., representations) of event streams on data paths. The visualization application1814outputs the streaming data visualization to the display device1824for display to a user. In some implementations, the visualization application1814outputs the streaming data visualization for display in a virtual reality environment (e.g., output to a virtual reality headset). In some implementations, the data event queries1834are configurable by the user. For example, the data event module1832could provide a user interface, accessible via the client device404, for creating and/or configuring data event queries1834. A user can, via the user interface, specify and/or configure one or more data event queries1834in any technically feasible manner (e.g., writing a query using a search processing language, writing a natural language query). In some implementations, one or more data path specifications in the event data structures1838can specify branching paths of a data flow from an origin to a destination. For example, there can be multiple branches of data flow from an origin to a destination, and each of those paths can be specified with individual data path specifications. Alternatively, data path specifications in the event data structures1838can have a format that enumerates, between the same origin and destination and for the event type, the different branching paths going from the origin to the destination. In some implementations, the data intake and query system108further includes definitions of data stream pipeline topologies. In some implementations, a data stream pipeline is a path or “pipeline” of data flowing within a system from an origin/source to a destination/sink, for which data can be gathered and analyzed by a data stream processor (e.g., a data stream processor module or system running in and/or in conjunction with the data intake and query system108). A topology of a data stream pipeline specifies the origin/source and destination/sink entities, any intermediate entities, and types of data associated with the data stream pipeline. Accordingly, a data path specification for the event data structures1838can specify a data path that follows the topology of a data stream pipeline, and counts of data events matching that data path specification can correspond to data flows in that data stream pipeline.
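The relationship just described, between a pipeline topology and the data path specifications derived from it, can be sketched as follows. The PipelineTopology class and the dictionary form of the specifications produced here are illustrative assumptions; only the idea that a specification names the origin/source, destination/sink, any intermediate entities, and an event type comes from the description above.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PipelineTopology:
    # Assumed representation of a data stream pipeline topology.
    origin: str                      # origin/source entity
    destination: str                 # destination/sink entity
    intermediates: List[str] = field(default_factory=list)
    data_types: List[str] = field(default_factory=list)

def path_specifications(topology: PipelineTopology, logfile: str) -> List[Dict[str, object]]:
    # One specification per data type carried by the pipeline; each follows
    # the topology from origin to destination through any intermediates.
    return [{"logfile": logfile, "from": topology.origin, "to": topology.destination,
             "via": list(topology.intermediates), "status": dtype}
            for dtype in topology.data_types]

pipeline = PipelineTopology(origin="0.0.0.0", destination="22.6.77.1",
                            intermediates=["10.0.0.9"],
                            data_types=["request", "success", "failure"])
for spec in path_specifications(pipeline, "webpageserver.log"):
    print(spec)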
3.2. Techniques for Streaming Data Visualizations

As described, visualization application1814reads the event data structures1838to generate a streaming data visualization.FIG.19illustrates a conceptual diagram of an event data structure1838, in accordance with example implementations. A data structure1900, as an example of an event data structure1838, is illustrated as a table, but it should be appreciated that event data structures1838can be generated, stored, and accessed as any suitable data structure format, including but not limited to a table. The data structure1900includes multiple rows1906, each of which corresponds to a time period or increment in a sequence of time periods/increments over a time window. In some implementations, the time periods/increments in the sequence of time periods/increments are 1-minute increments over a time window (e.g., a 24-hour day). Further, in some implementations, the span of a time period/increment and the span of the time window are configurable by a user or a machine-learned system. For example, instead of minutes over a 24-hour day, the time periods can be configured to be 1-hour increments and the time window to be one week. Further, in some implementations, event stream data for more coarse time periods can be determined from event stream data for finer time periods (e.g., event stream data for 1-hour increments can be determined from event stream data for 1-minute increments, given sufficient event stream data in the data structure). The sequence of time periods is indicated in header column1904. For example, row1906-1corresponds to the 9:11 AM minute on Aug. 12, 2020, and row1906-2corresponds to the 9:10 AM minute on Aug. 12, 2020. The data structure1900also includes one or more columns1902, each of which corresponds to a data path, and, in particular, a type of event on a certain data path. The specifications1908of data paths, including event types, for the columns1902are indicated in a header row. The data structure1900as shown includes columns1902-1through1902-n, with data path specifications1908-1through1908-n, respectively. For example, a data path specification1908as shown includes a concatenated, comma-separated string that includes a string (Logfile) identifying the source data event log1836, a string identifying a host (Host) that recorded the data events corresponding to the data path specification, a string identifying an origin entity (From) of the data events corresponding to the data path specification, a string identifying the destination entity (To) of the data events corresponding to the data path specification, and a string identifying a type or status (Status) of the data events corresponding to the data path specification. In some implementations, the host, origin entity, and destination entity strings are strings indicating IP addresses of the host, origin entity, and destination entity, respectively. In some implementations, a group entity (e.g., a group of machines, user machines) can be represented by a predefined string, in an IP address format, that indicates the type of group entity (e.g., “0.0.0.0” for indicating user machines).
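For illustration, a parser for the comma-separated specification layout just described (Logfile, Host, From, To, Status) could be as simple as the following sketch. The DataPathSpec name is hypothetical, and the sample string merely follows the format above, including the “0.0.0.0” convention for a group of user machines.

from typing import NamedTuple

class DataPathSpec(NamedTuple):
    logfile: str       # source data event log
    host: str          # machine that recorded the events
    origin: str        # "From" entity
    destination: str   # "To" entity
    status: str        # event type or status

def parse_spec(spec: str) -> DataPathSpec:
    parts = spec.split(",")
    if len(parts) != 5:
        raise ValueError(f"expected 5 comma-separated fields, got {len(parts)}")
    return DataPathSpec(*parts)

spec = parse_spec("webpageserver.log,22.6.77.1,0.0.0.0,22.6.77.1,request")
print(spec.origin)                     # '0.0.0.0' -> grouped user machines
print(spec.host == spec.destination)   # True in this sample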
In some implementations, the data path specification1908can include, in addition or alternatively to the host, one or more strings indicating one or more intermediate entities in the data path (e.g., an entity, specified in IP address format, that relays data events from the origin entity to the destination entity). As a concrete example, a data path specification string may be “webpageserver.log,22.6.77.1,0.0.0.0,22.6.77.1,request”, where the string “webpageserver.log” indicates a source data event log1836, IP address string “22.6.77.1” identifies the host that recorded the data events, the IP address string “0.0.0.0” identifies the origin entity as user machines, the IP address string “22.6.77.1” identifies the destination entity (accordingly, in this example data path specification the same entity is the host and the destination), and the string “request” indicates the type of data event specified by the data path specification. In some implementations, each data path specification is a unique combination of the strings included in the data path specification. Thus, for example, data path specifications1908-1and1908-2are placed in separate columns in the data structure in accordance with the different statuses (Status1 vs. Status2), even though the source data event log, host, origin entity, and destination entity are the same. Similarly, data path specifications1908-1and1908-3are placed in separate columns in the data structure in accordance with the different destination entities (To1 vs. To2), even though the source data event log, host, origin entity, and status are the same. For each data path specification1908and per time period in the sequence of time periods, the data structure1900includes a count of occurrences of data events matching the data path specification. For example, for data path specification1908-1, a count1910-1in row1906-1indicates that 44 data events matching the data path specification1908-1occurred in the 9:11 AM minute on Aug. 12, 2020. A count1910-2at row1906-2indicates that 49 data events matching the data path specification1908-1occurred in the 9:10 AM minute on Aug. 12, 2020. Of course, counts of occurrences are enumerated in non-negative integers (i.e., 0 or above).
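Using the counts just described (44 events in the 9:11 AM minute and 49 in the 9:10 AM minute for data path specification1908-1), the sketch below shows how rows of per-minute counts could be rolled up into the coarser per-hour counts mentioned earlier for data structure1900. The dictionary-of-dictionaries layout is only a stand-in for the table.

from collections import defaultdict
from datetime import datetime
from typing import Dict

def roll_up_to_hours(per_minute: Dict[str, Dict[str, int]]) -> Dict[str, Dict[str, int]]:
    # per_minute maps "YYYY-MM-DD HH:MM" -> {data path specification: count};
    # the result sums those counts into "YYYY-MM-DD HH:00" buckets.
    hourly: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for minute, row in per_minute.items():
        hour = datetime.strptime(minute, "%Y-%m-%d %H:%M").strftime("%Y-%m-%d %H:00")
        for spec, count in row.items():
            hourly[hour][spec] += count
    return {h: dict(row) for h, row in hourly.items()}

spec = "webpageserver.log,22.6.77.1,0.0.0.0,22.6.77.1,request"
table = {
    "2020-08-12 09:10": {spec: 49},
    "2020-08-12 09:11": {spec: 44},
}
print(roll_up_to_hours(table))
# {'2020-08-12 09:00': {'webpageserver.log,22.6.77.1,0.0.0.0,22.6.77.1,request': 93}}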
For example, these entities can include servers that receive and process requests from user devices, a user authentication sub-system, an account creation sub-system, and/or the like. The visualization2000further includes a representation2014of a cache and a representation2016of a database in the web server system. The visualization2000also includes lines connecting entity representations. The lines represent data paths in the web server system. For example, an entity representation2004is connected to the cache representation2014by lines2024and2028. Line2024represents one or more data paths from the entity2004to the cache2014(that is, the entity representation2004represents the origin entity and the cache representation2014represents the destination entity), and line2028represents one or more data paths from the cache representation2014to the entity representation2004(that is, the entity representation2004represents the destination entity and the cache2014represents the origin entity). In some implementations, the two lines2024and2028may be combined into one line. In some implementations, a line2024and/or2028may be further separated into separate lines representing each data event type or status on the data path. In some implementations, a line connecting two entities can be wider on one end than the other (e.g., tapered) to indicate the directionality of the corresponding data paths. For example, the line2024can be tapered to be wider at the end proximate to the cache representation2014, indicating the directionality of data paths from the entity representation2004to the cache representation2014. In various implementations, the visualization2000visualizes streams of data events as a graph of nodes and edges. Nodes in the graph correspond to entities, and edges in the graph correspond to data paths. Accordingly, in the visualization2000, the entity representations2002,2004,2006,2008,2010,2012,2014, and2016are nodes of a graph, representing entities in the web server system. The lines (e.g., lines2024and2028) connecting the entity representations are edges of the graph, representing data paths between the entities in the web server system. The visualization2000also includes a timeline2018, which includes multiple time unit indicators2020. Each time unit indicator2020corresponds to a time period in a sequence of time periods (e.g., time periods in the event data structure1838). For example, a time unit indicator2020can represent a 1-minute time period in a sequence of 1-minute time periods. In some implementations, the time units in the timeline2018are configurable by the user. That is, the scale of the timeline2018can be adjusted to the preference of the user (e.g., the user may want to view the visualization2000at an hour-by-hour scale rather than minute-by-minute). The visualization2000further includes “particles” that represent data events. As shown, a particle in the visualization2000is an object (triangle-shaped as shown) that is emitted by the origin entity representation and moves to a destination entity representation along a line connecting the origin entity representation and destination entity representation. Thus, for example, particles2026moving along line2024represent one or more event streams flowing from entity representation2004to cache representation2014, and particles2030moving along line2028represent one or more event streams flowing from cache representation2014to entity representation2004.
In some implementations, a particle represents a unit of data event occurrences (e.g., unit of 1 occurrence, unit of 5 occurrences rounded up or down, unit of 10 occurrences rounded up or down) matching a particular data path specification. For example, for a first data path specification from the entity2004to the cache2014that has 47 occurrences in a time period, the visualization2000can show 5 particles emitted from the entity representation2004and moving to the cache representation2014in the corresponding time period in the visualization2000. Accordingly, for a data path specification, the number of particles emitted by the origin entity and moving to the destination entity per time period represents a flow rate associated with the data path specification; the flow rate is determined based on the counts of the data events matching the data path specification across the time periods included in event data structures1838. In some implementations, the number of occurrences of data events matching a data path specification is represented by a size of the particle in addition or alternatively to the number of particles. For example, a particle corresponding to a larger number of occurrences could be shown with a larger size than a particle corresponding to a smaller number of occurrences. Further, in some implementations, a data path specification can be mapped to a color or some other visual pattern (e.g., a fill pattern) for visualization purposes, and particles associated with the data path specification are generated and displayed with the mapped color or visual pattern. For example, particles for “success” data events could be colored green, particles for “failure” data events could be colored red, and particles for “request” data events could be colored blue. As another example, in the visualization2000as shown, the particles2026and2030have different fill patterns indicating the different data event type or status those particles represent. In some other implementations, the type or status of data event is represented by a texture or shape of the particle, in addition or alternatively to color or visual pattern within the particle. For example, particles for “success” data events can be shaped as arrows, particles for “failure” data events can be shaped as squares, and particles for “request” data events can be shaped as triangles. The visualization application1814can animate or play back the visualization2000to show streams of data events over time. Thus, when the visualization2000is being animated or played back, particles are emitted by representations of origin entities and move toward representations of destination entities. As the visualization2000is played back, a current time period indicator2022moves across the timeline2018and highlights a time unit indicator2020corresponding to the time period being visualized in the visualization2000. A user can manually move, via an input device1822and optionally a cursor, pointer, and/or the like displayed in conjunction with the visualization2000, the current time period indicator2022across the timeline2018to go forward or backward in the animation of the visualization2000, similar to scrubbing playback of a video clip.
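One plausible reading of the particle-count and color mapping described above is sketched below: the number of particles shown for a period is the event count divided by a unit size and rounded (47 occurrences with a unit of 10 yielding 5 particles, as in the example), and the high-level status maps to a display color. The exact rounding rule and the minimum of one particle are assumptions.

def particles_for_period(event_count: int, unit: int = 10) -> int:
    # Round count/unit to the nearest whole particle; show at least one
    # particle whenever any matching events occurred (an assumption).
    if event_count <= 0:
        return 0
    return max(1, round(event_count / unit))

# Color mapping mirroring the example above: requests blue, successes
# green, failures red.
STATUS_COLOR = {"request": "blue", "success": "green", "failure": "red"}

print(particles_for_period(47))        # 5, matching the 47-occurrence example
print(particles_for_period(3))         # 1
print(STATUS_COLOR["failure"])         # 'red'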
In some implementations, a particle speed (a speed at which a given particle moves from the origin entity representation to the destination entity representation) is based on the time scale of the timeline2018. For example, when the visualization is played back, particles would be shown as moving faster along a line if the timeline2018is scaled to 1-hour periods (each time unit indicator2020represents a 1-hour time period) rather than 1-minute periods (each time unit indicator2020represents a 1-minute time period). In various implementations, a user can select or hover over an entity representation in the visualization to hide data event streams not associated with the selected or hovered-over entity (e.g., in order to focus on data paths involving a particular entity). For example, inFIG.20B, the user can make an input via a cursor2040and an input device1822to hover the cursor2040over or to select the entity representation2012. In response to the selection or hover-over, the visualization application1814can hide or obscure (e.g., make less visible, make transparent) the data paths (i.e., edges and particles) not involving the entity representation2012, as shown inFIG.20C. Accordingly, inFIG.20Cthe visualization2000shows edges connecting the entity representation2012to the entity representations2004,2006,2008,2010,2014, and2016, respectively, and particles moving along those edges. In some implementations, if the selected or hovered-over entity representation2012represents an entity that is an intermediate entity on one branching path of a data flow from an origin to a destination, representations of entities and event streams corresponding to the other branching paths in that data flow can be displayed or hidden in the visualization2000depending on the preferences of the user. Similarly, as shown inFIG.20D, the user can make an input via the cursor2040and the input device1822to hover the cursor2040over or to select the entity representation2004. In response to the selection or hover-over, the visualization application1814can hide the edges and particles not involving the entity representation2004, as shown inFIG.20E. Accordingly, inFIG.20E, the visualization2000shows edges connecting the entity representation2004to the entity representations2002,2012, and2014, respectively, and particles moving along these edges. In various implementations, the visualization application1814places and/or arranges the entity representations2002,2004,2006,2008,2010,2012,2014, and2016according to the data paths included in the visualization2000. For example, representations of entities that are origin and/or destination entities and not intermediate entities can be placed near the ends of the visualization2000(e.g., user devices2002near one end of the visualization2000, and cache2014and database2016near the opposite end of the visualization2000). An entity that is an intermediate entity on a data path included in the visualization2000can be placed between the associated origin and destination entities. In various implementations, the visualization2000can be further manipulated by the user to change viewing angles, position, etc. For example, when displayed in a virtual reality environment, the user could, via the input device1822, move and/or rotate the visualization2000. As a further example, the user could move around in the physical environment to cause a position change within the virtual reality environment, thereby “walking around” the visualization2000to change position, viewing angle, and/or the like relative to the visualization2000.
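The selection behavior shown inFIGS.20B-20Ecan be approximated with a small filter over the graph's edges, as in the hedged sketch below; the tuple-based edge representation and the "visible"/"hidden" states are illustrative only.

from typing import Dict, List, Tuple

def edge_visibility(edges: List[Tuple[str, str]],
                    selected: str) -> Dict[Tuple[str, str], str]:
    # Keep an edge visible only if the selected entity is its origin or
    # destination; everything else is hidden (or could be dimmed instead).
    return {edge: ("visible" if selected in edge else "hidden") for edge in edges}

edges = [("user_devices", "server_a"), ("server_a", "cache"),
         ("cache", "server_a"), ("server_b", "database")]
print(edge_visibility(edges, "server_a"))
# Only ('server_b', 'database') is marked 'hidden'.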
Further, in some implementations, representations of entities within the visualization2000are also manipulable (e.g., movable within the visualization). For example, a user could select and grab the entity representation2012(e.g., using the cursor2040and the input device1822) to move the entity representation2012to another position. The visualization application1814automatically adjusts the edges (e.g., adjust the length, angle, etc. thereof) connected to the entity representation2012in the visualization2000so that the edges remain connected to the entity representation. FIG.21illustrates a flow diagram of method steps for generating a data structure of data event information, in accordance with various implementations. Although the method is described in conjunction with the systems ofFIGS.1-20E, persons of ordinary skill in the art will understand that any system configured to perform the method, in any order, is within the scope of the present disclosure. As shown, a method2100begins at step2102, where an application or module obtains one or more logs of data events. The data event module1832can obtain the data event logs1836from the data intake and query system108. At step2104, the data event module1832queries data event logs to extract data event information. The data event module1832executes the data event queries1834on the data event logs1836to extract data event information (e.g., records of data events). At step2106, the data event module1832generates a data structure of data event information. The data event module1832generates one or more event data structures1838based on the data event information extracted in step2104via the data event queries1834. At step2108, the data event module1832transmits the data structure to a client device. The data event module1832can transmit the event data structure(s)1838to the client device404, and in particular to the visualization application1814. FIG.22illustrates a flow diagram of method steps for generating a visualization of streams of data events, in accordance with example implementations. Although the method is described in conjunction with the systems ofFIGS.1-20E, persons of ordinary skill in the art will understand that any system configured to perform the method, in any order, is within the scope of the present disclosure. As shown, a method2200begins at step2202, where an application or module receives a data structure of event stream data. The visualization application1814at the client device404receives one or more event data structures1838from the data intake and query system108. At step2204, the visualization application1814processes the data structure to determine data events for visualization. The visualization application1814processes the event data structures1838(e.g., processes the event stream data included within) to identify the included data path specifications and counts of data events matching the data path specifications over the time periods covered by the event data structures1838. At step2206, the visualization application1814generates a stream visualization of data events. The visualization application1814generates a stream visualization (e.g., the visualization2000) based on the event stream data (e.g., data path specifications and counts of data events) included in the event data structures1838. At step2208, the visualization application1814outputs the stream visualization to a display device. The visualization application1814can output the visualization2000to a display device1824.
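Methods2100and2200can be condensed into the following end-to-end Python sketch, in which the server side builds a period-by-period count table from log records and the client side renders a textual stand-in for the stream visualization. The record layout, the table format, and both function bodies are placeholders for the components described above, not the actual implementation.

from typing import Dict, List

def method_2100(logs: Dict[str, List[dict]]) -> Dict[str, Dict[str, int]]:
    # Steps 2102-2106: obtain the logs, query them, and build a table of
    # {time period: {data path specification: count}}; step 2108 would
    # transmit the result to the client device.
    table: Dict[str, Dict[str, int]] = {}
    for log_name, records in logs.items():
        for r in records:
            spec = f"{log_name},{r['host']},{r['from']},{r['to']},{r['status']}"
            row = table.setdefault(r["minute"], {})
            row[spec] = row.get(spec, 0) + 1
    return table

def method_2200(table: Dict[str, Dict[str, int]]) -> List[str]:
    # Steps 2202-2206: receive the data structure, identify data paths and
    # counts, and emit one line per period, path, and count as a textual
    # stand-in for the stream visualization output in step 2208.
    frames = []
    for minute in sorted(table):
        for spec, count in table[minute].items():
            _logfile, _host, origin, destination, status = spec.split(",")
            frames.append(f"{minute}: {origin} -> {destination} [{status}] x{count}")
    return frames

logs = {"webpageserver.log": [
    {"minute": "09:10", "host": "22.6.77.1", "from": "0.0.0.0",
     "to": "22.6.77.1", "status": "request"},
    {"minute": "09:10", "host": "22.6.77.1", "from": "0.0.0.0",
     "to": "22.6.77.1", "status": "request"},
]}
print("\n".join(method_2200(method_2100(logs))))
# 09:10: 0.0.0.0 -> 22.6.77.1 [request] x2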
In some implementations, the visualization application1814outputs the visualization2000for display within a virtual reality environment, individually or in conjunction with other virtual reality content (e.g., panels showing dashboards that include visualizations of data). In sum, techniques are presented for generating visualizations of streams of data events. A visualization application determines data events for visualization based on information in a data structure. The data structure can be generated based on logs of data events, and specifies counts of data events per data path specification and per time period in a sequence of time periods. A data path specification can be derived from the data event logs, and the data path specification can specify an origin entity, a destination entity, and an event type/status. The visualization application generates a visualization of the data events, where the visualization includes, for a given data event, representations of the origin entity, the destination entity, and a stream of the data events. The visualization application outputs the visualization to a display device, which can display the visualization in an extended reality environment (e.g., a virtual reality environment). The visualization can be manipulated by a user (e.g., a viewer) to change the viewing angle, focus on certain entities, or the like. One advantage of the disclosed techniques is that information for streams of data events associated with multiple machines and sub-systems can be collected in one visual presentation. That visual presentation facilitates more efficient analysis and identification of issues of interest associated with the data events and/or the machines and sub-systems. Another advantage of the disclosed techniques is that streams of data events can be presented in a format that facilitates both system-wide analysis and partial analysis. The graphical and/or animated format of the visualization of data events facilitates analysis of data events system-wide and in a particular portion of the system (e.g., in and out of a selected entity). 1. In some implementations, a computer-implemented method comprises receiving a data structure from a data intake and query system, wherein the data structure includes event stream data associated with a data path, and wherein the data path comprises a set of entities, including an origin entity and a destination entity; generating a first entity visualization of the origin entity and a second entity visualization of the destination entity; generating a stream visualization of the event stream data, wherein the stream visualization comprises a visualization of one or more events streaming between the first entity visualization and the second entity visualization; and causing the first entity visualization, the second entity visualization, and the stream visualization to be presented in an extended reality (XR) environment. 2. The method of clause 1, wherein the XR environment comprises a virtual reality (VR) environment. 3. The method of clauses 1 or 2, wherein the origin entity or the destination entity comprises a machine. 4. The method of any of clauses 1-3, wherein the origin entity or the destination entity comprises a group of machines. 5. The method of any of clauses 1-4, wherein the data structure is generated based on a log file associated with the data path. 6. The method of any of clauses 1-5, wherein the one or more events are associated with an event type. 7. 
The method of any of clauses 1-6, wherein the event stream data associated with the data path comprises, for each time period in a sequence of time periods, a count of events transmitted between the origin entity and the destination entity in the data path. 8. The method of any of clauses 1-7, wherein causing the first entity visualization, the second entity visualization, and the stream visualization to be presented in the XR environment comprises causing an XR object to be presented in the XR environment, wherein the XR object comprises a graph, the first entity visualization and the second entity visualization are nodes in the graph, and the stream visualization is an edge in the graph. 9. The method of any of clauses 1-8, wherein the visualization of the one or more events streaming between the first entity visualization and the second entity visualization comprises a flow of particles streaming from the first entity visualization to the second entity visualization, and a particle rate of the flow of particles is based on counts of the one or more events in a sequence of time periods. 10. The method of any of clauses 1-9, wherein the visualization of the one or more events streaming between the first entity visualization and the second entity visualization comprises a flow of particles streaming from the first entity visualization to the second entity visualization, and wherein the particles have a particle color associated with the data path. 11. The method of any of clauses 1-10, wherein the visualization of the one or more events streaming between the first entity visualization and the second entity visualization comprises a line having a first width at a first end proximate to the first entity visualization and a second width at a second end proximate to the second entity visualization. 12. The method of any of clauses 1-11, wherein the visualization of the one or more events streaming between the first entity visualization and the second entity visualization comprises a flow of particles streaming from the first entity visualization to the second entity visualization, and wherein the particles have a particle shape associated with the data path. 13. The method of any of clauses 1-12, wherein the visualization of the one or more events streaming between the first entity visualization and the second entity visualization comprises a flow of particles streaming from the first entity visualization to the second entity visualization, and the particles stream from the first entity visualization to the second entity visualization at a speed based on a time scale at which the stream visualization is displayed. 14. The method of any of clauses 1-13, wherein the data path is defined based on a topology of a data stream pipeline, wherein the topology of the data stream pipeline specifies the origin entity and the destination entity. 15. The method of any of clauses 1-14, wherein the data path is defined based on a topology of a data stream pipeline, wherein the topology of the data stream pipeline specifies the origin entity and the destination entity, and wherein the origin entity represents a data source, the destination entity represents a data sink, and other entities of the set of entities represent branching paths of a data flow from the origin entity to the destination entity. 16. 
The method of any of clauses 1-15, wherein the set of entities further comprises a further entity, wherein the stream visualization of the event stream data further comprises a visualization of one or more events streaming between the origin entity and the further entity, and one or more events streaming between the destination entity and the further entity. 17. The method of any of clauses 1-16, wherein the origin entity and the destination entity represent branching paths of a data flow. 18. In some implementations, one or more non-transitory computer-readable storage media include instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of receiving a data structure from a data intake and query system, wherein the data structure includes event stream data associated with a data path, and wherein the data path comprises a set of entities, including an origin entity and a destination entity; generating a first entity visualization of the origin entity and a second entity visualization of the destination entity; generating a stream visualization of the event stream data, wherein the stream visualization comprises a visualization of one or more events streaming between the first entity visualization and the second entity visualization; and causing the first entity visualization, the second entity visualization, and the stream visualization to be presented in an extended reality (XR) environment. 19. The one or more non-transitory computer-readable storage media of clause 18, wherein the event stream data associated with the data path comprises, for each time period in a sequence of time periods, a count of events transmitted between the origin entity and the destination entity in the data path. 20. In some implementations, a computing device comprises a memory that includes a visualization application program; and a processor that is coupled to the memory and, when executing the visualization application program, is configured to receive a data structure from a data intake and query system, wherein the data structure includes event stream data associated with a data path, and wherein the data path comprises a set of entities, including an origin entity and a destination entity; generate a first entity visualization of the origin entity and a second entity visualization of the destination entity; generate a stream visualization of the event stream data, wherein the stream visualization comprises a visualization of one or more events streaming between the first entity visualization and the second entity visualization; and cause the first entity visualization, the second entity visualization, and the stream visualization to be presented in an extended reality (XR) environment. 21. The device of clause 20, wherein the data path is defined based on a topology of a data stream pipeline, wherein the topology of the data stream pipeline specifies the origin entity and the destination entity. Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present disclosure and protection. The descriptions of the various implementations have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the implementations disclosed. 
Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. Aspects of the present implementations may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware implementation, an entirely software implementation (including firmware, resident software, micro-code, etc.) or an implementation combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” In addition, any hardware and/or software technique, process, function, component, engine, module, or system described in the present disclosure may be implemented as a circuit or set of circuits. Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to implementations of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such processors may be, without limitation, general purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays. The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. 
In this regard, each block in the flowchart or block diagrams may represent one or more modules, segments, or portions of code, which each comprise one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While the preceding is directed to implementations of the present disclosure, other and further implementations of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
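By way of further illustration of the techniques summarized above for visualizing streams of data events, the following is a minimal, non-limiting Python sketch of the event-count data structure (counts per data path specification and per time period) and of a mapping from per-period counts to a particle rate for the stream visualization. The log field names ("origin", "dest", "type", "ts"), the period length, and the events_per_particle scaling factor are assumptions introduced for this sketch and are not part of the disclosed implementations.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical log records: each event names an origin entity, a destination
# entity, an event type/status, and a timestamp (field names are assumptions).
events = [
    {"origin": "web-01", "dest": "db-01", "type": "query:ok",
     "ts": "2023-05-01T12:00:03Z"},
    {"origin": "web-01", "dest": "db-01", "type": "query:ok",
     "ts": "2023-05-01T12:00:41Z"},
    {"origin": "web-01", "dest": "cache-01", "type": "get:miss",
     "ts": "2023-05-01T12:01:10Z"},
]

PERIOD_SECONDS = 60  # length of each time period in the sequence (assumption)


def period_index(ts: str, t0: datetime) -> int:
    """Map an ISO-8601 timestamp to its position in the sequence of periods."""
    t = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return int((t - t0).total_seconds() // PERIOD_SECONDS)


def build_counts(events, t0):
    """Count events per data path specification (origin, dest, type) and per period."""
    counts = defaultdict(lambda: defaultdict(int))
    for e in events:
        path = (e["origin"], e["dest"], e["type"])
        counts[path][period_index(e["ts"], t0)] += 1
    return counts


def particle_rate(count: int, events_per_particle: float = 1.0) -> float:
    """Map a per-period event count to a particle emission rate for the stream."""
    return count / events_per_particle


t0 = datetime(2023, 5, 1, 12, 0, tzinfo=timezone.utc)
for path, per_period in build_counts(events, t0).items():
    for period, count in sorted(per_period.items()):
        print(path, "period", period, "count", count,
              "particles/period", particle_rate(count))
```

In an actual deployment the counts would be produced by the data intake and query system described above; the sketch only illustrates the shape of the data that the visualization application consumes, namely one count per data path and per time period.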
253,019
11861768
In one or more implementations, not all of the depicted components in each figure can be required, and one or more implementations can include additional components not shown in a figure. Variations in the arrangement and type of the components can be made without departing from the scope of the subject disclosure. Additional components, different components, or fewer components can be utilized within the scope of the subject disclosure. DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth to provide a full understanding of the present disclosure. It will be apparent, however, to one ordinarily skilled in the art, that the embodiments of the present disclosure can be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail so as not to obscure the disclosure. The subject disclosure addresses the shortcomings of conventional data analysis techniques, which have failed to provide adequate tools for approximation of data clusters (including collections of data points, and the like), by providing for systems and methods for generating polygon representations of data. Data analysis is an important tool for discovering useful information in data, which aids in informing conclusions and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and includes inspecting, cleansing, transforming, and modelling data. Data analysis is used across different disciplines, such as business, science, and social science. Conventionally, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively. Data analysis can also leverage the power of data visualization to produce effective results. Data visualization includes mappings between the original data (e.g., numerical data) and graphic elements (e.g., lines or points in a chart). The mapping determines how the attributes of these elements vary according to the data. For example, a bar chart can map a length of a bar to a magnitude of a variable. Additionally, data visualization is an efficient way of communicating data significance when the data is numerous. In computational geometry, an alpha shape (e.g., α-shape) is a family of piecewise linear simple curves in the Euclidean plane associated with the shape of a finite set of points. According to illustrative embodiments, an alpha shape associated with a set of points includes a convex hull. For example, the convex hull of a shape can include the smallest convex set that contains it. The convex hull can be defined either as the intersection of all convex sets containing a given subset of a Euclidean space, or equivalently as the set of all convex combinations of points in the subset. For a bounded subset of the plane, the convex hull can be visualized as the shape enclosed by a rubber band stretched around the subset. Alpha shapes and convex hulls can be utilized to generate polygon representations of data sets, however, conventionally neither are very robust against noise and/or outliers. Therefore, there is a need for better alternatives to alpha shapes (and the like) for visualizing data. The subject disclosure provides for systems and methods for generating polygon representations of data. In exemplary implementations, core data points are calculated. An epsilon distance is also calculated. 
The core data points can be dilated by a multiple of the epsilon distance. All the dilations can be combined and reduced to polygons. According to illustrative embodiments, a process utilizes denser (e.g., core) regions of the data to dilate, union, reduce, and simplify the data points into polygons to create a polygon representation of the data that is not distorted by outliers and noise. According to illustrative embodiments, a method for generating a polygon representation of a plurality of data points includes receiving a representation of data points from a data source. For example, the representation can include at least a two-dimensional (2D) data plot. The method can also include calculating a core representation of the data points. The method can also include dilating the core representation of the data points by multiplying each data point by a multiple of an epsilon distance. The method can also include generating dilated points based on the multiplying of each data point. The method can also include generating a polygon representation of the data points based at least in part on intersections between the dilated points. The method can also include causing display of the polygon representation through a user interface. The disclosed system(s) address a problem in traditional data analysis techniques tied to computer technology, namely, the technical problem of visualizing data. The disclosed system solves this technical problem by providing a solution also rooted in computer technology, namely, by providing for generating polygon representations of data. The disclosed subject technology further provides improvements to the functioning of the computer itself because it improves processing of the data, and also improves efficiency in grouping and analyzing the data. FIGS.1A and1Billustrate exemplary data visualization diagrams100and150that are generated from data102, according to certain illustrative embodiments of the present disclosure. Referring now toFIG.1A, exemplary visualization diagrams100includes diagrams for data102, convex hulls104, and alpha shapes106. For example, data diagrams102a-102ecan illustrate different distributions of data. In an implementation, the data102a-102ecan be from a data source, including, but not limited to, cloud storage, servers, hard drives, etc. According to illustrative embodiments, convex hulls104can be generated as lines around the data102. As illustrated, convex hulls104dand104eare much more complex than convex hulls104a,104b, and104c. According to illustrative embodiments, alpha shapes106can be generated based on outlines that more closely follow the data102than for the convex hulls104. However, alpha shapes106dand106eare more complex than alpha shapes106a,106b, and106c. Referring now toFIG.1B, exemplary visualization diagrams150includes diagrams for the data102, dense points108(e.g., core points), and dilated points110, combined (e.g., unioned) buffers112, reduced polygons114, simplified polygons116, largest polygons118, and isolated largest polygons120(e.g., polygons only). According to illustrative embodiments, core points108a-108ecan be generated based on a density of points. Similarly, the dilated points110a-110ecan be generated based on a dilation of points. The combined buffers112can also generate enclosures (e.g., combined buffers) that are too large for the data102, such as in112a,112b, and112d. The reduced polygons114a-114ealso do not adequately capture outliers, such as in114band114c. 
The simplified polygons116a-116eand the largest polygons118a-118ealso do not adequately capture outliers. According to illustrative embodiments, the isolated largest polygons120a-120eillustrate a simplified visual representation of the data102a-102e. However, the data points themselves are not adequately illustrated by the isolated largest polygons120a-120e. FIG.2illustrates exemplary approximation polygon generation200, according to certain illustrative embodiments of the present disclosure. For example, data202can be received and displayed in a data plot. The data plot includes at least a two-dimensional (2D) representation of data. According to illustrative embodiments, a convex hull204can be generated for the data202. For example, a single curve can be drawn around the data plot that best includes all of the data202. Next, an alpha shape206can be generated that further improves on the accuracy of the convex hull204. Finally, an approximation polygon208can be generated that most closely includes the data202while also reducing outliers. The approximation polygon208can then be utilized to generate interactive data visualizations for the data202. According to illustrative embodiments, core data points can be calculated for the data202. For example, kernel density estimation can be utilized to calculate a probability distribution function. In an implementation, a Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) can also be utilized. It is understood that other outlier detection algorithms can be utilized without departing from the scope of the disclosure. According to illustrative embodiments, an epsilon distance can be calculated. For example, a k-nearest neighbors algorithm (k-NN) can be utilized to determine a kthneighbor's distance, and the epsilon distance includes an average of the kthneighbors distances. The epsilon distance can be utilized to determine at what density and scale the data exists. According to illustrative embodiments, core points can be calculated by multiplying each data point by a multiple (M) of the epsilon distance (epsilon). In an implementation, the core points can be dilated by multiplying a radius of each core point by a scaling factor, wherein the scaling factor is a number greater than one. After dilating the data points, each point becomes a circle of radius epsilon*M. According to illustrative embodiments, all of the dilated points are unioned together, and the dilations that intersect become dilation polygons. If for some reason a dilation has no intersection, then it will remain a circle. According to illustrative embodiments, the dilation polygons can be reduced. For example, the dilation polygons can each be reduced by a factor of epsilon*(M−1). The dilation polygons can then be simplified. For example, each polygon can be simplified such that all points within the polygon are within an epsilon distance of the polygon. FIG.3illustrates a system300configured for generating a polygon representation of data points, in accordance with one or more implementations. In some implementations, system300includes one or more computing platforms302. Computing platform(s)302can be configured to communicate with one or more remote platforms304according to a client/server architecture, a peer-to-peer architecture, and/or other architectures. 
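Before continuing with the system of FIG.3, the approximation polygon generation described above with reference to FIG.2 can be illustrated by the following non-limiting Python sketch. It computes an epsilon distance as the average k-th nearest-neighbor distance, selects core points with a kernel density estimate, dilates each core point by a multiple M of epsilon, unions the dilations, and then reduces and simplifies the result. The use of scipy, scikit-learn, and shapely, the density quantile, and the default values of k and M are assumptions made for this sketch; other outlier detectors (e.g., HDBSCAN) or geometry libraries could be substituted, and the sketch is not a definitive implementation of the disclosed system.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neighbors import NearestNeighbors
from shapely.geometry import Point
from shapely.ops import unary_union


def epsilon_distance(points: np.ndarray, k: int = 5) -> float:
    """Average distance to the k-th nearest neighbor (epsilon)."""
    # k + 1 neighbors because the nearest neighbor of a point is the point itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dists, _ = nn.kneighbors(points)
    return float(dists[:, -1].mean())


def core_points(points: np.ndarray, quantile: float = 0.25) -> np.ndarray:
    """Keep the denser points, using a kernel density estimate as the score."""
    density = gaussian_kde(points.T)(points.T)
    return points[density >= np.quantile(density, quantile)]


def approximation_polygon(points: np.ndarray, k: int = 5, m: float = 3.0,
                          quantile: float = 0.25):
    """Dilate core points by m * epsilon, union, then reduce and simplify."""
    eps = epsilon_distance(points, k)
    core = core_points(points, quantile)
    # Each dilated point is a circle of radius epsilon * m centered at the point.
    dilated = [Point(x, y).buffer(eps * m) for x, y in core]
    unioned = unary_union(dilated)            # intersecting dilations merge into polygons
    reduced = unioned.buffer(-eps * (m - 1))  # shrink back by epsilon * (m - 1)
    return reduced.simplify(eps)              # keep vertices within epsilon of the outline


rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (200, 2)),     # dense cluster
                  rng.uniform(-6, 6, (10, 2))])   # sparse outliers
poly = approximation_polygon(data)
print(poly.geom_type, round(poly.area, 2))
```

The negative buffer mirrors the reduction by a factor of epsilon*(M−1) described above, so that the final outline sits roughly one epsilon away from the retained core points while the sparse outliers are excluded.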
Remote platform(s)304can be configured to communicate with other remote platforms via computing platform(s)302and/or according to a client/server architecture, a peer-to-peer architecture, and/or other architectures. Users can access system300via remote platform(s)304. Computing platform(s)302can be configured by machine-readable instructions306. Machine-readable instructions306includes one or more instruction modules. The instruction modules includes computer program modules. The instruction modules include one or more of receiving module308, calculating module310, dilating module312, generating module314, outputting module316, combining module318, and/or reducing module320, and/or other instruction modules. Receiving module308can be configured to receive a representation of data points from a data source. For example, the representation can include at least a two-dimensional (2D) data plot. Calculating module310can be configured to calculate a core representation (e.g., a core representation includes core points) of the data points. Calculating module310can also be configured to calculate a kernel density estimation. According to illustrative embodiments, a Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) can also be utilized. It is understood that other outlier detection algorithms/approaches can be utilized without departing from the scope of the disclosure. Calculating module310can also be configured to calculate the epsilon distance. According to illustrative embodiments, calculating the epsilon distance includes utilizing a k-nearest neighbors algorithm (k-NN). Dilating module312can be configured to dilate the core representation of the data points. For example, each data point can be multiplied by a multiple of the epsilon distance. Generating module314can be configured to generate dilated points based on the multiplying of each data point. Generating module314can also be configured to generate a polygon representation of the data points based at least in part on intersections between the dilated points. Outputting module316can be configured to cause output (e.g., display) of the polygon representation through a user interface. Combining module318can be configured to take a union of all the dilated points, such that intersections between the dilated points includes polygons. Reducing module320can be configured to reduce the polygon representation by a factor of the epsilon distance. According to illustrative embodiments, the data points are grouped together. For example, the data points can be grouped together based on at least one of classifications, clusters, labels, a same subset of characteristics among a set of characteristics, etc. It is understood that the data can be grouped together in other ways, and is not limited to the above. According to illustrative embodiments, the epsilon distance includes an average of the k-NN. According to illustrative embodiments, each dilated point includes a circle having a radius that is a multiple of the epsilon distance. According to illustrative embodiments, dilated points without intersections are circles. According to illustrative embodiments, the reducing simplifies the polygon representation such that all points within the polygon representation are within the epsilon distance from each other and/or the polygon. According to illustrative embodiments, an epsilon distance can be calculated. 
For example, a k-nearest neighbors algorithm (k-NN) can be utilized to determine a kthneighbor's distance, and the epsilon distance includes an average of the kthneighbors distances. According to illustrative embodiments, core points can be calculated by multiplying each data point by a multiple (M) of the epsilon distance (epsilon). After dilating the data points, each point becomes a circle of radius epsilon*M. According to illustrative embodiments, all of the dilated points are unioned together, and the dilations that intersect become dilation polygons. If for some reason a dilation has no intersection, then it will remain a circle. According to illustrative embodiments, the dilation polygons can be reduced. For example, the dilation polygons can each be reduced by a factor of epsilon*(M−1). The dilation polygons can then be simplified. For example, each polygon can be simplified such that all points within the polygon are within an epsilon distance of the polygon. In some implementations, computing platform(s)302, remote platform(s)304, and/or external resources324can be operatively linked via one or more electronic communication links. For example, such electronic communication links can be established, at least in part, via a network such as the Internet and/or other networks. It will be appreciated that this is not intended to be limiting, and that the scope of this disclosure includes implementations in which computing platform(s)302, remote platform(s)304, and/or external resources324can be operatively linked via some other communication media. A given remote platform304includes one or more processors configured to execute computer program modules. The computer program modules can be configured to enable an expert or user associated with the given remote platform304to interface with system300and/or external resources324, and/or provide other functionality attributed herein to remote platform(s)304. By way of non-limiting example, a given remote platform304and/or a given computing platform302includes one or more of a server, a desktop computer, a laptop computer, a handheld computer, a tablet computing platform, a NetBook, a Smartphone, a gaming console, and/or other computing platforms. External resources324includes sources of information outside of system300, external entities participating with system300, and/or other resources. In some implementations, some or all of the functionality attributed herein to external resources324can be provided by resources included in system300. Computing platform(s)302include(s) electronic storage326, one or more processors328, and/or other components. Computing platform(s)302include(s) communication lines, or ports to enable the exchange of information with a network and/or other computing platforms. Illustration of computing platform(s)302inFIG.3is not intended to be limiting. Computing platform(s)302include(s) a plurality of hardware, software, and/or firmware components operating together to provide the functionality attributed herein to computing platform(s)302. For example, computing platform(s)302can be implemented by a cloud of computing platforms operating together as computing platform(s)302. Electronic storage326can include non-transitory storage media that electronically stores information. 
The electronic storage media of electronic storage326includes one or both of system storage that is provided integrally (i.e., substantially non-removable) with computing platform(s)302and/or removable storage that is removably connectable to computing platform(s)302via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). Electronic storage326includes one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media. Electronic storage326includes one or more virtual storage resources (e.g., cloud storage, a virtual private network, and/or other virtual storage resources). Electronic storage326can store software algorithms, information determined by processor(s)328, information received from computing platform(s)302, information received from remote platform(s)304, and/or other information that enables computing platform(s)302to function as described herein. Processor(s)328can be configured to provide information processing capabilities in computing platform(s)302. As such, processor(s)328includes one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. Although processor(s)328is shown inFIG.3as a single entity, this is for illustrative purposes only. In some implementations, processor(s)328includes a plurality of processing units. These processing units can be physically located within the same device, or processor(s)328can represent processing functionality of a plurality of devices operating in coordination. Processor(s)328can be configured to execute modules308,310,312,314,316,318, and/or320, and/or other modules. Processor(s)328can be configured to execute modules308,310,312,314,316,318, and/or320, and/or other modules by software, hardware, firmware, some combination of software, hardware, and/or firmware, and/or other mechanisms for configuring processing capabilities on processor(s)328. As used herein, the term “module” can refer to any component or set of components that perform the functionality attributed to the module. This includes one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components. It should be appreciated that although modules308,310,312,314,316,318, and/or320are illustrated inFIG.3as being implemented within a single processing unit, in implementations in which processor(s)328includes multiple processing units, one or more of modules308,310,312,314,316,318, and/or320can be implemented remotely from the other modules. The description of the functionality provided by the different modules308,310,312,314,316,318, and/or320described below is for illustrative purposes, and is not intended to be limiting, as any of modules308,310,312,314,316,318, and/or320can provide more or less functionality than is described. For example, one or more of modules308,310,312,314,316,318, and/or320can be eliminated, and some or all of its functionality can be provided by other ones of modules308,310,312,314,316,318, and/or320. 
As another example, processor(s)328can be configured to execute one or more additional modules that can perform some or all of the functionality attributed below to one of modules308,310,312,314,316,318, and/or320. The techniques described herein can be implemented as method(s) that are performed by physical computing device(s); as one or more non-transitory computer-readable storage media storing instructions which, when executed by computing device(s), cause performance of the method(s); or, as physical computing device(s) that are specially configured with a combination of hardware and software that causes performance of the method(s). FIG.4illustrates an example flow diagram (e.g., logic400) for generating a polygon representation of a plurality of data points, according to certain illustrative embodiments of the disclosure. For explanatory purposes, the example logic400is described herein with reference toFIGS.1-3. Further for explanatory purposes, the steps of the example logic400are described herein as occurring in serial, or linearly. However, multiple instances of the example logic400can occur in parallel. For purposes of explanation of the subject technology, the logic400will be discussed in reference toFIGS.1-3. As used herein, "logic" refers to (i) logic implemented as computer instructions and/or data within one or more computer processes and/or (ii) logic implemented in electronic circuitry. At step402, the logic400includes receiving a representation of data points from a data source. For example, the representation includes at least a two-dimensional (2D) data plot. The data source can include, but is not limited to, cloud storage, servers, hard drives, etc. At step404, the logic400includes calculating a core representation of the data points. For example, the core representation can group core points together. At step406, the logic400includes dilating the core representation of the data points. For example, the dilating can include multiplying each data point by a multiple of an epsilon distance. In this way each point can be dilated. At step408, the logic400includes generating dilated points based on the multiplying of each data point by the epsilon distance. At step410, the logic400includes generating a polygon representation of the data points based at least in part on intersections between the dilated points. At step412, the logic400includes causing the polygon representation to be displayed through a user interface. For example, the polygon representation can include a continuous curve that minimizes outliers. For example, as described above in relation toFIGS.1-3, at step402, a representation of data points (e.g., data102and202) from a data source is received. For example, the representation includes at least a two-dimensional (2D) data plot (e.g.,100,150, and200). The data source includes, but is not limited to, cloud storage, servers, hard drives, etc. At step404, a core representation (e.g.,108) of the data points is calculated. For example, the core representation can group core points together. At step406, the core representation of the data points is dilated (e.g., via dilating module312). For example, the dilating can include multiplying each data point by a multiple of an epsilon distance. In this way each point can be dilated. At step408, dilated points can be generated based on the multiplying of each data point by the epsilon distance.
At step410, a polygon representation (e.g.,208) of the data points can be generated based at least in part on intersections between the dilated points. At step412, the polygon representation is caused to be displayed through a user interface (e.g., via outputting module316). For example, the polygon representation can include a singular polygon representation (e.g., a continuous curve) that minimizes the number of outliers that are within the polygon's enclosure. According to an illustrative embodiment, the data points can be grouped together based on at least one of classifications, clusters, labels, and/or the like. According to an illustrative embodiment, multiplying each data point by the multiple of the epsilon distance further includes generating a circle of radius R that is centered at a given data point P, wherein R is a distance value. For example, given a data point P and a distance R value, a circle of radius R that is centered at point P can be generated/created. According to an illustrative embodiment, R is a multiple of the epsilon value. According to an illustrative embodiment, calculating the core representation can further include calculating a kernel density estimation or utilizing a Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN). It is understood that other outlier detection algorithms can be utilized without departing from the scope of the disclosure. According to an illustrative embodiment, the logic400further includes calculating the epsilon distance. According to an illustrative embodiment, dilated points without intersections can be circles. According to an illustrative embodiment, calculating the epsilon distance includes utilizing a k-nearest neighbors algorithm (k-NN). According to an illustrative embodiment, the epsilon distance includes an average of the k-NN. According to an illustrative embodiment, each dilated point includes a circle having a radius that is a multiple of the epsilon distance. According to an illustrative embodiment, the logic400further includes taking a union of all the dilated points. For example, the union can be taken such that intersections between the dilated points include polygons. According to an illustrative embodiment, the logic400further includes reducing the polygon representation by a factor of the epsilon distance. According to an illustrative embodiment, the reducing simplifies the polygon representation such that all points within the polygon representation are within the epsilon distance from each other and/or the polygon. According to illustrative embodiments, an epsilon distance can be calculated. For example, a k-nearest neighbors algorithm (k-NN) can be utilized to determine a kthneighbor's distance. According to illustrative embodiments, the epsilon distance includes an average of the kthneighbors distances. According to illustrative embodiments, core points can be calculated by multiplying each data point by a multiple (M) of the epsilon distance (epsilon). After dilating the data points, each point becomes a circle of radius epsilon*M. According to illustrative embodiments, all of the dilated points are combined together, and the dilations that intersect become dilation polygons. If for some reason a dilation has no intersection, then it will remain a circle. According to illustrative embodiments, the dilation polygons can be reduced. For example, the dilation polygons can each be reduced by a factor of epsilon*(M−1). The dilation polygons can then be simplified. 
For example, each polygon can be simplified such that all points within the polygon are within an epsilon distance of the polygon. FIG.5is a block diagram illustrating an exemplary computer system500with which illustrative embodiments of the subject technology can be implemented. In certain illustrative embodiments, the computer system500can be implemented using hardware or a combination of software and hardware, either in a dedicated server, integrated into another entity, or distributed across multiple entities. Computer system500(e.g., server and/or client) includes a bus508or other communication mechanism for communicating information, and a processor502coupled with bus508for processing information. By way of example, the computer system500can be implemented with one or more processors502. Processor502can be a general-purpose microprocessor, a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable entity that can perform calculations or other manipulations of information. Computer system500can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them stored in an included memory504, such as a Random Access Memory (RAM), a flash memory, a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable PROM (EPROM), registers, a hard disk, a removable disk, a CD-ROM, a DVD, or any other suitable storage device, coupled to bus508for storing information and instructions to be executed by processor502. The processor502and the memory504can be supplemented by, or incorporated in, special purpose logic circuitry. The instructions can be stored in the memory504and implemented in one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, the computer system500, and according to any method well-known to those of skill in the art, including, but not limited to, computer languages such as data-oriented languages (e.g., SQL, dBase), system languages (e.g., C, Objective-C, C++, Assembly), architectural languages (e.g., Java, .NET), and application languages (e.g., PHP, Ruby, Perl, Python). 
Instructions can also be implemented in computer languages such as array languages, aspect-oriented languages, assembly languages, authoring languages, command line interface languages, compiled languages, concurrent languages, curly-bracket languages, dataflow languages, data-structured languages, declarative languages, esoteric languages, extension languages, fourth-generation languages, functional languages, interactive mode languages, interpreted languages, iterative languages, list-based languages, little languages, logic-based languages, machine languages, macro languages, metaprogramming languages, multiparadigm languages, numerical analysis, non-English-based languages, object-oriented class-based languages, object-oriented prototype-based languages, off-side rule languages, procedural languages, reflective languages, rule-based languages, scripting languages, stack-based languages, synchronous languages, syntax handling languages, visual languages, wirth languages, and xml-based languages. Memory504can also be used for storing temporary variable or other intermediate information during execution of instructions to be executed by processor502. A computer program as discussed herein does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. Computer system500further includes a data storage device506such as a magnetic disk or optical disk, coupled to bus508for storing information and instructions. Computer system500can be coupled via input/output module510to various devices. The input/output module510can be any input/output module. Exemplary input/output modules510include data ports such as USB ports. The input/output module510is configured to connect to a communications module512. Exemplary communications modules512include networking interface cards, such as Ethernet cards and modems. In certain illustrative embodiments, the input/output module510is configured to connect to a plurality of devices, such as an input device514and/or an output device516. Exemplary input devices514include a keyboard and a pointing device, e.g., a mouse or a trackball, by which a user can provide input to the computer system500. Other kinds of input devices514can be used to provide for interaction with a user as well, such as a tactile input device, visual input device, audio input device, or brain-computer interface device. For example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback, and input from the user can be received in any form, including acoustic, speech, tactile, or brain wave input. Exemplary output devices516include display devices such as an LCD (liquid crystal display) monitor, for displaying information to the user. 
According to one illustrative embodiment of the present disclosure, the above-described gaming systems can be implemented using a computer system500in response to processor502executing one or more sequences of one or more instructions contained in memory504. Such instructions can be read into memory504from another machine-readable medium, such as data storage device506. Execution of the sequences of instructions contained in the main memory504causes processor502to perform the process steps described herein. One or more processors in a multi-processing arrangement can also be employed to execute the sequences of instructions contained in memory504. In alternative illustrative embodiments, hard-wired circuitry can be used in place of or in combination with software instructions to implement various illustrative embodiments of the present disclosure. Thus, illustrative embodiments of the present disclosure are not limited to any specific combination of hardware circuitry and software. Various illustrative embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., such as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. The communication network can include, for example, any one or more of a LAN, a WAN, the Internet, and the like. Further, the communication network can include, but is not limited to, for example, any one or more of the following network topologies, including a bus network, a star network, a ring network, a mesh network, a star-bus network, tree or hierarchical network, or the like. The communications modules can be, for example, modems or Ethernet cards. Computer system500can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Computer system500can be, for example, and without limitation, a desktop computer, laptop computer, or tablet computer. Computer system500can also be embedded in another device, for example, and without limitation, a mobile telephone, a PDA, a mobile audio player, a Global Positioning System (GPS) receiver, a video console, and/or a television set top box. The term “machine-readable storage medium” or “computer-readable medium” as used herein refers to any medium or media that participates in providing instructions to processor502for execution. Such a medium can take many forms, including, but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical or magnetic disks, such as data storage device506. Volatile media include dynamic memory, such as memory504. Transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise bus508. 
Common forms of machine-readable media include, for example, floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH EPROM, any other memory chip or cartridge, or any other medium from which a computer can read. The machine-readable storage medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. As the user computing system500reads data and provides a result, information can be read from the data and stored in a memory device, such as the memory504. Additionally, data from the memory504servers accessed via a network, the bus508, or the data storage506can be read and loaded into the memory504. Although data is described as being found in the memory504, it will be understood that data does not have to be stored in the memory504and can be stored in other memory accessible to the processor502or distributed among several media, such as the data storage506. As used herein, the phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C. The phrase “illustrative embodiment” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as an “illustrative embodiment” is not necessarily to be construed as preferred or advantageous over other embodiments. A reference to an element in the singular is not intended to mean “one and only one” unless specifically stated, but rather “one or more.” All structural and functional equivalents to the elements of the various configurations described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and intended to be encompassed by the subject technology. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the above description. While this specification contains many specifics, these should not be construed as limitations on the scope of what can be claimed, but rather as descriptions of particular implementations of the subject matter. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. 
Moreover, although features can be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination can be directed to a subcombination or variation of a subcombination. The subject matter of this specification has been described in terms of particular illustrative embodiments, but other illustrative embodiments can be implemented and are within the scope of the following claims. For example, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed to achieve desirable results. The actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous. Moreover, the separation of various system components in the illustrative embodiments described above should not be understood as requiring such separation in all illustrative embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Other variations are within the scope of the following claims.
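As a further illustrative, non-limiting sketch of the grouping and display behavior described above (data points grouped by classifications, clusters, or labels, with the resulting polygon representation shown through a user interface), the following Python example draws one approximation polygon per labeled group using matplotlib. The hypothetical cluster labels, the choice of matplotlib as the user interface, and the compact pipeline that treats every point in a group as a core point are assumptions of this sketch rather than the disclosed implementation.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist
from shapely.geometry import Point, MultiPolygon
from shapely.ops import unary_union


def group_polygon(points: np.ndarray, k: int = 5, m: float = 3.0):
    """Compact dilate-union-reduce outline for one labeled group of points."""
    d = np.sort(cdist(points, points), axis=1)
    eps = float(d[:, min(k, len(points) - 1)].mean())  # average k-th neighbor distance
    blob = unary_union([Point(x, y).buffer(eps * m) for x, y in points])
    return blob.buffer(-eps * (m - 1)).simplify(eps)


rng = np.random.default_rng(1)
groups = {                                     # hypothetical labeled clusters
    "cluster A": rng.normal((0, 0), 0.8, (150, 2)),
    "cluster B": rng.normal((5, 3), 0.6, (120, 2)),
}

fig, ax = plt.subplots()
for label, pts in groups.items():
    ax.scatter(pts[:, 0], pts[:, 1], s=8, label=label)
    poly = group_polygon(pts)
    parts = poly.geoms if isinstance(poly, MultiPolygon) else [poly]
    for part in parts:
        x, y = part.exterior.xy
        ax.plot(x, y, linewidth=2)
ax.legend()
plt.show()   # the user interface here is simply a matplotlib window
```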
42,147
11861769
DETAILED DESCRIPTION Terminology used herein will now be briefly described. Although the terms used herein are selected, as much as possible, from general terms that are widely used at present while taking into consideration the functions obtained in accordance with embodiments, these terms may be replaced by other terms based on intentions of one of ordinary skill in the art, customs, emergence of new technologies, or the like. In a particular case, terms that are arbitrarily selected by the applicant may be used and, in this case, the meanings of these terms may be described herein. Therefore, it is noted that the terms used herein are construed based on practical meanings thereof and the content of embodiments described herein, rather than being simply construed based on names of the terms. It will be understood that the terms “comprises”, “comprising”, “includes” and/or “including”, when used herein, specify the presence of stated elements, but do not preclude the presence or addition of one or more other elements, unless otherwise indicated herein. As used herein, the term “unit” or “module” denotes an entity for performing at least one function or operation, and may be implemented as hardware, software, or a combination of hardware and software. Herein, the expression “at least one of a, b or c” indicates only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or variations thereof. Hereinafter, certain embodiments will be described with reference to the attached drawings. Embodiments may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. In the drawings, parts not related to the description are not illustrated for clarity of explanation, and like reference numerals denote like elements throughout. As used herein, the term “user” denotes a person who controls the function or operation of an image display device by using a controller, and may include a viewer, a manager, or an installation engineer. Referring toFIG.2, the electronic device100receives an input image200to be used for image synthesis and a selection of a synthesis region, for example, via a touch input of a user202who uses a finger, a stylus, and the like, in the input image200to be combined with a target object image. The electronic device100combines two different images. For example, among the two different images, an image serving as a background is called an input image and an image including an object to be combined with the input image is called a target object image. However, the two different images may also be called a first image and a second image. According to an embodiment, the electronic device100may generate a target object image to be combined with the input image200, and generate and output a composite image300by combining the input image200with the target object image, based on the input image200and the selection of the synthesis region to be combined with the target object image, by using one or more neural networks. According to an embodiment, the electronic device100may detect one or more objects in the input image200by using one or more neural networks. For example, the electronic device100may detect objects such as a boy, a bird, and a house in the input image200illustrated inFIG.2. Information on the objects detected in the input image200may include object classes of the objects and distance information from the synthesis region to the objects. 
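As an illustrative, non-limiting sketch of the kind of information described above for the detected objects, the following Python fragment represents each detection as an object class plus a bounding box, and the user's selection of a synthesis region as a point in the input image. The field names, box format, and example values are assumptions for this sketch rather than the disclosed data format.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Detection:
    """One object detected in the input image by a detection neural network."""
    object_class: str
    box: Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max) in pixels


# Hypothetical detections for an input image like the one described above.
detections: List[Detection] = [
    Detection("boy",   (120.0,  80.0, 220.0, 300.0)),
    Detection("bird",  (400.0,  40.0, 440.0,  70.0)),
    Detection("house", (500.0, 100.0, 780.0, 360.0)),
]

# The synthesis region selected by the user (e.g., via a touch input),
# represented here simply as an (x, y) location in the input image.
synthesis_region: Tuple[float, float] = (300.0, 340.0)

for det in detections:
    print(det.object_class, det.box)
```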
According to an embodiment, the electronic device100may generate a target object to be combined with the synthesis region of the input image200, based on the object classes of the one or more objects detected in the input image200and the distances from the synthesis region to the objects by using one or more neural networks. For example, the electronic device100may generate a dog or a cat as the target object to be combined with the synthesis region, based on the objects such as the boy, the bird, and the house detected in the input image200illustrated inFIG.2. According to an embodiment, the electronic device100may generate a target object image corresponding to the generated target object, by using one or more neural networks. The electronic device100may generate the target object image to be naturally combined with the input image200, by reflecting features of the synthesis region of the input image200to generate the target object image. For example, when the target object image including a dog or a cat is generated and when the features of the synthesis region indicate a grass image, the electronic device100may change the style of the target object image including a dog or a cat, to be appropriate for grass texture or color. According to an embodiment, the electronic device100may generate the composite image300by combining the generated target object image with the input image200by using one or more neural networks. The electronic device100may naturally combine the input image200with the target object image by reflecting the features of the synthesis region of the input image200to generate the composite image300. For example, when the input image200is combined with the target object image including a dog or a cat and when the features of the synthesis region indicate a grass image, the electronic device100may naturally generate edges of the target object image including a dog or a cat, by using grass features. According to an embodiment, the electronic device100may update the target object image according to a user input for controlling the target object image in the composite image300, and generate and output an updated composite image400by combining the updated target object image with the input image200. For example, when the electronic device100outputs the composite image300including a <Bulldog> as the target object and when a user desires a dog of another species than a Bulldog as the target object, the electronic device100may generate and output the updated composite image400including a <Cocker Spaniel> as the target object. As described above, the electronic device100may use one or more neural networks appropriate for each of one or more operations performed to combine two or more images. A neural network is a statistical learning algorithm for implementing machine learning by emulating the brain of a human. FIG.3is a block diagram of the electronic device100according to an embodiment. Referring toFIG.3, the electronic device100may include a memory110and one or more processors120. However, the electronic device100may include a greater number of elements compared to the illustrated elements, and is not limited to the above examples. The memory110according to an embodiment may store programs for process and control operations of the processor120, and store data input to or to be output from the electronic device100. 
The memory110may include at least one type of storage medium from among flash memory, a hard disk, a multimedia card micro, a memory card (e.g., a secure digital (SD) or extreme digital (XD) card), random access memory (RAM), static RAM (SRAM), read-only memory (ROM), electrically erasable programmable ROM (EEPROM), programmable ROM (PROM), magnetic memory, a magnetic disc, or an optical disc. The processor120controls overall operations of the electronic device100. For example, the processor120may execute one or more instructions stored in the memory110, to perform functions of the electronic device100. The processor120according to an embodiment may combine two or more different images and output a composite image by using a learning model using one or more neural networks. In an embodiment, the processor120may execute the one or more instructions stored in the memory110, to control the various operations to be performed. In an embodiment, the processor120may include an internal memory storing one or more instructions, and execute the one or more instructions stored in the internal memory, to control the various operations to be performed. That is, the processor120may execute at least one instruction or program stored in the memory110or the internal memory included in the processor120, to perform a certain operation. Although one processor120is illustrated inFIG.3, a plurality of processors may be included. In this case, each of operations to be performed by the electronic device100according to an embodiment may be performed using at least one of the plurality of processors. According to an embodiment, the electronic device100may further include a neural network processor. The neural network processor may control a certain operation to be performed, by performing computation through a neural network. Specifically, in an embodiment, the neural network processor may execute one or more instructions to perform computation through a neural network. According to an embodiment, the processor120may execute the one or more instructions stored in the memory110, to receive a selection of a location of a synthesis region in an input image to be combined with a target object image, obtain, by using one or more neural networks, the target object image to be located in the synthesis region by using one or more objects detected in the input image, and generate, by using one or more neural networks, a composite image by combining the input image with the obtained target object image. According to an embodiment, the processor120may execute the one or more instructions stored in the memory110, to detect the one or more objects in the input image and obtain an object class corresponding to each of the one or more objects, and location information of each of the one or more objects in the input image, by using a first neural network, and obtain distance information from the synthesis region to each of the one or more objects, based on location information of the synthesis region and the location information of each of the one or more objects. By obtaining the distance information from the synthesis region to each of the one or more objects, target objects appropriate for the synthesis region may be obtained based on relative locations between the synthesis region of the input image and the objects detected in the input image. 
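One way the distance information described above could be computed is to take the location information of each detected object, for example a bounding box, and measure how far its center lies from the center of the synthesis region. The sketch below assumes that representation; the (x, y, w, h) box format and the Euclidean distance are illustrative choices rather than requirements of the embodiments.

```python
from typing import Dict, List, Tuple


def box_center(box: Tuple[float, float, float, float]) -> Tuple[float, float]:
    """Return the center (x, y) of a bounding box given as (x, y, w, h)."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)


def distances_to_region(
    detections: List[Dict],               # each: {"object_class": str, "box": (x, y, w, h)}
    region_center: Tuple[float, float],   # center of the user-selected synthesis region
) -> List[Tuple[str, float]]:
    """Build an (object class, distance) list such as (boy, 3), (bird, 5), (house, 20)."""
    result = []
    for det in detections:
        cx, cy = box_center(det["box"])
        dist = ((cx - region_center[0]) ** 2 + (cy - region_center[1]) ** 2) ** 0.5
        result.append((det["object_class"], dist))
    return result
```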
According to an embodiment, the processor120may execute the one or more instructions to obtain a class vector of a target object, based on the obtained one or more object classes, by using a second neural network. According to an embodiment, the processor120may execute the one or more instructions to obtain a class vector of each of the one or more objects by using the second neural network, and obtain a class vector of the target object adjacent to the obtained class vectors of the one or more objects, based on the distance information from the synthesis region to each of the one or more objects, by using a word embedding lookup table generated based on the second neural network. When the class vector of the target object adjacent to the obtained class vectors of the one or more objects is obtained based on the distance information from the synthesis region to each of the one or more objects, the user does not need to manually search for target objects to be located in the synthesis region and appropriate target objects may be automatically obtained using artificial intelligence (AI). According to an embodiment, the processor120may execute the one or more instructions to obtain the class vector of the target object by allocating a higher weight to the class vector of the object closer to the location of the synthesis region. By allocating a higher weight to the class vector of the object closer to the location of the synthesis region, a greater number of target objects more appropriate for an object located close to the synthesis region may be extracted. According to an embodiment, the processor120may execute the one or more instructions to generate a target object image corresponding to the obtained class vector of the target object, by using a third neural network. According to an embodiment, the processor120may execute the one or more instructions to extract synthesis region image features corresponding to an image of the synthesis region, by using a fourth neural network, and generate the target object image corresponding to the class vector of the target object, by reflecting the extracted synthesis region image features, by using the third neural network. By reflecting the synthesis region image features to generate the target object images, the target object images having a style that is more natural to the synthesis region may be obtained. According to an embodiment, the processor120may execute the one or more instructions to combine the target object image with the input image by reflecting the extracted synthesis region image features, by using a fifth neural network. By reflecting the synthesis region image features to combine the input image with the target object image, edges of the target object images to be combined with the input image may be rendered to look more natural to a user. According to an embodiment, the processor120may execute the one or more instructions to display the composite image on a display, generate an updated target object image according to a user input for controlling the target object image included in the displayed composite image, and display the updated target object image. By providing the composite image including the updated target object image according to the user input for controlling the target object image, a composite image including another target object image may be easily provided according to a simple user input when the user does not like the target object included in the initially generated composite image. 
According to an embodiment, the processor120may execute the one or more instructions to output a scroll bar for controlling an update of the target object image, obtain an updated class vector adjacent to the class vector corresponding to the target object, according to the user input to control the output scroll bar, obtain the updated target object image, based on the updated class vector, and generate the composite image by combining the input image with the updated target object image, and display the composite image. Functions associated with combination of two or more images using AI according to the disclosure are performed using the processor120and the memory110. The processor120may include one or more processors. In this case, the one or more processors may include processors such as central processing units (CPUs), application processors (APs), and digital signal processors (DSPs), dedicated graphics processors such as graphics processing units (GPUs) and vision processing units (VPUs), or dedicated AI processors such as neural processing units (NPUs). The one or more processors control input data to be processed according to a predefined operation rule or AI model stored in memory. Alternatively, when the one or more processors are dedicated AI processors, the dedicated AI processors may be designed in a hardware structure specialized in processing of a specific AI model. The predefined operation rule or AI model is made through training. Herein, being made through training means that a basic AI model is trained based on a learning algorithm by using multiple pieces of training data and thus a predefined operation rule or AI model configured to achieve desired characteristics (or purposes) is made. The training may be performed directly by a device having an AI function according to the disclosure, or through a separate server and/or system. The learning algorithm may include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but is not limited thereto. The AI model may include a plurality of neural network layers. Each of the plurality of neural network layers has a plurality of weight values and performs neural network computation through computation between a computation result of a previous layer and the plurality of weight values. The plurality of weight values of the plurality of neural network layers may be optimized based on a result of training the AI model. For example, the plurality of weight values may be updated to reduce or minimize a loss value or a cost value obtained by the AI model during the training process. An artificial neural network may include, for example, a CNN, a deep neural network (DNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), or a deep Q-network, but is not limited thereto. According to an embodiment, the electronic device100may generate a composite image by combining an input image with one or more target object images by using one or more neural networks, and transmit the generated composite image to an externally connected display device by using a video/audio signal output port or wireless communication to display the composite image. For example, the electronic device100may include a device for mainly processing data and transmitting the processed data to an external display device, e.g., a set-top box. 
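As a concrete but generic illustration of the weight update described above, the sketch below performs one training step in which the weight values of a small stack of layers are adjusted to reduce a loss value. PyTorch, the layer sizes, and the choice of SGD with a cross-entropy loss are assumptions made only for this example; the embodiments do not prescribe a particular framework or loss.

```python
import torch
from torch import nn, optim

# A small stand-in for an AI model with a plurality of neural network layers.
model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

criterion = nn.CrossEntropyLoss()              # loss value the training tries to reduce
optimizer = optim.SGD(model.parameters(), lr=0.01)


def training_step(inputs: torch.Tensor, targets: torch.Tensor) -> float:
    """One update: the weight values move in the direction that reduces the loss."""
    optimizer.zero_grad()
    outputs = model(inputs)                    # computation between previous-layer results and weights
    loss = criterion(outputs, targets)
    loss.backward()                            # gradients of the loss w.r.t. every weight value
    optimizer.step()                           # update the plurality of weight values
    return loss.item()


# Example: one step on random stand-in data for multiple pieces of training data.
loss_value = training_step(torch.randn(8, 64), torch.randint(0, 10, (8,)))
```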
FIG.4is a detailed block diagram of the electronic device100according to an embodiment. InFIGS.3and4, like reference numerals denote like elements. Therefore, the descriptions provided above in relation toFIG.3will not be repeated herein to describe the electronic device100. Referring toFIG.4, in addition to the memory110and the processor120, the electronic device100may further include a display130, one or more antennas155, a communicator150, a detector160, an inputter/outputter180, a video processor135, an audio processor145, an audio outputter140, and a user inputter190. The descriptions of the memory110and the processor120are provided above in relation toFIG.3and will not be repeated in relation toFIG.4. The display130may display an image on a screen under the control of the processor120. The image to be displayed on the screen may be received from the communicator150, the inputter/outputter180, or the memory110. According to an embodiment, the display130may display a composite image generated by the processor120. The display130may also display a composite image including a target object image updated according to a user input for controlling the target object image included in the composite image. The antenna155serves to receive or transmit signals from or to other devices. Although one antenna155is illustrated, a plurality of antennas may be included. Therefore, the electronic device100according to the disclosure may support multiple-input multiple-output (MIMO) systems. The communicator150may include one or more modules capable of enabling wireless communication between the electronic device100and a wireless communication system or between the electronic device100and a network including another electronic device. For example, the communicator150may include a broadcast receiver module151, a mobile communication module152, a wireless internet module153, and a short-range wireless communication module154. The communicator150may also be called a transmitter/receiver. The broadcast receiver module151receives broadcast signals and/or broadcast information through broadcast channels from an external broadcast management server. The broadcast signals may include TV broadcast signals, radio broadcast signals, and data broadcast signals, and/or may include broadcast signals in which TV broadcast signals or radio broadcast signals are combined with data broadcast signals. The mobile communication module152transmits and receives wireless signals to and from at least one of a base station, an external device, or a server in a mobile communication network. The wireless signals may include voice call signals, video call signals, or various types of data associated with transmission and reception of text/multimedia messages. The wireless internet module153refers to a module for wireless internet access, and may be provided as an embedded or external module. As wireless internet technology, for example, wireless local area network (WLAN) (e.g., Wi-Fi), wireless broadband (Wibro), worldwide interoperability for microwave access (Wimax), or high-speed downlink packet access (HSDPA) may be used. Through the wireless internet module153, the electronic device100may establish a Wi-Fi peer-to-peer (P2P) connection to another device. Due to the Wi-Fi P2P connection, a streaming service, a data transmission/reception service, or a printing service based on connection to a printer may be provided between devices. 
The short-range wireless communication module154refers to a module for short-range wireless communication. As short-range wireless communication technology, for example, Bluetooth, radio-frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), or ZigBee may be used. The communicator150according to an embodiment may receive, from an external server, a learning model using one or more neural networks. The detector160detects voice of a user, an image of the user, or interaction of the user, and may include a microphone161, a camera162, and an optical receiver163. The microphone161receives voice uttered by the user. The microphone161may convert the received voice into an electrical signal and output the electrical signal to the processor120. The camera162may receive an image (e.g., consecutive frames) corresponding to motion (including a gesture) of the user in a camera recognition range. The optical receiver163receives an optical signal (including a control signal) from a remote controller. The optical receiver163may receive, from the remote controller, an optical signal corresponding to a user input (e.g., touch, press, a touch gesture, voice, or motion). A control signal may be extracted from the received optical signal under the control of the processor120. In an embodiment, a selection of a location of a synthesis region in an input image to be combined with a target object image may be received through one or more of the microphone161, the camera162, and the optical receiver163. When the user desires to control an update of a target object in an output composite image, a target object control command may be received through one or more of the microphone161, the camera162, and the optical receiver163. For example, the selection of the location or the target object control command may include one or more of a user utterance detected using the microphone161, a user gesture detected using the camera162, or a control signal received by the optical receiver163from the remote controller. However, this is not limiting and the user selection input may be received via a touch screen of the display130. The inputter/outputter180receives, for example, video data (e.g., a moving image), audio data (e.g., voice or music), and additional information (e.g., an electronic program guide (EPG)) from outside the electronic device100under the control of the processor120. The inputter/outputter180may include one of a high-definition multimedia interface (HDMI) port181, a component jack182, a computer port183, and a Universal Serial Bus (USB) port184. The inputter/outputter180may include a combination of the HDMI port181, the component jack182, the computer port183, and the USB port184. The memory110according to an embodiment may store programs for process and control operations of the processor120, and store data input to or to be output from the electronic device100. The memory110may also store data required for operations of the electronic device100. The programs stored in the memory110may be classified into a plurality of modules depending on functions thereof. Specifically, the memory110may store one or more programs for performing a certain operation by using a neural network. For example, the one or more programs stored in the memory110may include an object detection module610, an embedding module620, an image generation module630, an image synthesis module640, a feature extraction module650, and an object control module660. 
The object detection module610may include one or more instructions for detecting one or more objects in an input image by using one or more neural networks. The embedding module620may generate a target object to be combined with a synthesis region of the input image, based on the one or more objects detected in the input image and locations of the one or more objects by using one or more neural networks. The image generation module630may generate a target object image corresponding to the generated target object, by using one or more neural networks. The image generation module630may generate the target object image to be naturally combined with the input image, by reflecting features of the synthesis region of the input image to generate the target object image. The image synthesis module640may generate a composite image by combining the generated target object image with the input image by using one or more neural networks. The image synthesis module640may naturally combine the input image with the target object image by reflecting the features of the synthesis region of the input image to generate the composite image. The feature extraction module650may extract the features from the synthesis region in the input image to be combined with the target object image, by using one or more neural networks. The object control module660may update the target object image according to a user input for controlling the target object image in the composite image, and generate and output a composite image by combining the updated target object image with the input image. The processor120serves to control overall operations of the electronic device100and signal flows between the internal elements of the electronic device100, and to process data. When a user input is made or a preset and prestored condition is satisfied, the processor120may execute an operating system (OS) and various applications stored in the memory110. The processor120may include an internal memory. In this case, at least one of data, programs, or instructions stored in the memory110may be stored in the internal memory of the processor120. For example, the internal memory of the processor120may store one or more programs for performing certain operations by using a neural network, or one or more instructions for performing certain operations by using a neural network. According to an embodiment, the processor120may execute one or more instructions included in the object detection module610, the embedding module620, the image generation module630, the image synthesis module640, the feature extraction module650, and the object control module660stored in the memory110, to perform an image synthesis function of the electronic device100described herein. The video processor135may process image data to be displayed on the display130, and perform various image processing operations such as decoding, rendering, scaling, noise filtering, frame rate conversion, and resolution conversion on the image data. The audio processor145processes audio data. The audio processor145may perform various processing operations such as decoding, amplification, and noise filtering on the audio data. The audio outputter140may output audio data included in a broadcast signal received through a tuner, audio data input through the communicator150or the inputter/outputter180, or audio data stored in the memory110, under the control of the processor120. 
The audio outputter140may include at least one of a speaker141, a headphone output terminal142, or a Sony/Philips Digital Interface (S/PDIF) output terminal143. The user inputter190refers to a means used by the user to input data for controlling the electronic device100. For example, the user inputter190may include a keypad, a dome switch, a touchpad, a jog wheel, or a jog switch, but is not limited thereto. The user inputter190according to an embodiment may receive a command for receiving a selection of an input image, receiving a selection of a synthesis region in the input image to be combined with a target object image, or controlling a target object image in a composite image. In the block diagrams of the electronic device100illustrated inFIGS.3and4, the elements illustrated may be integrated, added, or omitted. For example, two or more elements may be combined into one element, or one element may be divided into two or more elements as described in more detail below. Functions performed by the blocks are merely to describe embodiments, and specific operations or devices are not limiting. FIG.5is a flowchart of an example of an operating method of the electronic device100, according to an embodiment. Referring toFIG.5, in operation S510, the electronic device100may receive a selection of a synthesis region in an input image to be combined with a target object image. When the electronic device100combines two different images, an image serving as a background is called an input image and an image including a target object is called a target object image. According to an embodiment, the electronic device100may receive the input image and receive the selection of the synthesis region in the input image to be combined with the target object image. For example, a user may input a location of a portion of the input image to be combined with the target object image, to the electronic device100by using various user interaction means such as touch, a gesture, and a pointer. In operation S520, the electronic device100may detect one or more objects in the input image by using one or more neural networks. The electronic device100may detect the one or more objects different from a background, by extracting features from the input image, classify categories of the detected one or more objects, and obtain location information of the detected objects. The neural networks used when the electronic device100detects the one or more objects in the input image may include, for example, two stage methods such as a faster region-based convolutional neural network (faster R-CNN), a region-based fully convolutional network (R_FCN), and a feature pyramid network (FPN)-FRCN, or single stage methods such as you only look once (YOLO), a single shot MultiBox detector (SSD), and RetinaNet. In operation S530, the electronic device100may obtain images of one or more target objects to be located in the synthesis region, based on the detected one or more objects by using one or more neural networks. The electronic device100may obtain class vectors corresponding to the one or more objects detected in the input image, and obtain class vectors of the one or more target objects to be located in the synthesis region, in consideration of a location of the synthesis region in the input image. Neural networks used when the electronic device100obtains the class vectors of the target objects by using the one or more objects may include, for example, latent semantic analysis (LSA), Word2Vec, GloVe, and fastText. 
The electronic device100may generate target object images by using the obtained class vectors of the target objects. Neural networks used when the electronic device100generates the target object images by using the class vectors of the target objects may include, for example, a GAN. In operation S540, the electronic device100may obtain a composite image by combining the input image with the obtained one or more target object images by using one or more neural networks. The neural networks used when the electronic device100combines the input image with the obtained one or more target object images may include, for example, U-Net. FIG.6shows an example configuration of an electronic device100for performing the operations illustrated inFIG.5, according to an embodiment. Referring toFIG.6, the electronic device100may include the object detection module610, the embedding module620, the image generation module630, and the image synthesis module640. The object detection module610may include appropriate logics, circuits, interfaces, and/or codes for detecting one or more objects in the input image200. According to an embodiment, the object detection module610may receive the input image200and location information of a synthesis region in the input image200to be combined with a target object image. According to an embodiment, the object detection module610may detect one or more objects in the input image200by using one or more neural networks, and provide, to the embedding module620, an object list including distance information from the synthesis region to the detected objects. The object list may include information on the detected one or more objects, and the information on the one or more objects may include (object class of each object, distance information from synthesis region to each object). For example, when the object detection module610detects objects such as a boy, a bird, and a house in the input image200, the object list may include (boy, 3), (bird, 5), and (house, 20) as (object class, distance information, e.g., measured in appropriate units such as pixels, millimeters, centimeters, etc.). The embedding module620may include appropriate logics, circuits, interfaces, and/or codes for generating class vectors of one or more target objects to be located in the synthesis region of the input image200, by using the one or more objects included in the object list received from the object detection module610. According to an embodiment, the embedding module620may receive, from the object detection module610, the object list including the information on the one or more objects. According to an embodiment, the embedding module620may obtain class vectors corresponding to one or more object classes included in the received object list, by using one or more neural networks, and obtain one or more target object class vectors adjacent to the obtained class vectors, by using the distance information of the one or more objects included in the object list. For example, the embedding module620may output fifteen class vectors corresponding to a dog and three class vectors corresponding to a cat, as the target object class vectors obtained using the object list received from the object detection module610, e.g., (boy, 3), (bird, 5), and (house, 20). The image generation module630may include appropriate logics, circuits, interfaces, and/or codes for generating one or more target object images corresponding to the one or more target object class vectors received from the embedding module620. 
According to an embodiment, the image generation module630may generate the target object images corresponding to the one or more target object class vectors received from the embedding module620, by using one or more neural networks. For example, the image generation module630may generate fifteen dog images and three cat images by using the fifteen class vectors corresponding to a dog and the three class vectors corresponding to a cat, which are received from the embedding module620. The image synthesis module640may include appropriate logics, circuits, interfaces, and/or codes for combining the input image200with the target object images received from the image generation module630. According to an embodiment, the image synthesis module640may output a composite image generated by combining the input image200with the target object images received from the image generation module630, by using one or more neural networks. For example, the image synthesis module640may generate the composite image300by combining the input image200with each of the fifteen dog images and the three cat images received from the image generation module630. FIG.7is a flowchart of an example of an operating method of the electronic device100, according to an embodiment. The descriptions provided above in relation toFIG.5will be provided in brief herein to describe operations ofFIG.7. Referring toFIG.7, in operation S710, the electronic device100may receive a selection of a synthesis region in an input image to be combined with a target object image. In operation S720, the electronic device100may detect one or more objects in the input image by using one or more neural networks. In operation S730, the electronic device100may extract image features from the synthesis region of the input image by using one or more neural networks. The neural networks used when the electronic device100extracts the image features from the synthesis region of the input image may include, for example, CNNs. In operation S740, the electronic device100may obtain one or more target object images to be located in the synthesis region by reflecting the features of the synthesis region, based on the detected one or more objects by using one or more neural networks. The electronic device100may obtain class vectors corresponding to the one or more objects detected in the input image, and obtain one or more target object class vectors to be located in the synthesis region, in consideration of a location of the synthesis region in the input image. The electronic device100may generate target object images by using the obtained target object class vectors and, in this case, the target object images more appropriate for the synthesis region may be generated by reflecting the features of synthesis region. Neural networks used when the electronic device100generates the target object images by using the target object class vectors may include, for example, conditional GANs. In operation S750, the electronic device100may obtain a composite image by combining the input image with the obtained one or more target object images, based on the features of the synthesis region by using one or more neural networks. The electronic device100may obtain the composite image having a natural boundary between a target object and the input image, by reflecting the features of the synthesis region to combine the input image with the one or more target object images. 
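The flow of FIG. 5 and the module chain of FIG. 6 can be summarized as a short orchestration sketch. The four callables below stand in for the object detection, embedding, image generation, and image synthesis modules; treating them as plain Python functions, and the exact argument lists, are assumptions made for illustration only.

```python
import numpy as np


def generate_composites(input_image: np.ndarray,
                        region_center: tuple,
                        object_detection, embedding, image_generation, image_synthesis):
    """Illustrative end-to-end flow mirroring operations S510 to S540."""
    # S520: detect objects and measure their distance to the synthesis region,
    # e.g., [("boy", 3), ("bird", 5), ("house", 20)].
    object_list = object_detection(input_image, region_center)

    # S530 (first part): map the detected classes and distances to class vectors
    # of candidate target objects (e.g., several dog and cat species).
    target_class_vectors = embedding(object_list)

    # S530 (second part): generate a candidate target object image per class vector.
    target_images = [image_generation(v) for v in target_class_vectors]

    # S540: combine the input image with each candidate and return the composites.
    return [image_synthesis(input_image, region_center, img) for img in target_images]
```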
FIG.8shows an example configuration of an electronic device100for performing the operations illustrated inFIG.7, according to an embodiment. Referring toFIG.8, in addition to the object detection module610, the embedding module620, the image generation module630, and the image synthesis module640, the electronic device100may further include the feature extraction module650. The descriptions of the object detection module610and the embedding module620are provided above in relation toFIG.6and will not be repeated herein. The feature extraction module650may include appropriate logics, circuits, interfaces, and/or codes for receiving a synthesis region of the input image200and extracting features from the synthesis region of the input image200. According to an embodiment, the feature extraction module650may extract one or more features from the synthesis region of the input image200by using one or more neural networks. The one or more features may include an image color or texture. The feature extraction module650may configure a style vector by using the extracted features of the synthesis region and provide the configured style vector to at least one of the image generation module630or the image synthesis module640. The image generation module630may include appropriate logics, circuits, interfaces, and/or codes for reflecting the style vector corresponding to the features of the synthesis region to generate one or more target object images corresponding to one or more target object class vectors received from the embedding module620. According to an embodiment, the image generation module630may reflect the features of the synthesis region to generate the target object images corresponding to the one or more target object class vectors received from the embedding module620, by using one or more neural networks. For example, the image generation module630may generate fifteen dog images and three cat images by using fifteen class vectors corresponding to a dog and three class vectors corresponding to a cat, which are received from the embedding module620, and, in this case, the dog images and the cat images may be generated by reflecting image features, e.g., image color or texture information, of the synthesis region. The image synthesis module640may include appropriate logics, circuits, interfaces, and/or codes for reflecting the style vector corresponding to the features of the synthesis region, to combine the input image200with the one or more target object images received from the image generation module630. According to an embodiment, the image synthesis module640may combine the input image200with the one or more target object images received from the image generation module630, by reflecting the image features of the synthesis region by using one or more neural networks. For example, the image synthesis module640may naturally process edges of a dog or a cat in consideration of the image features of the synthesis region, e.g., a color or texture of the synthesis region, to combine the input image200with each of the fifteen dog images and the three cat images received from the image generation module630. FIG.9is a flowchart of an example of an operating method of the electronic device100, according to an embodiment. Referring toFIG.9, in operation S910, the electronic device100may receive a user input for controlling a target object included in a composite image. 
In operation S920, the electronic device100may update the target object according to the user input, and obtain an image of the updated target object. The electronic device100may obtain a class vector adjacent to a class vector of the target object currently included in the composite image, to update the target object, and generate a target object image corresponding to the class vector of the updated target object. For example, when a <Poodle> class vector is obtained as a class vector adjacent to a <Bulldog> class vector, the electronic device100may obtain an image corresponding to a <Poodle>. In operation S930, the electronic device100may obtain a composite image by combining an input image with the updated target object image by using one or more neural networks. For example, the electronic device100may output a composite image generated by combining the input image with a <Poodle> image corresponding to the updated target object image. FIG.10shows an example configuration of an electronic device100for performing the operations illustrated inFIG.9, according to an embodiment. Referring toFIG.10, in addition to the object detection module610, the embedding module620, the image generation module630, the image synthesis module640, and the feature extraction module650, the electronic device100may further include the object control module660. The object detection module610, the embedding module620, the image generation module630, the image synthesis module640, and the feature extraction module650are substantially the same as described above, and the redundant descriptions will be omitted. The object control module660may include appropriate logics, circuits, interfaces, and/or codes for receiving a user input instructing an update of a target object in a composite image (i.e., a target object control command), and controlling the embedding module620and the image generation module630according to the target object control command. According to an embodiment, the object control module660may instruct the embedding module620to generate a class vector of a target object adjacent to a class vector of the target object currently included in the composite image. For example, when the electronic device100outputs a composite image generated by combining an input image with a <Bulldog> image as a target object image, a user may desire a dog of a species other than a <Bulldog> as the target object. In this case, the object control module660may receive, from the user, a control command for changing or updating the target object in the composite image. The embedding module620may generate a target object class vector adjacent to a class vector of the target object currently included in the composite image, and transmit the generated target object class vector to the image generation module630under the control of the object control module660. For example, the embedding module620may obtain a <Poodle> class vector as a class vector adjacent to a <Bulldog> class vector and transmit the <Poodle> class vector to the image generation module630. The image generation module630may generate an updated target object image corresponding to the updated target object class vector received from the embedding module620. For example, the image generation module630may generate a <Poodle> image corresponding to the <Poodle> class vector, as the updated target object image. 
The image synthesis module640may generate a composite image by combining the input image with the updated target object image received from the image generation module630. For example, the image synthesis module640may output a composite image generated by combining the input image with the <Poodle> image as the updated target object image. Each module of an electronic device will now be described in detail with reference toFIGS.11to32. Object Detection Module FIG.11is a block diagram of an example of the object detection module610according to an embodiment. Referring toFIG.11, the object detection module610includes an object detection model611and a distance calculation module612. The object detection model611may detect one or more objects in the input image200by using one or more neural networks, and output object information1100including object classes and object locations corresponding to the detected one or more objects. Object detection includes object localization for determining locations of objects in a given image, and object classification for determining classes to which the one or more objects belong. Therefore, the object detection model611may include three stages, i.e., informative region selection for selecting informative regions, feature extraction for extracting features from each informative region, and classification for classifying each informative region by applying a classifier to the extracted features. Localization performance may be increased through post-processing such as bounding-box regression according to a detection method. An example of an object detection model will now be described with reference toFIG.12. FIG.12shows a network architecture of an R-CNN as an object detection method using a combination of region proposal and a CNN, according to an embodiment. Referring toFIG.12, an object detection model611may include a region proposal module1210, a CNN1220, a classifier module1230, and a bounding-box regression module1240. The region proposal module1210extracts informative regions from the input image200. A certain number of informative regions, e.g., 2,000 informative regions, may be extracted. The R-CNN uses selective search as its region proposal algorithm. The CNN1220extracts fixed-length feature vectors from the regions generated by the region proposal module1210. The CNN1220(e.g., AlexNet or VGGNet) receives certain-sized inputs and thus various rectangular image regions given by the region proposal algorithm need to be warped to the certain size regardless of sizes and aspect ratios thereof. The CNN1220receives each warped region and extracts the output of the layer immediately preceding the classifier module1230. The classifier module1230receives the fixed-length feature vector as an input and performs classification. The bounding-box regression module1240receives the fixed-length feature vector as an input and calculates four coordinates (x, y, w, h) representing a box. A location of an object may be specified by the four coordinates (x, y, w, h) representing a box. That is, the R-CNN performs object detection by performing object localization through region proposal extraction and performing object classification using extracted features. Bounding-box regression may be performed to reduce localization errors. 
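For a sense of what such a detector produces in practice, the sketch below runs a pretrained Faster R-CNN from torchvision, one of the example detection networks named earlier, on a single image. The library choice, the file path, and the confidence threshold are assumptions for illustration; the R-CNN of FIG. 12 is described at a lower level than this off-the-shelf call, and the weights argument may differ across torchvision versions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained Faster R-CNN; older torchvision versions use pretrained=True instead.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("input.jpg").convert("RGB")   # hypothetical path to the input image

with torch.no_grad():
    outputs = model([to_tensor(image)])          # list with one dict per input image

# Each detection provides a box (x1, y1, x2, y2), a class label index, and a score.
for box, label, score in zip(outputs[0]["boxes"],
                             outputs[0]["labels"],
                             outputs[0]["scores"]):
    if score > 0.7:                              # arbitrary confidence threshold
        print(label.item(), [round(v, 1) for v in box.tolist()], round(score.item(), 2))
```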
To train the object detection model611, a classification layer (e.g., an output layer) of a pre-trained CNN is newly changed to “the number of object classes+a background” for object detection and weight initialization is performed only at corresponding parts in order to modify the pre-trained CNN appropriately for object detection. To train a linear support-vector machine (SVM) per class, when positive samples (regions for which final result values of objects per class are output to be equal to or greater than a certain threshold from among the regions generated by the region proposal algorithm) and negative samples (regions for which final result values of objects per class are output to be less than the certain threshold from among the regions generated by the region proposal algorithm) are selected per class and the positive and negative samples are configured for a linear SVM per class to be trained, fixed-length feature vectors are extracted using a fine-tuned CNN and the linear SVM per class may be trained using the fixed-length feature vectors as inputs. For example, one or more objects may be detected in the input image200by the above-described object detection model611. The object information1100includes information on the one or more objects, and the information on each object may be represented as (object class, location). FIG.12merely illustrates an example for implementing an object detection module, and the object detection module may be implemented in a simpler configuration. FIG.13shows an example of the object information1100and an object list1110illustrated inFIG.11, according to an embodiment. Referring toFIG.13, the object detection module610may detect, for example, three objects in the input image200and output (Boy, Location of Boy), (Bird, Location of Bird), and (House, Location of House) as the object information1100. Referring back toFIG.11, the distance calculation module612may receive the object information1100(i.e., (object class, object location)) from the object detection model611, calculate a distance from each object to the synthesis region, e.g., a center point1250, by using location information of the synthesis region in the input image200to be combined with a target object image, and output (object class, distance from each object to synthesis region) as the object list1110. Referring toFIG.13, for example, the distance calculation module612may calculate a distance from the boy to the synthesis region to be 3, calculate a distance from the bird to the synthesis region to be 5, and calculate a distance from the house to the synthesis region to be 20. Therefore, the distance calculation module612may output (Boy, 3), (Bird, 5), and (House, 20) as the object list1110. Embedding Module FIG.14shows an example of the embedding module620according to an embodiment. The embedding module620receives the object list1110from the object detection module610and obtains one or more target object class vectors1400, based on one or more object classes included in the object list1110. Referring toFIG.14, the embedding module620may include a word embedding model621and a target object class vector extraction module622. The word embedding model621receives one or more object classes included in the object list1110received from the object detection module610, and maps the object classes to object class vectors by using one or more neural networks. The word embedding model621is trained based on a vector of each word by mapping semantically similar words to close points. 
W(“cat”)=(0.2, −0.4, 0.7, . . . ) W(“mat”)=(0.0, 0.6, −0.1, . . . ) For example, the word embedding model621converts a word such as cat or mat into a certain-dimensional vector. A matrix “W” is used to convert a word into a vector. Due to the conversion through the matrix, a word is changed into a meaningful vector. Two similar words may be converted into similar vectors, and the matrix W may be obtained through training. For example, in a set of 5,000 words, each word may be represented as [0, 0, 0, 1, 0, . . . , 0, 0, 0] (a one-hot vector with 5,000 columns). In this case, when each word is to be represented as a 32-dimensional vector through word embedding, the dimension of the matrix W is 5,000×32. A 32-dimensional vector such as [0.2, 0.4, 0.5, . . . , 0.8] is obtained through training. FIG.15shows an example of word embedding according to an embodiment. Referring toFIG.15, the vector offsets between pairs such as king and queen, king and man, or queen and woman point in consistent directions, and words of similar meanings are located in similar directions. Word embedding effectively represents meanings of words and thus may increase the performance of training compared to one-hot encoding. In addition, when word embedding is pre-trained with a large amount of data, a relatively high performance may be achieved with less data by using the trained embedding in a task such as document classification. Types of word embedding include, for example, LSA, Word2Vec, GloVe, and fastText. As a type of word embedding, Word2Vec is a model obtained by modifying a previous neural network language model (NNLM) to enable efficient training. A language model refers to a model for predicting a next word based on previously given words. Training is possible using only text, without labels such as classification labels. Word2Vec creates word embedding by using byproducts of training this language model. Word2Vec includes two methods, continuous bag-of-words (CBOW) and Skip-Gram. The CBOW method predicts a target word by adding the embeddings of surrounding words, and the Skip-Gram method predicts surrounding words by embedding a target word. FIG.16shows an example of a CBOW method of Word2Vec according to an embodiment. Referring toFIG.16, initially, an input layer converts all words of a sentence to be trained into vectors by using one-hot encoding. 2m word vectors are given as input values for one center word. Parameters include a parameter matrix W (W∈R^(V×N)) between the input layer and a hidden layer, and a parameter matrix W′ (W′∈R^(N×V)) between the hidden layer and an output layer. The purpose of this model is to maximize the conditional probability of a center word when surrounding words are given. The one-hot word vectors of the input layer are multiplied by the parameter matrix W to obtain embedded word vectors. The hidden layer calculates an average of the 2m embedded vectors. The number of neurons of this hidden layer is less than that of the input space and thus the neural network is trained with information of the input layer in a compressed form. This neural network creates a hidden layer having a weight matrix of [the number of words × the number of neurons]. For example, when a whole set of words includes 20,000 unique words and the hidden layer includes 300 neurons, the weight matrix of the hidden layer may have a size of 20,000×300. Once this weight matrix is stored, a vector having 300 elements usable to represent each word is obtained. 
To calculate score values to be transmitted from the hidden layer to the output layer, a score for each word is obtained by multiplying the parameter W′. Words at close locations have higher score values. Lastly, the output layer calculates each score value as a probability value by using softmax. For training with these parameters, when a word to be predicted is accurately predicted, an objective function by which a value of H becomes 0 may be defined and training may be performed in a direction for minimizing the value of the objective function. When training of the word embedding neural network is completed, the weights of a single hidden layer of the neural network serve as a lookup table for word embedding. After training is completed with all sentences, each row of the matrix W may be used as an embedding vector of each word. The word embedding model621may configure a word embedding lookup table by using embedding vectors of words which are obtained by training a language model for predicting a next word based on previously given words. FIG.17is a view illustrating a word embedding lookup table1700obtained by training the word embedding model621, according to an embodiment. Referring toFIG.17, the word embedding lookup table1700is a vector space corresponding to words. Therefore, an object class vector corresponding to each object class may be obtained using the word embedding lookup table1700obtained after training is completed. It is shown that each symbol is mapped to one class and that, for example, a distance between a dog and a cat is less than a distance between a dog and a car in the vector space because a relationship between a dog and a cat is closer than a relationship between a dog and a car. Referring toFIG.17, for example, a dog class may include various types of dogs. For example, the dog class may include various types of dogs such as Chihuahua, Yorkshire Terrier, Poodle, Dachshund, Maltese, and Beagle. In the dog class, dogs in a semantically close relationship may be located close to each other and dogs in a semantically distant relationship may be located far from each other. When an object list includes a boy, a bird, and a house as object classes, the word embedding model621may obtain a class vector corresponding to the boy, a class vector corresponding to the bird, and a class vector corresponding to the house, by using the word embedding lookup table1700, and provide the class vectors to the target object class vector extraction module622. Then, the target object class vector extraction module622may obtain target object class vectors by using the class vectors and distances of one or more objects, which are received from the word embedding model621, with reference to the word embedding lookup table1700configured by the word embedding model621. That is, the target object class vector extraction module622may determine information on a target object to be located in a synthesis region of an input image, by using objects detected in the input image, e.g., a boy, a bird, and a house. For example, when the detected objects include a boy, a bird, and a house, an appropriate target object to be located adjacent to the objects may include an animal such as a dog or a cat, or a plant such as a tree or a flower(s). FIG.18shows an example of a method used by the target object class vector extraction module622to obtain target object class vectors1400with reference to the word embedding lookup table1700, according to an embodiment. 
Referring toFIG.18, the target object class vector extraction module622finds, in the word embedding lookup table1700, a class vector corresponding to a boy1810, a class vector corresponding to a bird1820, and a class vector corresponding to a house1830. Because a distance from a synthesis region to a boy object is 3, a distance from the synthesis region to a bird object is 5, and a distance from the synthesis region to a house object is 20 in an input image, target objects may be determined in consideration of the distances from the synthesis region to the objects. For example, a higher weight may be allocated to the boy, which is the closest object to the synthesis region, and a lower weight may be allocated to the house, which is the farthest object from the synthesis region. As described above, a center point1840may be found in consideration of the distances from the synthesis region. After that, the target object class vector extraction module622may determine, as the target object class vectors1400, class vectors within a certain range1850, i.e., a certain distance from the center point1840. For example, referring toFIG.18, the class vectors within the certain distance (e.g., a certain range1850) from the center point1840include D1to D15and C1to C3. D1to D15are class vectors indicating various dogs belonging to a dog class, and C1to C3are class vectors indicating various cats belonging to a cat class. For example, D1to D15may be class vectors corresponding to dog species, e.g., Bulldog, Chihuahua, Yorkshire Terrier, Poodle, Dachshund, Maltese, and Beagle, and C1to C3may be class vectors corresponding to cat species, e.g., Persian, American Shorthair, and Siamese. As described above, the target object class vector extraction module622may output D1to D15and C1to C3as the target object class vectors1400. FIG.19shows an example of a method used by the target object class vector extraction module622to obtain the target object class vectors1400with reference to the word embedding lookup table1700, according to an embodiment. Referring toFIG.19, the target object class vector extraction module622finds, in the word embedding lookup table1700, a class vector corresponding to a boy, a class vector corresponding to a bird, and a class vector corresponding to a house. Because a distance from a synthesis region to a boy object is 3, a distance from the synthesis region to a bird object is 5, and a distance from the synthesis region to a house object is 20 in an input image, target objects may be determined in consideration of the distances from the synthesis region to the objects. For example, a higher weight may be allocated to the boy, which is the closest object to the synthesis region, to obtain a greater number of class vectors in a dog cluster adjacent to the boy, and a lower weight may be allocated to the house, which is the farthest object from the synthesis region, to obtain a smaller number of class vectors in a cat cluster adjacent to the house. According to the above-described method, for example, referring toFIG.19, the target object class vector extraction module622may obtain, as the target object class vectors1400, class vectors D1to D8and D11to D15within a certain range1910of the dog cluster, class vectors C1to C5within a certain range1920of the cat cluster, and class vectors F1to F3within a certain range1930of a flower cluster. Therefore, the target object class vector extraction module622may output D1to D8, D11to D15, C1to C5, and F1to F3as the target object class vectors1400. 
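A minimal sketch of the weighted selection illustrated in FIGS. 18 and 19 is given below: the class vectors of the detected objects are averaged with weights that grow as the distance to the synthesis region shrinks, and every class vector within a radius of the resulting center point is returned as a candidate target object. The inverse-distance weighting, the radius parameter, and the toy random embedding table are assumptions made for this example; the embodiments only require that closer objects receive higher weights.

```python
import numpy as np


def pick_target_class_vectors(lookup_table: dict,   # {class name: embedding vector}
                              object_list,          # e.g., [("boy", 3), ("bird", 5), ("house", 20)]
                              radius: float):
    """Weighted-center selection in the word embedding lookup table."""
    names, dists = zip(*object_list)
    weights = np.array([1.0 / (d + 1e-6) for d in dists])   # closer object -> higher weight
    weights /= weights.sum()

    # Detected classes are assumed to be present in the lookup table.
    vectors = np.stack([lookup_table[n] for n in names])
    center = (weights[:, None] * vectors).sum(axis=0)        # the weighted center point

    # Return every class vector within the given radius of the center,
    # excluding the classes already detected in the input image.
    picked = []
    for name, vec in lookup_table.items():
        if name in names:
            continue
        if np.linalg.norm(vec - center) <= radius:
            picked.append((name, vec))
    return picked


# Tiny illustrative table; real class vectors would come from a trained embedding model.
rng = np.random.default_rng(0)
table = {name: rng.normal(size=8) for name in
         ["boy", "bird", "house", "Bulldog", "Poodle", "Persian", "car"]}
candidates = pick_target_class_vectors(table, [("boy", 3), ("bird", 5), ("house", 20)], radius=4.0)
```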
Methods of extracting target object class vectors adjacent to objects detected in an input image, by using a word embedding lookup table are described above with reference toFIGS.18and19. However, the methods illustrated inFIGS.18and19are merely examples and various methods of extracting target object class vectors by using a word embedding lookup table may be used. Feature Extraction Module FIG.20shows an example of the feature extraction module650according to an embodiment. Referring toFIG.20, the feature extraction module650may receive a synthesis region image210in the input image200to be combined with a target object image, extract one or more features from the synthesis region image210by using one or more neural networks, and output, to at least one of the image generation module630or the image synthesis module640, style vectors2000corresponding to the extracted one or more features. According to an embodiment, a feature extraction model used by the feature extraction module650to extract the features from the synthesis region image210by using one or more neural networks may include, for example, each convolutional layer of a classifier pre-trained using ImageNet. As the feature extraction model, a CNN and variations including attention and skip connection may be used. FIG.21shows a network architecture of a CNN1220according to an embodiment. The CNN1220includes a convolutional feature extraction part2110and a classification part2120. Features of an input image are extracted through convolutional layers, and classification is performed based on the extracted features by using a related art neural network available to those skilled in the art. Each convolutional layer serves to extract features from input data and includes a filter for extracting features, and an activation function for converting a value of the filter into a non-linear value. The filter may be a function for detecting whether features of an image to be extracted are present in target data. When a feature map is extracted through the filter as described above, the value is activated by applying the activation function to the feature map. Representative activation functions include, for example, Sigmoid and rectified linear unit (ReLU) functions. The features extracted as described above are sub-sampled to reduce the amount of computation by reducing the size of the extracted feature map, and this process is called pooling. A fully connected layer corresponds to a process of performing classification by applying feature values extracted by convolutional layers, to a related art neural network, and may use, for example, the softmax function. The feature extraction module650may extract one or more feature maps generated in the above-described CNN architecture, e.g., feature maps1to4, and use the same as image features of a synthesis region. The feature extraction module650may convert the feature maps1to4into the form of vectors and output style vectors1to4. The feature extraction module650may output one of, for example, the style vectors1to4. The feature extraction module650may extract various features by using convolutional layers and various filters in various image scales. For example, lower level features of an image may be extracted through a shallower convolutional layer, and higher level features of the image may be extracted through a deeper convolutional layer. 
For example, in a face image of a person, the lower level features may include local features indicating colors or lines, e.g., a skin color and an edge direction of a jaw, and the higher level features may include combined features of the eyes, the nose, etc. Therefore, the feature extraction module650may appropriately extract and use a feature map corresponding to the higher level features and a feature map corresponding to the lower level features. In addition, a feature factor to be extracted from the image may be adjusted by using, for example, the filter used by each convolutional layer. For example, color or texture features may be extracted from the image by using a filter used to extract color information from the image, or a filter used to extract texture information from the image. For example, the features of the image may include color and texture features, and the color features may include an image color, a color histogram representing color distribution in the image, color moments, and a color coherence vector whereas the texture features may include edges. For example, when a part of a grass image is input as the synthesis region of the input image200as illustrated inFIG.20, the feature extraction module650may output a style vector corresponding to color features indicating green and a style vector corresponding to texture features indicating grass, based on the synthesis region. As described above, the features of the synthesis region may be used by the image generation module630to generate an image of an object to be located in the synthesis region, or be used by the image synthesis module640to combine the input image200with the target object image. Image Generation Module FIG.22shows an example of the image generation module630according to an embodiment. Referring toFIG.22, according to an embodiment, the image generation module630may receive the target object class vector1400, and generate a target object image2200corresponding to the target object class vector1400, by using an image generation model. According to an example, the image generation module630may further receive the style vector2000in addition to the target object class vector1400, and generate the target object image2200corresponding to the target object class vector1400and the style vector2000, by using the image generation model. As described above, when the style vector2000output from the feature extraction module650is further used to generate the target object image2200, the target object image2200may be generated more appropriately for a synthesis region of an input image. A representative example of the image generation model includes a GAN. FIG.23shows an architecture of a GAN2300which may be used by the image generation module630according to an embodiment. Referring toFIG.23, the GAN2300includes a generator2320and a discriminator2360. The generator2320generates a new instance by using random noise, and the discriminator2360determines whether each data instance corresponds to a real training data set, i.e., whether an input image is a real image or a fake image, by evaluating data authenticity. When features of the data instance are given, a label or a category of the corresponding data is predicted. The generator2320is a function for receiving a random vector or a latent vector ‘z’2310as an input and outputting a fake image sample2330. Herein, ‘z’ is a value randomly extracted simply from a uniform distribution or a normal distribution. 
The generator2320may be regarded as a function for mapping the simple distribution to a complicated distribution, e.g., a face image of a person. The complicated distribution may be approximated when a generator model includes a sufficient number of parameters. A space including the vector ‘z’ is called a latent space. Herein, the size of the latent space may be arbitrarily determined, e.g., 100 dimensions. The size of the latent space is not particularly limited but needs to be large enough to hold the target information. This is because the GAN2300maps values of the vector ‘z’ to image attributes. The generator2320aims to generate fake data that is indistinguishable from real data. The discriminator2360is trained using real training data (e.g., real world images) and fake data generated by the generator2320, and serves to determine whether a sample is real or fake. The discriminator2360is a function for receiving an image as an input and outputting a probability that the image is real, as a number between 0 and 1. By repeatedly training the discriminator2360in a direction for improving a discrimination ability thereof and by repeatedly training the generator2320in a direction for deceiving the discriminator2360, the generator2320ultimately aims to generate data for which it is hard to determine whether the data is real or fake, and the discriminator2360ultimately aims to gradually improve the discrimination ability thereof. The GAN2300may train the generator2320and the discriminator2360in an adversarial manner until it is hard to determine whether an image is real or fake. FIGS.24A and24Bare diagrams illustrating a method of training an image generation model, according to an embodiment. GAN-based learning is performed in two stages, and a first stage is a stage for fixing the generator2320and training the discriminator2360. Because the discriminator2360already knows which images are real and which are fake, unlike in a related art discriminator training method, a cost function or a loss function may be defined directly and weights may be updated by back-propagating errors. Referring toFIGS.24A and24B, the discriminator2360outputs a probability value close to 1 when a real image sample2350is input from a real image dataset2340, and outputs a probability value close to 0 when the fake image sample2330is input. Therefore, a loss function2370of the discriminator2360consists of a sum of two values. A sum of a difference between a value 1 and a value output when a real image is input and a difference between a value 0 and a value output when a fake image is input is the loss function2370of the discriminator2360. The discriminator2360is trained by updating parameters of the discriminator2360in a direction for minimizing the value of the loss function2370. Referring toFIG.24B, a second stage is a stage for fixing the discriminator2360and training the generator2320. The generator2320aims to deceive the discriminator2360and thus is trained in a direction for making a fake image mistaken for a real image by the discriminator2360. That is, the purpose of the generator2320is to deceive the discriminator2360. In other words, the generator2320is trained so that, when a fake image generated by the generator2320is input to the discriminator2360, a value close to 1 is output. A difference between this value and a value 1 serves as a loss function of the generator2320, and the generator2320is trained to minimize the same. When the above-described two stages are repeatedly performed, the discriminator2360and the generator2320are improved to equal levels.
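The two-stage alternation described above can be sketched in code. This is a minimal, hedged example assuming PyTorch, a flat 64-dimensional stand-in for images, and simple fully connected networks; only the alternation (train the discriminator with the generator fixed, then train the generator with the discriminator fixed) and the real-to-1 / fake-to-0 targets follow the description, and all sizes and optimizers are illustrative assumptions.

```python
# Hedged sketch of two-stage GAN training (not the patented implementation).
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.rand(32, image_dim) * 2 - 1      # stand-in for real training images

for step in range(100):
    # Stage 1: fix the generator, train the discriminator (real -> 1, fake -> 0).
    z = torch.randn(32, latent_dim)
    fake_batch = G(z).detach()                      # detach so only D is updated
    d_loss = bce(D(real_batch), torch.ones(32, 1)) + bce(D(fake_batch), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Stage 2: fix the discriminator, train the generator so fakes score close to 1.
    z = torch.randn(32, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```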
FIG.25shows an example of an image generation model2500further using the style vector2000, according to an embodiment. The image generation model2500may be used by the image generation module630. Referring toFIG.25, in the image generation model2500, the style vector2000is further added as an input of the generator2320and the discriminator2360. The image generation model2500is trained by adding the style vector2000as a condition y to a related art GAN. The generator2320generates the fake image sample2330by concatenating the latent vector2310and the style vector2000, and the discriminator2360receives the fake image sample2330and the style vector2000as inputs. However, combination of the style vector2000is not limited to concatenation and may also use, for example, a simple sum or projection. As described above, when the style vector2000is further used for training, the image generation model2500may generate an image by reflecting image features corresponding to the style vector2000. For example, when the style vector2000relates to features corresponding to color information indicating green, the image generation model2500may generate an image by further using the color information indicating green. When the style vector2000relates to features corresponding to edge information indicating grass texture, the image generation model2500may generate an image by further using the edge information indicating grass texture. FIG.26is a view illustrating an operation of the image generation module630using the style vector2000, according to an embodiment. Referring toFIG.26, for example, the image generation module630may receive a class vector corresponding to <Bulldog>, as the target object class vector1400, and receive a style vector including color and texture information corresponding to green and grass texture, as the style vector2000. The image generation module630may generate a target object image2600, in which the style vector is reflected, as a <Bulldog> image corresponding to the <Bulldog> class vector, by using the color and texture information of the style vector2000. When the style vector2000is not used, the image generation module630may generate an arbitrary <Bulldog> image corresponding to the received <Bulldog> class vector. However, when the image generation module630receives the style vector2000, the color and texture information of the style vector2000may affect at least one of a foreground or a background of the <Bulldog> image and thus the image generation module630may generate a <Bulldog> image in which green color or grass texture is reflected. Image Synthesis Module FIG.27is a diagram showing an example of the image synthesis module640according to an embodiment. Referring toFIG.27, the image synthesis module640may include an image segmentation module641and an image combination module642. The image segmentation module641recognizes a target object in the target object image2600received from the image generation module630, by using one or more neural networks. According to an embodiment, the image segmentation module641may receive the target object image2600output from the image generation module630, a synthesis region image2700extracted from the input image200, and the style vector2000output from the feature extraction module650, recognize the target object in the target object image2600by using the received data, and provide the recognized target object to the image combination module642.
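Returning briefly to FIG.25, the concatenation-based conditioning described there can be sketched as follows, under the same assumptions as the previous GAN sketch (PyTorch, flat stand-in images, illustrative dimensions); the class names used below are hypothetical.

```python
# Sketch of style-vector conditioning: the condition is concatenated to the usual inputs
# of both the generator and the discriminator.
import torch
import torch.nn as nn

latent_dim, style_dim, image_dim = 16, 4, 64

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + style_dim, 128), nn.ReLU(),
                                 nn.Linear(128, image_dim), nn.Tanh())
    def forward(self, z, style):
        return self.net(torch.cat([z, style], dim=1))   # concatenate z with the condition

class ConditionalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(image_dim + style_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1), nn.Sigmoid())
    def forward(self, image, style):
        return self.net(torch.cat([image, style], dim=1))

z = torch.randn(8, latent_dim)
style = torch.rand(8, style_dim)   # e.g., color/texture features of the synthesis region
fake = ConditionalGenerator()(z, style)
score = ConditionalDiscriminator()(fake, style)
print(fake.shape, score.shape)
```

A simple sum or a projection of the condition, as mentioned above, could replace the concatenation without changing the rest of the training loop.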
The image combination module642may generate and output the composite image300by combining the input image200with the target object recognized by the image segmentation module641. The image segmentation module641will now be described in detail. Semantic segmentation refers to an operation of accurately extracting boundaries of objects in an image and splitting the image into meaningful regions in order to divide the image into semantically or cognitively similar regions. That is, semantic segmentation defines the boundary of each object in an image by understanding the image at a pixel level and assigning an object class to each pixel of the image. For example, semantic segmentation generates a predicted result by marking class (or label) values of currently viewed pixels. For example, to segment an image into grass and a dog, a pixel region corresponding to a dog is marked with a value ‘1’ and a grass region is marked with a value ‘0’, and then a model puts a blue mask on the region marked with a value ‘1’ and puts a green mask on the region marked with a value ‘0’, thereby clearly distinguishing between the dog and grass regions. FIG.28Ashows an example of an image segmentation module according to an embodiment. Referring toFIG.28A, the image segmentation module641may receive the target object image2600output from the image generation module630. The image segmentation module641may mark a value 1 for a pixel region corresponding to a dog and mark a value 0 for a background region in the received target object image2600, and put, for example, a gray mask on the region indicated by a value 1 and put, for example, a white mask on the region indicated by a value 0, to check a boundary of the dog corresponding to an object in the target object image2600. By specifying a region corresponding to the dog in the target object image2600and outputting pixel information corresponding to the dog, a target object region2810(e.g., a dog region) in the target object image2600may be output. A resultant image2800may include information for distinguishably identifying the target object region2810and a background region2820in the target object image2600. However, a boundary region2830between the target object region2810and the background region2820may be unnaturally detected because pixels to be recognized as a background may be recognized as an object or pixels to be recognized as an object may be recognized as a background. The boundary region2830unnaturally detected as described above is indicated with a thick line inFIG.28A. FIG.28Bshows an example of the image segmentation module according to an embodiment. In an embodiment, to prevent unnatural detection of an object boundary as illustrated inFIG.28A, the image segmentation module641may further receive, as inputs, the synthesis region image2700and the style vector2000in addition to the target object image2600. The target object image2600may include three channels of RGB, and the synthesis region image2700may also include three channels of RGB. The style vector2000may include various numbers of channels. For example, when color features and edge features are included, the style vector2000may include four channels including three channels of RGB for the color features and one channel for the edge features. As described above, the image segmentation module641receives data of ten channels and recognizes a target object in a target object image by using the received data.
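Assembling the ten-channel input described above can be sketched as follows. The example assumes PyTorch, broadcasts the four-dimensional style vector over the spatial grid so that it can be stacked as extra channels, and stands in a single 1×1 convolution for the actual segmentation network; all of these are simplifying assumptions made for illustration.

```python
# Sketch: build the 10-channel segmentation input (3 RGB target object channels,
# 3 RGB synthesis region channels, 4 style channels) and produce a per-pixel mask.
import torch
import torch.nn as nn

H = W = 64
target_object = torch.rand(1, 3, H, W)      # generated target object image (RGB)
synthesis_region = torch.rand(1, 3, H, W)   # crop of the input image (RGB)
style_vector = torch.rand(1, 4)             # 3 color channels + 1 edge channel

# Broadcast the style vector over the spatial grid so it can be stacked as extra channels.
style_map = style_vector[:, :, None, None].expand(-1, -1, H, W)
seg_input = torch.cat([target_object, synthesis_region, style_map], dim=1)  # (1, 10, H, W)

# A single 1x1 convolution stands in for the segmentation network; it outputs a per-pixel
# object-vs-background probability from which the target object mask is taken.
seg_head = nn.Sequential(nn.Conv2d(10, 1, kernel_size=1), nn.Sigmoid())
mask = (seg_head(seg_input) > 0.5).float()  # 1 = target object pixel, 0 = background
print(seg_input.shape, mask.shape)
```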
In this case, the image segmentation module641may naturally detect a boundary region2870of the target object in the target object image in further consideration of the synthesis region image2700and the style vector2000indicating features of the synthesis region image2700. The boundary region2870more naturally detected using the style vector2000is indicated with a thin line inFIG.28B. When an image corresponding to a target object region2810or2850is input to the image combination module642by the image segmentation module641, the image combination module642may generate a composite image by combining the image corresponding to the target object region2810or2850, which is received from the image segmentation module641, onto a region of the input image200to be combined with a target object. FIG.29shows an example of a semantic segmentation model according to an embodiment. Semantic segmentation models have an encoder-decoder structure. An encoder represents information of an input image as a compressed vector, and a decoder generates a result in a desired size. Referring toFIG.29, the semantic segmentation model according to an embodiment includes an encoder2910and a decoder2920. The encoder2910performs a downsampling process for enabling deep convolution with a small memory by reducing dimensions of an image. For example, convolution with a stride of 2 or above is used or a pooling layer is used. When this process is performed, feature information of the image is lost, and a fully convolutional network without a fully-connected layer at the end is mostly used. The decoder2920mostly performs an upsampling process for increasing dimensions of the downsampled image to the dimensions of the input image. Such encoder-decoder models include, for example, a fully convolutional network (FCN), SegNet, and U-Net. The pooling layer of the encoder2910may discard location information, enlarge the field of view, and collect context of the image. However, because semantic segmentation requires an accurate class map alignment, the location information may be retained. The U-Net includes an encoder for gradually reducing spatial dimensions through a pooling layer, a decoder for gradually reconstructing object details and the spatial dimensions, and a shortcut connection from the encoder to the decoder to allow the decoder to reconstruct the object details well. This connection is called a skip connection. The U-Net is a model obtained by adding a skip connection2930to the encoder-decoder structure. When the image size is reduced (downsampling) and then is increased again (upsampling), detailed pixel information is lost. However, because this problem can be serious in image segmentation, which requires pixel-based dense prediction, the decoder2920may obtain a much clearer image and perform more accurate prediction through the skip connection2930for directly providing the location information from the encoder2910to the decoder2920. A composite image generated by the image synthesis module640as described above may be output. The composite image may be provided to a display and be displayed on the display. Object Control Module FIG.30is a view illustrating the object control module660according to an embodiment. Referring toFIG.30, the object control module660may receive a user input for controlling and updating a target object included in an output composite image.
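Before describing the object control in more detail, the encoder-decoder structure with a skip connection described above with reference to FIG.29 can be illustrated with a minimal sketch. The example assumes PyTorch; the channel counts and the single level of downsampling are illustrative, and only the downsample / upsample / concatenate pattern follows the description.

```python
# Minimal U-Net-style sketch with one skip connection (not the actual model of FIG. 29).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=10, out_ch=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)                        # encoder: reduce spatial dims
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # decoder: restore spatial dims
        # After the skip connection the decoder sees 16 (upsampled) + 16 (skipped) channels.
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, out_ch, 1))

    def forward(self, x):
        e = self.enc(x)               # high-resolution features, kept for the skip connection
        m = self.mid(self.down(e))    # low-resolution, higher-level features
        u = self.up(m)
        u = torch.cat([u, e], dim=1)  # skip connection: reuse precise location information
        return self.dec(u)

out = TinyUNet()(torch.rand(1, 10, 64, 64))
print(out.shape)                      # torch.Size([1, 1, 64, 64])
```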
For example, when the composite image generated by the image synthesis module640and including the target object is output on a display, a user may view the output composite image and then desire to change the target object included in the composite image. In this case, the user may input, to an electronic device, a control command (i.e., a user input) for changing or updating the target object included in the composite image, and the object control module660may receive the user input. The user input for controlling the target object may be implemented in various forms. For example, when the display of the electronic device is implemented as a touchscreen display, a user input for touching the target object in the composite image displayed on the display may be used as the user input for controlling the target object. Alternatively, instead of touch, a gesture or voice of the user may be used. Alternatively, a scroll bar for controlling the target object may be displayed on the display. FIG.31is a view illustrating an example of displaying a scroll bar3100to control a target object, according to an embodiment. Referring toFIG.31, the electronic device100may output, on the display130, the composite image300generated by combining an input image with a target object image. In this case, the electronic device100may display, on the display130, the composite image300together with the scroll bar3100for receiving a user input for controlling the target object. The scroll bar3100may include a scroll button3110movable on the scroll bar3100. A user may move the scroll button3110on the scroll bar3100by using various input means. Distance information and location information of the scroll button3110moved by the control of the user may be used as inputs for an operation of updating a target object class vector. The object control module660may receive the distance information and the location information of the scroll button3110, and provide, to the embedding module620, the received distance and location information together with a command for instructing to update the target object class vector. When the distance information and the location information are received together with the command for instructing to update the target object class vector, the embedding module620may update the target object class vector, based on the distance information or the location information. FIG.31illustrates that the display130displays a composite image including an updated target object image corresponding to a class vector C4, according to the class vector control of the user. That is, although a composite image including a <Bulldog> image as the target object image is initially displayed, when the user scrolls to update the target object image, the target object image is updated to other types of dogs D2, D3, . . . , and then, after all types of dogs have been displayed, to types of cats . . . , C4. FIG.32is a view illustrating an operation performed by the embedding module620to update a target object class vector, according to an embodiment. The operation performed by the embedding module620to update the target object class vector is basically similar to the operation performed by the embedding module620to obtain the target object class vector, which is described above with reference toFIG.14. As described above with reference toFIG.14, the embedding module620has already generated a certain number of target object class vectors.
For example, referring toFIG.18, the embedding module620has obtained fifteen class vectors D1to D15in a DOG class, and three class vectors C1to C3in a CAT class. An electronic device may have generated a target object image and a composite image corresponding to each of the already obtained target object class vectors. Therefore, when a range according to the control of a user is within a certain range1850, the electronic device may output the generated composite images. However, when the range according to the control of the user is out of the certain range1850, the electronic device may update a target object class vector corresponding to a user input. For example, referring toFIG.32, when a class vector control direction3200according to a user input indicates a bottom right direction of the certain range1850, for example, class vectors C4, C5, and C6in a CAT class may be obtained as updated target object class vectors. The target object class vectors C4, C5, and C6newly obtained by the embedding module620may pass through the image generation module630and the image synthesis module640, and composite images including updated target objects corresponding to C4, C5, and C6may be generated. Referring back toFIG.31, the user may control the target object class vector by using a user input. For example, when the electronic device100initially outputs a composite image including a target object corresponding to the class vector D1and when the user moves the scroll button3110by one unit for the class vector control, the electronic device100may output a composite image including a target object corresponding to the class vector D2. For example, when the user continuously moves the scroll button3110, the electronic device100may output a composite image including a target object corresponding to the class vector C4. As described above, the user may update the target object included in the composite image, by moving the scroll button3110until a target object desired by the user is displayed on the composite image. For example, when the user desires to combine a dog of another species, the currently combined dog may be changed sequentially from the most similar dog to the least similar dog by linearly moving a gauge in one direction, and may be changed to other types of animals, such as cats or lions, by moving the gauge further. As described above, the object control module660may update the target object to be located in the composite image displayed on the display, according to a simple user input, and thus a target object desired by the user may be conveniently obtained. FIG.33is a block diagram illustrating a configuration of a processor3300in terms of training and processing of neural networks, according to an embodiment. For example, the processor3300may correspond to the processor120described above. Referring toFIG.33, the processor3300according to an embodiment may include a data learner3310and a data processor3320. The data learner3310may learn criteria for detecting one or more objects in an input image, to train a first neural network according to an embodiment. The data learner3310may learn criteria for a type of information (e.g., feature information) of the input image to be used to detect the objects. In addition, the data learner3310may learn criteria for a method of detecting the objects by using the feature information of the image.
The data learner3310may obtain data (e.g., an image) to be used for learning, and learn the criteria for detecting the one or more objects in the image, by applying the obtained data to a data processing model (i.e., the first neural network). The data learner3310may learn criteria for obtaining an object class vector by using an input object class, to train a second neural network according to an embodiment. The data learner3310may learn criteria for a type of information (e.g., feature information) of the image to be used to obtain the object class vector. The data learner3310may learn criteria for extracting image features from the input image, to train a third neural network according to an embodiment. The data learner3310may learn criteria for generating an object image by using an input object class vector, to train a fourth neural network according to an embodiment. The data learner3310may learn criteria for recognizing an object in an input object image, to train a fifth neural network according to an embodiment. The data processing models (e.g., the first to fifth neural networks) may be configured in consideration of, for example, an applicable field of the data processing models, the purpose of learning, or the computing performance of a device. The data processing models may be, for example, neural-network-based models. For example, the data processing models may use DNN-, RNN-, or BRDNN-based models, but are not limited thereto. The data learner3310may train the data processing models by using a learning algorithm including, for example, error back-propagation or gradient descent. The data learner3310may train the data processing models through, for example, supervised learning using training data as input values. The data learner3310may train the data processing models through, for example, unsupervised learning for finding data processing criteria by autonomously learning types of data required for data processing without any supervision. The data learner3310may train the data processing models through, for example, reinforcement learning using feedback on whether training result values are correct. When the data processing models are trained, the data learner3310may store the trained data processing models. In this case, the data learner3310may store the trained data processing models in a memory of an electronic device. Alternatively, the data learner3310may store the trained data processing models in a memory of a server connected to the electronic device via a wired or wireless network. The data processor3320may input an image to the data processing model including the trained first neural network, and the data processing model may output, as a result value, information on one or more objects detected in the image. The output result value may be used to update the data processing model including the first neural network. The data processor3320may input one or more object classes to the data processing model including the trained second neural network, and the data processing model may output, as a result value, target object class vectors adjacent to the one or more object classes. The output result value may be used to update the data processing model including the second neural network. The data processor3320may input an image to the data processing model including the trained third neural network, and the data processing model may output, as a result value, feature information of the image.
The output result value may be used to update the data processing model including the third neural network. The data processor3320may input one or more target object class vectors to the data processing model including the trained fourth neural network, and the data processing model may output, as a result value, target object images corresponding to the one or more target object class vectors. The output result value may be used to update the data processing model including the fourth neural network. The data processor3320may input a target object image to the data processing model including the trained fifth neural network, and the data processing model may output, as a result value, a result of recognizing a target object in the target object image. The output result value may be used to update the data processing model including the fifth neural network. At least one of the data learner3310or the data processor3320may be produced in the form of at least one hardware chip and be mounted in the electronic device. For example, at least one of the data learner3310or the data processor3320may be produced in the form of a dedicated hardware chip for AI, or as a part of a processor (e.g., a CPU or an application processor) or a dedicated graphic processor (e.g., a GPU). In a wired or wireless manner, information on models configured by the data learner3310may be provided to the data processor3320, and data input to the data processor3320may be provided to the data learner3310as additional training data. At least one of the data learner3310or the data processor3320may be implemented as a software module. When at least one of the data learner3310or the data processor3320is implemented as a software module (or a program module including instructions), the software module may be stored in a non-transitory computer readable medium. In this case, at least one software module may be provided by an OS or by a certain application. Alternatively, a part of at least one software module may be provided by an OS and the other part may be provided by a certain application. The data learner3310and the data processor3320may be mounted in one electronic device or separate electronic devices. For example, one of the data learner3310and the data processor3320may be included in an electronic device and the other may be included in a server. According to an embodiment, the data learner3310and the data processor3320may be mounted in an electronic device of a user, and both learning and data processing may be performed in the electronic device of a user. According to an embodiment, the data learner3310may be mounted in a server, and the data processor3320including the trained models may be mounted in an electronic device of a user. FIG.34Ashows an example of the data learner3310mounted in a server3400and the data processor3320mounted in an electronic device3410of a user, according to an embodiment. For example, the electronic device3410of a user may correspond to the electronic device100described above or may be a different device. Referring toFIG.34A, the server3400may obtain an image synthesis neural network model by learning a method of combining two or more images according to the method described herein, by using the data learner3310. The server3400may provide the trained neural network model to the electronic device3410of a user. The electronic device3410of a user may implement the data processor3320by using the trained neural network model received from the server3400. 
When a user desires to combine images, the electronic device3410of a user may autonomously combine the images according to a user request by using the data processor3320without communicating with the server3400, and output a composite image on a display of the electronic device3410of a user. FIG.34Bshows an example of the data learner3310and the data processor3320mounted in the server3400, according to an embodiment. Referring toFIG.34B, the data learner3310and the data processor3320are both mounted in the server3400. Therefore, the server3400may obtain an image synthesis neural network model by learning a method of combining two or more images according to the method described herein, by using the data learner3310, and implement the data processor3320by using the obtained image synthesis neural network model. When a user desires to combine images, the electronic device3410of a user transmits an image synthesis request to the server3400, and the server3400may generate a composite image by combining the images according to the user request by using the data processor3320and output the composite image to the electronic device3410of a user to display the composite image on a display of the electronic device3410of a user. An operating method of an electronic device, according to an embodiment, may be implemented in the form of program commands executable by various computer means and be recorded on a computer-readable recording medium. For example, software (e.g., the program) containing one or more instructions may be stored in a machine-readable (e.g., computer-readable) storage medium (e.g., internal memory) or external memory. The computer-readable recording medium may include program commands, data files, data structures, or a combination thereof. The program commands recorded on the medium may be those specifically designed and configured for embodiments, or those known to one of ordinary skill in the relevant art. Examples of the computer-readable recording medium include magnetic media (e.g., hard disks, floppy disks, and magnetic tape), optical media (e.g., CD-ROMs and DVDs), magneto-optical media (e.g., floptical disks), and hardware devices (e.g., ROMs, RAMs, and flash memories) that are specially configured to store and execute program commands. Examples of the program commands include both machine code, such as that produced by a compiler, and high-level language code that may be executed by the computer using an interpreter. According to embodiments, a user does not need to manually find an appropriate target image to be combined with an input image, and an electronic device using machine learning may generate and provide candidate images appropriate for the input image. Further, according to embodiments, the electronic device may change the candidate images to a style appropriate for the input image, by using features of a synthesis location selected in the input image by the user. In addition, according to embodiments, the candidate images appropriate for the input image may be updated according to the control of the user, and thus the user may more conveniently and accurately obtain a desired composite image in a time-efficient manner. While embodiments have been particularly shown and described with reference to the drawings, they are provided for the purposes of illustration, and it will be understood by one of ordinary skill in the art that various modifications and equivalent other embodiments may be made from the disclosure.
Accordingly, the true technical scope of the disclosure is defined by the technical spirit of the appended claims.
100,863
11861770
DESCRIPTION OF EMBODIMENT An embodiment of the present invention is described with reference to the drawings. An image processing apparatus1according to the embodiment of the present invention is, for example, a personal computer, a home-use game machine, or a portable terminal such as a smartphone or a tablet, and includes a control unit11, a storage unit12, a manipulation unit13, and a display unit14as exemplified inFIG.1. The control unit11includes a program-controlled device such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit). The control unit11operates according to a program stored in the storage unit12and executes processing of an application of, for example, a game. Further, in the course of processing various applications and so forth, the control unit11executes processing of drawing and displaying image data. In the present embodiment, the control unit11accepts image data representative of a bitmap image and a setting of an initial display range determined by a predetermined method and executes a process of drawing a bitmap image represented by the accepted image data in a target display range different from the initial display range and displaying a result of the drawing. This process is an animation drawing process of moving the drawing range, for example, from the initial display range to the target display range and so forth. The control unit11of the present embodiment sets a target display range different from the initial display range. The control unit11uses information of the set target display range to execute a rasterization process based on image data of a target of the processing. The control unit11generates, by the rasterization process, a bitmap image of a size represented by integerized information. Further, the control unit11executes a process of drawing the generated bitmap image in the set target display range. This operation of the control unit11is hereinafter described in detail. The storage unit12includes at least one memory device such as a RAM (Random Access Memory). The storage unit12stores a program to be executed by the control unit11and data to be processed by the program. The storage unit12also operates as a work memory of the control unit11. Here, the program stored in the storage unit12may be a program that is stored in and provided on a computer-readable non-transitory recording medium and is replicated in the storage unit12. The manipulation unit13accepts information representative of manipulation of a user inputted from a keyboard, a mouse, a game controller, or the like. Then, the manipulation unit13outputs the information to the control unit11. The display unit14outputs an image to a display, a home-use television set, or the like so as to be displayed according to a designation inputted from the control unit11. Further, the image processing apparatus1of the present embodiment may include communication means for performing communication with a different information processing apparatus or the like through a network or the like. The control unit11of the image processing apparatus1of the present embodiment functionally implements such a configuration as exemplified inFIG.2by executing a program stored in the storage unit12. In particular, the control unit11functionally includes a setting processing unit110, a rasterization processing unit111, and a drawing processing unit112.
Here, in a case where the control unit11includes a CPU and a GPU, the setting processing unit110and the rasterization processing unit111may be implemented by the CPU and the drawing processing unit112may be implemented by the GPU. Further, in the example here of the present embodiment, it is assumed that the storage unit12has set therein a storage region for a frame buffer210for retaining information of a screen image to be drawn. Further, it is assumed that the image processing apparatus1can acquire, as processing of an application, an operating system, or the like, a designation of vector data to be made a target of processing for rasterization and a designation of a display range for the vector data. Here, the vector data corresponds to image data representative of a bitmap image. In the example described below, the image processing apparatus1performs a designation of a display range by processing of the application, operating system, or the like. As a particular example, in the following description, it is assumed that the image processing apparatus1first designates a display range (referred to as an initial display range) on the frame buffer210as exemplified inFIG.3(a)and then designates a second display range (referred to as a target display range) scaled from the region of the frame buffer210(inFIG.3, reduced by 60%). Further, the image processing apparatus1may additionally designate, as a process of the application, the operating system, or the like, a function f(t) representative of an amount of change corresponding to lapse of time in order to gradually change (to draw an animation of) the display range from the initial display range to the target display range as time passes. This function (f(t)) is depicted, for example, inFIG.4.FIG.4depicts an example representative of an amount of change of the position and represents an amount of change at every Δt (Δt=T/N (N is a natural number determined in advance)) from time 0 to time T, from a value P0 corresponding to the initial display range to a value P1 corresponding to the target display range. In this example ofFIG.4, the amount of change is specified such that, while the time t is near “0,” the position changes comparatively rapidly and, as the time t approaches T, the amount of change decreases. The setting processing unit110sets a display range for the bitmap image acquired by rasterization. In particular, the setting processing unit110sets a predetermined region on the frame buffer210as a display range, according to a designation inputted from the application, the operating system, or the like. In particular, as exemplified inFIG.3, in a case where the initial display range is, for example, a rectangular region with position coordinates of (8, 8) in the upper left corner, a height of 16 pixels (px), and a width of 24 px, the designation to be inputted specifies a rectangular region of a size reduced by 60% from that of the initial display range, i.e., a region with position coordinates of (4.8, 4.8) in the upper left corner, a height of 9.6 pixels (px), and a width of 14.4 px as depicted inFIG.3(b). However, in a case where drawing is performed for a region whose position or size is defined by such a value including the decimal point (non-integer value) as given above, a process of what is called subpixel rendering or the like is performed, resulting in deterioration in the image.
Therefore, in an example of the present embodiment, the setting processing unit110uses an integer value for information of a position (for example, coordinate values of the upper left corner) and a size (width and height) that define a target display range (FIG.3(c)). As a method for converting a non-integer value including the decimal point into an integer value, widely known methods such as truncation (floor), a rounding method (round), and a rounding up method (ceiling) may be adopted. In the example ofFIG.3(c), an example in which rounding up is performed such that the position coordinates of the upper left corner become (5, 5) and the size becomes a height of 10 px and a width of 15 px is depicted. In this case, the setting processing unit110additionally corrects the function f(t). In particular, according to the designated values, the function f(t) changes from P0=8 (it is assumed that the example ofFIG.4indicates an example in regard to the X-axis direction) to P1=4.8 (broken line inFIG.4). Therefore, the setting processing unit110uses the value P′1=5 that is the integerized value of P1 and multiplies each point of the amount of change at every Δt by P′1/P1. As a result, f(t) represents a change indicated by a solid line inFIG.4. The rasterization processing unit111uses information of a display range set by the setting processing unit110to rasterize vector data (image data) designated from the application, the operating system, or the like to generate a bitmap image. In particular, before the time t=0, the rasterization processing unit111performs rasterization of designated vector data of an initial display range (width and height) set by the setting processing unit110to generate a bitmap image of the size of the initial display range and outputs the generated bitmap image to the drawing processing unit112. Further, if the setting processing unit110sets a target display range, then the rasterization processing unit111performs rasterization of the designated vector data in a region of the size (width and height) of the set target display range to generate a bitmap image of the size of the target display range (size indicated by the integerized information) and outputs the generated bitmap image to the drawing processing unit112. The drawing processing unit112determines a display range on the frame buffer210at every time Δt, according to the function f(t) set (corrected) by the setting processing unit110. Further, the drawing processing unit112expands or reduces the bitmap image outputted from the rasterization processing unit111(one of the image drawn to the size of the initial display range and the image drawn to the size of the target display range) to the size of the display range at the time t, and draws the expanded or reduced bitmap image at the position on the frame buffer210in the display range at the time t. Here, although the size sometimes becomes a non-integer including the decimal point while the time t is within the range of 0<t<T, scaling to such a non-integer size is performed during animation drawing (while the size is changing with time).
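A rough sketch of the integerization of the target display range and the correction of f(t) by P′1/P1 described above is given below. The ease-out shape of f(t) is an assumption for illustration, and rounding up (ceiling) is used to match the FIG.3(c) example; truncation or rounding could be used instead.

```python
# Sketch: integerize the scaled target display range and correct f(t) by P'1 / P1
# so that the animation ends exactly at the integerized target value.
import math

# Initial range (top-left (8, 8), 24 x 16 px) scaled to 60% -> non-integer target range.
p0_x, target_x = 8.0, 8.0 * 0.6                  # 4.8
target_w, target_h = 24 * 0.6, 16 * 0.6          # 14.4, 9.6
int_x = math.ceil(target_x)                      # 5   (rounding up, as in FIG. 3(c))
int_w, int_h = math.ceil(target_w), math.ceil(target_h)   # 15, 10

T = 1.0
def f(t):                                        # uncorrected curve: P0 at t=0, P1 at t=T
    s = 1.0 - (1.0 - t / T) ** 2                 # assumed ease-out change of position
    return p0_x + (target_x - p0_x) * s

def f_corrected(t):                              # each sampled value multiplied by P'1 / P1
    return f(t) * (int_x / target_x)

print(int_x, int_w, int_h)                       # 5 15 10
print(round(f_corrected(T), 3))                  # 5.0 -> ends exactly at the integer position
```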
Further, the drawing processing unit112selects, as a bitmap image to be utilized for scaling, one of the bitmap images outputted from the rasterization processing unit111, i.e., a bitmap image drawn in the initial display range (initial bitmap image) or a bitmap image drawn in the target display range (final bitmap image). This selection may be switched such that, within the period of time 0≤t<tc (tc≤T), the initial bitmap image is selected, and within the period of time tc≤t≤T after time tc, the final bitmap image is selected. In the present embodiment, the drawing processing unit112determines the timing tc for the switching in the following manner. In particular, the drawing processing unit112calculates the difference in value of the function f(t) after every unit time Δt (f(t)−f(t−Δt), where t≥Δt>0). In short, the drawing processing unit112differentiates f(t). Then, the drawing processing unit112obtains tm at which the difference is at the maximum (or exceeds a predetermined threshold value) (the point of time at which the amount of change of f(t) per unit time indicates the highest value). Here, if the difference is the same at every t, the drawing processing unit112sets tc=T. On the other hand, when obtaining time tm at which the difference is at the maximum (or exceeds the predetermined threshold value) by the method described above, the drawing processing unit112sets tc=tm. It is to be noted that, in a case where a plurality of values of tm are obtained, the drawing processing unit112sets tc=tm with use of time tm selected by a given method from among the plurality of values (for example, the earliest time, the latest time, or a time at random). According to this example of the present embodiment, when the change in position or size of an image is comparatively fast, replacement of a bitmap image is performed such that the user is less likely to have a sense of discomfort caused by the replacement. [Operation] The embodiment of the present invention basically has such a configuration as described above and operates in the following manner. The image processing apparatus1of the present embodiment outputs, as a process of the application, the operating system, and so forth, a designation of vector data (image data) to be made a target of rasterization and a designation of a display range of the vector data and additionally designates the function f(t) representative of a time change of the display range from the initial display range to the target display range. The function f(t) is designated for each of an X coordinate value and a Y coordinate value of a position, and a value of a width and a value of a height of a size. In the following description, it is assumed that this function f(t) is specified such that, for any of the values, it changes comparatively rapidly while the time t is near “0” and the amount of change decreases as the time t approaches T (animation ending point of time), as exemplified inFIG.4. As the process of drawing based on vector data, the image processing apparatus1sets, as a display range, a predetermined region defined by a position and a size represented by integer values on the frame buffer210, according to a designation inputted from the application, the operating system, or the like. In particular, the following example is indicated.
Assuming that the initial display range is, for example, a rectangular region with position coordinates of (8, 8) in the upper left corner, a height of 16 pixels (px), and a width of 24 px, in a case where the rectangular region is reduced by 60%, the position coordinates of the upper left corner become (5, 5) and the size becomes 10 px high and 15 px wide. At this time, the image processing apparatus1also corrects each of the functions f(t) such that, using the integerized value P′1=5, each point of the amount of change at every Δt of each function f(t) is multiplied by 5/(4.8) times. Further, the image processing apparatus1rasterizes vector data (image data) designated from the application, the operating system, or the like with use of the information of the display range set with the integer values to generate a bitmap image. In particular, before the time t=0, the image processing apparatus1performs rasterization of designated vector data of the initial display range (width and height) set at the time to generate a bitmap image of the size of the initial display range. Further, if a target display range different from this initial display range is set, then the image processing apparatus1performs rasterization of designated vector data in a region of the size (width and height) of the set target display range to generate a bitmap image of the size of the target display range. The image processing apparatus1determines a display range on the frame buffer210at every time Δt, according to the function f(t) set (corrected) as above. The image processing apparatus1selects, as an image to be drawn at the point of time of each time Δt, one of the bitmap image of the size of the initial display range and the bitmap image of the size of the target display range. In the example here of the present embodiment, the image processing apparatus1uses the bitmap image of the size of the initial display range till a point of time at which the amount of change of f(t) per unit time becomes greatest. After the point of time, the image processing apparatus1performs drawing using the bitmap image of the size of the target display range. As described already, since the amount of change of f(t) per unit time becomes greater as t approaches 0, the image processing apparatus1performs drawing using the bitmap image of the size of the target display range after a point of time immediately after the drawing of animation is started (point of time of t=Δt). In particular, in the example here, the image processing apparatus1expands or reduces, sequentially at every Δt, the bitmap image of the size of the target display range to the size of the display range at the time t and draws the expanded or reduced bitmap image at a position on the frame buffer210in the display range at the time t. Here, although, while the time t is within the range of 0<t<T, the size sometimes becomes a non-integer including the decimal point, during animation drawing (while the size is changing with time), the image processing apparatus1performs scaling to a size of a non-integer. Then, when t=T is reached, the image processing apparatus1draws a bitmap image rasterized in the region of the size of the target display range (integerized size) at the position of the target display range (integerized position) in the size of the target display range (integerized size). Consequently, in the screen image after the animation drawing ends (after reduction by 60%), a bitmap image with comparatively little deterioration is drawn. 
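The choice of the switching time tc between the initial and target bitmap images, made at the point of time at which the per-Δt change of f(t) is largest (and set to T when the change is the same at every step), can be sketched as follows. The ease-out curve used for f is an assumption; with such a curve the sketch returns tc=Δt, matching the behavior described above.

```python
# Sketch: determine the bitmap switching time tc from the per-Δt differences of f(t).
T, N = 1.0, 10
dt = T / N

def f(t):
    s = 1.0 - (1.0 - t / T) ** 2                 # assumed ease-out curve from 8 to 4.8
    return 8.0 + (4.8 - 8.0) * s

def switching_time(f, T, dt):
    steps = range(1, int(T / dt) + 1)
    diffs = [abs(f(i * dt) - f((i - 1) * dt)) for i in steps]   # |f(t) - f(t - Δt)|
    if max(diffs) == min(diffs):                 # change is equal at every step -> switch at T
        return T
    return (diffs.index(max(diffs)) + 1) * dt    # earliest maximum chosen here

tc = switching_time(f, T, dt)
print(tc)   # 0.1 = Δt: with an ease-out curve the target bitmap is used almost immediately
```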
Further, since the function f(t) that defines the change up to the size of the target display range is corrected on the basis of the position or the size of the target display range, the change during animation drawing is also displayed without giving an uncomfortable feeling. [Example in which Error in Integerization is Corrected] The following is to be noted when information that defines a position or a size is integerized in such a manner as described above. If, in a case where a plurality of bitmap images are acquired by the rasterization process, the integerization performed for the individual bitmap images produces errors (differences from the values before the integerization) that differ from each other, then, when the bitmap images are arrayed, the array may give an uncomfortable feeling. For example, when the same reduction by 60% is to be performed, in a case where there are a bitmap image A of which the X coordinate of the original position is 8 px (the X coordinate after reduction is, in a non-integer state, 4.8 px and is, when integerized by rounding, 5 px) and a bitmap image B of which the X coordinate of the original position is 24 px (the X coordinate after reduction is, in a non-integer state, 14.4 px and is, when integerized by rounding, 14 px), an error by integerization of the bitmap image A (the value when the value before integerization is subtracted from the value after the integerization) is +0.2 px, and an error by integerization of the bitmap image B is −0.4 px. Accordingly, if the bitmap image A and the bitmap image B are arrayed, then the distance between the images looks decreased by 0.6 px, which sometimes causes an uncomfortable feeling. Therefore, in an example of the present embodiment, when the setting processing unit110integerizes information of a position (for example, coordinate values of the upper left corner) and a size (width and height) that define a target display range of a rasterized bitmap image and sets the integerized information, the setting processing unit110outputs, together with the information of the position and the size of integerized values, information of the difference (error) between the value before the integerization and the value after the integerization. A particular example is described in connection with the example ofFIG.3. In a case where a rectangular region with position coordinates of (8, 8) in the upper left corner, a height of 16 pixels (px), and a width of 24 px is reduced by 60% to form a target display range, the position coordinates of the upper left corner before the integerization are (4.8, 4.8). On the other hand, in a case where integerization is performed to set the position coordinates of the upper left corner to (5, 5) and set the size to 10 px high and 14 px wide, the setting processing unit110subtracts the values before the integerization from the values after the integerization to obtain (+0.2, +0.2), which is the error in the value of the position coordinates, and outputs the information of the error in position to the rasterization processing unit111and the drawing processing unit112together with the values after the integerization (information that defines the target display range). Further, the rasterization processing unit111of this example uses information of the target display range set by the setting processing unit110to rasterize vector data (image data) designated from the application, the operating system, or the like to generate a bitmap image.
Since this operation of the rasterization processing unit111is the same as that described hereinabove, overlapping description of this is omitted here. In particular, the rasterization processing unit111of this example performs rasterization of the designated vector data in a region of the set target display range size (width and height) to generate a bitmap image and outputs the generated bitmap image to the drawing processing unit112. The drawing processing unit112determines a display range for the bitmap image on the frame buffer210at every time Δt, according to the function f(t) set by integerization of the display range by the setting processing unit110and the information of the error. Also in the example here of the present embodiment, the drawing processing unit112expands or reduces the bitmap image outputted from the rasterization processing unit111(one of the initial bitmap image and the target bitmap image is selected) to the size of the display range at the time t and draws the expanded or reduced bitmap image at the position of the display range on the frame buffer210at the time t. This selection may be performed similarly to that described hereinabove. One of the characteristics of this example of the present embodiment is that, after the drawing processing unit112provisionally determines a drawing position for a rasterized image on the basis of the function f(t) set by integerization of the display range, the provisionally determined position is corrected on the basis of the information of the error and the rasterized image is drawn at the position after the correction. That is, the drawing processing unit112determines the position after the correction with use of the function f(t) and information d0 of the error related to the corresponding initial display range (information of the error of the display range at the time t=0) or information dT of the error related to the target display range. In particular, while the initial bitmap image is selected and drawn on the frame buffer210, the drawing processing unit112determines the coordinates p(x, y) of the upper left corner as p(x, y)=(fx(t), fy(t))−(dx0, dy0). Here, fx(t) represents the value of the function f(t) related to the position in the x-axis direction of the upper left corner of the display range, and fy(t) represents the value of the function f(t) related to the position in the y-axis direction of the upper left corner of the display range. Meanwhile, dx0 is information of the error in the x-axis direction related to the initial display range, and dy0 is information of the error in the y-axis direction related to the initial display range. That is, (dx0, dy0)=(int[fx(0), fy(0)]−(fx(0), fy(0))). Here, int[X, Y] signifies converting each of X and Y into an integer by a predetermined method (truncation, rounding, rounding up, or the like). Further, while the target bitmap image is selected and drawn on the frame buffer210, the drawing processing unit112determines the coordinates p(x, y) of the upper left corner as p(x, y)=(fx(t), fy(t))−(dxT, dyT). Here, fx(t) represents a value of the function f(t) related to the position in the x-axis direction of the upper left corner of the display range, and fy(t) represents a value of the function f(t) related to the position in the y-axis direction of the upper left corner of the display range. Further, dxT is information of the error in the x-axis direction related to the target display range, and dyT is information of the error in the y-axis direction related to the target display range.
That is, (dxT, dyT)=int[fx(T), fy(T)]−(fx(T), fy(T)). In the example ofFIG.3, the information of the error related to the initial display range is (dx0, dy0)=(0, 0), and the information of the error related to the target display range is (dxT, dyT)=(0.2, 0.2). According to this example, at the point of time at which animation drawing comes to an end, drawing is performed at a position represented by non-integer values with the error of integerization taken into consideration (as a method for drawing a given bitmap image in a region on a frame buffer represented by non-integer values, a method widely known as a subpixel drawing method can be adopted). In this example, the image processing apparatus1draws a target bitmap image rasterized in a region of the size of the target display range (integerized size) in the size of the target display range (integerized size) at a position corrected by the amount of the error from the target display range (non-integer position). In this example, in a case where a plurality of rasterized images are deployed, the distance between them can be maintained. Further, since a bitmap image rasterized in the integer region is used, as a screen image after animation drawing ends (after reduction by 60%), a bitmap image having comparatively little deterioration can be displayed. [Further Example of Drawing Compatible with Size Change] Further, in the examples so far of the present embodiment, when animation drawing in which the size is changed is performed, the drawing processing unit112selects one of the initial bitmap image and the target bitmap image, scales it, and draws the scaled bitmap image in a frame buffer. However, the present embodiment is not limited to this. In particular, in an example of the present embodiment, the drawing processing unit112may combine, during animation drawing in which at least the size of the bitmap image is changed (where animation drawing from t=0 to t=T is performed, during the period of 0<t<T), the initial bitmap image and the target bitmap image such that they crossfade with each other to draw them in a display range determined at each point of time. In particular, the drawing processing unit112expands or reduces, at the time t (0<t<T), the initial bitmap image and the target bitmap image to the size of the display range at the time t, and uses pixel values C0(x, y) and CT(x, y) of corresponding pixels in the bitmap images after being expanded or reduced to determine the value C(x, y) of the corresponding pixel of the bitmap image after the combination at the time t as C(x, y)=(C0(x, y)×(T−t)+CT(x, y)×t)/T. (It is to be noted that C is generally a vector quantity representative of a value of a color space including values of RGB and an alpha channel representative of transparency.) In this case, when correction of the position on the basis of the information of the error is to be performed further, for example, during the period of 0≤t<T/2, the coordinates of the upper left corner of the display range may be set to p(x, y)=(fx(t), fy(t))−(dx0, dy0), and during the period of T/2≤t≤T, the coordinates of the upper left corner of the display range may be set to p(x, y)=(fx(t), fy(t))−(dxT, dyT). In this example, since the difference in resolution between the initial bitmap image and the target bitmap image is blunted, animation drawing that can be viewed comparatively smoothly is performed.
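A minimal Python sketch of the corrected drawing position and of the crossfade combination described above follows; it assumes f(t) has already been evaluated from the integerized display range, and the function names are illustrative rather than taken from this description.

    # Correct a provisional upper-left position f(t) by the integerization error d = int(v) - v.
    def corrected_position(f_t, error):
        fx, fy = f_t
        dx, dy = error
        return fx - dx, fy - dy

    # Crossfade of corresponding pixel values of the initial and target bitmap images at time t.
    def crossfade(c0, cT, t, T):
        return (c0 * (T - t) + cT * t) / T

    # At t = T in the FIG. 3 example, f(T) = (5, 5) and the error is (0.2, 0.2),
    # so drawing occurs at the non-integer position (4.8, 4.8).
    corrected_position((5, 5), (0.2, 0.2))   # -> (4.8, 4.8)
    crossfade(100, 200, t=5, T=10)           # -> 150.0, a pixel value midway through the fade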
REFERENCE SIGNS LIST
1: Image processing apparatus
11: Control unit
12: Storage unit
13: Manipulation unit
14: Display unit
16: Height
110: Setting processing unit
111: Rasterization processing unit
112: Drawing processing unit
210: Frame buffer
27,737
11861771
DETAILED DESCRIPTION Referring toFIGS.1and2, a virtual hair extension system30that allows a user to virtually try on a selected hair extension is illustrated. The virtual hair extension system30is operably coupled to the Internet32as shown. The virtual hair extension system30includes an input device50, a memory device52, a digital camera54, a display device56, a computer60, a hair extension database70, and a data analytics database72. The virtual hair extension system30generates a modified version of a user image (e.g., a final modified user image) that shows how the user's hair will look in a realistic way when wearing a selected hair extension (identified by a specific SKU) having an associated volume, length, and color. The virtual hair extension system30helps the user to select a desirable hair extension without the help of customer support or a hair stylist. Further, the system30saves the user time and money since the user does not inadvertently purchase incorrect hair extensions. For purposes of understanding, a few terms will be defined below. The term "module" refers to a computer application or a series of computer commands that accomplish a desired task. The term "image" refers to a digital image. The term "SKU" refers to a stock keeping unit and is a number (typically eight alphanumeric digits) that retailers assign to products to internally keep track of stock levels. If a product has different colors, lengths, and volumes, each product variation has a unique SKU number. As used in the flowcharts herein, the SKU refers to a stock keeping unit for a specific hair extension having a specific color, length, volume, price, and hair extension image. The term "reference image" refers to an image of a model having hair with a selected hair extension thereon having a desired color, length, and volume. The term "RGB color space" refers to the red, green, blue color space. The term "HSV color space" refers to the hue, saturation, value color space. The term "binary hair mask" refers to a mask image generated from a user image wherein if a pixel in the user image is classified as hair, a corresponding pixel in the binary hair mask has a value of 255 (e.g., white color), and if a pixel in the user image is not classified as hair, a corresponding pixel in the binary hair mask has a value of 0 (e.g., black color). The term "Gabor filter" refers to a linear filter used for texture analysis, which means that it analyzes whether there is any specific frequency content in the image in specific directions in a localized region around the point or region of analysis. In the spatial domain, a 2-D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. The term "hair orientation mask" refers to an image generated from a user image and a binary hair mask that indicates the specific directions of hair. The term "generative adversarial neural network module," also known as a generative adversarial network (GAN), refers to a module that utilizes two neural networks that contest with each other in a game (in the form of a zero-sum game, where one agent's gain is another agent's loss). Given a training set, this technique learns to generate new data with the same statistics as the training set. The core idea of a GAN is based on the "indirect" training through the discriminator, another neural network that is able to tell how much an input is "realistic", which itself is also being updated dynamically.
This basically means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator. This enables the model to learn in an unsupervised manner. The term "Alpha blending module" refers to a module that performs a process of combining multiple images with a background to create the appearance of partial or full transparency. It is often useful to render pixels in separate passes or layers and then combine the resulting images into a single, final image called a composite. In order to combine the pixels of the images correctly, it is necessary to keep an associated matte for each element in addition to its color. This matte layer contains the coverage information (the shape of the geometry being drawn), making it possible to distinguish between parts of the image where something was drawn and parts that are empty. The terms "try on" and "tryout" used herein mean a virtual try on of a selected hair extension. Referring toFIG.1, the virtual hair extension system30will now be discussed. As discussed above, the virtual hair extension system30generates a modified version of a user image (e.g., a final modified user image) that shows how the user's hair will look in a realistic way when wearing a selected hair extension (identified by a specific SKU) having an associated volume, length, and color. The virtual hair extension system30helps the user to select a desirable hair extension without the help of customer support or a hair stylist. The input device50is provided to receive user input that is utilized by the computer60for implementing the associated methods described herein. The input device50is operably coupled to the computer60and communicates with the computer60. The memory device52is provided to store data utilized by the computer60for performing the associated operations and methods described herein. The memory device52is operably coupled to the computer60and communicates with the computer60. The digital camera54is provided to generate a user image of a person having hair and to send the user image to the computer60. The digital camera54is operably coupled to the computer60and communicates with the computer60. The display device56is provided to display graphical user interfaces and images in response to display instructions received from the computer60. The display device56is operably coupled to the computer60and communicates with the computer60. Referring toFIGS.1and2, the computer60is provided to implement the associated methods described herein. The computer60is operably coupled to the input device50, the memory device52, the digital camera54, the display device56, and the Internet32. The computer60communicates with the hair extension database70and the data analytics database72via the Internet32. The computer60includes a hair segmentation module90, a hair color matching module92, a hair extension blending module94, and a data analytics dashboard module96. Referring toFIGS.2and11, the hair segmentation module90is provided to receive a user image540and to generate a binary hair mask542utilizing the user image540. The binary hair mask542has the same pixel dimensions as the user image540. If a pixel in the user image540is classified as hair, a corresponding pixel in the binary hair mask has a value of 255 (e.g., white color). Otherwise, if a pixel in the user image540is not classified as hair, a corresponding pixel in the binary hair mask542has a value of 0 (e.g., black color). Referring toFIG.10, the hair segmentation module90utilizes a supervised learning approach with a convolutional neural network module110to generate the binary hair mask542based on the user image540.
The convolutional neural network module110is defined and can be implemented by the convolutional network module architecture112(shown inFIG.10) and a cost function which will be described below. The convolutional network module architecture112includes the following functions: Conv, BatchNorm, ReLU, MaxPool, ConvTranspose, and Sigmoid. Conv is a function that performs convolution, which is a simple type of filtering applied to an input that results in an activation. BatchNorm is a function to standardize the inputs of a layer in each mini-batch of data in deep neural network training. ReLU is a non-linear activation function used in building multi-layer neural networks. This function can be formulated as F(x)=max(0, x), where x is the input. MaxPool is a function to perform a discretization process using a sample-based method in order to down-sample an input representation such as an image or a hidden-layer output matrix. The MaxPool function reduces the dimensionality of the features contained in the binned sub-regions. ConvTranspose refers to transposed convolutions, which are standard convolutions with a modified input feature map. Sigmoid is a mathematical function with S-shaped curve characteristics. Referring toFIGS.10and11, the output of the convolutional neural network architecture112is, for each pixel in the user image540, a probability of that pixel being a hair pixel, and the cost function is chosen to maximize the IOU (intersection over union) score on a user image training dataset that has been labeled manually. The cost function that was used in the training of the convolutional neural network architecture112is a mean binary cross-entropy taken over all the pixels of the binary hair mask542. The cost function is defined by the following equation: L = −(1/N²) Σ(i,j=1 to N) [xij log(pij) + (1−xij) log(1−pij)], where N=256, xij are the values of the ground truth mask, and pij are the probabilities returned by the model. Referring toFIGS.2and13, the hair color matching module92is provided to generate a list of color shades and associated confidence scores indicating a probability that a hair color of a user matches a color shade associated with a plurality of hair extensions. The hair extension images associated with the color shades that have an upper range of confidence scores will be displayed for the user to select from. Referring toFIG.13, a table590indicates sub-groups of colors including black, brown, and blonde. In an exemplary embodiment, the black color includes color shades of off-black and jet-black. The brown color includes color shades of mocha brown, dark brown, and ash brown. The blonde color includes color shades of ash blonde, dirty blonde, and sandy blonde. Referring toFIG.2, the hair color matching module92includes a black shades classifier module250, a brown shades classifier module252, a blonde shades classifier module254, and a comprehensive shades classifier module256. The black shades classifier module250is trained to generate confidence scores associated with black color shades in an image. The brown shades classifier module252is trained to generate confidence scores associated with brown color shades in an image. The blonde shades classifier module254is trained to generate confidence scores associated with blonde color shades in an image. The comprehensive shades classifier module256is trained to generate confidence scores associated with black color shades, brown color shades, and blonde color shades in an image.
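A short NumPy sketch of the mean binary cross-entropy cost given above is shown below; it is an illustrative implementation that assumes the ground truth mask has been rescaled from {0, 255} to {0, 1}, and the helper name is not taken from this description.

    import numpy as np

    def mean_binary_cross_entropy(mask, probs, eps=1e-7):
        """L = -(1/N^2) * sum over all N x N pixels of [x*log(p) + (1 - x)*log(1 - p)]."""
        probs = np.clip(probs, eps, 1.0 - eps)   # guard against log(0)
        return -np.sum(mask * np.log(probs) + (1.0 - mask) * np.log(1.0 - probs)) / mask.size

    # Example with a 256 x 256 ground truth mask and predicted hair probabilities.
    rng = np.random.default_rng(0)
    mask = (rng.random((256, 256)) > 0.5).astype(np.float64)
    probs = rng.random((256, 256))
    loss = mean_binary_cross_entropy(mask, probs)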
Referring toFIGS.2and20, the hair extension blending module94is provided to generate a modified version of a user image540(e.g., a final modified user image820shown inFIG.24) that shows how the user's hair will look in a realistic way when wearing a selected hair extension having a desired volume, length, and color. Referring toFIG.20, the hair extension blending module94applies the user image540and the binary hair mask542to the plurality of Gabor filters280and generates a hair orientation mask700based on the output of the plurality of Gabor filters280. In an exemplary embodiment, the hair extension blending module94applies convolution to the user image540and the binary hair mask542utilizing 32 Gabor filters. For each hair pixel in the user image540and associated pixel in the binary hair mask542, the module94obtains 32 score numbers that are the results of the convolutions. For each hair pixel in the hair orientation mask700, the module94selects the Gabor filter with the highest score and obtains the orientation. Referring toFIG.21, the hair extension blending module94generates the resized binary hair mask742based on the binary hair mask542. The resized binary hair mask742illustrates a hair mask having a length and a volume that match a length and a volume of hair in a reference image770(shown inFIG.22) having a selected hair extension. The length and the volume of hair in a reference image770match a length and a volume associated with a SKU of the selected hair extension. The hair extension blending module94generates the resized hair orientation mask750based on the hair orientation mask700. The resized hair orientation mask750illustrates a hair orientation mask having a length and a volume that match a length and a volume of hair in a reference image770(shown inFIG.22) having a selected hair extension. The length and the volume of hair in a reference image770match a length and a volume associated with a SKU of the selected hair extension. Referring toFIG.22, the generative adversarial neural network module utilizes the user image540, the reference image770, the resized binary hair mask742, and the resized hair orientation mask750to generate the first modified user image781and the second modified user image782. The first modified user image781has hair with a color corresponding to a color of the hair in the user image540and has the desired length and the desired volume. The second modified user image782has hair with a color corresponding to a color of the hair in the reference image770and has the desired length and the desired volume. The color of the hair in the reference image770is identical to the color of hair associated with the SKU of the selected hair extension. Referring toFIG.23, the hair extension blending module94utilizes the resized binary hair mask742and the resized hair orientation mask750to generate the blending mask800. In particular, the module94generates the blending mask800by multiplying each pixel of the resized binary hair mask742by a corresponding pixel of the resized hair orientation mask750. Referring toFIG.24, the hair extension blending module94utilizes an Alpha blending module300to generate the final modified user image820utilizing the first modified user image781, the second modified user image782, and the blending mask800.
In an exemplary embodiment, the hair extension blending module94blends the first modified user image781and the second modified user image782using the following equation: R=I1*(1−M)+I2*M (2), where R is the final modified user image, I1 is the first modified user image, I2 is the second modified user image, and M is the blending mask. Referring toFIGS.1,2and25-27, the data analytics dashboard module96is provided to allow a retailer to view user data associated with the virtual hair extension system30obtained from the data analytics database72(shown inFIG.1). In particular, the data analytics dashboard module96generates and displays the clicks dashboard1200(shown inFIG.25), the tryouts dashboard1310(shown inFIG.26), and the SKUs dashboard1410(shown inFIG.27) on the display device56, which will be described in greater detail below. Referring toFIGS.1-9, a flowchart of a method for generating a final modified user image820having the hair of the user with a selected hair extension thereon utilizing the virtual hair extension system30in accordance with another exemplary embodiment will be explained. At step400, the computer60receives a user image540(shown inFIG.11) of a user having hair from a digital camera54, and stores the user image540in a memory device52. The computer60has a hair segmentation module90(shown inFIG.2), a hair color matching module92, a hair extension blending module94, and a data analytics dashboard module96. After step400, the method advances to step402. At step402, the hair segmentation module90has a convolution neural network module110(shown inFIG.10) that generates the binary hair mask542(shown inFIG.11) utilizing the user image540. After step402, the method advances to step404. At step404, the hair segmentation module90multiplies each pixel of the user image540(shown inFIG.11) by a corresponding pixel of the binary hair mask542to obtain a segmented user hair image544(shown inFIG.11) in an RGB color space. The segmented user hair image544illustrates only the hair of the user image540. After step404, the method advances to step406. At step406, the hair color matching module92generates a segmented user hair image in an HSV color space from the segmented user hair image544in the RGB color space. After step406, the method advances to step408. At step408, the hair color matching module92generates a normalized histogram of hair pixels560(shown inFIG.12) from the segmented user hair image in the HSV color space. In an exemplary embodiment, all of the hair pixels of the segmented user hair image in the HSV color space are put into a normalized histogram of size 32*32*32 containing 32,768 features in total. Every bin of the normalized histogram of hair pixels560contains the number of hair pixels with their values in the corresponding region, divided by the total number of hair pixels in the user image. The system utilizes the normalization step for consistency since every user image has a different size of binary hair mask. After step408, the method advances to step410. At step410, the computer60makes a determination as to whether the normalized histogram of hair pixels560indicates hair pixels having one of a plurality of predefined colors (e.g., pink, blue, green). In particular, a set of predefined special colors such as pink, blue, and green that do not belong to the specified color sub-groups that the system needs to match against will be detected using a binary classification method.
The binary classification method can use supervised machine learning techniques such as Support Vector Machine, Random Forests, Catboost or a neural network for example. The entire normalized histogram of hair pixels560will be used as an input to the binary classification method. If the value of step410equals “yes”, the user image540is deleted and the method returns to step400. Otherwise, the method advances to step412. At step412, the hair color matching module92selects hair pixels in a selected set of most populated bins in the normalized histogram of hair pixels560to obtain a compressed histogram of hair pixels. For purposes of understanding, a complete histogram of hair pixels of the segmented user hair image in the HSV color space has many non-relevant values. In particular, the specified color sub-groups cover only a small part of a complete histogram, so many values of a complete histogram are not relevant. In an exemplary embodiment, the hair color matching module92selects the most filled 5% bins out of the total bins in a histogram of hair pixels so that actual color shades of hair will be more accurately determined. Using this technique, the module92corrects for small errors in the generation of the segmented user hair image. For example, if a part of skin, clothes, or background was inadvertently classified as hair in the segmented user hair image, it will not affect the final color matching list, since these pixels will be ignored. After step412, the method advances to step420. At step420, the computer60makes a determination as to whether the user will manually select a color sub-group (e.g., black, brown, blonde) indicating their hair color. This determination can be based on a stored software flag in the memory device52for example. If the value of step420equals “yes”, the method advances to step422. Otherwise, the method advances to step440. At step422, the hair color matching module92displays a GUI600(shown inFIG.14) on the display device56and receives a user input from an input device50indicating a first color sub-group (e.g., black, brown, blonde) of their hair. After step422, the method advances to step424. At step424, the hair color matching module92selects one of a black shades classifier module250(shown inFIG.2), a brown shades classifier module252, and a blonde shades classifier module254(which is a selected shades classifier module) based on the selected color sub-group. The selected shades classifier module determines a plurality of confidence scores associated with a plurality of color shades. Each confidence score indicates a probability that a color of hair in the segmented user hair image in the HSV color space matches a color shade associated with the selected shades classifier module and further associated with a plurality of hair extensions. After step424, the method advances to step426. At step426, the hair color matching module92sorts a list of the plurality of color shades based on the plurality of confidence scores from a respective color shade thereof having a highest confidence score to a respective color shade thereof having a lowest confidence score. After step426, the method advances to step428. At step428, the computer60stores the list of the plurality of color shades and the associated plurality of confidence scores in a memory device52such that each color shade thereof has a respective confidence score thereof. After step428, the method advances to step440. 
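The histogram handling of steps 408 and 412 above can be sketched as follows in NumPy; the 8-bit HSV channel ranges and the function names are illustrative assumptions, not details taken from this description.

    import numpy as np

    def normalized_hair_histogram(hsv_hair_pixels, bins=32):
        """Build the normalized 32 x 32 x 32 histogram of hair pixels (step 408).
        hsv_hair_pixels is a (K, 3) array of HSV values of pixels classified as hair;
        the (0, 256) ranges assume 8-bit channels."""
        hist, _ = np.histogramdd(hsv_hair_pixels, bins=(bins, bins, bins),
                                 range=((0, 256), (0, 256), (0, 256)))
        return hist / max(len(hsv_hair_pixels), 1)

    def compress_histogram(hist, keep_fraction=0.05):
        """Keep only the most populated 5% of bins (step 412); 5% of 32,768 bins is 1,638 bins."""
        flat = hist.ravel()
        k = max(int(flat.size * keep_fraction), 1)
        threshold = np.partition(flat, -k)[-k]
        return np.where(hist >= threshold, hist, 0.0)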
At step440, the computer60makes a determination as to whether the computer60will automatically determine a color shade of hair in the user image540. This determination can be based on a stored software flag in the memory device52for example. If the value of step440equals "yes", the method advances to step442. Otherwise, the method advances to step460. At step442, the hair color matching module92selects a comprehensive shades classifier module256for all specified color shades of all specified color sub-groups (e.g., black, brown, blonde). The comprehensive shades classifier module256determines a plurality of confidence scores associated with a plurality of color shades. Each confidence score indicates a probability that a color of hair in the segmented user hair image in the HSV color space matches a color shade associated with the comprehensive shades classifier module256and further associated with a plurality of hair extensions. After step442, the method advances to step444. At step444, the hair color matching module92sorts a list of the plurality of color shades based on the plurality of confidence scores from a respective color shade thereof having a highest confidence score to a respective color shade thereof having a lowest confidence score. After step444, the method advances to step446. At step446, the hair color matching module92stores the list of the plurality of color shades and the associated plurality of confidence scores in the memory device52such that each color shade thereof has a respective confidence score thereof. After step446, the method advances to step448. At step448, the computer60displays hair extension images (e.g., hair extension images shown inFIG.19) associated with a subset of the plurality of hair extensions on the display device56. Each hair extension image corresponds to a respective hair extension having a respective SKU. The subset of the plurality of hair extensions has color shades with confidence scores within an upper range of the plurality of confidence scores. After step448, the method advances to step460. At step460, the computer60receives a user selection of a selected hair extension corresponding to one of the hair extension images from the input device50and determines an associated SKU. The SKU indicates the selected hair extension from the subset of the plurality of hair extensions. After step460, the method advances to step462. At step462, the computer60retrieves data associated with the SKU from a hair extension database70(shown inFIG.1) utilizing the Internet32. The data includes a color, a length, a volume, a price, and a selected hair extension image of the selected hair extension. After step462, the method advances to step464. At step464, the hair extension blending module94applies convolution to the binary hair mask542and the user image540utilizing a plurality of Gabor filters280(shown inFIG.20) which output a plurality of score numbers for each pixel. The hair extension blending module94selects a highest score of the plurality of score numbers for each pixel to generate the hair orientation mask700(shown inFIG.20). The hair orientation mask700indicates the orientation of the hair in the user image540. After step464, the method advances to step466. At step466, the computer60makes a determination as to whether the user image540is a frontal image of the user. In an exemplary embodiment, the computer60applies face detection using open source face detectors such as OpenCV or Dlib to verify if the user image is a frontal image.
Further, the computer60determines a plurality of 2D landmarks622(e.g., shown inFIG.15) using an open source facial landmark detector such as Dlib or OpenCV. If the value of step466equals "yes", the method advances to step468. Otherwise, the method advances to step470. At step468, the hair extension blending module94determines a scale of the user image540by utilizing 2D landmarks on the user image540to determine a number of pixels between both eyes of the user in the user image540, and then dividing the number of pixels between both eyes by a predetermined eye separation distance. For example, referring toFIG.15, the user image620comprises a frontal image and includes a plurality of 2D landmarks622on the face of the user. The plurality of 2D landmarks622includes landmarks624,626indicating a location of a center point on a first eye and a center point on a second eye of the user, respectively. In this example, the module94determines a number of pixels between the landmarks624,626and then divides the number of pixels by a predetermined eye separation distance to determine the scale of the user image620. In an exemplary embodiment, the predetermined eye separation distance corresponds to an average distance that humans have between two eyes (in inches). After step468, the method advances to step470. At step470, the computer60makes a determination as to whether the user image540is a rear image of the user. In an exemplary embodiment, the computer60applies face detection using open source face detectors such as OpenCV or Dlib to verify if the user image is a rear image. If the value of step470equals "yes", the method advances to step472. Otherwise, the method advances to step474. At step472, the hair extension blending module94determines a scale of the user image540by obtaining an upper quarter region640(shown inFIG.17) of the binary hair mask542(shown inFIG.17), determining a number of pixels across the upper quarter region640of the binary hair mask542, and then dividing the number of pixels across the upper quarter region640of the binary hair mask542by a predetermined head width. In an exemplary embodiment, the predetermined head width corresponds to an average head width distance that humans have in inches. After step472, the method advances to step474. At step474, the hair extension blending module94determines an estimated hair length of the hair of the user by generating a bounding box around the binary hair mask542(shown inFIG.17), determining a pixel height of the bounding box to obtain a pixel hair length, and then dividing the pixel hair length by a scale of the user image540. After step474, the method advances to step476. At step476, the hair extension blending module94generates a resized binary hair mask742utilizing the binary hair mask542, the estimated hair length, and a desired length and a desired volume associated with the SKU of the selected hair extension. The resized binary hair mask742has the desired length and the desired volume. In particular, the hair extension blending module94utilizes the OpenCV resize function to generate the resized binary hair mask742utilizing the binary hair mask542, the estimated hair length, and a desired length associated with the SKU of the selected hair extension. After step476, the method advances to step478.
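The scale and hair-length arithmetic of steps 468 through 474 above can be sketched as follows; the average eye separation value is an illustrative assumption (an average human interpupillary distance in inches), and the function names are not taken from this description.

    import numpy as np

    AVERAGE_EYE_SEPARATION_INCHES = 2.5   # assumed average distance between the eyes, in inches

    def image_scale_from_eyes(left_eye_px, right_eye_px,
                              eye_separation_in=AVERAGE_EYE_SEPARATION_INCHES):
        """Pixels-per-inch scale of a frontal user image from the two eye-center landmarks (step 468)."""
        pixels_between_eyes = np.linalg.norm(np.asarray(right_eye_px) - np.asarray(left_eye_px))
        return pixels_between_eyes / eye_separation_in

    def estimated_hair_length(binary_hair_mask, scale):
        """Hair length in inches: bounding-box height of the hair mask divided by the scale (step 474)."""
        rows = np.flatnonzero(binary_hair_mask.any(axis=1))   # rows that contain hair pixels
        pixel_hair_length = rows.max() - rows.min() + 1       # bounding-box height in pixels
        return pixel_hair_length / scale

    # Example: eyes 125 px apart give a scale of 50 px per inch; a 600 px tall hair mask is then 12 inches.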
At step478, the hair extension blending module94generates a resized hair orientation mask750(shown inFIG.21) utilizing the hair orientation mask700, the estimated hair length, and a desired length and a desired volume associated with the SKU of the selected hair extension. The resized hair orientation mask750has the desired length and the desired volume associated with the SKU of the selected hair extension. In particular, the hair extension blending module94utilizes the OpenCV resize function to generate the resized hair orientation mask750utilizing the hair orientation mask700, the estimated hair length, and a desired length and a desired volume associated with the SKU of the selected hair extension. After step478, the method advances to step480. At step480, the hair extension blending module94has a generative adversarial neural network module290that generates a first modified user image781(shown inFIG.22) and a second modified user image782utilizing the user image540, the reference image770, the resized binary hair mask742, and the resized hair orientation mask750. The first modified user image781has hair with a color corresponding to a color of the hair in the user image540, and has the desired length and the desired volume. The second modified user image782has hair with a color corresponding to a color of the hair in the reference image770, and has the desired length and the desired volume. The color of hair in the reference image770is identical to a color of hair associated with the SKU of the selected hair extension. After step480, the method advances to step490. At step490, the hair extension blending module94generates a blending mask800(shown inFIG.23) by multiplying each pixel of the resized binary hair mask742(shown inFIG.23) by a corresponding pixel of the resized hair orientation mask750. After step490, the method advances to step492. At step492, the hair extension blending module94has an Alpha blending module300(shown inFIG.24) that blends the first modified user image781, the second modified user image782, and the blending mask800to obtain a final modified user image820(shown inFIG.24) having the hair of the user with the selected hair extension thereon. In an exemplary embodiment, the hair extension blending module94blends the first modified user image781and the second modified user image782using the following equation: R=I1*(1−M)+I2*M (2), where R is the final modified user image, I1 is the first modified user image, I2 is the second modified user image, and M is the blending mask. After step492, the method advances to step494. At step494, the computer60displays the final modified user image820(shown inFIG.24) on the display device56. After step494, the method advances to step496. At step496, the computer60stores the final modified user image820in the memory device52. After step496, the method advances to step498. At step498, the data analytics dashboard module96receives a user dashboard selection from an input device50. After step498, the method advances to step500. At step500, the computer60retrieves data for at least one of a clicks dashboard1200(shown inFIG.25), a tryouts dashboard1310(shown inFIG.26), and a SKUs dashboard1410(shown inFIG.27) associated with the plurality of hair extensions from a data analytics database72, based on the user dashboard selection, utilizing the Internet32. After step500, the method advances to step502.
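The blending of steps 490 and 492 above can be sketched as follows in NumPy; the sketch assumes the masks have been scaled to the range [0, 1] and that the images share the same pixel dimensions, and the function names are illustrative.

    import numpy as np

    def blending_mask(resized_binary_mask, resized_orientation_mask):
        """Step 490: per-pixel product of the resized binary hair mask and the resized hair orientation mask."""
        return resized_binary_mask * resized_orientation_mask

    def alpha_blend(first_modified, second_modified, mask):
        """Step 492: R = I1*(1 - M) + I2*M, with a single-channel mask broadcast over the color channels."""
        if first_modified.ndim == 3 and mask.ndim == 2:
            mask = mask[..., np.newaxis]
        return first_modified * (1.0 - mask) + second_modified * mask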
At step502, the computer60displays at least one of the clicks dashboard1200, the tryouts dashboard1310, and the SKUs dashboard1410and the retrieved data on the display device56. Referring toFIG.25, the clicks dashboard1200is provided to display clicks information associated with users who have clicked on selected hair extensions. The clicks dashboard1200includes a date selection box1210, a line graph1212, a view total clicks selection icon1214, SKU name selection icons1220,1222,1224,1226,1228, a first category icon1240, a second category icon1242, and a third category icon1244. During operation, the date selection box1210allows a retailer to select dates for retrieving clicks information associated with clicks that occurred within the selected dates. The line graph1212illustrates clicks per day for selected SKUs or categories for the selected dates. When the view total clicks selection icon1214is selected, the total number of clicks for a SKU or a category is displayed on the line graph1212for the selected dates. Further, when the SKU name selection icon1220is selected, the number of clicks associated with a first product is displayed on the line graph1212when the icon1214is subsequently selected. Further, when the SKU name selection icon1222is selected, the number of clicks associated with a second product is displayed on the line graph1212when the icon1214is subsequently selected. Also, when the SKU name selection icon1224is selected, the number of clicks associated with a third product is displayed on the line graph1212when the icon1214is subsequently selected. Further, when the SKU name selection icon1226is selected, the number of clicks associated with a fourth product is displayed on the line graph1212when the icon1214is subsequently selected. Also, when the SKU name selection icon1228is selected, the number of clicks associated with a fifth product is displayed on the line graph1212when the icon1214is subsequently selected. Further, when the first category icon1240is selected, the number of clicks associated with a first category of products is displayed on the line graph1212when the icon1214is subsequently selected. Further, when the second category icon1242is selected, the number of clicks associated with a second category of products is displayed on the line graph1212when the icon1214is subsequently selected. Still further, when the third category icon1244is selected, the number of clicks associated with a third category of products is displayed on the line graph1212when the icon1214is subsequently selected. Referring toFIG.26, the tryouts dashboard1310is provided to display graphs associated with selected hair extension products in which the users have virtually tried out the selected products. A user has a virtual tryout of the selected product when a final modified user image is displayed for the user. The tryouts dashboard1310includes a custom dates selection box1312and a stacked area chart1314. During operation, the custom dates selection box1312allows a retailer to select dates for retrieving virtual tryout data associated with virtual tryouts of selected products within the selected dates. The stacked area chart1314includes graphs indicating the number of virtual tryouts for selected products. For example, the stacked area chart1314includes a graph1320indicating the number of virtual tryouts for a first product over a time frame. Further, the stacked area chart1314includes a graph1322indicating the number of virtual tryouts for a second product over the time frame.
Further, the stacked area chart1314includes a graph1324indicating the number of virtual tryouts for a third product over the time frame. Referring toFIG.27, the SKUs dashboard1410is provided to display data associated with selected hair extension products identified by the SKU names. The SKUs dashboard1410includes a custom dates selection box1412, SKU records1420,1422,1424,1426,1428,1430, and an export file selection icon1500. During operation, the custom dates selection box1412allows a retailer to select dates for retrieving data associated with selected products identified by the SKU names within the selected dates. The export file selection icon1500allows a retailer to save the retrieved data to an electronic file. The SKU record1420is associated with a first product and includes the following information: (i) SKU name associated with the first product, (ii) number of clicks to view the first product by users, (iii) number of virtual tryouts of the first product by users, and (iv) the percentage conversion rate of users purchasing the first product. The SKU record1422is associated with a second product and includes the following information: (i) SKU name associated with the second product, (ii) number of clicks to view the second product by users, (iii) number of virtual tryouts of the second product by users, and (iv) the percentage conversion rate of users purchasing the second product. The SKU record1424is associated with a third product and includes the following information: (i) SKU name associated with the third product, (ii) number of clicks to view the third product by users, (iii) number of virtual tryouts of the third product by users, and (iv) the percentage conversion rate of users purchasing the third product. The SKU record1426is associated with a fourth product and includes the following information: (i) SKU name associated with the fourth product, (ii) number of clicks to view the fourth product by users, (iii) number of virtual tryouts of the fourth product by users, and (iv) the percentage conversion rate of users purchasing the fourth product. The SKU record1428is associated with a fifth product and includes the following information: (i) SKU name associated with the fifth product, (ii) number of clicks to view the fifth product by users, (iii) number of virtual tryouts of the fifth product by users, and (iv) the percentage conversion rate of users purchasing the fifth product. The SKU record1430is associated with a sixth product and includes the following information: (i) SKU name associated with the sixth product, (ii) number of clicks to view the sixth product by users, (iii) number of virtual tryouts of the sixth product by users, and (iv) the percentage conversion rate of users purchasing the sixth product. Referring toFIGS.28and29, a virtual hair extension system1630that allows a user to virtually try on a selected hair extension is illustrated. The virtual hair extension system1630is operably coupled to the Internet1632as shown. The virtual hair extension system1630includes a user device1634having an input device1650, a memory device1652, a digital camera1654, a display device1656, a user computer1660, a hair extension database1670, and a data analytics database1672. The hair extension database1670is identical to the hair extension database70discussed above. Further, the data analytics database1672is identical to the data analytics database72discussed above.
The input device1650is provided to receive user input for implementing the associated methods described herein. The input device1650is operably coupled to the user computer1660and communicates with the user computer1660. The memory device1652is provided to store data utilized by the user computer1660. The memory device1652is operably coupled to the user computer1660and communicates with the user computer1660. The digital camera1654is provided to generate a user image of a person having hair and to send the user image to the user computer1660. The digital camera1654is operably coupled to the user computer1660and communicates with the user computer1660. The display device1656is provided to display graphical user interfaces and images in response to display instructions received from the user computer1660. The display device1656is operably coupled to the user computer1660and communicates with the user computer1660. The user computer1660operably communicates with the cloud-based computer1665utilizing the Internet1632. The cloud-based computer1665is provided to implement at least a portion of the associated methods described herein. The cloud-based computer1665is operably coupled to the Internet1632, the hair extension database1670, the data analytics database1672, and the memory device1667. The user computer1660communicates with the hair extension database1670and the data analytics database1672. The memory device1667is provided to store data utilized by the cloud-based computer1665. The cloud-based computer1665includes the hair segmentation module90, the hair color matching module92, the hair extension blending module94, and the data analytics dashboard module96that were previously discussed above. Referring toFIGS.30-36, a flowchart of a method for generating a final modified user image820having the hair of the user with a selected hair extension thereon utilizing the virtual hair extension system1630in accordance with another exemplary embodiment will be explained. At step1700, the user computer1660receives a user image540(shown inFIG.11) of a user having hair from a digital camera1654, and sends the user image540to a cloud-based computer1665utilizing the Internet1632. After step1700, the method advances to step1702. At step1702, the cloud-based computer1665stores the user image540in a memory device1667. The cloud-based computer1665has a hair segmentation module90(shown inFIG.29), a hair color matching module92, a hair extension blending module94, and a data analytics dashboard module96. After step1702, the method advances to step1704. At step1704, the hair segmentation module90has a convolution neural network module110(shown inFIG.10) that generates the binary hair mask542(shown inFIG.11) utilizing the user image540. After step1704, the method advances to step1706. At step1706, the hair segmentation module90multiplies each pixel of the user image540(shown inFIG.11) by a corresponding pixel of the binary hair mask542to obtain a segmented user hair image544(shown inFIG.11) in an RGB color space. The segmented user hair image544illustrates only the hair of the user image540. After step1706, the method advances to step1708. At step1708, the hair color matching module92generates a segmented user hair image in an HSV color space from the segmented user hair image544in the RGB color space. After step1708, the method advances to step1710. At step1710, the hair color matching module92generates a normalized histogram of hair pixels560(shown inFIG.12) from the segmented user hair image in the HSV color space.
After step1710, the method advances to step1712. At step1712, the cloud-based computer1665makes a determination as to whether the normalized histogram of hair pixels560indicates hair pixels having one of a plurality of predefined colors (e.g., pink, blue, green). If the value of step1712equals "yes", the method deletes the user image540and returns to step1700. Otherwise, the method advances to step1720. At step1720, the hair color matching module92selects hair pixels in a selected set of most populated bins in the normalized histogram of hair pixels560to obtain a compressed histogram of hair pixels. After step1720, the method advances to step1722. At step1722, the cloud-based computer1665makes a determination as to whether the user will manually select a color sub-group indicating their hair color. This determination can be based on a stored software flag in the memory device1667for example. If the value of step1722equals "yes", the method advances to step1724. Otherwise, the method advances to step1744. At step1724, the cloud-based computer1665sends a message to the user computer1660requesting that the user select a color sub-group indicating their hair color, utilizing the Internet1632. After step1724, the method advances to step1726. At step1726, the user computer1660displays a GUI600(shown inFIG.14) requesting that the user select a color sub-group (e.g., black, brown, or blonde) indicating their hair color on a display device1656. After step1726, the method advances to step1728. At step1728, the user computer1660receives a user input from an input device1650indicating a selected color sub-group (e.g., black, brown, or blonde) and sends the selected color sub-group to the cloud-based computer1665utilizing the Internet1632. After step1728, the method advances to step1730. At step1730, the hair color matching module92selects one of a black shades classifier module250(shown inFIG.29), a brown shades classifier module252, and a blonde shades classifier module254(which is a selected shades classifier module) based on the selected color sub-group. The selected shades classifier module determines a plurality of confidence scores associated with a plurality of color shades. Each confidence score indicates a probability that a color of hair in the segmented user hair image in the HSV color space matches a color shade associated with the selected shades classifier module and further associated with a plurality of hair extensions. After step1730, the method advances to step1740. At step1740, the hair color matching module92sorts a list of the plurality of color shades based on the plurality of confidence scores from a respective color shade thereof having a highest confidence score to a respective color shade thereof having a lowest confidence score. After step1740, the method advances to step1742. At step1742, the cloud-based computer1665stores the list of the plurality of color shades and the associated plurality of confidence scores in the memory device1667such that each color shade thereof has a respective confidence score thereof. After step1742, the method advances to step1744. At step1744, the cloud-based computer1665makes a determination as to whether the cloud-based computer1665will automatically determine a color shade of hair in the user image540. This determination can be based on a stored software flag in the memory device1667for example. If the value of step1744equals "yes", the method advances to step1746. Otherwise, the method advances to step1760.
At step1746, the hair color matching module92selects a comprehensive shades classifier module256for all specified color shades of all specified color sub-groups (e.g., black, brown, blonde). The comprehensive shades classifier module256determines a plurality of confidence scores associated with a plurality of color shades. Each confidence score indicates a probability that a color of hair in the segmented user hair image in the HSV color space matches a color shade associated with the comprehensive shades classifier module256and further associated with a plurality of hair extensions. After step1746, the method advances to step1748. At step1748, the hair color matching module92sorts a list of the plurality of color shades based on the plurality of confidence scores from a respective color shade thereof having a highest confidence score to a respective color shade thereof having a lowest confidence score. After step1748, the method advances to step1750. At step1750, the cloud-based computer1665stores the list of the plurality of color shades and the associated plurality of confidence scores in the memory device1667such that each color shade thereof has a respective confidence score thereof. After step1750, the method advances to step1760. At step1760, the cloud-based computer1665sends hair extension images and SKUs associated with a subset of the plurality of hair extensions to the user computer1660utilizing the Internet1632. Each hair extension image corresponds to a respective hair extension having a respective SKU. The subset of the plurality of hair extensions has color shades with confidence scores within an upper range of the plurality of confidence scores. After step1760, the method advances to step1762. At step1762, the user computer1660displays a GUI660(shown inFIG.19) having the hair extension images thereon and requests that the user select one of the hair extensions on the display device1656. After step1762, the method advances to step1764. At step1764, the user computer1660receives a user input from an input device1650indicating a selected hair extension and SKU. After step1764, the method advances to step1766. At step1766, the user computer1660sends the SKU of the selected hair extension to the cloud-based computer1665. After step1766, the method advances to step1768. At step1768, the cloud-based computer1665retrieves data associated with the SKU from a hair extension database1670(shown inFIG.28). The data includes a color, a length, a volume, a price, and a selected hair extension image of the selected hair extension. After step1768, the method advances to step1770. At step1770, the hair extension blending module94applies convolution to the binary hair mask542and the user image540utilizing a plurality of Gabor filters280which output a plurality of score numbers for each pixel, and selects a highest score of the plurality of score numbers for each pixel to generate the hair orientation mask700. The hair orientation mask700indicates the orientation of the hair in the user image540. After step1770, the method advances to step1780. At step1780, the cloud-based computer1665makes a determination as to whether the user image540is a frontal image of the user. If the value of step1780equals "yes", the method advances to step1782. Otherwise, the method advances to step1784.
At step1782, the hair extension blending module94determines a scale of the user image540by utilizing 2D landmarks on the user image540to determine a number of pixels between both eyes of the user in the user image540, and then dividing the number of pixels between both eyes by a predetermined eye separation distance. After step1782, the method advances to step1784. At step1784, the cloud-based computer1665makes a determination as to whether the user image540is a rear image of the user. If the value of step1784equals "yes", the method advances to step1786. Otherwise, the method advances to step1788. At step1786, the hair extension blending module94determines a scale of the user image540by obtaining an upper quarter region640(shown inFIG.18) of the binary hair mask542and determining a number of pixels across the upper quarter region640of the binary hair mask542, and then dividing the number of pixels across the upper quarter region640of the binary hair mask542by a predetermined head width. After step1786, the method advances to step1788. At step1788, the hair extension blending module94determines an estimated hair length of the hair of the user by generating a bounding box around the binary hair mask542, determining a pixel height of the bounding box to obtain a pixel hair length, and then dividing the pixel hair length by a scale of the user image540. After step1788, the method advances to step1790. At step1790, the hair extension blending module94generates a resized binary hair mask742(shown inFIG.21) utilizing the binary hair mask542(shown inFIG.21), the estimated hair length, and a desired length and a desired volume associated with the SKU of the selected hair extension. The resized binary hair mask742has the desired length and the desired volume. After step1790, the method advances to step1800. At step1800, the hair extension blending module94generates a resized hair orientation mask750(shown inFIG.21) utilizing the hair orientation mask700, the estimated hair length, and a desired length and a desired volume associated with the SKU of the selected hair extension. The resized hair orientation mask750has the desired length and the desired volume associated with the SKU of the selected hair extension. After step1800, the method advances to step1802. At step1802, the hair extension blending module94has a generative adversarial neural network module290(shown inFIG.22) that generates a first modified user image781(shown inFIG.22) and a second modified user image782utilizing the user image540, the reference image770, the resized binary hair mask742, and the resized hair orientation mask750. The first modified user image781has hair with a color corresponding to a color of the hair in the user image540, and has the desired length and the desired volume. The second modified user image782has hair with a color corresponding to a color of the hair in the reference image770, and has the desired length and the desired volume. The color of hair in the reference image770is identical to a color of hair associated with the SKU of the selected hair extension. After step1802, the method advances to step1804. At step1804, the hair extension blending module94generates a blending mask800(shown inFIG.23) by multiplying each pixel of the resized binary hair mask742by a corresponding pixel of the resized hair orientation mask750. After step1804, the method advances to step1806.
At step1806, the hair extension blending module94has an Alpha blending module300(shown inFIG.24) that blends the first modified user image781, the second modified user image782, and the blending mask800to obtain a final modified user image820(shown inFIG.24) having the hair of the user with the selected hair extension thereon. After step1806, the method advances to step1808. At step1808, the cloud-based computer1665stores the final modified user image820in the memory device1667. After step1808, the method advances to step1820. At step1820, the cloud-based computer1665sends the final modified user image820to the user computer1660utilizing the Internet1632. After step1820, the method advances to step1822. At step1822, the user computer1660displays the final modified user image820on the display device1656. After step1822, the method advances to step1824. At step1824, the user computer1660receives a user dashboard selection requesting at least one of a clicks dashboard1200(shown inFIG.25), a tryouts dashboard1310(shown inFIG.26), and a SKUs dashboard1410(shown inFIG.27). After step1824, the method advances to step1826. At step1826, the user computer1660sends the user dashboard selection to the cloud-based computer1665utilizing the Internet1632. After step1826, the method advances to step1828. At step1828, the cloud-based computer1665retrieves at least one of the clicks dashboard1200, the tryouts dashboard1310, and the SKUs dashboard1410associated with the plurality of hair extensions from a data analytics database1672, based on the user dashboard selection. After step1828, the method advances to step1830. At step1830, the cloud-based computer1665sends the selected one of the clicks dashboard1200, the tryouts dashboard1310, and the SKUs dashboard1410to the user computer1660utilizing the Internet1632. After step1830, the method advances to step1832. At step1832, the user computer1660displays the selected one of the clicks dashboard1200, the tryouts dashboard1310, and the SKUs dashboard1410on the display device1656. Various embodiments of the presently disclosed subject matter may include or be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. Embodiments also may be embodied in the form of a computer program product having computer program code containing instructions embodied in non-transitory and/or tangible media, such as hard drives, USB (universal serial bus) drives, or any other machine-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing embodiments of the disclosed subject matter. Embodiments also may be embodied in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing embodiments of the disclosed subject matter. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits. 
In some configurations, a set of computer-readable instructions stored on a computer-readable storage medium may be implemented by a general-purpose processor, which may transform the general-purpose processor or a device containing the general-purpose processor into a special-purpose device configured to implement or carry out the instructions. Embodiments may be implemented using hardware that may include a processor, such as a microprocessor and/or an Application Specific Integrated Circuit (ASIC) that embodies all or part of the techniques according to embodiments of the disclosed subject matter in hardware and/or firmware. The processor may be coupled to memory, such as RAM, ROM, flash memory, a hard disk or any other device capable of storing electronic information. The memory may store instructions adapted to be executed by the processor to perform the techniques according to embodiments of the disclosed subject matter. While the claimed invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the claimed invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Additionally, while various embodiments of the claimed invention have been described, it is to be understood that aspects of the invention may include only some of the described embodiments. Accordingly, the claimed invention is not to be seen as limited by the foregoing description.
56,972
11861772
DETAILED DESCRIPTION Overview A virtual garment try-on is performed by generating a photorealistic digital image that depicts a person in a pose wearing a garment based on a first digital image that depicts the person in the pose wearing another garment and a second digital image that depicts the garment. Conventional systems for virtual try-on are not capable of accurately generating the digital image in scenarios involving complex poses of the person or that require significant geometric deformation of the garment. For example, digital images generated using conventional systems in these scenarios include significant textural artefacts and/or depict the person with missing (or additional) body parts. The textural artefacts are a consequence of a lack of adequate regularization which causes over-deformation of the garment. The missing (or additional) body parts are a result of the limited ability of conventional systems to accurately incorporate three-dimensional geometric information (e.g., body-part ordering) based on the first and second digital images which are two-dimensional. To overcome the limitations of conventional systems, techniques and systems are described for generating images for virtual try-on and pose transfer. In one example, a computing device implements a generator system to receive input data describing a first digital image that depicts a person in a pose wearing a first garment and a second digital image that depicts a second garment. The generator system generates prior data by processing the first digital image to encode the geometry of the person as a 1-channel body shape, an 18-channel pose map, a 3-channel head region, and a dense 11-channel body part segmentation. Once generated, the prior data describes the geometry of the person in the pose in a manner which is agnostic to (or independent of) the first garment depicted in the first digital image. For example, the generator system processes the prior data and the second digital image using a first convolutional network to compute a hierarchy of candidate appearance flow maps that warp the second garment based on the pose at different pixel-block sizes (e.g., different scales). Each of the candidate appearance flow maps is a vector that indicates how to deform the second digital image to align with the pose of the person in the first digital image. The candidate appearance flow maps are combined as an aggregate per-pixel displacement map using a convolutional gated recurrent network. For instance, the generator system implements the convolutional gated recurrent network to gate (allow or dismiss) the candidate appearance flow maps that correspond to different scales of view (e.g., based on the different pixel-block sizes). Gating the candidate appearance flow maps in this way prevents over warping of the second garment by regularizing high degrees of freedom in a dense per-pixel appearance flow. The generator system generates a warped garment image by warping the second digital image (e.g., the second garment) using the aggregate per-pixel displacement map. The prior data and the second digital image are processed using a second convolutional network to predict a conditional segmentation mask. The conditional segmentation mask segments portions of the geometry of the person. For example, the conditional segmentation mask represents a clothing segmentation mask of the person in the pose conditioned or corrected based on the second garment. 
Accordingly, the conditional segmentation mask predicts a clothing segmentation of the person as it would be after the garment change try-on. The generator system processes the warped garment image, the conditional segmentation mask, and additional prior data using a third convolutional network to generate a digital image that depicts the person in the pose wearing the second garment. The additional prior data describes a UV map of the person, a body-part segmentation mask of the person, and a texture translation prior. The texture translation prior describes pixels of the first digital image that do not depict a portion of the first garment (e.g., non-garment pixels). For instance, the UV map and the body-part segmentation mask of the person preserve structural and geometric integrity in the digital image such as depth-ordering, pose, skin and neckline reconstruction, etc. The described systems are capable of accurately generating the digital image depicting the person in the pose wearing the second garment even in scenarios in which the pose is complex and the second garment requires significant warping or deformation. This is not possible using conventional systems that generate the digital image as depicting textural artefacts and/or the person having missing body parts. For example, a comparison of digital images generated using the described systems and digital images generated using conventional systems demonstrates that the described systems outperform the conventional systems based on multiple different metrics. Furthermore, portions of the described systems are implementable to generate digital images for pose transfer which is also not possible using conventional systems for virtual try-on. A pose transfer system receives data describing a source digital image that depicts a person in a source pose and a target digital image that depicts the person (or another person) in a target pose, and the pose transfer system generates a digital image that depicts the person in the target pose. By computing candidate appearance flow maps and using the convolutional gated recurrent network for gated candidate appearance flow map aggregation, the described systems are capable of generating the digital image that depicts the person in the target pose with greater accuracy (e.g., fewer artefacts) than conventional systems for pose transfer. Thus, the described systems improve computer-based technology for both virtual try-on and pose transfer. Term Examples As used herein, the term “candidate appearance flow map” refers to a machine learning model generated vector that indicates how to reconstruct a target digital image using pixels of a source digital image. By way of example, for each target pixel of the target digital image, the vector specifies coordinate offsets of the source digital image where a pixel value is sampled to reconstruct the target pixel. As used herein, the term “conditional segmentation mask” refers to a machine learning model generated mask that conditions or corrects a segmentation mask of a first digital image based on a second digital image. By way of example, the first digital image depicts a person in a pose wearing a first garment and the second digital image depicts a second garment. In this example, a conditional segmentation mask estimates a segmentation of the person in the pose wearing the second garment. By way of further example, the first garment is a long-sleeved shirt and the second garment is a short-sleeved shirt. 
In this further example, the conditional segmentation mask estimates a segmentation of the person in the pose wearing the short-sleeved shirt. For example, the conditional segmentation mask conditions or corrects a segmentation mask of the person in the pose wearing the long-sleeved shirt based on the short-sleeved shirt. As used herein, the term “machine learning model” refers to a computer representation that is tunable (e.g., trainable) based on inputs to approximate unknown functions. By way of example, the term “machine learning model” includes a model that utilizes algorithms to learn from, and make predictions on, known data by analyzing the known data to learn to generate outputs that reflect patterns and attributes of the known data. According to various implementations, such a machine learning model uses supervised learning, semi-supervised learning, unsupervised learning, reinforcement learning, and/or transfer learning. For example, the machine learning model is capable of including, but is not limited to, clustering, decision trees, support vector machines, linear regression, logistic regression, Bayesian networks, random forest learning, dimensionality reduction algorithms, boosting algorithms, artificial neural networks (e.g., fully-connected neural networks, deep convolutional neural networks, or recurrent neural networks), deep learning, etc. By way of example, a machine learning model makes high-level abstractions in data by generating data-driven predictions or decisions from the known input data. In the following discussion, an example environment is first described that employs examples of techniques described herein. Example procedures are also described which are performable in the example environment and other environments. Consequently, performance of the example procedures is not limited to the example environment and the example environment is not limited to performance of the example procedures. Example Environment FIG.1is an illustration of an environment100in an example implementation that is operable to employ digital systems and techniques as described herein. The illustrated environment100includes a computing device102connected to a network104. The computing device102is configurable as a desktop computer, a laptop computer, a mobile device (e.g., assuming a handheld configuration such as a tablet or mobile phone), and so forth. Thus, the computing device102is capable of ranging from a full resource device with substantial memory and processor resources (e.g., personal computers, game consoles) to a low-resource device with limited memory and/or processing resources (e.g., mobile devices). In some examples, the computing device102is representative of a plurality of different devices such as multiple servers utilized to perform operations “over the cloud.” The illustrated environment100also includes a display device106that is communicatively coupled to the computing device102via a wired or a wireless connection. A variety of device configurations are usable to implement the computing device102and/or the display device106. The computing device102includes a storage device108and a generator module110. The storage device108is illustrated to include digital content112such as digital photographs, digital images, digital videos, etc. The generator module110is illustrated as having, receiving, and/or transmitting input data114. The input data114describes a first digital image116that depicts a person in a pose. 
As shown, the person is a woman and the pose is front facing with a left arm visible and a right arm disposed behind the woman's back such that the right arm is not visible. The woman is wearing a first garment which has horizontal black and white stripes and mid-length sleeves that terminate above the woman's elbow. The input data114also describes a second digital image118that depicts a second garment. The second garment displays the letters “GANT” and has full-length sleeves. The generator module110receives and processes the input data114to generate a digital image120which is rendered in a user interface122and depicts the person in the pose wearing the second garment. To do so in one example, the generator module110leverages prior data that describes clothing-agnostic (e.g., clothing-independent) structural priors of the person. For example, the generator module110generates the prior data using dense (e.g., 11-channel) body-part segmentation for the first digital image116. In this example, the generator module110uses the dense body-part segmentation in addition to a conventional 1-channel body shape, 18-channel pose map, and 3-channel head region to provide richer structural priors. Accordingly, the prior data encodes a geometry of the person in the pose as depicted in the first digital image116. Continuing the previous example, the generator module110processes the prior data and the second digital image118using a first machine learning model (e.g., a first convolutional network) to compute candidate appearance flow maps that warp the second garment based on the pose. For example, the generator module110computes the candidate appearance flow maps at different pixel-block sizes using the first convolutional network. For instance, the generator module110interpolates the candidate appearance flow maps to have identical height and width. The generator module110combines the candidate appearance flow maps as an aggregate per-pixel displacement map using a second machine learning model (e.g., a convolutional gated recurrent network). For example, the generator module110implements the convolutional gated recurrent network to gate (allow or dismiss) the candidate appearance flow maps that correspond to different radial neighborhoods (e.g., the different pixel-block sizes). This prevents over warping of the second garment by regularizing high degrees of freedom in a dense per-pixel appearance flow. Thus, the aggregate per-pixel displacement map warps pixels depicting portions of the second garment to align with the pose of the person. In an example, the generator module110generates a warped garment image by warping the second garment using the aggregate per-pixel displacement map. For example, the generator module110also processes the prior data that describes the geometry of the person and the second digital image118using a third machine learning model (e.g., a second convolutional network) to predict a conditional segmentation mask. In this example, the conditional segmentation mask segments portions of the geometry of the person. Notably, the prior data encodes the geometry of the person and is agnostic to the first garment that the person is wearing in the first digital image116. This is important to prevent over-fitting as the pipeline is trained on paired data where the input and output are the same images (e.g., have the same segmentation mask). 
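As a concrete illustration of the clothing-agnostic prior discussed above, the following sketch assembles the 1-channel body shape, 18-channel pose map, 3-channel head region, and 11-channel body-part segmentation into a single conditioning tensor. The channel counts come from the text; the tensor shapes, the PyTorch framing, and the function name build_clothing_agnostic_prior are assumptions made only for this example.

```python
import torch

def build_clothing_agnostic_prior(body_shape, pose_map, head_region, body_part_seg):
    """Concatenate the structural priors into one conditioning tensor.

    Assumed shapes for batch size B and image size H x W:
      body_shape    : (B, 1, H, W)   binary body silhouette
      pose_map      : (B, 18, H, W)  one heatmap per keypoint
      head_region   : (B, 3, H, W)   RGB crop of the head
      body_part_seg : (B, 11, H, W)  dense body-part segmentation
    Returns a (B, 33, H, W) prior that is agnostic to the garment worn in the input image.
    """
    return torch.cat([body_shape, pose_map, head_region, body_part_seg], dim=1)

# Example with random tensors standing in for real priors.
B, H, W = 1, 256, 192
prior = build_clothing_agnostic_prior(
    torch.rand(B, 1, H, W), torch.rand(B, 18, H, W),
    torch.rand(B, 3, H, W), torch.rand(B, 11, H, W))
assert prior.shape == (B, 33, H, W)
```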
The generator module110trains the second convolutional network with a weighted cross-entropy loss with respect to a ground truth garment segmentation mask. The generator module110uses the trained second convolutional network to predict the conditional segmentation mask as corresponding to a clothing segmentation of the person after the garment change try-on. The generator module110processes the warped garment image, the conditional segmentation mask, and additional prior data using a fourth machine learning model (e.g., a third convolutional network) to generate the digital image120. The additional prior data describes a UV map of the person, a body-part segmentation mask of the person, and a texture translation prior. The texture translation prior represents pixels of the first digital image116that do not depict a portion of the first garment. For example, the texture translation prior describes non-garment pixels of the first digital image116. The UV map and the body-part segmentation mask preserve geometric integrity (e.g., depth-ordering, pose, skin and neckline reconstruction, etc.) in the digital image120. In one example, the UV map of the person and the body-part segmentation mask of the person are included in UV maps and body-part segmentation masks generated by the generator module110using a pre-trained network as described by Güler et al.,Densepose: Dense Human Pose Estimation in the Wild, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7297-7306, 2 (2018). As shown, the digital image120is photorealistic and accurately depicts the person in the pose depicted in the first digital image116wearing the second garment depicted in the second digital image118. A bounding box124bounds letters “ANT” of the letters “GANT” displayed on the second garment because the letter “G” is obscured by the woman's hair in the digital image120. Further, the full-length sleeves of the second garment fully cover the woman's arms as depicted in the digital image120which are depicted as being partially exposed in the first digital image116. In the digital image120, the woman is front facing with a left arm visible and a right arm disposed behind the woman's back such that the right arm is not visible as the woman is depicted in the first digital image116. Since the generator module110is capable of accurately generating the digital image120as depicting the person in the pose wearing the second garment, the generator module110is usable to provide a variety of functionality. Consider an example in which a user of a client device determines a particular model included in a group of models to wear a particular garment at an event. In this example, the client device communicates data describing digital images of each model in the group and a digital image of the particular garment to the generator module110via the network104. The generator module110generates a digital image for each model that depicts the model wearing the particular garment and communicates data describing the generated digital images to the client device via the network104. The client device receives the data describing the generated digital images and the user of the client device determines the particular model based on the generated digital images. FIG.2depicts a system200in an example implementation showing operation of a generator module110. The generator module110is illustrated to include a candidate module202, a combination module204, a segment module206, and an output module208. 
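The weighted cross-entropy training of the second convolutional network mentioned above can be sketched as follows. The weight of 3.0 for the skin and background classes reflects the description given later for the convolutional network502; the class indices, the 7-class layout, and the PyTorch setup are illustrative assumptions rather than the disclosed implementation.

```python
import torch
import torch.nn as nn

# Hypothetical class ordering for the 7-channel conditional segmentation mask;
# the text does not enumerate the classes, so this layout is an assumption.
NUM_CLASSES = 7
BACKGROUND_CLASS, SKIN_CLASS = 0, 1

class_weights = torch.ones(NUM_CLASSES)
class_weights[SKIN_CLASS] = 3.0        # up-weight skin, as described for network 502
class_weights[BACKGROUND_CLASS] = 3.0  # up-weight background

criterion = nn.CrossEntropyLoss(weight=class_weights)

# logits: (B, 7, H, W) predicted conditional segmentation mask,
# target: (B, H, W) ground-truth garment segmentation labels from a human parser.
logits = torch.randn(2, NUM_CLASSES, 64, 48)
target = torch.randint(0, NUM_CLASSES, (2, 64, 48))
loss = criterion(logits, target)
```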
For example, the generator module110receives input data114, prior data210, and/or additional prior data212as inputs. In one example, the candidate module202processes the input data114and/or the prior data210to generate candidate flow map data214. FIG.3illustrates a representation300of computing candidate appearance flow maps. As shown, the candidate module202receives the input data114which describes a first digital image302that depicts a person in a pose and a second digital image304that depicts a garment. In one example, the second digital image304is directly representative of an isolated garment image Ip. In another example, the candidate module202generates the garment image Ipusing the second digital image304, e.g., by segmenting pixels of the second digital image304that depict the garment and isolating the segmented pixels as the garment image Ip. For example, the candidate module202also receives the prior data210which describes a geometry of the person depicted in the first digital image302. In this example, the geometry of the person described by the prior data210is independent of or agnostic to garments worn by the person in the first digital image302. For instance, the prior data210describes body-part segmentation masks that segment portions of the geometry of the person depicted in the first digital image302. In some examples, the candidate module202generates the prior data210because training data describing images depicting the person wearing different garments is unavailable. In these examples, the candidate module202extends a conventional binary (1-channel) body shape, (18-channel) pose map, and (3-channel) head region with an additional dense (11-channel) body-part segmentation of the first digital image302to provide improved structural priors relative to priors generated without the additional body-part segmentation of the first digital image302. For example, the candidate module202processes the second digital image304(the garment image Ip) described by the input data114and the body-part segmentation masks described by the prior data210to compute candidate appearance flow maps that warp the garment image Ipbased on the pose at different pixel-block sizes. In one example, the candidate module202includes a machine learning model such as a convolutional network306which the candidate module202implements to compute candidate appearance flow maps. For example, the convolutional network306is a 12-layer network as described by Ronneberger et al., U-net: Convolutional Networks for Biomedical Image Segmentation, CoRR, abs/1505.04597, 4, 5 (2015). In this example, given an input RGB image of size (H, W), the last K layers are used to predict the candidate appearance flow maps for (flfor l∈{0, . . . , K}) such that a predicted candidate appearance flow map flis double the size of candidate appearance flow map fl-1. The predicted candidate appearance flow maps are interpolated to have identical height and width (H, W) which generates a pyramid of K candidate appearance flow maps that correspond to a structural hierarchy. As illustrated inFIG.3, the candidate module202generates the candidate flow map data214as describing the computed candidate appearance flow maps that warp the garment at the different scales (e.g., the different pixel-block sizes). 
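The pyramid of K candidate appearance flow maps described above, in which each predicted map is double the size of the previous one and all maps are interpolated to a common height and width, might be arranged roughly as in the sketch below. The choice of bilinear interpolation, the value of K, and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def build_flow_pyramid(coarse_flows, out_hw):
    """Interpolate K multi-scale candidate appearance flow maps to a common size.

    coarse_flows: list of K tensors of shape (B, 2, h_l, w_l), where each level is
                  roughly double the spatial size of the previous one (as predicted
                  by the last K layers of the U-Net-style network 306).
    out_hw:       target (H, W) shared by all candidates after interpolation.
    Returns a (B, K, 2, H, W) stack of candidate appearance flow maps.
    """
    resized = [F.interpolate(f, size=out_hw, mode="bilinear", align_corners=False)
               for f in coarse_flows]
    return torch.stack(resized, dim=1)

# Example: K = 4 candidate flows at increasing resolutions, brought to 256 x 192.
K, B = 4, 1
coarse = [torch.randn(B, 2, 32 * 2**l, 24 * 2**l) for l in range(K)]
candidates = build_flow_pyramid(coarse, (256, 192))
assert candidates.shape == (B, K, 2, 256, 192)
```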
For instance, the combination module204receives the candidate flow map data214and/or the prior data210and the combination module204processes the candidate flow map data214and/or the prior data210to generate warped garment data216.FIG.4illustrates a representation400of generating a warped garment image Iwrp. As illustrated, the representation400includes the candidate flow map data214and the second digital image304(the garment image Ip) described by the input data114. In one example, the combination module204includes a machine learning model such as a convolutional gated recurrent network402. In this example, the convolutional gated recurrent network402is a network as described by Siam et al.,Convolutional Gated Recurrent Networks for Video Segmentation, arXiv:1611.05435v2 [cs.CV] 21 Nov. 2016. The combination module204processes the candidate flow map data214and/or the input data114using the convolutional gated recurrent network402to combine the candidate appearance flow maps as an aggregate per-pixel displacement map that warps pixels depicting portions of the garment to align with the pose of the person. For example, the pose of the person is described by the geometry of the person encoded in the prior data210. In this example, the combination module204generates the aggregate per-pixel displacement map by implementing the convolutional gated recurrent network402to perform a per-pixel selection process that gates (e.g., allows or dismisses) pixel flow estimates corresponding to different radial neighborhoods (e.g., for the different scales or the different pixel-block sizes). This prevents over-warping of the garment by regularizing high degrees of freedom in dense per-pixel appearance flow. As illustrated in the representation400, the combination module204uses the aggregate per-pixel displacement map to generate the warped garment image Iwrp. To do so in one example, the combination module204uses the aggregate per-pixel displacement map to warp the garment image Ipand a mask Mpto generate the warped garment image Iwrpand a warped binary garment mask Mwrp, respectively. Additionally, intermediate flow maps flfor l∈{0, . . . , K} are used to produce intermediate warped images Iwrpland intermediate warped masks Mwrpl. Each of the warped images (final and intermediate) are subject to an L1 loss and a perceptual similarity loss with respect to garment regions of the first digital image302. Each predicted warped mask is subject to a reconstruction loss with respect to a ground truth mask. The predicted flow maps are subjected to a total variation loss to ensure spatial smoothness of flow predictions. As shown, pixels depicting the garment in the second digital image304are displaced to align with the pose of the person to generate the warped garment image Iwrp. For example, the combination module204generates the warped garment data216as describing the warped garment image Iwrp. The segment module206is illustrated as receiving the warped garment data216which includes the input data114in some examples. The segment module206also receives the prior data210and the segment module206processes the prior data210and/or the input data114to generate segment mask data218.FIG.5illustrates a representation500of predicting a conditional segmentation mask Mexp. The representation500includes the prior data210and the second digital image304(the garment image Ip) described by the input data114. 
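One plausible way to apply the aggregate per-pixel displacement map to the garment image Ip (and, with the same call, to the mask Mp) is grid sampling, sketched below. The pixel-displacement convention, the identity-grid construction, and the use of torch.nn.functional.grid_sample are assumptions for this example and are not stated in the text.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(garment, flow):
    """Warp a garment image (B, C, H, W) with a per-pixel displacement map (B, 2, H, W).

    flow[:, 0] and flow[:, 1] are assumed to be x and y displacements in pixels;
    they are added to an identity sampling grid and normalized to [-1, 1] for grid_sample.
    """
    B, _, H, W = garment.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalize absolute pixel coordinates to the [-1, 1] range expected by grid_sample.
    grid = torch.stack([2.0 * grid_x / (W - 1) - 1.0,
                        2.0 * grid_y / (H - 1) - 1.0], dim=-1)  # (B, H, W, 2)
    return F.grid_sample(garment, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)

# The same call can warp the binary garment mask Mp to obtain Mwrp.
```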
For example, the segment module206includes a machine learning model such as a convolutional network502which the segment module206implements to process the second digital image304and the prior data210to predict the conditional segmentation mask Mexp. The prior data210encodes the geometry of the person in a manner that is independent and agnostic of garments worn by the person in the first digital image302which is important to prevent over-fitting as the pipeline is trained on paired data (e.g., where the input and output are the same images and hence have the same segmentation mask). The convolutional network502is trained with a weighted cross-entropy loss with respect to a ground truth garment segmentation mask (Msgt) obtained with a pre-trained human parser. A weight for a skin class and a background class is increased (e.g., 3.0) for improved handling of bleeding and self-occlusion for scenarios in which the pose of the person results in portions of the garment or person being hidden from view. For instance, the conditional segmentation mask Mexpsegments portions of the geometry of the person described by the prior data210. In an example, the conditional segmentation mask Mexpis predicted as corresponding to clothing segmentation of the person after a virtual garment try-on (e.g., of the garment depicted in the second digital image304). In this example, the convolutional network502includes six encoder layers and decoder layers and an output from the convolutional network502is a 7-channel conditional segmentation mask Mexp. The segment module206generates the segment mask data218as describing the conditional segmentation mask Mexp. With reference toFIG.2, the output module208receives the segment mask data218which includes the warped garment data216in some examples. For instance, the output module208also receives the additional prior data212.FIG.6illustrates a representation600of outputting a digital image that depicts a person in a pose wearing a garment (Itryon) The representation600includes the additional prior data212which describes a texture translation prior602, a body-part segmentation mask604of the person, and a UV map606of the person. The texture translation prior602represents pixels of the first digital image302that do not depict portions of a garment worn by the person in the image. Accordingly, the texture translation prior602describes non-garment pixels of the first digital image302. For instance, the texture translation prior602is computed using the first digital image302and the conditional segmentation mask Mexp. The body-part segmentation mask604and the UV map606preserve geometric integrity of the pose, depth-ordering, skin and neckline reconstruction, and so forth. In an example, the output module208includes a machine learning model such as a convolutional network608and the output module208implements the convolutional network608to generate a digital image610that depicts the person in the pose wearing the garment Itryon. In this example, the convolutional network608includes six encoder and decoder layers and the convolutional network608processes the warped garment data216, the segment mask data218, and the additional prior data212to generate the digital image610that depicts the person in the pose wearing the garment Itryon. 
For example, this is representable as: Itryon = Mout*Iwrp + (1 − Mout)*Irp, where: Mout is generated by the convolutional network608and is a composite mask for garment pixels in the try-on output; and Irp is generated by the convolutional network608and is a rendered person including all pixels depicting the person except the garment in the try-on output. In order to preserve structural and geometric integrity of the try-on output, the convolutional network608is constrained to reconstruct input clothing segmentation (as Mexppred) and IUV priors (as Mbppred, Iuvpred) which are unchanged. Itryon is subject to an L1 loss, a perceptual similarity loss, and an edge loss with respect to the first digital image302. The edge loss is based on Sobel filters and improves a quality of reproduced textures. Additionally, Mexppred, Mbppred, and Iuvpred are subject to reconstruction losses against corresponding network inputs. The reconstruction loss combines cross-entropy loss for Mexppred, Mbppred and smooth L1 loss for Iuvpred. As shown, the digital image610is photorealistic and accurately depicts the person in the pose depicted in the first digital image302wearing the garment depicted in the second digital image304(the garment image Ip). For instance, the digital image610accurately depicts depth such as a left hand in front of a left leg. In a self-occlusion example, the digital image610accurately depicts the person's right forearm behind the person and hidden from view. In general, functionality, features, and concepts described in relation to the examples above and below are employed in the context of the example procedures described in this section. Further, functionality, features, and concepts described in relation to different figures and examples in this document are interchangeable among one another and are not limited to implementation in the context of a particular figure or procedure. Moreover, blocks associated with different representative procedures and corresponding figures herein are applicable individually, together, and/or combined in different ways. Thus, individual functionality, features, and concepts described in relation to different example environments, devices, components, figures, and procedures herein are usable in any suitable combinations and are not limited to the particular combinations represented by the enumerated examples in this description. Example Procedures The following discussion describes techniques which are implementable utilizing the previously described systems and devices. Aspects of each of the procedures are implementable in hardware, firmware, software, or a combination thereof. The procedures are shown as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. In portions of the following discussion, reference is made toFIGS.1-6.FIG.7is a flow diagram depicting a procedure700in an example implementation in which input data is received describing a first digital image that depicts a person in a pose and a second digital image that depicts a garment and a digital image is output that depicts the person in the pose wearing the garment. Input data is received describing a first digital image that depicts a person in a pose and a second digital image that depicts a garment (block702). In an example, the computing device102implements the generator module110to receive the input data. 
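The compositing relation and the Sobel-based edge loss described above can be illustrated with a short sketch; treating Mout as a single-channel soft mask in [0, 1] and this particular Sobel formulation are assumptions that the text does not pin down.

```python
import torch
import torch.nn.functional as F

def composite_tryon(m_out, i_wrp, i_rp):
    """Itryon = Mout * Iwrp + (1 - Mout) * Irp, with Mout assumed to be a (B, 1, H, W) soft mask."""
    return m_out * i_wrp + (1.0 - m_out) * i_rp

def sobel_edge_loss(pred, target):
    """L1 difference between Sobel edge responses, one plausible form of the edge loss."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    def edges(img):
        gray = img.mean(dim=1, keepdim=True)   # collapse RGB to one channel
        gx = F.conv2d(gray, kx, padding=1)
        gy = F.conv2d(gray, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
    return F.l1_loss(edges(pred), edges(target))
```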
Candidate appearance flow maps that warp the garment based on the pose are computed at different pixel-block sizes using a first convolutional network (block704). The generator module110computes the candidate appearance flow maps in some examples. A warped garment image is generated (block706) by combining the candidate appearance flow maps as an aggregate per-pixel displacement map using a convolutional gated recurrent network, the aggregate per-pixel displacement map warps pixels depicting portions of the garment to align with the pose. For example, the computing device102implements the generator module110to generate the warped garment image. A conditional segment mask is predicted (block708) that segments portions of a geometry of the person using a second convolutional network. In one example, the generator module110predicts the conditional segment mask. A digital image is output (block710) that depicts the person in the pose wearing the garment based on the warped garment image and the conditional segment mask using a third convolutional network. FIG.8illustrates a representation800of example images generated for virtual try-on. The representation800includes a digital image802that depicts a first person in a first pose. For instance, the first person is a first woman and the first pose is front facing with a right hand partially tucked in a front pants pocket and a left arm bent such that a left elbow is adjacent to a left hip of the first woman. In the digital image802, the first woman is wearing a first garment which is a solid color and has full-length sleeves. The representation800also includes a digital image804that depicts a second garment. The second garment is short-sleeved and lightly colored with dark horizontal stripes. In an example, the generator module110receives input data114describing the digital image802and the digital image804and the generator module110processes the input data114to generate a digital image806. The digital image806depicts the first woman in the first pose wearing the second garment. As shown in the digital image806, the first woman's arms are exposed and the dark horizontal stripes of the second garment have been warped to align with the first pose. For example, the generator module110warps pixels of the digital image804depicting portions of the dark horizontal stripes to align with the first pose based on the prior data210and the additional prior data212to generate the digital image806. The representation800also includes a digital image808that depicts a second person in a second pose. As shown, the second person is a second woman and the second pose is front facing with a right arm disposed at the second woman's side and a left arm disposed behind the second woman's back up to a left elbow of the left arm. In the digital image808, the second woman is wearing a third garment that is short-sleeved and dark colored with thin light-colored horizontal stripes. A digital image810is included in the representation800that depicts a fourth garment. The fourth garment is short-sleeved and lightly colored with dark hand drawn designs. The dark designs are illustrated to include shapes, words, and mathematical equations. In one example, the generator module110receives input data114describing the digital image808and the digital image810. In this example, the generator module110processes the input data114to generate a digital image812. The digital image812depicts the second woman in the second pose wearing the fourth garment. 
As shown in the digital image812, the fourth garment includes the shapes, words, and mathematical equations. For instance, the representation800includes a digital image814that depicts a third person in a third pose. The third person is a third woman and the third pose is front facing and similar to the second pose in that a right arm is disposed at the third woman's right side. In the digital image814, the third woman's left arm is along a left side with a left hand resting on the third woman's left thigh. The third woman is wearing a fifth garment which is short-sleeved and includes alternating light and dark colored horizontal stripes. A digital image816depicts a sixth garment. The sixth garment has full-length sleeves and is dark colored with a single photorealistic graphic disposed in a center of the sixth garment. For example, the generator module110receives input data114describing the digital image814and the digital image816. In this example, the generator module110processes the input data114to generate a digital image818that depicts the third woman in the third pose wearing the sixth garment. As shown in the digital image818, the third woman's arms are covered by the full-length sleeves of the sixth garment. For instance, the photorealistic graphic is accurately reproduced in the digital image818. FIG.9illustrates a representation900of a network for pose transfer. For an extended validation of the described system's efficacy for estimating appearance flows, portions of the described system are implemented for human pose transfer. For example, the network for pose transfer is a network as described by Li et al.,Dense Intrinsic Appearance Flow for Human Pose Transfer, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3693-3702 (2019) with the convolutional gated recurrent network402used in place of three-dimensional flow regression. The task of human pose transfer generates an image of a person in a target pose based on a reference image. Unlike the virtual try-on task which warps a garment based on a pose of a person, the pose transfer task warps the pose of the person. The representation900includes a first digital image902that depicts a person in a source pose904and a second digital image906that depicts the person in a target pose908. The generator module110implements a convolutional network910to receive the first digital image902and the second digital image906as inputs and the convolutional network910processes the inputs to generate a visibility map912and candidate appearance flow maps914. For instance, the visibility map912is generated using cross-entropy loss. The candidate appearance flow maps914each warp the person in the source pose904based on the target pose908at a different pixel-block size (e.g., a different scale). The convolutional gated recurrent network402receives the candidate appearance flow maps914and aggregates the candidate appearance flow maps914as a flow map916with expected predicted error loss. The flow map916and the visibility map912are then used along with the first digital image902and the second digital image906to generate a digital image that depicts the person in the first digital image902in the target pose908. The digital image that depicts the person in the first digital image902in the target pose908demonstrates significant improvements in skin generation and texture relative to conventional techniques for pose transfer. FIG.10illustrates a representation1000of example images generated for pose transfer. 
As shown, the representation1000includes a digital image1002that depicts a source model and a digital image1004that depicts a target pose. The network for pose transfer processes the digital image1002and the digital image1004to generate a digital image1006. As shown, the digital image1006depicts the source model in the target pose. The representation1000also includes a digital image1008that depicts a source model and a digital image1010that depicts a target pose. The network for pose transfer processes the digital image1008and the digital image1010to generate a digital image1012that depicts the source model in the target pose. Example Improvements The described systems were evaluated against several conventional systems based on structural similarity index measure (SSIM), peak signal to noise ratio (PSNR), and Frechet inception distance (FID). Table 1 presents results of the evaluation.
TABLE 1
System                   SSIM     PSNR     FID
Conventional System 1    0.784    21.01    30.05
Conventional System 2    0.837    23.52    26.67
Conventional System 3    0.843    23.60    23.68
Described Systems        0.885    25.46    15.17
As shown in Table 1 above, the described systems outperform each of three conventional systems based on every metric evaluated. For instance, the SSIM of the described systems is greater than the SSIM of each of the three conventional systems, the PSNR of the described systems is greater than the PSNR of each of the three conventional systems, and the FID of the described systems is lower than the FID of each of the three conventional systems. Example System and Device FIG.11illustrates an example system1100that includes an example computing device that is representative of one or more computing systems and/or devices that are usable to implement the various techniques described herein. This is illustrated through inclusion of the generator module110. The computing device1102includes, for example, a server of a service provider, a device associated with a client (e.g., a client device), an on-chip system, and/or any other suitable computing device or computing system. The example computing device1102as illustrated includes a processing system1104, one or more computer-readable media1106, and one or more I/O interfaces1108that are communicatively coupled, one to another. Although not shown, the computing device1102further includes a system bus or other data and command transfer system that couples the various components, one to another. For example, a system bus includes any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. A variety of other examples are also contemplated, such as control and data lines. The processing system1104is representative of functionality to perform one or more operations using hardware. Accordingly, the processing system1104is illustrated as including hardware elements1110that are configured as processors, functional blocks, and so forth. This includes example implementations in hardware as an application specific integrated circuit or other logic device formed using one or more semiconductors. The hardware elements1110are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors are comprised of semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions are, for example, electronically-executable instructions. 
The computer-readable media1106is illustrated as including memory/storage1112. The memory/storage1112represents memory/storage capacity associated with one or more computer-readable media. In one example, the memory/storage1112includes volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). In another example, the memory/storage1112includes fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). The computer-readable media1106is configurable in a variety of other ways as further described below. Input/output interface(s)1108are representative of functionality to allow a user to enter commands and information to computing device1102, and also allow information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., capacitive or other sensors that are configured to detect physical touch), a camera (e.g., which employs visible or non-visible wavelengths such as infrared frequencies to recognize movement as gestures that do not involve touch), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, tactile-response device, and so forth. Thus, the computing device1102is configurable in a variety of ways as further described below to support user interaction. Various techniques are described herein in the general context of software, hardware elements, or program modules. Generally, such modules include routines, programs, objects, elements, components, data structures, and so forth that perform particular tasks or implement particular abstract data types. The terms “module,” “functionality,” and “component” as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques are implementable on a variety of commercial computing platforms having a variety of processors. Implementations of the described modules and techniques are storable on or transmitted across some form of computer-readable media. For example, the computer-readable media includes a variety of media that is accessible to the computing device1102. By way of example, and not limitation, computer-readable media includes “computer-readable storage media” and “computer-readable signal media.” “Computer-readable storage media” refers to media and/or devices that enable persistent and/or non-transitory storage of information in contrast to mere signal transmission, carrier waves, or signals per se. Thus, computer-readable storage media refers to non-signal bearing media. The computer-readable storage media includes hardware such as volatile and non-volatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storage of information such as computer readable instructions, data structures, program modules, logic elements/circuits, or other data. 
Examples of computer-readable storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage device, tangible media, or article of manufacture suitable to store the desired information and which are accessible to a computer. “Computer-readable signal media” refers to a signal-bearing medium that is configured to transmit instructions to the hardware of the computing device1102, such as via a network. Signal media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as carrier waves, data signals, or other transport mechanism. Signal media also include any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. As previously described, hardware elements1110and computer-readable media1106are representative of modules, programmable device logic and/or fixed device logic implemented in a hardware form that is employable in some embodiments to implement at least some aspects of the techniques described herein, such as to perform one or more instructions. Hardware includes components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon or other hardware. In this context, hardware operates as a processing device that performs program tasks defined by instructions and/or logic embodied by the hardware as well as a hardware utilized to store instructions for execution, e.g., the computer-readable storage media described previously. Combinations of the foregoing are also employable to implement various techniques described herein. Accordingly, software, hardware, or executable modules are implementable as one or more instructions and/or logic embodied on some form of computer-readable storage media and/or by one or more hardware elements1110. For example, the computing device1102is configured to implement particular instructions and/or functions corresponding to the software and/or hardware modules. Accordingly, implementation of a module that is executable by the computing device1102as software is achieved at least partially in hardware, e.g., through use of computer-readable storage media and/or hardware elements1110of the processing system1104. The instructions and/or functions are executable/operable by one or more articles of manufacture (for example, one or more computing devices1102and/or processing systems1104) to implement techniques, modules, and examples described herein. The techniques described herein are supportable by various configurations of the computing device1102and are not limited to the specific examples of the techniques described herein. This functionality is also implementable entirely or partially through use of a distributed system, such as over a “cloud”1114as described below. The cloud1114includes and/or is representative of a platform1116for resources1118. 
The platform1116abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud1114. For example, the resources1118include applications and/or data that are utilized while computer processing is executed on servers that are remote from the computing device1102. In some examples, the resources1118also include services provided over the Internet and/or through a subscriber network, such as a cellular or Wi-Fi network. The platform1116abstracts the resources1118and functions to connect the computing device1102with other computing devices. In some examples, the platform1116also serves to abstract scaling of resources to provide a corresponding level of scale to encountered demand for the resources that are implemented via the platform. Accordingly, in an interconnected device embodiment, implementation of functionality described herein is distributable throughout the system1100. For example, the functionality is implementable in part on the computing device1102as well as via the platform1116that abstracts the functionality of the cloud1114. CONCLUSION Although implementations of systems for generating images for virtual try-on and pose transfer have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations of systems for generating images for virtual try-on and pose transfer, and other equivalent features and methods are intended to be within the scope of the appended claims. Further, various different examples are described and it is to be appreciated that each described example is implementable independently or in connection with one or more other described examples.
50,341
11861773
DESCRIPTION OF EMBODIMENT Hereinafter, the best mode (hereinafter, referred to as embodiment) for carrying out the present technique will be described in detail with reference to the drawings. Note that the embodiment will be described in the following order.
1. Configuration Example of Content Viewing System According to Present Technique
2. Presentation of Visual Field of Each User
3. Tracking Image That Tracks Visual Field of Another User
4. Highlighting User Speaking
5. Visual Field Information Presentation Process of User Apparatus20
6. Switch of Visual Field Image and Wide Area Image
7. Movement of Visual Field Image
8. Selection of Avatar to Be Displayed
9. A Series of Processes Executed by Software
Note that in the present specification, it can be assumed that a system denotes a set of a plurality of constituent elements (apparatuses, modules (parts), and the like), and not all of the constituent elements have to be in the same housing. Therefore, both a plurality of apparatuses housed in separate housings and connected through a network and one apparatus including a plurality of modules contained in one housing can be assumed as systems in the present specification. <1. Configuration Example of Content Viewing System According to Present Technique> FIG.1is a block diagram illustrating a configuration example of a content viewing system according to the present technique. A content viewing system10enables a user to figure out the visual fields of other users in a case where a plurality of users views the same content. Note that the content includes an image. Obviously, there can be voice corresponding to the image. However, only the image of the content will be mentioned below, and the voice of the content will not be mentioned. The timing of the plurality of users viewing the same content does not have to be the same time. For example, when a user (referred to as user A) views the content, visual field information representing the visual field of the user A may be saved. At later timing, the visual field of the user A may be presented to another user (referred to as user B) viewing the same content based on the saved visual field information of the user A. The content provided by the content viewing system10can be a spherical image including an image in a case of viewing the whole circumference from a point of view or a free viewpoint image including an image in a case of viewing the whole circumference while the point of view is moved. The spherical image and the free viewpoint image can be any of an image obtained by photographing a real space (photographed image), an image of the real space photographed in real time (image being photographed), an image obtained by using computer graphics to generate a VR (virtual reality) space of a game or the like, an image including a virtual object superimposed on the real space, and the like. The content viewing system10includes user apparatuses20-1to20-N(N is a natural number) used by users and a server apparatus40. Hereinafter, the user apparatuses20-1to20-N will be simply referred to as a user apparatus20in a case where the user apparatuses20-1to20-N do not have to be individually distinguished. The user apparatus20includes an information processing apparatus21and a display apparatus22. The information processing apparatus21mainly executes a process of cutting out part of an image of the content to generate a visual field image to be displayed on the display apparatus22. The display apparatus22mainly displays the visual field image. 
In addition, the display apparatus22displays visual fields of other users using other user apparatuses20to present the visual fields of the other users to the user of the user apparatus20. The user of the user apparatus20views the visual field image displayed on the display apparatus22in the image of the content. Therefore, the range of the scene in the visual field image is the visual field of the user viewing the visual field image. The angle of view of the visual field image, that is, the visual field size of the visual field of the user viewing the visual field image (visual field size (field of view) of the visual field provided by the display apparatus22to the user viewing the visual field image), varies depending on a display device as the display apparatus22or display software that displays the image. For example, there are a display device in which the angle of view of the visual field image is 90 degrees, a display device in which the angle of view of the visual field image is 210 degrees, and the like. An example of the display apparatus22includes a display apparatus worn and used on the head of the user, such as AR (augmented reality) glasses and other HMDs. However, the display apparatus22may be a planar display device, such as a television receiver, or a display device that projects an image, such as a projector. Note that the information processing apparatus21and the display apparatus22included in the user apparatus20may be integrated or may be placed in different housings and separately arranged. The connection between the information processing apparatus21and the display apparatus22may be wired connection or may be wireless connection. The server apparatus40is connected to the user apparatus20through the Internet31. The server apparatus40includes a content distribution unit41, a visual field information management unit42, and a communication management unit43. The content distribution unit41distributes data of content through the Internet31according to requests from the user apparatuses20. The data of the content may be distributed to the user apparatuses20at the same timing or at different timing. The visual field information management unit42acquires, from each user apparatus20, visual field information representing the visual field of the user at the time that the content is viewed in each user apparatus20and manages the visual field information. The visual field information includes at least one of content identification information, elapsed time information, point-of-view information, visual field center information, or visual field size information. The content identification information is information for identifying the content. The elapsed time information exists in a case where the image of the content changes with time, and the elapsed time information is information representing the elapsed time from the top of the content (temporal position where the content is reproduced). The point-of-view information is information representing the position of the point of view in a content space that is a space (of the scene) in the free viewpoint image in the case the content is a free viewpoint image. The visual field center information is information representing a visual field center (coordinates of the visual field center) that is the center of the visual field image (range as the visual field of the user) cut out from the image of the content in the user apparatus20, that is, the content image based on the data of the content. 
The visual field size information is information representing a visual field size that is the size of the visual field image cut out from the content image in the user apparatus20, that is, the size of the range as the visual field of the user viewing the visual field image (size of the visual field provided by the display apparatus22to the user viewing the visual field image). Note that the user can arbitrarily set the visual field size within a range permitted by the display apparatus22. In addition, the visual field center can be considered as the center of the visual field of the user viewing the visual field image, and the visual field size can be considered as the size of the visual field of the user viewing the visual field image. The content identification information can be referenced to specify the content viewed by the user. The elapsed time information can be referenced to specify the temporal position (timing) (seek position) of the content viewed by the user in the case where the content specified by the content identification information is content that changes with time. The visual field center information, as well as the visual field size information and the point-of-view information if necessary, can be referenced to specify the visual field of the user viewing the content specified by the content identification information, that is, the visual field image (range of the visual field image) viewed by the user in the content image. Note that to roughly figure out the visual field (field of view) of the user, the visual field information not including the visual field size information can be transmitted and received. In addition, the visual field size of each user can be assumed as the same fixed value, and the visual field of each user can be specified from the visual field center. The communication management unit43manages communication, such as exchange of messages using voice or characters, between the users viewing the same content. Note that the function of the server apparatus40may be provided to at least one of the user apparatuses20, and a plurality of user apparatuses20including the user apparatus20provided with the function of the server apparatus40may be connected to each other through an intranet or the like. FIG.2is a block diagram illustrating a configuration example of the information processing apparatus21included in the user apparatus20. The information processing apparatus21includes a communication unit51, a content holding unit52, a visual field image determination unit53, an image cutting unit54, a visual field information holding unit55, an input unit58, a trigger detection unit59, and a display control unit60. The communication unit51connects to the server apparatus40through the Internet31and functions as a content acquisition unit that acquires the data of the content. Furthermore, the communication unit51acquires image data and the like of user images corresponding to the users (user images representing the users), such as icons and avatars of the users. The data and the like of the content acquired by the communication unit51from the server apparatus40are recorded in the content holding unit52. In addition, the communication unit51notifies the server apparatus40of the visual field information, which is sequentially generated by the visual field information holding unit55, of the user (first user) using the user apparatus20. 
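As a purely illustrative, non-limiting sketch, the visual field information exchanged between the user apparatus20and the server apparatus40could be organized along the following lines in Python; the field names and the use of (yaw, pitch) angles in degrees for the visual field center are assumptions made here for illustration only.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class VisualFieldInfo:
    # One possible layout of the visual field information of one user.
    content_id: str                                              # content identification information
    elapsed_time_s: Optional[float] = None                       # elapsed time from the start of the content, if the content changes with time
    point_of_view: Optional[Tuple[float, float, float]] = None   # point-of-view position in the content space (free viewpoint content only)
    field_center: Tuple[float, float] = (0.0, 0.0)               # visual field center as (yaw, pitch) in degrees
    field_size: Optional[Tuple[float, float]] = None             # visual field size as (horizontal, vertical) angle of view in degrees

# Example: a user viewing a spherical image 12.5 seconds in, looking 30 degrees to the
# left and 10 degrees up with a 90 x 60 degree angle of view.
info = VisualFieldInfo(content_id="content-001", elapsed_time_s=12.5,
                       field_center=(30.0, 10.0), field_size=(90.0, 60.0))

Omitting field_size in this sketch corresponds to the case mentioned above in which the visual field size information is not transmitted and the visual field size of each user is assumed to be the same fixed value.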
Furthermore, the communication unit51functions as a visual field information acquisition unit that acquires, from the server apparatus40, the visual field information of another user (second user) using another user apparatus20to view the content. The communication unit51outputs the visual field information of the other user acquired from the server apparatus40to the visual field information holding unit55. The content holding unit52holds the data of the content acquired from the server apparatus40. In addition, the content holding unit52holds the image data and the like of the user image, such as an icon and an avatar for representing each user, to be superimposed and displayed on the image (visual field image) cut out from the content image based on the data of the content. The visual field image determination unit53determines the visual field center based on at least one of an amount of movement of the line of sight notified from a line-of-sight detection unit71(FIG.3) of the display apparatus22or an amount of movement of the head notified from a head motion detection unit72(FIG.3). Furthermore, the visual field image determination unit53determines the visual field image (range of the visual field image) of the user to be cut out from the content image based on the visual field center and the visual field size (visual field size of the visual field provided (limited) by the display apparatus22to the user viewing the visual field image). For example, in the case where the display apparatus22is an HMD, the visual field image determination unit53determines the visual field center based on at least one of the movement of the line of sight of the user or the movement of the head of the user associated with the HMD as the display apparatus22and determines the visual field image of the user based on the visual field center and the visual field size. In addition, the visual field image determination unit53moves the visual field image of the user based on the line of sight of the user in response to approach of the line of sight of the user to an edge of the visual field image of the user. Note that the visual field image determination unit53can make an angle of rotation, which is the amount of movement of the visual field image of the user, larger than an angle of rotation of the head of the user based on the angle of rotation of the line of sight of the user and the angle of rotation of the head. In addition, the visual field image determination unit53can determine an initial position of the visual field image corresponding to the timing that the user has substantially started to view the content (timing that the user has started to use the user apparatus20), that is, an initial position of the visual field center of the visual field image, based on the visual field information of other users. For example, the visual field image determination unit53can specify, based on the visual field information of the other users, a region in the content image where the visual fields of equal to or more than a predetermined number of other users are gathered and can determine a position in the region as the initial position of the visual field center. Furthermore, the visual field image determination unit53can determine the visual field image to be cut out from the content image, that is, determine, as the visual field image, the image of the range with the center at the initial position of the visual field center in the content image, based on the initial position of the visual field center. 
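A minimal sketch of one way such an initial position of the visual field center could be chosen from the visual field information of the other users is given below; it simply counts, on a coarse yaw/pitch grid, how many of the other users' visual field centers fall into each cell and returns a position inside the most populated cell when that count reaches a predetermined number. The grid size, the threshold, and the function name are assumptions made here for illustration only.

from collections import Counter
from typing import List, Optional, Tuple

def initial_field_center(other_centers: List[Tuple[float, float]],
                         min_users: int = 2,
                         cell_deg: float = 30.0) -> Optional[Tuple[float, float]]:
    # Pick an initial visual field center inside a region where at least `min_users`
    # other users' visual field centers are gathered; centers are (yaw, pitch) in degrees.
    if not other_centers:
        return None
    cells = Counter((int(yaw // cell_deg), int(pitch // cell_deg))
                    for yaw, pitch in other_centers)
    cell, count = cells.most_common(1)[0]
    if count < min_users:
        return None
    # Return the middle of the most populated grid cell as the initial visual field center.
    return ((cell[0] + 0.5) * cell_deg, (cell[1] + 0.5) * cell_deg)

# Example: two other users look around yaw 45 to 50 degrees while a third looks elsewhere.
print(initial_field_center([(45.0, 5.0), (50.0, 8.0), (-120.0, 0.0)]))  # (45.0, 15.0)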
In this case, the user can easily communicate with the other users right after the user starts to view the content. The image cutting unit54cuts out (to thereby generate) the visual field image corresponding to the visual field of the user, that is, the visual field image determined by the visual field image determination unit53, from the content image based on the data of the content. In addition, the image cutting unit54can acquire the visual field information of another user from the visual field information holding unit55and cut out, as the visual field image, an image in the visual field size of the user (visual field size indicated in the visual field size information included in the visual field information of the user) or in the visual field size of the other user from the content image according to the visual field information of the other user. The visual field image can be displayed on the display apparatus22as, for example, a tracking image that tracks the visual field of the other user. According to the tracking image, the user can set the visual field of the other user as the visual field of the user and view an image similar to the visual field image viewed by the other user. The visual field information holding unit55sequentially updates and holds the visual field information in the user apparatus20and outputs the visual field information to the communication unit51to cause the communication unit51to notify the server apparatus40of the visual field information. In addition, the visual field information holding unit55holds the visual field information of other users acquired by the communication unit51from the server apparatus40. The input unit58includes an operation device, such as a remote control, a voice input device, such as a microphone, and an imaging device, such as a camera. The input unit58inputs a key operation of the user using the operation device, inputs a speech of the user using the voice input device, or inputs an image obtained by taking a picture of the user using the imaging device. The trigger detection unit59detects a key operation, a voice command, a gesture, or the like as a trigger of predetermined action from an operation, a speech, an image, or the like of the user input by the input unit58. The display control unit60controls the display to cause the display apparatus22to display the visual field image and the user image representing the user, such as an icon and an avatar corresponding to each user. In addition, the display control unit60controls the display to cause the display apparatus22to display the visual fields (information representing the visual fields) of the users that need to be displayed, based on the visual field information of each user. For example, the display control unit60causes the display apparatus22to display a visual field position instruction image indicating the position of the visual field of another user based on the visual field information of the other user. The visual field position instruction image includes a wide area image112(FIG.5) including the visual field image (first visual field image) of the user of the user apparatus20and the visual field image (second visual field image) corresponding to the visual field of the other user. Furthermore, the visual field position instruction image includes a symbol image that indicates the position of the visual field of the other user and that is superimposed on the visual field image of the user of the user apparatus20. 
Examples of the symbol image include a visual field direction instruction mark, a visual field direction instruction line, and the like described later. Note that the display control unit60can control the display apparatus22to switch the visual field image of the user of the user apparatus20and the wide area image in response to a predetermined trigger. The predetermined trigger includes at least one of the key operation, the voice command, the motion of the head, or the gesture operation of the user of the user apparatus20. In addition, the display control unit60superimposes the user image, such as an avatar, of another user on the visual field image of the user and causes the display apparatus22to display the image. In a case where there are a plurality of other users, the display control unit60can superimpose at least one of a plurality of user images corresponding to the plurality of other users on the visual field image of the user and cause the display apparatus22to display the image. For example, the display control unit60can set priorities of the other users for which the user images, such as avatars, are to be superimposed and displayed on the visual field image or the like in the display apparatus22and can cause the display apparatus22to display part or all of the user images of the plurality of other users according to the priorities. For example, the display control unit60can control (determine the priorities) whether or not to superimpose each of the plurality of user images of the plurality of other users on the visual field image of the user according to the positional relationship between the visual field image of the user and each of the visual fields of the plurality of other users. In addition, for example, the display control unit60can preferentially superimpose, on the visual field image of the user, the user image of another user with the visual field relatively close to the visual field image of the user (visual field provided by the visual field image) among the plurality of other users and can cause the display apparatus22to display the image. Furthermore, for example, the display control unit60can preferentially superimpose, on the visual field image of the user, part of the plurality of user images of the plurality of other users according to a history of communication between the user and the plurality of other users and can cause the display apparatus22to display the image. FIG.3is a block diagram illustrating a configuration example of the display apparatus22included in the user apparatus20. Particularly,FIG.3illustrates a configuration example suitable for a case in which the display apparatus22is an HMD mounted on the head of the user. The display apparatus22includes the line-of-sight detection unit71, the head motion detection unit72, a display unit73, and a voice input/output unit74. The line-of-sight detection unit71detects the line of sight of the user. For example, the corneal reflex or other arbitrary methods and techniques can be used to detect the line of sight. Furthermore, the line-of-sight detection unit71detects, as an amount of movement of the line of sight, an angle (line-of-sight movement angle) from the middle of the visual field image (visual field middle) displayed on the display unit73to the line of sight of the user and transmits the line-of-sight movement angle to the visual field image determination unit53of the information processing apparatus21. 
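The line-of-sight movement angle handed from the line-of-sight detection unit71to the visual field image determination unit53could, for example, be obtained from a gaze direction vector as in the simplified sketch below; the display-fixed coordinate convention (+z toward the visual field middle, +x toward the right edge) and the sign convention (positive toward the left edge, matching the definition given later with reference to FIG.20) are assumptions made here for illustration only.

import math

def line_of_sight_movement_angle(gaze_dir) -> float:
    # Horizontal angle, in degrees, from the visual field middle to the user's line of sight.
    # `gaze_dir` is an (x, y, z) vector in a display-fixed frame in which +z points at the
    # visual field middle and +x points toward the right edge of the visual field.
    x, _, z = gaze_dir
    # Negate x so that a gaze toward the left edge yields a positive line-of-sight movement angle.
    return math.degrees(math.atan2(-x, z))

# Example: the user gazes slightly toward the left edge of the visual field.
print(round(line_of_sight_movement_angle((-0.2, 0.0, 1.0)), 1))  # about +11.3 degrees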
The head motion detection unit72detects, as an amount of movement of the head, a head rotation angle of the user wearing the HMD as the display apparatus22and transmits the head rotation angle to the visual field image determination unit53of the information processing apparatus21. The head rotation angle of the user wearing the HMD as the display apparatus22is also an angle of rotation of the HMD. The display unit73displays the visual field image and the like based on an image signal of the visual field image and the like supplied from the display control unit60. The voice input/output unit74includes, for example, a microphone and a speaker and is configured to output voice of the content (content voice based on the data of the content) and input speech of the user. The input speech of the user is used for, for example, communication between the users viewing the same content. That is, the input speech of the user is transmitted to the other user apparatuses20and output from the speakers of the voice input/output units74of the other user apparatuses20. The HMD as the display apparatus22includes the line-of-sight detection unit71and the head motion detection unit72as described above, and the movement of the line of sight of the user and the movement of the head are detected in the HMD (in association with the HMD). <2. Presentation of Visual Field of Each User> FIG.4is a diagram illustrating an example of an entire image including an entire content image developed on a plane. An entire image100is an image obtained by using an equirectangular projection method to develop, on the plane, the entire content image that is a spherical image. Note that the projection for developing, on the plane, the entire content image that is a spherical image may be a projection other than the equirectangular projection method. For example, a Mercator projection method can be used instead of the equirectangular projection method. Hereinafter, the image developed on the plane by using the equirectangular projection method will be referred to as an equirectangular projection. Similarly, an image developed on the plane by using the Mercator projection method will be referred to as a Mercator projection. In this case, it is assumed that the content is shared (viewed) by three users A to C. However, the timing of the users A to C viewing the content may not be the same time. The display apparatus22used by the user A displays a visual field image generated by cutting out, from the content image, a visual field range101A according to the visual field center information and the visual field size information of the visual field information of the user A (range in the visual field size indicated in the visual field size information, with the center at the visual field center indicated in the visual field center information). Similarly, the display apparatus22used by the user B displays a visual field image generated by cutting out, from the content image, a visual field range101B according to the visual field center information and the visual field size information of the visual field information of the user B. The display apparatus22used by the user C displays a visual field image generated by cutting out, from the content image, a visual field range101C according to the visual field center information and the visual field size information of the visual field information of the user C. The visual field range101A represents the visual field (range of the visual field) of the user A.
Similarly, the visual field range101B represents the visual field of the user B, and the visual field range101C represents the visual field of the user C. <Method Using Wide Area Image> FIG.5is a diagram illustrating an example of display of a visual field image111A that is displayed on the display apparatus22used by the user A and that is generated by cutting out the visual field range101A from the content image. However, in the case ofFIG.5, the wide area image112is superimposed and displayed at a predetermined position in the visual field image111A (upper left in the case ofFIG.5) according to a predetermined operation of the user A. The wide area image112is a type of visual field position instruction image indicating the positions of the visual fields of the other users (in this case, users B and C) sharing (viewing) the same content in order to present the visual fields of the other users to the user (in this case, user A). The wide area image112is generated by using all or part of the entire image100. That is, the wide area image112is generated by cutting out, from the entire image100, a range including the visual field image of the user (first visual field image) and the visual field images of the other users (second visual field images). Therefore, in a case where the visual fields of a plurality of users sharing the content are dispersed, the wide area image112is generated based on an image obtained by cutting out a large range of the entire image100or based on all of the entire image100. In a case where the visual fields of a plurality of users sharing the content are concentrated, the wide area image112is generated based on an image obtained by cutting out a small range from the entire image100. Note that the wide area image112can always be generated based on all of the entire image100. FIG.6is an enlarged view of the wide area image112superimposed on the visual field image111A (FIG.5) displayed on the display apparatus22used by the user A. The wide area image112displayed on the display apparatus22of the user A (display apparatus22used by the user A) includes the visual field image of the user A and the visual field images of the users B and C as other users sharing the same content with the user A. Therefore, the wide area image112includes the visual field range (of the visual field) of the user A and the visual field ranges of the users B and C. Note that although the wide area image112is an equirectangular projection inFIG.6, the wide area image112may be any of a Mercator projection, an aerial view, a bird's eye view, and a plan view such as a two-dimensional map. InFIG.6, visual field range display113A representing the visual field range of the user A, visual field range display113B representing the visual field range of the user B, an icon114B as a user image of the user B, visual field range display113C representing the visual field range of the user C, and an icon114C as a user image of the user C are superimposed and displayed on the wide area image112. According to the visual field range display113B and the icon114B, the visual field range101B (FIG.4) of the user B is presented to the user A. Similarly, according to the visual field range display113C and the icon114C, the visual field range101C (FIG.4) of the user C is presented to the user A. 
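As a simplified, non-limiting sketch, the range cut out from the entire image100for the wide area image112and the placement of the visual field range display113A to113C could be computed by mapping each user's visual field center and visual field size to pixel coordinates of the equirectangular entire image, roughly as follows; the mapping convention (yaw from -180 to 180 degrees left to right, pitch from 90 to -90 degrees top to bottom) and the margin are assumptions made here for illustration, and the distortion of the equirectangular projection near the poles is ignored.

from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # (left, top, right, bottom) in pixels

def field_rect(center_deg: Tuple[float, float], size_deg: Tuple[float, float],
               width_px: int, height_px: int) -> Rect:
    # Approximate pixel rectangle of one visual field range on an equirectangular image;
    # center_deg = (yaw, pitch), size_deg = (horizontal, vertical) angle of view, in degrees.
    yaw, pitch = center_deg
    h_fov, v_fov = size_deg
    cx = (yaw + 180.0) / 360.0 * width_px       # yaw -180..180 -> x 0..width
    cy = (90.0 - pitch) / 180.0 * height_px     # pitch 90..-90 -> y 0..height
    half_w = h_fov / 360.0 * width_px / 2.0
    half_h = v_fov / 180.0 * height_px / 2.0
    return (int(cx - half_w), int(cy - half_h), int(cx + half_w), int(cy + half_h))

def wide_area_rect(rects: List[Rect], margin_px: int = 40) -> Rect:
    # Smallest rectangle, plus a margin, containing all of the given visual field ranges;
    # usable as the range to cut out from the entire image for the wide area image.
    return (min(r[0] for r in rects) - margin_px, min(r[1] for r in rects) - margin_px,
            max(r[2] for r in rects) + margin_px, max(r[3] for r in rects) + margin_px)

# Example: visual fields of users A, B, and C on a 3840 x 1920 equirectangular image.
rect_a = field_rect((0.0, 0.0), (90.0, 60.0), 3840, 1920)
rect_b = field_rect((-60.0, 20.0), (90.0, 60.0), 3840, 1920)
rect_c = field_rect((70.0, -10.0), (60.0, 40.0), 3840, 1920)
print(wide_area_rect([rect_a, rect_b, rect_c]))

In this sketch, when the three visual fields are concentrated the resulting rectangle covers only a small part of the entire image100, and when they are dispersed it approaches the whole of the entire image100, which corresponds to the behavior described above.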
Therefore, the wide area image112is superimposed and displayed on the visual field image111A (FIG.5) displayed on the display apparatus22used by the user A, and the user A can figure out the visual fields of the other users (in this case, users B and C) sharing the content. This can improve the communication regarding a plurality of users viewing the same content. That is, in a case where, for example, the users perform communication, such as exchange of messages using voice or characters, regarding the things viewed by the users, a situation such as miscommunication can be suppressed, and smooth communication between the users can be assisted. Note that instead of superimposing and displaying the wide area image112on the visual field image111A (FIG.5), the wide area image112may be displayed on a display device other than the display apparatus22, such as, for example, a television receiver. Next, other methods of presenting the visual field ranges of the other users sharing the content to the user will be described. <Method Using Visual Field Direction Instruction Marks> FIG.7is a diagram illustrating an example of display in a case where visual field direction instruction marks121, which represent the directions of the locations of the visual fields of other users sharing the content, are superimposed and displayed as symbol images, which indicate the positions of the visual fields of the other users, on the visual field image111A, which is displayed on the display apparatus22used by the user. In the example of display ofFIG.7, a visual field direction instruction mark121B corresponding to the user B and a visual field direction instruction mark121C corresponding to the user C are superimposed and displayed on the visual field image111A corresponding to the user A (visual field image of user A). The visual field direction instruction mark121B is obtained by surrounding the icon114B (FIG.6) corresponding to the user B by a graphic including an acute projection, and the acute projection indicates the direction of the visual field of the user B. Similarly, the visual field direction instruction mark121C is obtained by surrounding the icon114C (FIG.6) corresponding to the user C by a graphic including an acute projection (graphic illustrating a so-called speech bubble), and the acute projection indicates the direction of the visual field of the user C. The visual field direction instruction marks121B and121C superimposed on the visual field image111A displayed on the display apparatus22used by the user A allow the user A to figure out the directions of the locations of the visual fields of the other users sharing the content (in this case, users B and C). The user A can, for example, rotate the head in the upper left direction indicated by the acute projection of the visual field direction instruction mark121B to move the visual field of the user (user A) to the visual field of the user B. Similarly, the user A can, for example, rotate the head in the right direction indicated by the acute projection of the visual field direction instruction mark121C to move the visual field of the user (user A) to the visual field of the user C. FIG.8is a diagram illustrating an example of display of the visual field image in a case where the visual field of the user is narrower than the visual field of another user. 
As illustrated inFIG.7described above, when the user A viewing the visual field image111A including the superimposed visual field direction instruction mark121C rotates the head in the right direction indicated by the acute projection of the visual field direction instruction mark121C, the visual field image111A (range cut out as the visual field image111A) displayed on the display apparatus22moves to the right side of the content image. As a result, the visual field of the user (user A) gradually approaches the visual field of the user C. Accordingly, the acute projection of the visual field direction instruction mark121C is gradually reduced. Furthermore, when the visual field of the user (user A) is included in the visual field of the user C wider than the visual field of the user (user A) (when the visual field image (displayed on the display apparatus22) of the user A is included in the visual field image of the user C), the acute projection of the visual field direction instruction mark121C disappears as illustrated inFIG.8. Therefore, the size of the projection of the visual field direction instruction mark121C superimposed on the visual field image111A displayed on the display apparatus22of the user A allows the user A to figure out the proximity and the degree of coincidence of the visual field of the user (user A) and the visual field of the other user (in this case, user C). Furthermore, when the projection of the visual field direction instruction mark121C disappears, it can be determined that the user (user A) and the other user (in this case, user C) are viewing the same thing. Therefore, the user A and the user C can perform communication, such as exchange of messages, in this state, and the situation such as miscommunication can be suppressed. FIG.9is a diagram illustrating an example of display of the visual field image in a case where the visual field of the user is wider than the visual field of another user. As inFIG.7described above, when the user A viewing the visual field image111A including the superimposed visual field direction instruction mark121C rotates the head in the right direction indicated by the acute projection of the visual field direction instruction mark121C, the visual field of the user (user A) gradually approaches the visual field of the user C. Accordingly, the acute projection of the visual field direction instruction mark121C is gradually reduced. Furthermore, when the visual field of the user C narrower than the visual field of the user (user A) is included in the visual field of the user (user A), the acute projection of the visual field direction instruction mark121C disappears as illustrated inFIG.9, and visual field range display141C representing the visual field of the user C is superimposed and displayed on the visual field image111A. Therefore, the size of the projection of the visual field direction instruction mark121C superimposed on the visual field image111A displayed on the display apparatus22used by the user A and the visual field range display141C allow the user A to figure out the proximity and the degree of coincidence (overlap) of the visual field of the user (user A) and the visual field of the other user (in this case, user C). Furthermore, when the projection of the visual field direction instruction mark121C disappears, it can be determined that the user (user A) and the other user (in this case, user C) are viewing the same thing. 
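One simplified way of deciding that the acute projection should shrink and finally disappear, that is, of deciding that one visual field is included in the other, is sketched below; each visual field is approximated by a yaw/pitch interval around its visual field center, and the returned value could be used to scale the projection of the visual field direction instruction mark. The interval approximation and the names are assumptions made here for illustration only.

from typing import Tuple

Field = Tuple[Tuple[float, float], Tuple[float, float]]  # ((yaw, pitch), (h_fov, v_fov)) in degrees

def _interval(center: float, fov: float) -> Tuple[float, float]:
    return (center - fov / 2.0, center + fov / 2.0)

def _contains(outer: Tuple[float, float], inner: Tuple[float, float]) -> bool:
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def pointer_scale(own: Field, other: Field) -> float:
    # 0.0 when one visual field contains the other (the acute projection disappears);
    # otherwise a value in (0, 1] that grows with the remaining angular separation.
    (own_yaw, own_pitch), (own_hf, own_vf) = own
    (oth_yaw, oth_pitch), (oth_hf, oth_vf) = other
    own_h, own_v = _interval(own_yaw, own_hf), _interval(own_pitch, own_vf)
    oth_h, oth_v = _interval(oth_yaw, oth_hf), _interval(oth_pitch, oth_vf)
    if (_contains(own_h, oth_h) and _contains(own_v, oth_v)) or \
       (_contains(oth_h, own_h) and _contains(oth_v, own_v)):
        return 0.0
    # Remaining angular separation of the visual field centers, normalized by 180 degrees.
    gap = max(abs(own_yaw - oth_yaw), abs(own_pitch - oth_pitch))
    return min(1.0, gap / 180.0)

# Example: the visual field of user A approaches and then enters the wider visual field of user C.
print(pointer_scale(((30.0, 0.0), (60.0, 40.0)), ((90.0, 0.0), (120.0, 80.0))))  # about 0.33
print(pointer_scale(((85.0, 0.0), (60.0, 40.0)), ((90.0, 0.0), (120.0, 80.0))))  # 0.0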
Therefore, the user A and the user C can perform communication, such as exchange of messages, in this state, and the situation such as miscommunication can be suppressed. Note that the visual field direction instruction mark121C and the visual field range display141C are types of visual field position instruction image indicating the position of the visual field of the user C. <Method Using Visual Field Direction Instruction Line> FIG.10is a diagram illustrating an example of display in a case where a visual field direction instruction line131, which represents the direction of the location of the visual field of another user sharing the content, is superimposed and displayed as a symbol image, which indicates the position of the visual field of the other user, on the visual field image111A, which is displayed on the display apparatus22used by the user. In the example of display ofFIG.10, a visual field direction instruction line131B is superimposed and displayed on the visual field image111A corresponding to the visual field of the user A. In the visual field direction instruction line131B, one end of the straight line represents the direction of the location of the visual field of the user B (in this case, upper left direction). In addition, an icon132B corresponding to the user B is superimposed and displayed on the visual field image111A near the visual field direction instruction line131B in order to indicate that the visual field direction instruction line131B corresponds to the user B. The visual field direction instruction line131B superimposed on the visual field image111A displayed on the display apparatus22used by the user A allows the user A to figure out the direction of the location of the visual field of the other user sharing the content (in this case, user B). The user A can, for example, rotate the head in the upper left direction along the visual field direction instruction line131B to move the visual field of the user (user A) to the visual field of the user B. This can suppress the situation, such as miscommunication, in the case where the users perform communication, such as exchange of messages, and smooth communication between the users can be assisted. <3. Tracking Image that Tracks Visual Field of Another User> The user apparatus20can track the visual field of another user of another user apparatus20based on the visual field information of the other user apparatus20acquired from the server apparatus40. FIGS.11A,11B, and11Care diagrams illustrating an example of display of a tracking image AC displayed on the display unit73of the display apparatus22when the user apparatus20used by the user A tracks the visual field of the user C in the case where the visual field image (range cut out as the visual field image from the content image) of the user A is larger than the visual field image of the user C so that the visual field of the user A is wider than the visual field of the user C. The user apparatus20used by the user A can generate the tracking image AC by cutting out, as a visual field image, an image in the visual field size of the user A or the visual field size of the user C with the center at the visual field center indicated in the visual field information of the user C, from the content image based on the visual field information of the user C. 
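The cut-out used to generate such a tracking image could be sketched as follows; the equirectangular pixel mapping is the same simplified one assumed in the earlier sketches, and the function name and arguments are assumptions made here for illustration only.

from typing import Tuple

def tracking_rect(tracked_center: Tuple[float, float],
                  own_size: Tuple[float, float],
                  tracked_size: Tuple[float, float],
                  width_px: int, height_px: int,
                  use_tracked_size: bool = False) -> Tuple[int, int, int, int]:
    # Pixel rectangle, on an equirectangular content image, of a tracking image that is
    # centered at the tracked user's visual field center and cut out either in the
    # viewer's own visual field size or in the tracked user's visual field size.
    yaw, pitch = tracked_center
    h_fov, v_fov = tracked_size if use_tracked_size else own_size
    cx = (yaw + 180.0) / 360.0 * width_px
    cy = (90.0 - pitch) / 180.0 * height_px
    half_w = h_fov / 360.0 * width_px / 2.0
    half_h = v_fov / 180.0 * height_px / 2.0
    return (int(cx - half_w), int(cy - half_h), int(cx + half_w), int(cy + half_h))

# Example: user A (90 x 60 degree view) tracks user C (60 x 40 degree view) on a
# 3840 x 1920 equirectangular image while keeping user A's own visual field size.
print(tracking_rect((70.0, -10.0), (90.0, 60.0), (60.0, 40.0), 3840, 1920))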
In a tracking image151AC illustrated inFIG.11A, visual field range display152C representing the visual field of the user C and an icon153C corresponding to the user C are superimposed and displayed on the visual field image in the visual field size of the user A. In the tracking image151AC illustrated inFIG.11B, masking display154is superimposed on a range outside the visual field of the user C in the visual field image in the visual field size of the user A, and the icon153C corresponding to the user C is superimposed and displayed on the range of the visual field of the user C. In the tracking image151AC illustrated inFIG.11C, the visual field image in the visual field size of the user C is expanded to correspond to the visual field size of the user A, and the icon153C corresponding to the user C is further superimposed and displayed. FIGS.12A and12Bare diagrams illustrating an example of display of the tracking image AC displayed on the display unit73of the display apparatus22when the user apparatus20used by the user A tracks the visual field of the user C in the case where the visual field image of the user A is smaller than the visual field image of the user C so that the visual field of the user A is narrower than the visual field of the user C. In a tracking image161AC illustrated inFIG.12A, an icon162C corresponding to the user C and an arrow163indicating that the visual field of the user C includes the visual field of the user A (indicating that the visual field of the user C is wider than the visual field of the user A) are superimposed and displayed on the visual field image in the visual field size of the user A. In the tracking image161AC illustrated inFIG.12B, the icon162C, the arrow163, and visual field range display164representing the visual field of the user A are superimposed and displayed on an image obtained by reducing the visual field image in the visual field size of the user C to a size within the visual field size of the user A. The user apparatus20can switch the mode between a normal mode for determining the visual field image of the user displayed on the display apparatus22and a tracking mode for displaying the tracking images illustrated inFIGS.11A,11B,11C,12A, and12B according to the motion of the head or the like of the user. According to the tracking mode, the user can view the scene viewed by another user. <4. Highlighting User Speaking> When the visual fields (including the directions of the visual fields) of other users are always displayed on the visual field image111A corresponding to the visual field of the user A, it may become difficult for the user A to see the visual field image111A in a case where the number of other users increases. Therefore, in displaying the visual fields of the other users, for example, the visual field of only the user transmitting a message in communication between the users may be displayed, or the visual field of the user transmitting a message may be highlighted. FIG.13is a diagram illustrating an example of display in a case where the user apparatus20used by the user A highlights a visual field direction instruction mark171C corresponding to the user C in response to transmission of a message by the user C. In the case ofFIG.13, the visual field direction instruction mark171C highlighted by thickening the contour line is superimposed on the visual field image111A corresponding to the visual field of the user A. 
Furthermore, text display172(in the case ofFIG.13, “LOOK HERE”) corresponding to the speech of the user C is superimposed and displayed below the visual field direction instruction mark171C. Note that to allow the user A to visually and more easily distinguish the other user transmitting the message and another user not transmitting a message, a visual field direction instruction mark171B corresponding to the other user not transmitting the message (in this case, user B) may be displayed by changing the contour to a broken line or by reducing the brightness to make the visual field direction instruction mark171B inconspicuous. <5. Visual Field Information Presentation Process of User Apparatus20> Next, a visual field information presentation process for presenting the locations of the visual fields of the other users to the user while displaying the visual field image corresponding to the visual field of the user on the display apparatus22of the user apparatus20will be described. FIG.14is a flow chart describing the visual field information presentation process. In the information processing apparatus21(FIG.2) of the user apparatus20used by the user A, the communication unit51connects to the server apparatus40through the Internet31and acquires the data of the content in step S1. The data of the content acquired from the server apparatus40is held in the content holding unit52. Furthermore, in step S1, the communication unit51starts to acquire the visual field information of the other users (for example, users B and C) viewing the same content from the visual field information management unit42of the server apparatus40. In step S2, the visual field image determination unit53sets, as an initial position of the visual field center, a position in a range viewed by more other users based on the visual field information of the other users. Furthermore, the visual field image determination unit53determines the visual field image (corresponding to the visual field) of the user in the content image based on the initial position of the visual field center and the visual field size. By setting the initial position of the visual field center at the position in the range viewed by more other users, the user (in this case, user A) can immediately view, as the visual field image, the same image as the other users, and the users can easily communicate. In step S3, the image cutting unit54cuts out the visual field image111A (FIG.5) determined by the visual field image determination unit53from the content image to generate the visual field image111A. In step S4, the display control unit60superimposes, on the visual field image111A, visual field position instruction images as display representing the visual fields of the other users. Subsequently, the display control unit60supplies an image signal of the visual field image111A including the superimposed visual field position instruction images to the display apparatus22and causes the display unit73to display the visual field image111A. Here, the visual field position instruction images include the wide area image112(FIG.6), symbol images, and the like. The symbol images include the visual field direction instruction marks121B and121C (FIG.7), the visual field direction instruction line131B (FIG.10), and the like. 
In step S5, the visual field image determination unit53determines whether or not to move the visual field image displayed on the display unit73based on the amount of movement (angle of rotation) of the line of sight and the amount of movement (angle of rotation) of the head. Here, in a case where the visual field image determination unit53determines to move the visual field image, the process proceeds to step S6. In step S6, the visual field information holding unit55updates and holds the visual field information of the user based on the amount of movement of the line of sight and the amount of movement of the head and outputs the visual field information to the communication unit51. The communication unit51notifies the updated visual field information of the user to the server apparatus40through the Internet31. Note that the visual field information notified to the server apparatus40is to be supplied to the other user apparatuses20. The visual field image determination unit53determines the visual field image corresponding to the visual field of the user in the content image based on the updated visual field information. The process returns to step S3, and the subsequent process is repeated until the end of the reproduction of the content. In this way, the visual field image corresponding to the visual field of the user in the content image is determined based on the updated visual field information, and the visual field image displayed on the display unit73moves based on the amount of movement of the line of sight of the user and the amount of movement of the head (the range cut out as the visual field image from the content image moves). Furthermore, in a case where the visual field image determination unit53determines not to move the visual field image in step S5, the visual field image determination unit53sets again the visual field image of the last time as the visual field image corresponding to the visual field of the user, and step S6is skipped. The process then returns to step S3, and the subsequent process is repeated until the end of the reproduction of the content. According to the visual field information presentation process described above, the visual fields (including the directions of the locations of the visual fields) of the other users are displayed to present the visual fields of the other users to the user. The user can move the line of sight and the head to move the visual field of the user based on the presentation, and the user can immediately see the same images as the other users. This can suppress the situation, such as miscommunication, in the communication between the users. <6. Switch of Visual Field Image and Wide Area Image> In the description so far, the wide area image (FIG.6) is superimposed and displayed on the visual field image (FIG.5) or displayed on another display device. However, the display apparatus22can switch and display the visual field image and the wide area image. In switching the display of the display apparatus22from the visual field image to the wide area image, the visual field image may be instantaneously switched to the wide area image. In addition, the visual field image may be zoomed out, or the point of view may be moved to thereby gradually change the visual field image to the wide area image. The visual field image can be switched to the wide area image when, for example, a user (for example, user A) wants to know the visual field (including the direction of the visual field) of another user (for example, user B). 
In addition, even in a case where there are no other users, the visual field image can be switched to the wide area image when the user A wants to figure out the position of the visual field of the user in the entire content image or the point of view of the user (in a case where the content is a free viewpoint image). Examples of a trigger for switching the visual field image to the wide area image are as follows. For example, an operation of a key of a remote control included in the input unit58(FIG.2) operated by the user may be the trigger, and the visual field image may be switched to display the wide area image only in a period in which the user is pressing the key of the remote control. In addition, a voice command spoken by the user may be the trigger, for example. The visual field image may be switched to the wide area image in response to a predetermined voice command “zoom out” spoken by the user, and the wide area image may be returned to the visual field image in response to a voice command “return” spoken by the user. In addition, a motion of the head of the user may be the trigger, for example. The visual field image may be switched to a bird's eye view or the like as a wide area image when the user faces straight down. The visual field image may be switched to an image of a 2D map as a wide area image corresponding to the content space when the user faces straight up. In addition, communication between users may be the trigger, for example. The image may be switched to the wide area image when the user starts to communicate with another user or when the distance from another user during communication has increased so that the other user is out of the visual field. After that, the visual field image and the wide area image may be switched in response to a predetermined gesture operation, such as a nod of the user or a motion of the hand. In addition, an event in the content may be the trigger, for example. In response to the event, the visual field image may be switched to the wide area image including the place of the event. Furthermore, in a case where, for example, the content is a live image in a real space, generation of an alert in the real space (for example, discovery of a suspicious person, fire, operation of an emergency button, or the like) may be the trigger. In response to the alert, the visual field image may be switched to the wide area image including the location of the generation of the alert. Furthermore, the content of a conversation as communication between users may be analyzed, and the switch between the visual field image and the wide area image may be triggered when the result of analysis indicates predetermined content. In addition, the visual field image and the wide area image may be switched in response to a gesture (action) of the user looking to the left or right or a gesture of pulling. Next,FIGS.15A,15B,15C,15D, and15Eare diagrams illustrating an example of display of the wide area image switched from the visual field image.FIG.15Ais an example of display of a visual field image181displayed on the display apparatus22worn by the user A. FIG.15Bis an example of display of a wide area image182using a bird's eye view switched from the visual field image181(FIG.15A) in response to the detection of the trigger. A visual field mark183indicating the position (that is, point of view) of the user and the direction of the visual field is superimposed on the wide area image182.
The bird's eye view is displayed at an angle at which the visual field mark183is not hidden behind objects, such as main buildings, on the wide area image182, with priority given to making the current visual field of the user indicated by the visual field mark183easy to understand. FIG.15Cis an example of display of a wide area image184using an equirectangular projection switched from the visual field image181(FIG.15A) in response to the detection of the trigger. Visual field range display185A representing the visual field of the user A is superimposed on the wide area image184. FIG.15Dis an example of display of a wide area image186using an equirectangular projection switched from the visual field image181(FIG.15A) triggered when, for example, the user A starts to communicate with the user B. Visual field range display187A representing the visual field of the user A and visual field range display187B representing the visual field of the user B are superimposed on the wide area image186. FIG.15Eis an example of display of a wide area image188using an equirectangular projection switched from the visual field image181(FIG.15A) triggered when, for example, the user A starts to communicate with the user B. The wide area image188is an expanded image of the range including the visual fields of the user A and the user B. Visual field range display189A representing the visual field of the user A and visual field range display189B representing the visual field of the user B are superimposed on the wide area image188. As for the wide area image, when the visual fields of a plurality of users differ significantly in the case where the plurality of users communicates while viewing the content, an aerial view or the like including the visual fields of the plurality of users (visual field images of the plurality of users) at the same time can be adopted as the wide area image and automatically displayed. As for the content of AR, for example, a separate camera can be used to take a picture of the real world, and the image can be used to generate the wide area image. Furthermore, in a case where there are a user wearing AR glasses and another user remotely viewing the content of the image of a live camera arranged in the space of the user, an aerial image including the visual fields of the user and the other user at the same time can be adopted as the wide area image and automatically displayed during the communication between the user and the other user. Furthermore, in the case where the visual fields of a plurality of users differ significantly, the display of the wide area image can be prioritized over the display of the visual field image. Furthermore, in a case where it is estimated that a plurality of users pays attention to different objects in the content image, an image including the different objects (image in which the different objects can be viewed) can be adopted as the wide area image. In addition, the wide area image can be a reduced image that displays, at a super wide angle, a predetermined range around the visual field image of the user in the content image together with the positions of the other users. Furthermore, as for the aerial view as the wide area image, the point of view of the aerial view and the angle of view of the aerial view can be selected based on the direction of the HMD as the display apparatus22.
The display of the wide area image may be triggered when the user recognizes another user and starts a conversation, that is, for example, when the user selects the other user from a menu including display of the icons of the other users and starts a voice call or a text chat. In addition, the display of the wide area image may be triggered when another user moves away from the user in the middle of a conversation, such as a voice call, between the user and the other user. Note that in the case where the content is a free viewpoint image, an image including a scene of the content space in the free viewpoint image viewed in an arbitrary direction from an arbitrary point of view can be the visual field image, and the visual field image can be displayed on the HMD as the display apparatus22. However, the movable range of the user wearing the HMD is limited. Therefore, as for the movement of the visual field image, the user may be allowed to move the image in the rotation direction, and the user apparatus20may automatically perform the parallel translation. In the case of viewing such a free viewpoint image, the wide area image can also be displayed as described above. In addition, the wide area image can be displayed to cover the entire field of view of the user. Furthermore, the wide area image can be displayed below the visual field of the user so as to be attached to the ground when the user faces below. In addition, a window can be displayed in part of the visual field image, and the wide area image can be displayed in the window. Furthermore, the wide area image can be faintly displayed with lower brightness than the visual field image. According to the wide area image, in a case where the visual fields of the user and the other user are different so that the communication is not smooth, the wide area image including the visual fields of the user and the other user can be displayed, and the obstacle (gap) of the communication can be improved. Furthermore, according to the wide area image, the user who cannot recognize the position in the free viewpoint image can easily return to a desirable position or can make a long-distance movement in a short time. <Image Switching Process of Switching Visual Field Image and Wide Area Image> Next,FIG.16is a flow chart describing an image switching process of switching the visual field image and the wide area image. However, the movement of the visual field of the user will not be mentioned in the description of the image switching process. In step S11the information processing apparatus21generates the visual field image corresponding to the visual field of the user and causes the display unit73to display the generated visual field image. Specifically, the image cutting unit54of the information processing apparatus21cuts out the visual field image from the content image held in the content holding unit52to generate the visual field image, and the display control unit60supplies the image signal of the generated visual field image to the display apparatus22to cause the display unit73to display the visual field image. In step S12, the information processing apparatus21determines whether or not there is a trigger for instructing a switch between the visual field image and the wide area image. 
Specifically, the trigger detection unit59of the information processing apparatus21determines whether or not a wide area trigger for instructing a switch to the wide area image is detected based on a key operation from the user, a speech of the user, an image of the user, or the like input by the input unit58. Here, in a case where the trigger detection unit59determines that the wide area trigger is not detected, the process returns to step S11. As a result, the visual field image is still displayed on the display unit73. On the other hand, in a case where the trigger detection unit59determines that the wide area trigger is detected in step S12, the process proceeds to step S13. In step S13, the information processing apparatus21generates the wide area image, and the display apparatus22switches the display of the display unit73from the visual field image to the wide area image. Specifically, for example, the display control unit60superimposes the visual field range display representing the visual field of each user (for example, visual field range display187A and187B ofFIG.15D) on the entire image100(FIG.4) of the content image held in the content holding unit52to thereby generate the wide area image (for example, wide area image186ofFIG.15D) and supplies the image signal of the generated wide area image to the display apparatus22to cause the display unit73to display the wide area image. Note that the display control unit60can determine the display position of the visual field range display representing the visual field of each user based on the visual field information of each user held in the visual field information holding unit55. After step S13, the process proceeds to step S14, and the trigger detection unit59determines whether or not the visual field trigger for instructing a switch to the visual field image is detected based on a key operation from the user, a speech of the user, an image of the user, or the like input by the input unit58. Here, in a case where the trigger detection unit59determines that the visual field trigger is not detected, the process returns to step S13. As a result, the wide area image is still displayed on the display unit73. In a case where the trigger detection unit59determines that the visual field trigger is detected in step S14, the process returns to step S11and the display of the display unit73is switched to the visual field image. According to the image switching process described above, the user can perform an operation, a speech, an action, a gesture, or the like as a trigger to switch the visual field image and the wide area image. The user viewing the wide area image can figure out the visual fields of the other users, and the situation, such as miscommunication, can be suppressed in the communication between the users. In addition, the user viewing the wide area image can figure out the visual field and the point of view of the user (position of the user in the content space in the case where the content is a free viewpoint image). Therefore, in a case where, for example, the user cannot recognize the position of the user in the content space, the user can quickly return to a desirable position. <7. Movement of Visual Field Image> Next, movement of the visual field image will be described. In the HMD, the user rotates the head to move the visual field image (range of the visual field image) to be cut out from the content image, and as a result, the visual field of the user moves.
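In the simplest case, the visual field center could follow the head one-to-one, as in the sketch below (yaw in degrees, wrapped to the range from -180 to 180); the amplified movement described next would correspond to using a gain larger than 1.0. The gain value and the function name are assumptions made here for illustration only.

def move_field_center(center_yaw_deg: float, head_delta_deg: float,
                      gain: float = 1.0) -> float:
    # Update the yaw of the visual field center from a change of the head rotation angle.
    # With gain=1.0 the visual field image follows the head one-to-one; a larger gain
    # rotates the visual field image more than the head.
    yaw = center_yaw_deg + gain * head_delta_deg
    return (yaw + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)

# Example: the head is rotated 90 degrees.
print(move_field_center(0.0, 90.0))            # 90.0 (one-to-one)
print(move_field_center(0.0, 90.0, gain=2.0))  # -180.0, i.e., a 180 degree rotation of the visual field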
However, for example, in a case of moving the visual field image to a position directly behind the visual field image currently displayed on the display apparatus22in the content image, the body also needs to be rotated along with the head, because a person usually cannot rotate only the head 180 degrees. Therefore, if the user frequently looks to the left and right or the back while wearing the HMD, the user gets physically tired. Thus, the user apparatus20in the present embodiment moves the visual field image (visual field center indicated in the visual field center information included in the visual field information for determining the visual field image) more than the amount of rotation of the head to reduce the physical fatigue of the user. That is, the angle of rotation of the visual field image is made larger than the angle of rotation of the head of the user. For example, the visual field image is rotated and moved 180 degrees in the horizontal direction when the user rotates the head 90 degrees in the horizontal direction. FIGS.17A and17Bare diagrams illustrating an example in which the visual field image is rotated and moved more than the amount of rotation of the head of the user. FIG.17Aillustrates a case of viewing, from above, a state in which the user faces the front in a predetermined direction and views a spherical image as a content image200. In the content image200, there is an object201in front of the user, and there is an object202behind the user. In this case, the object201exists in a visual field image (range of visual field image)203to be cut out from the content image200when the user faces the front, and the user can visually recognize the object201. On the other hand, the object202exists outside the visual field image203of the user, and the user cannot visually recognize the object202. When the user rotates the head 90 degrees in this state, the visual field image203is rotated, for example, 180 degrees, which is larger than the angle of rotation of the head, in the same direction as the rotation of the head in the content image200as illustrated inFIG.17B. In this case, the object201exists outside the visual field image203after the rotation, and the user cannot visually recognize the object201. However, the object202exists in the visual field image203after the rotation, and the user can visually recognize the object202. However, in a case where the visual field image is simply moved by equal to or more than the amount of rotation of the head as illustrated inFIGS.17A and17B, the user may feel, for example, discomfort or motion sickness. Therefore, in the user apparatus20, the visual field image can be rotated and moved more than the amount of rotation of the head only in a case where the user takes an action from which it can be assumed that the user strongly intends to greatly move the visual field or under a condition that the user is unlikely to notice a large movement of the visual field. <Movement of Visual Field Image According to Movement of Line of Sight> The user apparatus20can move the visual field image to thereby move the visual field of the user when only the line of sight is moved without the rotation of the head. FIGS.18A,18B,19A, and19Bare diagrams illustrating the relationship between the motion of the line of sight of the user and the movement of the visual field image.
FIG.18Ais an example of an entire image190obtained by developing the content image on a plane, and a range191on the entire image190indicates a range cut out as a visual field image192(FIG.18B) to be displayed on the display unit73of the display apparatus22.FIG.18Bis an example of display of the visual field image192, and an X mark indicates a line-of-sight position195of the user on the visual field image192. In the case ofFIG.18B, the line-of-sight position195is positioned at the lower center of the visual field image192. FIG.19Aindicates a motion of the line of sight of the user. InFIG.19A, the line-of-sight position195of the user is moved to the lower side of the left edge of the visual field image192.FIG.19Billustrates a range cut out as the visual field image after the movement of the visual field image according to the movement of the line of sight, that is, after the movement of the line of sight illustrated inFIG.19A. In a case where the user moves the line of sight so that the line-of-sight position195approaches the edge of the visual field image as illustrated inFIG.19A, it can be assumed that the user is willing to see outside the range191of the current visual field image. Therefore, the visual field image determination unit53moves the visual field image based on the line-of-sight position195in response to the approach of the line-of-sight position195to the edge of the visual field image. That is, the range cut out as the visual field image is moved from the range191(FIG.18A) to a range196(FIG.19B) in the same direction as the movement direction of the line of sight. The movement of the visual field image based on the line-of-sight position195(of the line of sight after the movement) as described above can be made by, for example, setting the line-of-sight position195as the visual field center to determine the visual field image to be cut out from the content image. <Movement of Visual Field Image According to Amount of Movement of Line of Sight and Amount of Rotation of Head> In a case where the user swiftly moves the line of sight and rotates the head, it may be difficult for the user to notice a large movement of the visual field. Therefore, the user apparatus20moves the visual field image more than the amount of rotation of the head in a case where the combined angle of the amount of movement (angle of rotation) of the line of sight of the user and the amount of rotation (angle of rotation) of the head in a certain time is equal to or greater than a threshold. However, in a case where the rotation direction of the head and the movement direction of the line of sight are different, it is likely that the user is viewing the object in the currently displayed visual field image, and the user may feel awkward or unpleasant if the movement of the visual field image is equal to or greater than the actual rotation of the head in such a state. Therefore, in a case where the rotation direction of the head and the movement direction of the line of sight are opposite directions, the movement of the visual field image is made equal to or greater than the actual rotation of the head only in a case where the difference between the rotation of the head and the movement of the line of sight (the line-of-sight movement angle described later) changes by more than a threshold within a certain time.
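A minimal sketch of the edge-triggered movement just described is given below. The gaze position is assumed to be normalized to the range [0, 1] in both axes of the visual field image, and the margin value and the policy of shifting the visual field center toward the gaze position are illustrative assumptions consistent with the description above.

```python
# Sketch of moving the visual field when the line of sight approaches the edge
# of the displayed visual field image.

def gaze_near_edge(gaze_x, gaze_y, margin=0.1):
    """True if the line-of-sight position is close to any edge of the image."""
    return (gaze_x < margin or gaze_x > 1.0 - margin or
            gaze_y < margin or gaze_y > 1.0 - margin)

def recenter_on_gaze(center_yaw, center_pitch, gaze_x, gaze_y, fov_h, fov_v):
    """Shift the visual field center toward the gaze position (degrees)."""
    d_yaw = (gaze_x - 0.5) * fov_h      # horizontal offset of the gaze from center
    d_pitch = (0.5 - gaze_y) * fov_v    # vertical offset of the gaze from center
    return center_yaw + d_yaw, center_pitch + d_pitch

# Example: the gaze has drifted to the left edge, so the cut-out range follows it.
if gaze_near_edge(0.05, 0.5):
    print(recenter_on_gaze(0.0, 0.0, 0.05, 0.5, fov_h=90.0, fov_v=60.0))
```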
<Definition of Line-of-Sight Movement Angle and Head Rotation Angle> FIG.20is a diagram for describing definition of a line-of-sight movement angle representing the amount of movement of the line of sight of the user, andFIG.20illustrates a state of viewing, from above, the visual field of the user provided by the display apparatus22. As illustrated inFIG.20, an angle from a visual field middle representing the middle of the visual field to a line-of-sight direction after the movement of the line of sight is defined as a line-of-sight movement angle representing the angle of rotation that is the amount of motion of the line of sight of the user. Furthermore, for example, an angle from the visual field middle to the left edge of the visual field is defined as a positive line-of-sight movement angle, and an angle from the visual field middle to the right edge of the visual field is defined as a negative line-of-sight movement angle. FIGS.21A,21B, and21Care diagrams for describing the definition of the head rotation angle representing the amount of rotation of the head of the user, andFIGS.21A,21B, and21Cillustrate a situation of viewing, from above, the state in which the user views a spherical image as the content image200. The head rotation angle is defined such that a predetermined direction in which the head of the user is currently facing indicated byFIG.21Ais 0 degrees. Furthermore, for example, an angle from the predetermined direction to the left side is defined as a positive head rotation angle, and an angle from the predetermined direction to the right side is defined as a negative head rotation angle. Note that inFIG.21A, the head rotation angle is 0 degrees, and the line-of-sight movement angle is a positive value. InFIG.21B, the head rotation angle is a positive value, and the line-of-sight movement angle is a positive value larger than the line-of-sight movement angle inFIG.21A. InFIG.21C, the head rotation angle is a positive value, and the line-of-sight movement angle is a negative value. <Visual Field Image Movement Process According to Amount of Movement of Line of Sight and Amount of Rotation of Head> Next, a visual field image movement process according to the amount of movement of the line of sight and the amount of rotation of the head will be described. FIG.22is a flow chart describing a visual field image movement process of moving the visual field image according to the amount of movement of the line of sight and the amount of rotation of the head. The visual field image movement process is executed while the content is viewed. In step S21, the head motion detection unit72of the display apparatus22(FIG.3) starts to acquire the head rotation angle of the user. The visual field image determination unit53of the information processing apparatus21(FIG.2) is notified of the acquired head rotation angle of the user. In step S22, the line-of-sight detection unit71of the display apparatus22detects the line of sight of the user and starts to acquire the line-of-sight position195(FIGS.18A and18B) and the line-of-sight movement angle of the user. The visual field image determination unit53of the information processing apparatus21is notified of the line-of-sight position195and the line-of-sight movement angle. 
In step S23, the visual field image determination unit53determines whether or not the line of sight of the user has approached the edge of the visual field image based on the notification from the line-of-sight detection unit71and advances the process to step S24when determining that the line of sight of the user has approached the edge of the visual field image. In step S24, the visual field image determination unit53moves the visual field image in the same direction as the movement direction of the line of sight from the current position in the content image. As a result, the visual field image displayed on the display unit73is moved in the direction of the line of sight. Note that in a case where the visual field image determination unit53does not determine that the line of sight of the user has approached the edge of the visual field image in step S23, step S24is skipped, and the process proceeds to step S25. In step S25, the visual field image determination unit53determines whether or not a combined angle of the head rotation angle and the line-of-sight movement angle of the user in a certain time is equal to or greater than a threshold. In a case where the visual field image determination unit53determines that the combined angle of the head rotation angle and the line-of-sight movement angle of the user in a certain time is equal to or greater than the threshold, that is, for example, in a case where the head and the line of sight of the user move at equal to or higher than a certain speed in the same direction or in a case where one of the motions (rotations) of the head and the line of sight of the user is sufficiently larger than the other although the head and the line of sight move in opposite directions, the process proceeds to step S26. In step S26, the visual field image determination unit53moves (rotates) the visual field image more than the head rotation angle from the current position in the content image. Note that an arbitrary method can be used to calculate the amount of movement (angle of rotation) of the visual field image in moving the visual field image more than the head rotation angle. The process then returns to step S23, and the subsequent process is repeated. On the other hand, in a case where the visual field image determination unit53determines that the combined angle of the head rotation angle and the line-of-sight movement angle of the user in a certain time is smaller than the threshold in step S25, the process proceeds to step S27. In step S27, the visual field image determination unit53moves the visual field image according to the head rotation angle from the current position in the content image. The process then returns to step S23, and the subsequent process is repeated. According to the visual field image movement process described above, it can be assumed that the user is willing to view the outside (scene outside) the current visual field image (scene in the visual field image) in the case where the user moves the line of sight to the edge of the visual field image. Therefore, the visual field image is moved in the direction in which the user has moved the line of sight. Furthermore, in the case where the combined angle of the head rotation angle and the line-of-sight movement angle of the user in a certain time is equal to or greater than the threshold, it may be difficult for the user to notice a large movement of the visual field. Therefore, the visual field image is moved more than the head rotation angle. 
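As a minimal sketch of the branch in steps S23 to S27 above, the following function returns how far to move the visual field in one interval: the visual field follows the head normally, and is moved more than the head when the combined head rotation angle and line-of-sight movement angle over a short interval are equal to or greater than a threshold. The threshold, the gain, the use of an absolute value to treat leftward and rightward motion symmetrically, and the function name are assumptions for illustration.

```python
# Sketch of the decision in steps S23 to S27. Angles use the sign convention
# defined above (positive to the left, negative to the right).

def visual_field_delta(head_angle_deg, gaze_angle_deg,
                       threshold_deg=60.0, gain=2.0):
    """Return how far to move the visual field for this interval (degrees)."""
    combined = head_angle_deg + gaze_angle_deg
    if abs(combined) >= threshold_deg:
        # S26: head and gaze move quickly in the same direction (or one clearly
        # dominates), so a large movement is unlikely to be noticed: amplify.
        return head_angle_deg * gain
    # S27: otherwise move the visual field exactly as much as the head.
    return head_angle_deg

print(visual_field_delta(40.0, 30.0))    # 80.0 -> amplified movement
print(visual_field_delta(40.0, -30.0))   # 40.0 -> ordinary movement
```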
As a result, the user can, for example, rotate the head in some degree to move the visual field to right behind without rotating the body. Therefore, the visual field can be easily moved in viewing the content, without increasing the physical load. <8. Selection of Avatar to be Displayed> In the case where the content is a free viewpoint image when a plurality of users views the same content, each user can arbitrarily change the point of view of the user. Therefore, another user (point of view of another user) may exist in the visual field of a user (visual field image viewed by the user). In such a case, a character as a model of the other user, such as an avatar, can be displayed (superimposed) at the position of the other user (point of view of the other user) in the visual field image (content space in the visual field image). The avatar may be a drawing or a photograph. In addition, the avatar may be two-dimensionally displayed or three-dimensionally displayed. The display of the avatar of the other user on the visual field image can expect an advantageous effect that the users can more smoothly communicate. However, if there are a plurality of other users in the visual field image, and all of the avatars corresponding to the plurality of other users are displayed, there may be a trouble, such as the visual field image is occupied by the avatars of the other users (avatars corresponding to the other users) or finding the partner (avatar) of communication becomes difficult. Therefore, in the case of displaying the avatars, a mechanism of selecting the avatars to be displayed on the visual field image can be implemented. In the user apparatus20of the present embodiment, the display control unit60(FIG.2) of the information processing apparatus21selects the avatars to be displayed (superimposed) on the visual field image. FIG.23is a plan view for describing a method of selecting avatars to be displayed on the visual field image in the display control unit60.FIG.23illustrates a case of viewing, from above, a content space301in the case where a plurality of users views the content of the same free viewpoint image. The content space301includes an avatar311A corresponding to a user A, an avatar311B corresponding to a user B, an avatar311C corresponding to a user C, and an avatar311D corresponding to a user D.FIG.23illustrates a range312A in the visual field image of the user A, a range312B in the visual field image of the user B, a range312C in the visual field image of the user C, and a range312D in the visual field image of the user D. The display control unit60of the information processing apparatus21of the user apparatus20can set priorities of other users (avatars of other users), among the other users (avatars) existing in the visual field image of the user, in descending order of possibility of viewing the same thing as the user, that is, for example, in descending order of overlapping area of the range in the visual field image (here, area as viewed in the plan view ofFIG.23), and display the avatars of the other users according to the priorities. For example, the display control unit60of the information processing apparatus21of the user apparatus20used by the user A specifies the other users (in the case ofFIG.23, users B, C, and D) existing in the range312A in the visual field image of the user A and detects the overlapping areas of the range312A in the visual field image of the user A and the ranges in the visual field images of the other users. 
Specifically, the display control unit60detects the area of each of an overlapping range315AB of the range312A and the range312B in the visual field image of the user B, an overlapping range315AC of the range312A and the range312C in the visual field image of the user C, and an overlapping range315AD of the range312A and the range312D in the visual field image of the user D. In the case ofFIG.23, the area of the overlapping range315AD is the largest, followed by the overlapping range315AB and the overlapping range315AC. Therefore, the priorities of displaying the avatars are determined in order of the user D, the user B, and the user C. Furthermore, the avatars are displayed based on the priorities according to the number of avatars of the other users displayed in the visual field image of the user A. Note that the number of avatars of the other users displayed in the visual field image are determined in advance, and the user can change the number of avatars. FIG.24is a diagram corresponding to the state illustrated inFIG.23, the diagram illustrating an example of display of the visual field image of the user A displayed on the display apparatus22of the user apparatus20used by the user A and the avatars corresponding to the other users displayed in the visual field image. Here,FIG.24corresponds to a case where the number of avatars of the other users displayed in the visual field image is one. As illustrated inFIG.24, the avatar311D corresponding to the user D indicated by a solid line is superimposed and displayed on the visual field image321A of the user A. Note that the avatar311B corresponding to the user B and the avatar311corresponding to the user C indicated by dotted lines are not displayed. However, in a case where the number of avatars of the other users displayed in the visual field image is two, the avatar311D and the avatar311B are displayed. In a case where the number of avatars of the other users displayed in the visual field image is three or more, the avatars311D,311B and311C are displayed. Note that here, the priority of the avatar to be displayed in the visual field image of the user A is set according to the overlapping area of the range in the visual field image of the user A and the range in the visual field image of another user. Therefore, the avatar of another user with the visual field close to the visual field image (range in the visual field image) of the user A tends to be preferentially displayed in the visual field image of the user A. Other than setting the priority of displaying the avatar in the visual field image of the user A according to the proximity of the visual field of the other user and the visual field image of the user A as described above, the priority can be set according to the positional relationship between the visual field image (range in the visual field image) of the user A and the visual field of the other user. In this way, whether or not to display (superimpose) the avatar of the other user on the visual field image of the user A can be controlled according to the positional relationship between the visual field image of the user A and the visual field of the other user. <Avatar Display Process> Next,FIG.25is a flow chart describing an avatar display process of setting the priorities of the avatars to display the avatars as described above. In step S31, the display control unit60reads the visual field information of the user and the visual field information of the other users held in the visual field information holding unit55. 
In step S32, the display control unit60refers to the visual field information of the user and the other users to determine whether or not the visual field image of the user, that is, the visual field image (range in the visual field image) of the user determined according to the visual field information of the user, includes more than a predetermined number of other users (points of view of the other users). The predetermined number corresponds to the preset number of avatars of the other users displayed in the visual field image of the user. In step S32, the process proceeds to step S33in a case where the display control unit60determines that the visual field image of the user includes more than the predetermined number of other users (points of view of the other users). In step S33, the display control unit60refers to the visual field information of the user and the other users to set the priorities for the avatars of the other users in descending order of the size of the range where the visual field (range in the visual field image) overlaps the visual field image (range in the visual field image) of the user. In step S34, the display control unit60determines the predetermined number of avatars of the other users (avatars corresponding to the other users) to be displayed in the visual field image of the user according to the set priorities and superimposes the determined avatars of the other users on the visual field image. In this way, the predetermined number of avatars are superimposed and displayed on the visual field image displayed on the display unit73. On the other hand, in a case where the display control unit60determines that the visual field image of the user does not include more than the predetermined number of other users (points of view of the other users) in step S32, all of the avatars of the other users existing in the visual field image of the user are to be displayed, and the process proceeds to step S35. In step S35, the display control unit60determines that all of the avatars of the other users existing in the visual field image of the user are the avatars to be displayed in the visual field image and superimposes the determined avatars of the other users on the visual field image. In this way, a number of avatars equal to or smaller than the predetermined number are superimposed and displayed on the visual field image displayed on the display unit73. According to the avatar display process described above, the user sees only the avatars of the other users considered to be viewing the same or a similar visual field image, and does not see the avatars of the remaining users. This can suppress the occurrence of problems in communication, such as miscommunication, with the other users corresponding to the avatars that can be viewed. This can also suppress a situation in which the visual field of the user is occupied by the avatars. In addition, the user may be able to arbitrarily set the priorities regarding the display of the avatars corresponding to the other users. In addition, the priorities may be set according to a history of communication, such that, for example, the higher the number of exchanged messages, the higher the priority. Furthermore, it may be possible to switch among the priorities set based on these various criteria. Furthermore, in a case where the content is content that changes with time, the priorities may be set by considering not only the area of the overlapping range of the visual field image (range in the visual field image), but also the time of overlap of the visual field image.
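A minimal sketch of the avatar display process (steps S31 to S35) follows. The visual field ranges are approximated here as axis-aligned rectangles (x0, y0, x1, y1) on the plane ofFIG.23, and only users whose range overlaps the user's own range are treated as candidates; both simplifications, as well as the data layout and function names, are assumptions made for this illustration rather than details of the embodiments.

```python
# Rank other users by how much their visual field range overlaps the user's
# range (S33), then display at most a preset number of avatars (S32, S34, S35).

def overlap_area(a, b):
    """Area of the intersection of two rectangles (0.0 if they do not overlap)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0.0) * max(h, 0.0)

def avatars_to_display(my_range, other_ranges, max_avatars=1):
    """Return the ids of the other users whose avatars are displayed."""
    areas = {uid: overlap_area(my_range, r) for uid, r in other_ranges.items()}
    candidates = [uid for uid, a in areas.items() if a > 0.0]
    if len(candidates) <= max_avatars:                         # S35: show everyone present
        return candidates
    candidates.sort(key=lambda uid: areas[uid], reverse=True)  # S33: set priorities
    return candidates[:max_avatars]                            # S34: top-N only

# Example loosely following FIG. 23: user D overlaps user A's range the most.
ranges = {"B": (2.0, 0.0, 5.0, 3.0), "C": (0.0, 3.5, 2.0, 5.0), "D": (1.0, 0.5, 4.0, 4.0)}
print(avatars_to_display((0.0, 0.0, 4.0, 4.0), ranges, max_avatars=1))   # ['D']
```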
Note that the avatar display process can also be applied to cases of displaying an icon of the user (icon corresponding to the user), a live-action image of the user, a character (string) (image of character) representing the user, or other user images representing the user, instead of displaying the avatar. Furthermore, the icon may be displayed at first, and the display may be changed to display only the contour of the avatar, monochrome display of the avatar, full color display of the avatar, or the like according to the closeness of the relationship between the users or the degree of the communication. In addition, mutual authentication may also be performed between the users before the icon or the avatar is displayed. Note that for the content of either one of the free viewpoint image and the spherical image, various methods can also be used to control whether or not to display (superimpose) the user images, such as avatars, of the other users on the visual field image of the user. For example, the avatar of another user with the visual field close to the user, that is, another user with a similar visual field image, can be displayed in the visual field image of the user, at a position close to the user in the visual field image of the user. In addition, the avatar of another user who might get along with the user may be preferentially displayed on the visual field image of the user according to the proximity of seek positions of the same content viewed by the user and the other user (temporal positions of reproduced content), the proximity of the points of view of the user and the other user in the case where the content is a free viewpoint image of a 3D game or the like, a history of past conversation between the user and the other user, the proximity of relationship in an SNS, or the like. Furthermore, for the avatar of each user, user information regarding the user such as, for example, the content of conversation (text chat) of the user and the nickname of the user, can be displayed along with the avatar. In the case where the avatar of another user can be displayed on the visual field image of the user, the display regarding the avatar of the other user to be displayed in the visual field image of the user can be limited according to the relationship between the user and the other user or the like. For example, in a case where the avatar (position of the avatar) of another user is in the visual field of the user, but the avatar of the user is not in the visual field of the other user, the details of the display regarding the avatar of the other user, that is, for example, the angle of the neck and the expression of the avatar of the other user, the text of the conversation, and the like, may not be displayed in the visual field image of the user, and a simple avatar, that is, for example, only a silhouette (external form) of the avatar, may be displayed. Furthermore, a dialog for asking permission of communication may be displayed in the visual field image of each of the user and the other user, and in a case where each of the user and the other user permits the communication, the display regarding the avatar of the partner may be displayed in the visual field image. This case can prevent the user from peeping the communication of the other user. 
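The limitation of the display regarding another user's avatar can be summarized as a small rule set over mutual visibility and permission. The three detail levels and the ordering of the rules below are illustrative assumptions consistent with the description above, not a prescribed behavior.

```python
# Sketch of choosing how much of another user's avatar to show in the user's
# visual field image.

def avatar_detail(i_see_them, they_see_me, both_permitted):
    """Return the level of detail for the other user's avatar in my view."""
    if not i_see_them:
        return "none"            # the other user is outside my visual field
    if both_permitted:
        return "full"            # expression, neck angle, conversation text, ...
    if not they_see_me:
        return "silhouette"      # one-sided view: show only a simple outline
    return "icon"                # mutual view but communication not yet permitted

print(avatar_detail(True, False, False))   # silhouette
print(avatar_detail(True, True, True))     # full
```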
Note that in addition, only the icon of the partner may be displayed before each of the user and the other user permits the communication, and in the case where each of the user and the other user permits the communication, the avatar of the partner or a silhouette of live action may be displayed. Furthermore, in the case of displaying the avatar of the other user in the visual field image of the user, a 3D image or the like may be adopted as the avatar of the other user. The visual field image may be displayed such that the visual field image comes around the avatar of the other user according to a motion of the neck or the body of the user, and this allows the user to feel that the existence of the other user is real. Furthermore, when another user turns the neck to move away in the visual field image of the user in the case where the avatar of the other user with the visual field close to the user, that is, the other user with a similar visual field image, is displayed in the visual field image of the user, the avatar of the other user may be moved to a position far from the user in the visual field image of the user, or the avatar of another user (user different from the other user) with the visual field close to the user may be displayed in the visual field image of the user in place of the avatar of the other user. In addition, the display regarding the avatar of another user to be displayed in the visual field image of the user can be changed according to the relationship between the user and the other user. For example, part of information (less information) of the display regarding the avatar of another user may be displayed in the visual field image of the user who has once talked to the other user and registered the other user as a friend. All of display (more information) regarding the avatar of another user may be displayed in the visual field image of the user who is talking to the other user. Furthermore, in a case where the user can perform a zoom-in or zoom-out operation to adjust the angle of view of the image (range of the scene in the image) displayed on the display apparatus22, whether the angle of view of the image displayed on the display apparatus22of the user and the angle of view of the image displayed on the display apparatus22of the other user are similar may also be taken into account to set the priority of displaying the avatar of the other user in the visual field image of the user. Furthermore, in a case where the content includes a plurality of spherical images photographed by a plurality of spherical cameras, whether the user and the other user are viewing the spherical image photographed by the same spherical camera or the user and the other user are viewing the spherical images photographed by different spherical cameras may also be taken into account to set the priority of displaying the avatar of the other user in the visual field image of the user. Furthermore, in the case where the content is a free viewpoint image, the proximity of the visual fields of the user and the other user may be determined according to, for example, the angles of the necks of the user and the other user, whether the same substance (object) is in the visual field images of the user and the other user, and the like. 
Whether the same substance is in the visual field images of the user and the other user may be determined by, for example, a combination of the positions (points of view) of the user and the other user and the angles of view of the visual field images of the user and the other user. Furthermore, regarding the display of the avatar, a mode of displaying the avatar of another specific user and a mode of preferentially displaying the avatar of another user who has talked to the user in the past may be provided in the visual field image of the user, and the mode of displaying the avatar may be switched between these modes according to the operation of the user. In this way, the avatars can be displayed to allow a large number of unspecified users viewing the same or similar scene in the same content to connect to each other, and the conversation can be smoothly advanced. Furthermore, the priorities can be set for the avatars (other users) according to the proximity of visual fields, the permission of communication, and the like, and the avatars can be displayed according to the priorities. This can prevent a large number of avatars from covering the entire visual field image, which would disturb viewing of the content. <9. A Series of Processes Executed by Software> The series of processes can be executed by hardware or can be executed by software. In the case where the series of processes are executed by software, a program included in the software is installed on a computer. Here, examples of the computer include a computer incorporated into dedicated hardware and a general-purpose personal computer or the like that can execute various functions by installing various programs. FIG.26is a block diagram illustrating a configuration example of hardware of a computer that uses a program to execute the series of processes. In a computer500, a CPU (Central Processing Unit)501, a ROM (Read Only Memory)502, and a RAM (Random Access Memory)503are connected to each other through a bus504. An input/output interface505is further connected to the bus504. An input unit506, an output unit507, a storage unit508, a communication unit509, and a drive510are connected to the input/output interface505. The input unit506includes a keyboard, a mouse, a microphone, and the like. The output unit507includes a display, a speaker, and the like. The storage unit508includes a hard disk, a non-volatile memory, and the like. The communication unit509includes a network interface and the like. The drive510drives a removable medium511, such as a magnetic disk, an optical disk, a magneto-optical disk, and a semiconductor memory. In the computer configured in this way, the CPU501loads a program stored in the storage unit508on the RAM503through the input/output interface505and the bus504and executes the program to execute the series of processes, for example. The program executed by the computer (CPU501) can be provided by, for example, recording the program in the removable medium511as a package medium or the like. In addition, the program can be provided through a wired or wireless transmission medium, such as a local area network, the Internet, and digital satellite broadcasting. In the computer500, the removable medium511can be mounted on the drive510to install the program on the storage unit508through the input/output interface505. In addition, the communication unit509can receive the program through a wired or wireless transmission medium to install the program on the storage unit508.
Furthermore, the program can be installed in advance on the ROM502or the storage unit508. Note that the program executed by the computer500may be a program in which the processes are executed in chronological order described in the present specification, or the program may be a program for executing the processes in parallel or for executing the processes at necessary timing such as when the processes are invoked. Note that the advantageous effects described in the present specification are illustrative only, and the advantageous effects are not limited. There may also be other advantageous effects. The embodiment of the present technique is not limited to the embodiment described above, and various changes can be made without departing from the scope of the present technique.<1>An information processing apparatus including:a content acquisition unit configured to acquire data of content;an image cutting unit configured to cut out a first visual field image corresponding to a visual field of a first user from a content image based on the data of the content;a visual field information acquisition unit configured to acquire visual field information representing a visual field of a second user viewing the content image; anda display control unit configured to control a display apparatus to display the first visual field image and configured to control the display apparatus to display the visual field of the second user based on the visual field information of the second user.<2>The information processing apparatus according to <1>, in whichthe display control unit is configured to control the display apparatus to display a visual field position instruction image indicating a position of the visual field of the second user based on the visual field information of the second user.<3>The information processing apparatus according to <2>, in whichthe visual field position instruction image includes a wide area image including the first visual field image and a second visual field image corresponding to the visual field of the second user.<4>The information processing apparatus according to <3>, in whichthe display control unit is configured to control the display apparatus to switch the first visual field image and the wide area image in response to a predetermined trigger.<5>The information processing apparatus according to <4>, in whichthe predetermined trigger includes at least one of a key operation, a voice command, a motion of a head, or a gesture operation by the first user.<6>The information processing apparatus according to any one of <3> to <5>, in whichthe wide area image includes at least one of an equirectangular projection, a Mercator projection, a bird's eye view, an aerial view, or a two-dimensional map.<7>The information processing apparatus according to any one of <2> to <6>, in whichthe visual field position instruction image includes a symbol image superimposed on the first visual field image, the symbol image indicating the position of the visual field of the second user.<8>The information processing apparatus according to any one of <1> to <7>, further including:a visual field image determination unit configured to determine the first visual field image to be cut out from the content image based on at least one of a movement of a line of sight of the first user or a movement of a head of the first user.<9>The information processing apparatus according to <8>, in whichthe visual field image determination unit is configured to move the first visual field image based on the 
line of sight of the first user in response to approach of the line of sight of the first user to an edge of the first visual field image.<10>The information processing apparatus according to <8> or <9>, in whichthe visual field image determination unit is configured to make an angle of rotation of the first visual field image larger than an angle of rotation of the head of the first user based on an angle of rotation of the line of sight of the first user and the angle of rotation of the head of the first user.<11>The information processing apparatus according to any one of <8> to <10>, in whichthe visual field image determination unit is configured to determine, based on the visual field information of the second user, an initial position of the visual field image of the first user corresponding to timing that the first user has substantially started to view the content.<12>The information processing apparatus according to any one of <8> to <11>, in whichthe display apparatus includes a head mounted display, and the visual field image determination unit is configured to determine the first visual field image based on at least one of a movement of the line of sight of the first user or a movement of the head of the first user associated with the head mounted display.<13>The information processing apparatus according to any one of <1> to <12>, in whichthe second user includes a plurality of users,the display control unit is configured to control the display apparatus to superimpose, on the first visual field image, at least one of a plurality of user images corresponding to the plurality of users, andthe display control unit is configured to control whether or not to superimpose, on the first visual field image, each of the plurality of user images according to a positional relationship between the first visual field image and a visual field of each of the plurality of users.<14>The information processing apparatus according to <13>, in whichthe display control unit is configured to control the display apparatus to preferentially superimpose, on the first visual field image, a user image corresponding to a user with the visual field relatively close to the first visual field image among the plurality of users.<15>The information processing apparatus according to <13> or <14>, in whichthe display control unit is configured to control the display apparatus to preferentially superimpose, on the first visual field image, part of the plurality of user images according to a history of communication between the first user and the plurality of users.<16>The information processing apparatus according to any one of <1> to <15>, in whichthe content image includes a spherical image or a free viewpoint image.<17>The information processing apparatus according to any one of <1> to <16>, further including:the display apparatus.<18>An information processing method including:acquiring data of content;cutting out a first visual field image corresponding to a visual field of a first user from a content image based on the data of the content;acquiring visual field information representing a visual field of a second user viewing the content image; andcontrolling a display apparatus to display the first visual field image and controlling the display apparatus to display the visual field of the second user based on the visual field information of the second user.<19>A program for causing a computer to function as:a content acquisition unit configured to acquire data of content;an image cutting unit configured to 
cut out a first visual field image corresponding to a visual field of a first user from a content image based on the data of the content;a visual field information acquisition unit configured to acquire visual field information representing a visual field of a second user viewing the content image; anda display control unit configured to control a display apparatus to display the first visual field image and configured to control the display apparatus to display the visual field of the second user based on the visual field information of the second user. REFERENCE SIGNS LIST 10Content viewing system,20User apparatus,21Information processing apparatus,22Display apparatus,31Internet,40Server apparatus,41Content distribution unit,42Visual field information management unit,43Communication management unit,51Communication unit,52Content holding unit,53Visual field image determination unit,54Image cutting unit,55Visual field information holding unit,58Input unit,59Trigger detection unit,60Display control unit,71Line-of-sight detection unit,72Head motion detection unit,73Display unit,74Voice input/output unit,100Entire image,101Visual field range,111Visual field image,112Wide area image,113Visual field range display,121Visual field direction instruction mark,131Visual field direction instruction line,132Icon,141Visual field range display,151Tracking image,162Icon,163Arrow,200Content image,301Content space,311Avatar,500Computer,501CPU
11861774
DETAILED DESCRIPTION Throughout the following description, specific details are set forth in order to provide a more thorough understanding of the invention. However, the invention may be practiced without these particulars. In other instances, well known elements have not been shown or described in detail to avoid unnecessarily obscuring the invention. Accordingly, the specification and drawings are to be regarded in an illustrative, rather than a restrictive sense. FIG.1schematically illustrates an animation model10which comprises a cloth object (in the illustrated example, a shirt)12at least partially surrounding or enclosing or covering an underlying collision object14(in the illustrated example, a human body or musculoskeletal system). For the purposes herein “cloth object” means a model of an object which is tight-fitting around the collision object14(e.g. sufficiently tight-fitting that secondary motion that is independent of object14(e.g. “flapping”) is not relevant or important for simulation purposes). Cloth object12may comprise a model of skin tissue located, for example, around a collision object14made up of muscle and/or bone tissue (a musculoskeletal system)) or textiles (e.g. articles of clothing), located around a collision object14made up of a musculoskeletal system with or without its own skin tissue. The textiles may, for example, comprise pants (e.g. yoga pants, leggings, jeans, dress pants, etc.), shirts (e.g. t-shirts, tank tops, workout shirts, dress shirts, etc.), dresses, skirts, vests, undergarments (e.g. briefs, sport bras, etc.), outerwear (e.g. sweaters, hoodies, top coats, etc.), etc. Collision object14comprises models of objects cloth object12is intended to cover (e.g. muscle tissue, bone tissue, skin tissue, skin tissue that is at least partially clothed, etc.). Since cloth object12is tight-fitting around collision object14, cloth object12is typically in an equilibrium state (e.g. various forces acting on cloth object12are balanced). When cloth object12is in an equilibrium state, cloth object12typically does not move (e.g. flap, oscillate, etc.) relative to collision object14. It therefore becomes unnecessary, with such tight-fitting cloth objects12, to model dynamics of cloth object12such as wrinkles appearing in cloth object12and then disappearing under tension, wind moving a portion of cloth object12relative to collision object14and/or the like. One aspect of the technology described herein provides a method for finding a dynamic equilibrium model of a desired cloth object with respect to a desired underlying collision object. In some embodiments the method performs an optimization of an energy function defining the cloth object to be modelled. Advantageously, the method generates a quasistatic simulation of the cloth object (i.e. the generated model of the cloth object is stateless such that any frame of an animation sequence can be modelled at any time (without having to model the state of the cloth at preceding time steps). Relative to dynamic simulations of cloth objects, a quasistatic simulation of cloth objects may improve efficiency, reduce time it takes to generate a simulation, reduce required computation power and/or the like. FIG.2is a block diagram of an example method20for generating a dynamic equilibrium model22of cloth object12according to a particular embodiment. In block24method20receives objects that are to be modelled. In currently preferred embodiments block24receives cloth object12and underlying collision object14. 
However, this is not necessary in all cases. In some embodiments, block24may receive only one of cloth object12or collision object14. In some such embodiments block24may retrieve and/or generate the other one of the objects from a database, library, etc. based on one or more features of the received object. For example, block24may receive an instance of collision object14(e.g. a human torso). Block24may then retrieve and/or generate a cloth object12(e.g. a skin object or a shirt covering the torso) based on one or more identified features of the collision object14. Block24may also initialize an initial state of cloth object12. For example, cloth object12may be initialized based on one or more features identified in cloth object12and/or collision object14. In some embodiments block24uses an artificial intelligence algorithm to initialize cloth object12based on the identified features of cloth object12and/or collision object14. In some embodiments block24additionally receives an initial guess or representation23of the optimized model for cloth object12from a user. In some embodiments an initial representation23of cloth object12may be generated by performing a linear blending (linear blend-skinning) technique or similar techniques, such as dual quaternion skinning, spherical blend-skinning and/or the like. Once the objects to be modelled are received by method20, block24involves conditioning the objects. Cloth object12and collision object14are typically represented by polygonal meshes (e.g. surface meshes). In currently preferred embodiments cloth object12and collision object14are represented by triangle meshes. If the representations of cloth object12and collision object14received by block24are not triangle meshes (e.g. one or both of cloth object12and collision object14are provided in the form of other polygonal meshes or volume meshes), then block24may comprise converting the non-triangle mesh representations of cloth object12and collision object14into triangle mesh representations (e.g. triangular surface mesh representations). In block25, cloth object12is optimized. Cloth object12may be optimized by minimizing an elastic potential energy function representative of cloth object12. The optimization preferably minimizes stretching and bending of cloth object12. In some embodiments, the block25optimization adjusts vertex positions of the mesh of cloth object12until stretching and/or bending of cloth object12are minimized. An elastic potential energy function representative of cloth object12may, for example, be represented as follows: E=kB(Bending Energy Function)+kS(Stretching Energy Function)   (1A) wherein: Bending Energy Function is an expression that assigns energy to bending of cloth object12; Stretching Energy Function is an expression that assigns energy to stretching of cloth object12; kBis a configurable (e.g. user-configurable) scalar weight parameter corresponding to the bending energy and kSis a configurable (e.g. user-configurable) scalar weight parameter corresponding to the stretching energy. In some embodiments, the potential energy function of equation (1A) may take the form: E=kB(Σi E(νi))+kS(Σj E(F̄j))   (1B) wherein: E(νi) corresponds to the bending energy of cloth object12at vertex νi; i is an index of the vertices of cloth object12; E(F̄j) corresponds to the stretching energy of cloth object12at a triangle j; F̄jis a deformation gradient tensor for the triangle j; and j is an index of the triangles of cloth object12.
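As a minimal sketch of assembling equation (1B), the total elastic energy can be formed as a weighted sum of per-vertex bending terms and per-triangle stretching terms. Here the individual terms are simply passed in as arrays (how they might be evaluated is sketched after equations (2) to (6) below); the function name, the optional gravity argument, and the example values are assumptions for illustration.

```python
import numpy as np

# E = kB * sum_i E(v_i) + kS * sum_j E(F_j), optionally plus a gravity term
# as in equation (5). k_bend and k_stretch correspond to the weights kB and kS.

def total_energy(bending_terms, stretching_terms, k_bend=1.0, k_stretch=1.0,
                 gravity_terms=None, k_gravity=0.0):
    """Weighted sum of per-vertex and per-triangle energy contributions."""
    energy = k_bend * np.sum(bending_terms) + k_stretch * np.sum(stretching_terms)
    if gravity_terms is not None:          # optional term of equation (5)
        energy += k_gravity * np.sum(gravity_terms)
    return energy

# Example with made-up per-element energies for a tiny mesh.
print(total_energy(np.array([0.1, 0.2, 0.05]), np.array([0.3, 0.4]),
                   k_bend=2.0, k_stretch=0.5))   # 1.05
```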
In some embodiments, the bending energy E(νi) may, for example, be represented as follows: E(νi)=(|νi−νi′|−L0)²Ai   (2) wherein: νi corresponds to the position of the i-th vertex of cloth object12; νi′ corresponds to the average position of the i-th vertex's neighbors (e.g. vertices connected to the i-th vertex by edges); L0corresponds to a desired distance between νi and νi′, which may be a configurable (e.g. user-configurable) parameter of the block24optimization or which may be determined based on an initial (e.g. input) configuration of the mesh corresponding to cloth object12; and Aicorresponds to the area of the triangles (or other polygonal shapes of the mesh) associated with (e.g. that include) vertex νi. The stretching energy E(F̄j) may, for example, be represented as follows: E(F̄j)=∥F̄j−R(F̄j)∥²Ai   (3) wherein: F̄jcorresponds to the deformation gradient that transforms the j-th triangle (or other polygonal shape) from two dimensions (i.e. 2D) to three dimensions (i.e. 3D); and R(F̄j) corresponds to a polar decomposition of F̄j. The deformation gradient F̄jmay, for example, be represented as follows: F̄j=Rj*Sj   (4) wherein: Rjrepresents a rotation matrix associated with the deformation gradient of the j-th triangle; and Sjis a right stretch tensor that corresponds to an arbitrary amount of stretching for the j-th triangle. The arbitrary amount of stretching may be set on a per-triangle basis (i.e. for the j-th triangle) as a configurable (e.g. user-configurable) parameter of the block24optimization. In some embodiments, the arbitrary amount of stretching is determined as part of block25. In some embodiments, the elastic potential energy function representation of cloth object12includes a term accounting for gravitational forces. A term accounting for gravity may, for example, be represented as follows: …+kg(Σi yi)+…   (5) wherein: yicorresponds to gravitational forces experienced by the i-th vertex; i is an index of the vertices; and kgis a configurable scalar parameter corresponding to the gravitational energy. Such a gravitational term may be added (e.g. as an additional term) to equation (1A) or (1B). Additionally or alternatively, the potential energy function representation of cloth object12may include a term accounting for air pressure. A term accounting for air pressure may, for example, be represented as follows: …+ka(Σi (ni·νi)Ai)+…   (6) wherein: niis a normal vector extending from the i-th vertex (νi); νiis the position of the i-th vertex (νi); kais a configurable scalar parameter corresponding to the air pressure; and Aihas the meaning described above. Returning toFIG.2, in block26collisions of cloth object12with outer surfaces of collision object14are modelled. In some embodiments, the collisions are modelled with inequality constraints. In some embodiments, such collisions could additionally or alternatively be modelled with: penalty term(s) in the cost function (e.g. in equations (1A), (1B) above), which heavily penalize interpenetrating collisions between cloth object12and collision object14; and/or equality constraints that are activated in some iterations and deactivated in other iterations. Block26is shown inFIG.2as being separate from the block25optimization for clarity. It will be appreciated that the collision models of block26may be incorporated into the block25optimization (e.g. as constraints to the optimization or as terms in the cost function of equations (1A) and (1B), as discussed above).
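Before detailing the collision constraints, the per-element terms of equations (2) and (3) above might be evaluated along the following lines. The sketch uses a reduced SVD as one common way of obtaining the rotation factor of a polar decomposition, treats the deformation gradient as a 3x2 matrix (2D rest triangle mapped into 3D), and weights the stretching term by the triangle's own area where the text writes Ai; these choices, the function names, and the example values are assumptions for illustration.

```python
import numpy as np

def bending_energy(v, neighbours, rest_length, area):
    """Equation (2): (|v - mean(neighbours)| - L0)^2 * A."""
    v_avg = np.mean(neighbours, axis=0)
    return (np.linalg.norm(v - v_avg) - rest_length) ** 2 * area

def stretching_energy(F, area):
    """Equation (3): ||F - R(F)||^2 * A, with R(F) the rotation-like factor of F."""
    U, _, Vt = np.linalg.svd(F, full_matrices=False)   # F is 3x2
    R = U @ Vt                                         # closest matrix with orthonormal columns
    return np.linalg.norm(F - R) ** 2 * area

# Example: a vertex lifted slightly above the plane of its neighbours, and a
# triangle stretched along one axis and compressed along the other.
v = np.array([0.0, 0.0, 0.2])
neighbours = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0], [0.0, -1.0, 0.0]])
print(bending_energy(v, neighbours, rest_length=0.0, area=0.5))   # 0.02
F = np.array([[1.2, 0.0], [0.0, 0.9], [0.0, 0.0]])
print(stretching_energy(F, area=0.5))                             # 0.025
```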
The inequality constraints on each vertex (νi) of cloth object12may be defined by keeping track of a corresponding point (pi) on a collision surface of collision object14. The corresponding point (pi) on collision object14may be defined and/or characterized by: an identification of the triangle (or other polygonal shape) in which piis located in the mesh of collision object14; and by the barycentric coordinates of pirelative to the vertices of collision object14that define the identified triangle. The corresponding point (pi) on collision object14may be initialized by taking the closest point to vertex (νi) in a reference pose or initial state of cloth object12(e.g. initial representation23). Each time a new optimization (method20) is performed for cloth object12, points (pi) may be started at their initial reference positions. Block26may track points (pi) every time vertex positions (νi) of cloth object12are updated (e.g. in each iteration of method20). In some embodiments, block26tracks points (pi) by moving points (pi) along the collision surface of collision object(s)14until it is not possible to get any closer to the corresponding cloth object vertex (νi). In some embodiments, the inequality constraint on the position of vertex (νi) comprises defining a tangent plane at point (pi) on collision object14and ensuring that vertex (νi) does not penetrate the tangent plane at point (pi). A normal vector of the tangent plane may be defined by smoothly interpolating a normal vector (ni) from the normal vectors of the collision object vertices (ui,1, ui,2, ui,3) corresponding to the collision object triangle that includes the point (pi). Such normal vector interpolation may be based on the barycentric coordinates of the point (pi) relative to the collision object vertices (ui,1, ui,2, ui,3). For example, where the vertices (ui,1, ui,2, ui,3) of the collision object triangle that includes the point (pi) have associated normal vectors (ni,1, ni,2, ni,3) and the point (pi) has barycentric coordinates (λi,1, λi,2, λi,3) within the collision object triangle, then the normal vector (ni) of the tangent plane at point (pi) may be interpolated according to ni=λi,1ni,1+λi,2ni,2+λi,3ni,3. It is typically advantageous to define the normal vector (ni) of the tangent plane at point (pi) by smoothly interpolating the normal vectors (ni,1, ni,2, ni,3) of the collision object vertices (ui,1, ui,2, ui,3) that include the point (pi), since a "geometric normal" of the collision surface (i.e. a normal to the plane that includes the vertices (ui,1, ui,2, ui,3)) is typically discontinuous, which can lead to undesirable effects, such as jittering or jumping between discrete configurations, being introduced into the model of cloth object12. It will be appreciated that the geometric normal may change discontinuously from a point on one side of a triangle edge to a point on another side of the edge (i.e. on a different triangle).
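For instance, the interpolation of the tangent-plane normal and the non-penetration test can be sketched as follows. The "push the vertex back onto the plane" step shown here is only one illustrative way of enforcing the constraint, not necessarily what the solver does, and the function names and example values are assumptions.

```python
import numpy as np

def interpolated_normal(vertex_normals, barycentric):
    """n_i = lambda1*n1 + lambda2*n2 + lambda3*n3, renormalised to unit length."""
    n = sum(l * np.asarray(nv) for l, nv in zip(barycentric, vertex_normals))
    return n / np.linalg.norm(n)

def enforce_non_penetration(v, p, n):
    """Return v unchanged if (v - p).n >= 0, else its projection onto the tangent plane."""
    d = np.dot(v - p, n)
    return v if d >= 0.0 else v - d * n

# Example: a cloth vertex slightly below the tangent plane at p is pushed back up.
normals = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.1, 1.0, 0.0])]
n = interpolated_normal(normals, (0.2, 0.3, 0.5))
p = np.array([0.0, 0.0, 0.0])
print(enforce_non_penetration(np.array([0.0, -0.1, 0.0]), p, n))
```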
A normal vector (ni) interpolated from the individual normal vectors (ni,1, ni,2, ni,3) at the plurality of vertices (ui,1, ui,2, ui,3) corresponding to a triangle does not suffer from this same discontinuity. In some embodiments, one form of suitable inequality constraint is defined as follows: (νi−pi)·ni≥0   (7) wherein: νiis the position of the i-th vertex (νi) on cloth object12; piis the position of the point (pi) on a collision surface (collision object14) that is closest to the vertex νi; and niis the normal vector interpolated from the individual normal vectors (ni,1, ni,2, ni,3) at the plurality of vertices (ui,1, ui,2, ui,3) that define the collision object triangle that includes the point (pi). It will be appreciated that the inequality of equation (7) requires that the vector (νi−pi) have a non-negative component in the direction of the vector ni, which is tantamount to preventing interpenetrating collisions between the cloth object vertex (νi) and the tangent plane at the collision object point (pi) (as defined by the interpolated normal vector ni). In some embodiments, method20(e.g. block25and/or block26) uses an iterative method to handle the optimization and the associated inequality constraints. In some embodiments, a Gauss-Seidel solver is used to handle the optimization of an energy function (e.g. equation (1A) or (1B)) subject to the inequality constraints (e.g. equation (7)). In some embodiments, other types of solvers, such as an active set QP solver, an interior point method solver and/or the like, may additionally or alternatively be used to handle the inequality constraints. In some embodiments, the simulation and collisions are modelled with a method as described in "ADMM ⊇ Projective Dynamics: Fast Simulation of Hyperelastic Models with Dynamic Constraints" by M. Overby, G. Brown, J. Li and R. Narain. The described method may be adapted to generate a quasistatic model by setting all masses to zero. Although blocks25and26are illustrated sequentially for ease of description, blocks25and26need not occur sequentially. In some embodiments blocks25and26are performed concurrently (e.g. by the same solver). In some embodiments block26is performed prior to block25being performed. In block27method20determines whether the model of cloth object12is sufficient or whether additional optimization iterations should be performed. Any suitable loop-exit criterion may be used in the block27evaluation. In some embodiments, the block27loop-exit criterion is a configurable (e.g. user-configurable) number of iterations, although other loop-exit criteria could be used. In some embodiments a user determines whether additional optimization iterations should be performed. If additional optimization iterations are to be performed, block27returns method20to block25. Otherwise block27outputs optimized model22(i.e. a quasistatic simulation model) of cloth object12. Example Application In some cases the methods described herein (e.g. method20) may form the basis of a computer program, a plug-in to be added to an existing computer program and/or the like.
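Pulling blocks25to27together, the overall iteration can be pictured as a skeleton that alternately reduces the elastic energy and enforces the collision constraints for a configurable number of iterations. The two callables below stand in for the actual per-element minimization and constraint handling (a real Gauss-Seidel-style scheme interleaves them differently), so this is only a structural sketch with assumed names and toy callables.

```python
def quasistatic_solve(vertices, reduce_energy, project_collisions, iterations=10):
    """Skeleton of blocks 25-27: relax the energy, then satisfy the constraints."""
    for _ in range(iterations):                  # block 27: configurable iteration count
        vertices = reduce_energy(vertices)       # block 25: lower E of eq. (1A)/(1B)
        vertices = project_collisions(vertices)  # block 26: satisfy eq. (7)
    return vertices                              # the quasistatic cloth model 22

# Toy example: pull every coordinate toward zero, then clamp it to stay
# non-negative (a stand-in for the tangent-plane constraints).
result = quasistatic_solve(
    [1.0, -0.5, 2.0],
    reduce_energy=lambda vs: [0.5 * v for v in vs],
    project_collisions=lambda vs: [max(v, 0.0) for v in vs],
)
print(result)
```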
For example, the methods described herein may be incorporated into a plug-in deformer for the Maya™ 3D computer animation modeling software application which is commercially available from Autodesk™, Inc. of California, United States. The methods described herein may be incorporated into a plug-in deformer for other computer animation modelling software, such as, by way of non-limiting example, 3D Studio Max™, Blender™, Houdini™ and/or the like. The plug-in (or computer program more generally) may be configured to prompt a user for one or more of the following:a cloth object (i.e. the object that is to be simulated (e.g. cloth object12));a rest state for the cloth object (e.g. initial representation23); anda collision object (i.e. the object that the cloth object at least partially covers (e.g. collision object14)). In some embodiments, the collision object14and/or the cloth object12may be specified by some other piece of software (e.g. animation software). The plug-in (or computer program) outputs a simulation (cloth model22) of the cloth object12given the configuration of the collision object14. As described elsewhere herein, the output cloth object (cloth model22) is a quasistatic simulation of the cloth object12(e.g. given the configuration of collision object14) after relaxation. The output cloth object (cloth model22) typically has also undergone several effects such as wrinkling, sliding, etc., as the cloth object12is interacted with the collision object14during the simulation. A typical animation comprises a number of frames per second. Each frame of such an animation may have a different configuration for collision object14. For example, various frames and corresponding configurations of collision object14may be generated by a suitable simulation of a musculoskeletal animation model (e.g. an animation rig) or various frames and corresponding configurations of collision object14may be generated by artist(s). For each such frame of collision object14, method20may be used to generate a corresponding cloth model22(i.e. a corresponding quasistatic configuration of cloth object12). In some embodiments, the initial representation23of cloth object12for each frame (each instance of method20) may be the cloth model22(i.e. the configuration of cloth object12) from the previous frame. In some embodiments, the plug-in (or computer program) allows for additional user constraints to be added to the simulation. Additionally, or alternatively, the plug-in (or computer program) may allow a user to vary one or more properties of the cloth object12and/or the collision object14. For example, the plug-in (or computer program) may permit a user to vary:an amount of attraction of cloth object12to an original or a previous pose (configuration) of cloth object12;one or more other properties of cloth object12, such as tensile strength, bending resistance, mass density, thickness of the cloth, abrasion resistance, smoothness, pilling propensity, fastness of dyestuffs, etc.etc. Additional user constraints and/or physical properties of the simulation may be varied by a user in a number of ways. In some embodiments a user may paint maps to vary constraints or physical properties of the simulation. In some embodiments a user may numerically vary constraints or physical properties through a graphical user interface. In some embodiments, the methods described herein may be used to generate training data for other artificial intelligence and/or machine learning techniques. For example, U.S. patent application Ser. 
In some embodiments, the methods described herein may be used to generate training data for other artificial intelligence and/or machine learning techniques. For example, U.S. patent application Ser. No. 17/676,087 (which is hereby incorporated herein by reference) describes a machine learning technique for performing elaborate deformations of a CGI character skin or clothing at real time frame rates. Some of the techniques described in U.S. patent application Ser. No. 17/676,087 do not model dynamics. Methods according to particular embodiments of the current invention may be able to generate a quality cloth and/or skin simulation that also is free of dynamics. Such cloth and/or skin simulations developed by the techniques described herein can be used as training data for the machine learning techniques described in U.S. patent application Ser. No. 17/676,087, which in turn can be used to obtain real time (at an animation frame rate) skin/cloth deformations. Interpretation of Terms Unless the context clearly requires otherwise, throughout the description and the claims: “comprise”, “comprising”, and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to”; “connected”, “coupled”, or any variant thereof, means any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof; “herein”, “above”, “below”, and words of similar import, when used to describe this specification, shall refer to this specification as a whole, and not to any particular portions of this specification; “or”, in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list; the singular forms “a”, “an”, and “the” also include the meaning of any appropriate plural forms. Words that indicate directions such as “vertical”, “transverse”, “horizontal”, “upward”, “downward”, “forward”, “backward”, “inward”, “outward”, “left”, “right”, “front”, “back”, “top”, “bottom”, “below”, “above”, “under”, and the like, used in this description and any accompanying claims (where present), depend on the specific orientation of the apparatus described and illustrated. The subject matter described herein may assume various alternative orientations. Accordingly, these directional terms are not strictly defined and should not be interpreted narrowly. Embodiments of the invention may be implemented using specifically designed hardware, configurable hardware, programmable data processors configured by the provision of software (which may optionally comprise “firmware”) capable of executing on the data processors, special purpose computers or data processors that are specifically programmed, configured, or constructed to perform one or more steps in a method as explained in detail herein and/or combinations of two or more of these. Examples of specifically designed hardware are: logic circuits, application-specific integrated circuits (“ASICs”), large scale integrated circuits (“LSIs”), very large scale integrated circuits (“VLSIs”), and the like. Examples of configurable hardware are: one or more programmable logic devices such as programmable array logic (“PALs”), programmable logic arrays (“PLAs”), and field programmable gate arrays (“FPGAs”).
Examples of programmable data processors are: microprocessors, digital signal processors (“DSPs”), embedded processors, graphics processors, math co-processors, general purpose computers, server computers, cloud computers, mainframe computers, computer workstations, and the like. For example, one or more data processors in a control circuit for a device may implement methods as described herein by executing software instructions in a program memory accessible to the processors. Processing may be centralized or distributed. Where processing is distributed, information including software and/or data may be kept centrally or distributed. Such information may be exchanged between different functional units by way of a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), or the Internet, wired or wireless data links, electromagnetic signals, or other data communication channel. While processes or blocks are presented in a given order, alternative examples may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times. In addition, while elements are at times shown as being performed sequentially, they may instead be performed simultaneously or in different sequences. It is therefore intended that the following claims are interpreted to include all such variations as are within their intended scope. The invention may also be provided in the form of a program product. The program product may comprise any non-transitory medium which carries a set of computer-readable instructions which, when executed by a data processor, cause the data processor to execute a method of the invention. Program products according to the invention may be in any of a wide variety of forms. The program product may comprise, for example, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, EPROMs, hardwired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, or the like. The computer-readable signals on the program product may optionally be compressed or encrypted. In some embodiments, the invention may be implemented in software. For greater clarity, “software” includes any instructions executed on a processor, and may include (but is not limited to) firmware, resident software, microcode, and the like. Both processing hardware and software may be centralized or distributed (or a combination thereof), in whole or in part, as known to those skilled in the art. For example, software and other modules may be accessible via local memory, via a network, via a browser or other application in a distributed computing context, or via other means suitable for the purposes described above. Where a component (e.g. a software module, processor, assembly, device, circuit, etc.)
is referred to above, unless otherwise indicated, reference to that component (including a reference to a “means”) should be interpreted as including as equivalents of that component any component which performs the function of the described component (i.e., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary embodiments of the invention. Specific examples of systems, methods and apparatus have been described herein for purposes of illustration. These are only examples. The technology provided herein can be applied to systems other than the example systems described above. Many alterations, modifications, additions, omissions, and permutations are possible within the practice of this invention. This invention includes variations on described embodiments that would be apparent to the skilled addressee, including variations obtained by: replacing features, elements and/or acts with equivalent features, elements and/or acts; mixing and matching of features, elements and/or acts from different embodiments; combining features, elements and/or acts from embodiments as described herein with features, elements and/or acts of other technology; and/or omitting or combining features, elements and/or acts from described embodiments. Various features are described herein as being present in “some embodiments”. Such features are not mandatory and may not be present in all embodiments. Embodiments of the invention may include zero, any one or any combination of two or more of such features. This is limited only to the extent that certain ones of such features are incompatible with other ones of such features in the sense that it would be impossible for a person of ordinary skill in the art to construct a practical embodiment that combines such incompatible features. Consequently, the description that “some embodiments” possess feature A and “some embodiments” possess feature B should be interpreted as an express indication that the inventors also contemplate embodiments which combine features A and B (unless the description states otherwise or features A and B are fundamentally incompatible). It is therefore intended that the following appended claims and claims hereafter introduced are interpreted to include all such modifications, permutations, additions, omissions, and sub-combinations as may reasonably be inferred. The scope of the claims should not be limited by the preferred embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
30,079
11861775
DESCRIPTION OF EMBODIMENTS Terms used in embodiments of this application are merely used to explain specific embodiments of this application, but are not intended to limit this application. FIG.1is a schematic diagram of a rendered picture according to an embodiment of this application. For example, the picture may be understood as a 3D game picture or a 3D animation picture with a fixed field of view. The picture includes the following several parts: a moving object11, a static object12, a light source13, a background14, and a related effect (for example, a shadow15generated when the light source illuminates an object) generated by the light source. In a related technology, during rendering of the picture, all parts included in the picture usually need to be re-rendered. Consequently, there is a relatively heavy rendering load for the picture, and there are often problems such as freezing, low picture smoothness, high power consumption, and heat generation. In practice, however, two adjacent frames of pictures usually include the same parts. For example, the light source usually does not change in a short time, and a position and a status of the light source in virtual three-dimensional space presented in the picture usually do not change in the two adjacent frames of pictures. For another example, the static object in the picture does not move or change, and therefore a status of the static object and a position of the static object in virtual three-dimensional space do not change in the two adjacent frames of pictures. Therefore, if the parts that do not change are repeatedly rendered in the two adjacent frames of pictures, rendering resources are wasted, and a rendering load is increased. To address this case, embodiments of this application provide a picture rendering solution. In this solution, a rendering effect of a part that is in a previous frame and that does not change with respect to a current frame is reused, and on the basis of the rendering effect of the part, incremental rendering is performed on a part that changes, to reduce a rendering load. For example,FIG.2is a flowchart of a picture rendering method according to an embodiment of this application. As shown inFIG.2, the method includes the following steps. Step201: Obtain first picture data of a current frame. Step202: Compare the first picture data with currently recorded second picture data of a previous frame, to determine a first part that is in the first picture data and that does not change with respect to the second picture data and a second part that is in the first picture data and that changes with respect to the second picture data. Step203: Reuse a rendering result corresponding to the first part in the previous frame, and render the second part in the current frame, to obtain and display a rendering result of the current frame. For example, a picture in this embodiment may be understood as a 3D game picture or a 3D animation picture with a fixed field of view, and content of the picture may be divided, for example, into four parts: a picture background, a static object, a moving object, and a light source. For example, the picture background part may include information such as a visual range of the picture background, and the parts such as the static object, the moving object, and the light source may include position information and a status of the object in virtual space.
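As a rough illustration of steps 201 to 203, the sketch below compares the per-part picture data of the current frame with the recorded data of the previous frame, reuses cached rendering results for the first (unchanged) part, and renders only the second (changed) part. The dictionary layout and the helper names (render_part, previous_render_cache) are hypothetical and are not taken from the embodiments.

```python
# Hypothetical sketch of steps 201-203: obtain, compare, then reuse or render.
def render_frame(first_picture_data, second_picture_data, previous_render_cache, render_part):
    """Reuse cached results for unchanged parts and incrementally render changed parts."""
    frame_result = {}
    for part_name, part_data in first_picture_data.items():   # "background", "static", "moving", "light"
        previous = second_picture_data.get(part_name)
        if previous is not None and part_data == previous:
            # First part: unchanged with respect to the previous frame -> reuse.
            frame_result[part_name] = previous_render_cache[part_name]
        else:
            # Second part: changed (or newly appearing) -> render it in the current frame.
            frame_result[part_name] = render_part(part_name, part_data)
    return frame_result
```

In the embodiments, the whole-part equality test here would be refined into the attribute-by-attribute comparisons of the visual range, static objects, and light source described below.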
For the static object and the moving object, the status in this embodiment may be used to describe a static state or a moving state of the object. For the light source, the status in this embodiment may be used to represent information such as an illumination angle and illumination intensity of the light source. In this embodiment, data of any frame of picture includes data of all of the four parts of the picture. In this embodiment, the current frame of picture may be understood as a picture to be rendered, and the previous frame of picture of the current frame may be understood as a picture for which rendering is completed. In this embodiment, after each frame of picture is rendered, a rendering result of each frame of picture is stored in a preset buffer for reuse in a next frame. For example, in an Nth frame, all of four parts of a picture are re-rendered by using a related technology, and after rendering of the Nth frame is completed, a rendering result of the Nth frame is stored in the preset buffer (for example, a texture buffer, Texture Buffer). During rendering of an (N+1)th frame, the rendering result of the Nth frame may be obtained from the preset buffer, to reuse the rendering result of the Nth frame in the (N+1)th frame. In addition, after rendering of the (N+1)th frame is completed, a rendering result of the (N+1)th frame is added to the preset buffer, so that the rendering result of the (N+1)th frame can be reused in a next frame of the (N+1)th frame. Alternatively, in some implementations, to save storage space of the preset buffer, the rendering result of the Nth frame that is stored in the preset buffer may be replaced with the rendering result of the (N+1)th frame. In this way, the preset buffer always stores a rendering result of a latest to-be-rendered picture. The current frame and the previous frame usually share many of the same parts, and therefore a rendering result of the previous frame can be reused to the greatest extent in the current frame. During comparison of the first picture data of the current frame with the second picture data of the previous frame, corresponding parts in the first picture data and the second picture data may be compared. For example, a first visual range described in the first picture data is compared with a second visual range described in the second picture data, to determine whether a virtual space position and a size of the first visual range change with respect to the second visual range; data that is used to describe the same static object and that is in the first picture data and the second picture data is compared, to determine whether a virtual space position and a status of the static object change, for example, whether the static object changes from the static state to the moving state and whether a structure/shape changes; and data that is used to describe the light source and that is in the first picture data and the second picture data is compared, to determine whether a virtual space position (for example, a height and an orientation) and a status (for example, an illumination angle and illumination intensity) of the light source change.
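The bookkeeping for the preset buffer described above may look like the following minimal sketch. The class is hypothetical; an actual implementation would store rendering results in a GPU texture buffer rather than a Python dictionary.

```python
# Hypothetical sketch of the preset buffer: store each frame's rendering result so
# the next frame can reuse it, optionally replacing the older result to save space.
class PresetBuffer:
    def __init__(self, keep_history: bool = False):
        self.keep_history = keep_history
        self._results = {}                     # frame index -> rendering result

    def store(self, frame_index: int, rendering_result) -> None:
        if not self.keep_history:
            # Replace the previously stored result so the buffer always holds only
            # the rendering result of the latest rendered frame.
            self._results.clear()
        self._results[frame_index] = rendering_result

    def fetch(self, frame_index: int):
        # Return the stored result of a previous frame (e.g. frame N when rendering
        # frame N+1), or None if nothing usable is cached.
        return self._results.get(frame_index)
```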
For example, it is assumed thatFIG.3aandFIG.3bare a schematic diagram of two frames of game pictures according to an embodiment of this application. A game picture shown inFIG.3bis a next frame of picture of a game picture shown inFIG.3a. During rendering of the picture inFIG.3b, corresponding objects inFIG.3bandFIG.3aare compared. ForFIG.3bandFIG.3a, positions and statuses of a static object31and a light source32in virtual space do not change, and therefore rendering results of the static object31and the light source32inFIG.3amay be reused. Due to movement of a moving object34, a visual range inFIG.3bchanges with respect to that inFIG.3a. A region in which a static object36is located is beyond the range of the picture. However, there is still an overlapping region35betweenFIG.3aandFIG.3b. Therefore, a rendering result of the overlapping region35may be extracted from a rendering result inFIG.3ato render an overlapping region inFIG.3b. In comparison withFIG.3a, an object33is a new object, and does not have a corresponding rendering result inFIG.3a. Therefore, the object33and the moving object34need to be re-rendered together. When a specific rendering operation is performed, a rendering result, inFIG.3a, corresponding to a part that does not change may be copied to a preset memory buffer (for example, a frame buffer, Framebuffer) for reuse, and on the basis of the reused rendering result, incremental rendering is performed on a part that changes, to obtain a rendering result of the picture inFIG.3b. A rendering effect of a part that is in the previous frame and that does not change with respect to the current frame is reused, so that repeated rendering of the part that does not change in a picture can be avoided, to reduce a picture rendering load, reduce processing resources occupied for picture rendering, improve stability of a rendering frame rate, enhance picture smoothness, and reduce energy consumption. FIG.4is a flowchart of a method for comparing the first picture data with the second picture data according to an embodiment of this application. As shown inFIG.4, the method includes the following steps. Step401: Compare the first visual range described in the first picture data with the second visual range described in the second picture data, to determine an overlapping region between the first visual range and the second visual range. Step402: Compare virtual space positions and statuses that are of a static object located in the overlapping region and that are in the first picture data and the second picture data, and compare virtual space positions and statuses that are of the light source and that are described in the first picture data and the second picture data.
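A rough sketch of steps 401 and 402 follows: first the overlapping region of the two visual ranges is determined, and then only static objects falling inside that region (together with the light source) are compared. The rectangle representation of a visual range and the dictionary layout of the picture data are assumptions made for illustration, not the data structures of the embodiments.

```python
# Hypothetical sketch of steps 401-402: restrict the comparison to the overlap of
# the two visual ranges, then compare static objects in that region and the light source.
def overlap(range_a, range_b):
    """Overlap of two visual ranges given as (x, y, width, height), or None."""
    ax, ay, aw, ah = range_a
    bx, by, bw, bh = range_b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x2 <= x1 or y2 <= y1:
        return None
    return (x1, y1, x2 - x1, y2 - y1)


def inside(position, region):
    x, y = position
    rx, ry, rw, rh = region
    return rx <= x <= rx + rw and ry <= y <= ry + rh


def compare_picture_data(first, second):
    """Return the overlapping region and the names of parts that changed inside it."""
    region = overlap(first["visual_range"], second["visual_range"])
    changed = set()
    if region is not None:
        for name, obj in first["static_objects"].items():
            prev = second["static_objects"].get(name)
            # Compare position and status only for static objects inside the overlap.
            if inside(obj["position"], region) and obj != prev:
                changed.add(name)
    # The light source's virtual space position and status are always compared.
    if first["light_source"] != second["light_source"]:
        changed.add("light_source")
    return region, changed
```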
For example,FIG.5aandFIG.5bare a schematic diagram of two frames of game pictures according to an embodiment of this application. A game picture shown inFIG.5bis a next frame of picture of a game picture shown inFIG.5a. InFIG.5b, a position of a moving object51in virtual space changes with respect to that inFIG.5a, and a corresponding visual range54of the picture shown inFIG.5balso changes with respect to that inFIG.5a. InFIG.5a, a rendering result that can be reused exists only for the region that overlaps that inFIG.5b. Therefore, in some implementations, to reduce a calculation amount of data comparison and improve rendering efficiency, an overlapping region52(for example, in this embodiment, the overlapping region52is a visual range55inFIG.5a) between the visual range55inFIG.5aand the visual range54inFIG.5bmay be first determined based on the visual range55and the visual range54, and then a first data part corresponding to the overlapping region52is extracted from picture data of the picture shown inFIG.5a, a second data part corresponding to the overlapping region52is extracted from picture data of the picture shown inFIG.5b, and whether a virtual space position and a status of a static object53in the overlapping region52change is determined based on the first data part and the second data part. If neither the virtual space position nor the status of the static object53changes, a rendering result of the static object53inFIG.5amay be reused inFIG.5b. In comparison withFIG.5a, if the virtual space position and/or the status of the static object53inFIG.5bchange/changes, the static object53is re-rendered inFIG.5b. For a visual range of a background inFIG.5b, a background rendering result corresponding to the overlapping region52inFIG.5amay be reused inFIG.5b, and incremental rendering is performed on backgrounds in remaining visual ranges on this basis. For a light source56and a rendering effect of the light source56, a position and a status of the light source inFIG.5bare compared with a position and a status of the light source inFIG.5abased on the picture data inFIG.5aandFIG.5b. If neither the position nor the status of the light source56changes, a light effect rendering result of the overlapping region52inFIG.5ais reused, and incremental rendering is performed on another region on this basis. A moving object inFIG.5bis directly re-rendered, to obtain a rendering result. The overlapping region between the first visual range of the current picture and the second visual range of the previous frame of picture is determined, and parts that are in the first picture data and the second picture data and that are used to describe the overlapping region are compared. Therefore, a case in which all of the first picture data is compared with all of the second picture data can be avoided while comparison accuracy is ensured. In this way, a calculation amount of data comparison is reduced, and data comparison efficiency is improved.
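The reuse decisions described above for FIG. 5a and FIG. 5b can be combined roughly as in the following sketch, in which reusable results for the overlapping region are copied into a frame buffer and the remaining parts are rendered incrementally on top. The helpers copy_region and render are hypothetical stand-ins for frame-buffer operations.

```python
# Hypothetical sketch of composing the current frame from reused and re-rendered parts.
def compose_frame(previous_result, overlap_region, unchanged_parts, changed_parts,
                  framebuffer, copy_region, render):
    """Copy reusable results for the overlapping region, then render the rest on top."""
    # Reuse: background, unchanged static objects, and light effect of the overlap.
    for part in unchanged_parts:
        copy_region(src=previous_result[part], dst=framebuffer, region=overlap_region)
    # Incremental rendering: changed statics, newly visible regions, and moving objects.
    for part in changed_parts:
        render(part, target=framebuffer)
    return framebuffer
```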
In an embodiment of this application, when a picture rendering operation is performed, there may further be a step of expanding a visual range of a picture. In a feasible implementation, this step may be performed when the overlapping region between the first visual range of the current picture and the second visual range of the previous frame of picture is less than a preset range. For example,FIG.6is a schematic diagram of expanding a visual range of a picture according to an embodiment of this application. InFIG.6, a rectangular region formed by a solid line is used as an example of the first visual range of the current picture, and a region between a dashed-line box and a solid-line box is an expanded visual range. When a range of the overlapping region between the first visual range of the current picture and the second visual range of the previous frame of picture is less than the preset range, to use the rendering effect of the current frame as much as possible in a rendering process of a next frame of picture of the current frame, a visual region of the current frame may be expanded on the basis of visual resolution M*N of a region in the rectangular solid-line box, so that there can be as large an overlapping region as possible between the current frame and the next frame. If an expansion amount in an M dimension is x and an expansion amount in an N dimension is y, (M+x)*(N+y) may represent a range of a region in the dashed-line box. When the first visual range of the current picture is expanded, the expansion amount x in the M dimension and the expansion amount y in the N dimension may be set based on a preset policy. For example, in a manner, the expansion amounts x and y may be associated with a size of the first part that is in the current frame and that does not change with respect to the previous frame. A larger size of the first part indicates a larger quantity of rendering effects that can be reused in a rendering process of the current frame and a lighter rendering load. In this case, a relatively large value may be set for the expansion amounts x and y, so that there can be as large an overlapping range as possible between a visual range of the next frame of picture and the visual range of the current picture, to help reuse as many rendering effects of the current frame as possible. On the contrary, a smaller size of the first part indicates a smaller quantity of rendering results that can be reused in the rendering process of the current frame and a heavier rendering load. In this case, to avoid an increase in the rendering load, a relatively small value may be set for the expansion amounts x and y, and even 0 may be set. In other words, an expansion amount of the visual range of the current frame may be directly proportional to the size of the first part. In another manner, the expansion amounts x and y may alternatively be associated with a size of the second part that is in the current frame and that changes with respect to the previous frame. A smaller size of the second part indicates a lighter rendering load in the current frame. In this case, a relatively large value may be set for the expansion amounts x and y. On the contrary, a larger size of the second part indicates a heavier rendering load in the current frame. In this case, to avoid an increase in the load, a relatively small value may be set for the expansion amounts x and y, and even 0 may be set. In other words, an expansion amount of the visual range of the current frame may be inversely proportional to the size of the second part. After the first visual range of the current frame is expanded, the first visual range described in the first picture data of the current frame may be updated to the expanded visual range, and the currently recorded second picture data of the previous frame may be updated to the first picture data of the current frame. In this way, during rendering of the next frame of the current frame, the visual range of the next frame of picture can be compared with the expanded visual range of the current frame, to obtain a relatively large overlapping region.
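One possible way to choose the expansion amounts x and y, consistent with the proportionality described above, is sketched below. The particular scaling policy (a maximum expansion ratio scaled by the unchanged fraction of the frame) is an assumption for illustration; the embodiments only require that x and y grow with the size of the first part or, equivalently, shrink as the second part grows.

```python
# Hypothetical policy for the expansion amounts x and y of an M*N visual range.
def expansion_amounts(M, N, first_part_size, frame_size, max_expand_ratio=0.25):
    """Return (x, y) so that the expanded range is (M + x) * (N + y)."""
    if frame_size <= 0:
        return 0, 0
    # Larger unchanged (first) part -> lighter rendering load -> expand more;
    # equivalently, a larger changed (second) part drives the expansion toward 0.
    unchanged_fraction = max(0.0, min(1.0, first_part_size / frame_size))
    x = int(M * max_expand_ratio * unchanged_fraction)
    y = int(N * max_expand_ratio * unchanged_fraction)
    return x, y
```

The inverse policy (shrinking x and y as the changed second part grows) is obtained by replacing the unchanged fraction with one minus the changed fraction.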
Pictures presented in two adjacent frames are usually the most similar. Therefore, after the second picture data is updated to the first picture data, picture rendering of the next frame can obtain a maximum quantity of effects for reuse from the first picture data that is closest to the next frame in time. In this way, the rendering load is reduced, and rendering efficiency is improved. The first visual range is expanded, so that it can be ensured that when a moving object in the picture moves in a small range, the visual range of the next frame of picture can be included in the expanded visual range of the current frame, or when a moving object moves in a relatively large range, there can be a relatively large overlapping region between the next frame of picture and the current frame of picture, to help use the rendering result of the current frame to a larger extent and reduce the rendering load. FIG.7is a schematic diagram of a structure of a picture processing apparatus according to an embodiment of this application. As shown inFIG.7, the picture processing apparatus70includes: an obtaining module71, configured to obtain first picture data of a current frame; a comparison module72, configured to compare the first picture data with currently recorded second picture data of a previous frame of the current frame, to determine a first part that is in the first picture data and that does not change with respect to the second picture data and a second part that is in the first picture data and that changes with respect to the second picture data; and a rendering module73, configured to: reuse a rendering result corresponding to the first part in the previous frame, and render the second part in the current frame, to obtain and display a rendering result of the current frame. The picture processing apparatus provided in this embodiment can perform the method in the embodiment inFIG.2, and a manner of performing the method by the picture processing apparatus and beneficial effects are similar to those of the method. Details are not described herein. FIG.8is a schematic diagram of a structure of a picture processing apparatus according to an embodiment of this application. In this embodiment, each of first picture data of a current frame and second picture data of a previous frame of the current frame includes a visual range of a picture background and virtual space positions and statuses of a static object and a light source. As shown inFIG.8, on the basis of the foregoing embodiment, the comparison module72may include: a first comparison submodule721, configured to compare a first visual range described in the first picture data with a second visual range described in the second picture data, to determine an overlapping region between the first visual range and the second visual range; a second comparison submodule722, configured to compare virtual space positions and statuses that are of a static object located in the overlapping region and that are in the first picture data and the second picture data; and a third comparison submodule723, configured to compare virtual space positions and statuses that are of the light source and that are described in the first picture data and the second picture data. The picture processing apparatus provided in this embodiment can perform the method in the embodiment inFIG.4, and a manner of performing the method by the picture processing apparatus and beneficial effects are similar to those of the method. Details are not described herein.
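Structurally, the obtaining module71, comparison module72, and rendering module73 might be wired together as in the sketch below. This is only an illustration of the functional decomposition; the injected callables and the class name are hypothetical rather than an API from the embodiments.

```python
# Hypothetical composition of the obtaining, comparison, and rendering modules.
class PictureProcessingApparatus:
    def __init__(self, obtaining_module, comparison_module, rendering_module):
        self.obtain = obtaining_module       # returns first picture data of the current frame
        self.compare = comparison_module     # returns (first_part, second_part)
        self.render = rendering_module       # reuses first_part results, renders second_part

    def process_frame(self, second_picture_data, previous_render_result):
        first_picture_data = self.obtain()
        first_part, second_part = self.compare(first_picture_data, second_picture_data)
        return self.render(first_part, second_part, previous_render_result)
```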
FIG.9is a schematic diagram of a structure of a picture processing apparatus according to an embodiment of this application. As shown inFIG.9, on the basis of the foregoing embodiment, the picture processing apparatus70may further include: a processing module74, configured to: when a range of the overlapping region is less than a preset range, expand a visual range of the current frame on the basis of the first visual range, and render a part obtained after expansion. In an implementation, an expansion amount of the visual range of the current frame is directly proportional to a size of the first part that does not change. In an implementation, an expansion amount of the visual range of the current frame is inversely proportional to a size of the second part that changes. In an implementation, the apparatus further includes: a first updating module, configured to update the second visual range described in the second picture data to an expanded visual range of the current frame. In an implementation, the apparatus further includes: a second updating module, configured to update the currently recorded second picture data of the previous frame to the first picture data of the current frame. The apparatus provided in this embodiment can execute the technical solution in the embodiment inFIG.6, and a manner of executing the technical solution by the apparatus and beneficial effects are similar to those of the technical solution. Details are not described herein. An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program. When the computer program is run on a computer, the computer is enabled to perform the picture rendering method in the foregoing embodiments. In addition, an embodiment of this application further provides a computer program product. The computer program product includes a computer program. When the computer program is run on a computer, the computer is enabled to perform the picture rendering method in the foregoing embodiments. In addition, an embodiment of this application further provides a processor. The processor includes at least one circuit, configured to perform the picture rendering method in the foregoing embodiments. An embodiment of this application further provides an electronic device. The electronic device may be configured to implement the picture rendering method described in the foregoing method embodiments. For example, the electronic device may include one or more processors and interfaces. The interface is coupled to the processor. The processor may also be referred to as a processing unit, and may implement a specific control function. The processor may be a general-purpose processor, a dedicated processor, or the like. In a feasible design, the processor may further store instructions, and the instructions may be executed by the processor, so that the electronic device performs the picture rendering method described in the foregoing method embodiments. In still another possible design, the electronic device may include a circuit, and the circuit may implement a part of obtaining picture data and displaying a rendering result in the foregoing method embodiments. In a design, the electronic device may include one or more memories. The memory stores instructions or intermediate data. The instructions may be run on the processor, so that the electronic device performs the method described in the foregoing method embodiments.
In some embodiments, the memory may further store other related data. The processor and the memory may be separately disposed, or may be integrated together. In a design, the electronic device may further include a transceiver. The processor may be referred to as a processing unit. The transceiver may be referred to as a transceiver unit, a transceiver machine, a transceiver circuit, a transceiver, or the like, and is configured to implement a transceiver function of the electronic device. The processor and the transceiver in this application may be implemented in an integrated circuit (integrated circuit, IC), an analog IC, a radio frequency integrated circuit RFIC, a mixed-signal IC, an application-specific integrated circuit (application-specific integrated circuit, ASIC), a printed circuit board (printed circuit board, PCB), an electronic device, or the like. The processor and the transceiver may also be manufactured by using various IC process technologies, for example, a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS), an n-type metal oxide semiconductor (n-type metal oxide semiconductor, NMOS), a p-channel metal oxide semiconductor (p-channel metal oxide semiconductor, PMOS), a bipolar junction transistor (Bipolar Junction Transistor, BJT), a bipolar CMOS (BiCMOS), silicon germanium (SiGe), gallium arsenide (GaAs), and the like. For example,FIG.10is a schematic diagram of a structure of an electronic device according to an embodiment of this application. For example, the electronic device may be understood as a mobile terminal. The electronic device may be configured to perform the foregoing picture rendering method. As shown inFIG.10, the electronic device100may include a processor110, an external memory interface120, an internal memory121, a universal serial bus (universal serial bus, USB) port130, a charging management module140, a power management module141, a battery142, an antenna1, an antenna2, a mobile communications module150, a wireless communications module160, an audio module170, a speaker170A, a receiver170B, a microphone170C, a headset jack170D, a sensor180, a button190, a motor191, an indicator192, a camera193, a display194, a subscriber identification module (subscriber identification module, SIM) card interface195, and the like. It may be understood that a structure shown in embodiments does not constitute a specific limitation on the electronic device100. In some other embodiments of this application, the electronic device100may include more or fewer components than those shown in the figure, or may combine some components, or may split some components, or may have different component arrangements. The components shown in the figure may be implemented by hardware, software, or a combination of software and hardware. The processor110may include one or more processing units. For example, the processor110may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU). Different processing units may be independent components, or may be integrated into one or more processors. In some embodiments, the electronic device100may alternatively include one or more processors110.
The controller may be a nerve center and a command center of the electronic device100. The controller may generate an operation control signal based on instruction operation code and a time sequence signal, to complete control of instruction reading and instruction execution. A memory may be further disposed in the processor110, and is configured to store instructions and data. In some embodiments, the memory in the processor110is a cache. The memory may store instructions or data just used or cyclically used by the processor110. If the processor110needs to use the instructions or the data again, the processor110may directly invoke the instructions or the data from the memory. This avoids repeated access and reduces a waiting time of the processor110, so that system efficiency of the electronic device100is improved. In some embodiments, the processor110may include one or more interfaces. The interface may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identification module (subscriber identification module, SIM) interface, a universal serial bus (universal serial bus, USB) port, and/or the like. The USB port130is a port that conforms to a USB standard specification, and may be specifically a mini USB port, a micro USB port, a USB Type-C port, or the like. The USB port130may be configured to connect to the charger to charge the electronic device100, or may be configured to transmit data between the electronic device100and a peripheral device, or may be configured to connect to a headset to play audio by using the headset. It may be understood that an interface connection relationship between the modules that is shown in embodiments of the present invention is merely an example for description, and does not constitute a limitation on the structure of the electronic device100. In other embodiments of this application, the electronic device100may alternatively use an interface connection manner different from that in the foregoing embodiment, or use a combination of a plurality of interface connection manners. The charging management module140is configured to receive a charging input from the charger. The charger may be a wireless charger or a wired charger. In some embodiments of wired charging, the charging management module140may receive a charging input from the wired charger through the USB port130. In some embodiments of wireless charging, the charging management module140may receive a wireless charging input through a wireless charging coil of the electronic device100. The charging management module140supplies power to the electronic device100through the power management module141while charging the battery142. The power management module141is configured to connect to the battery142, the charging management module140, and the processor110. The power management module141receives an input of the battery142and/or an input of the charging management module140, and supplies power to the processor110, the internal memory121, the display194, the camera193, the wireless communications module160, and the like. 
The power management module141may be further configured to monitor parameters such as a battery capacity, a battery cycle count, and a battery health status (electric leakage or impedance). In some other embodiments, the power management module141may alternatively be disposed in the processor110. In some other embodiments, the power management module141and the charging management module140may alternatively be disposed in a same device. A wireless communication function of the electronic device100may be implemented through the antenna1, the antenna2, the mobile communications module150, the wireless communications module160, the modem processor, the baseband processor, and the like. The antenna1and the antenna2are configured to: transmit and receive electromagnetic wave signals. Each antenna in the electronic device100may be configured to cover one or more communication bands. Different antennas may be multiplexed to improve antenna utilization. For example, the antenna1may be multiplexed as a diversity antenna in a wireless local area network. In some other embodiments, the antenna may be used in combination with a tuning switch. The mobile communications module150may provide a wireless communication solution that includes 2G/3G/4G/5G or the like and that is applied to the electronic device100. The mobile communications module150may include at least one filter, a switch, a power amplifier, a low noise amplifier, and the like. The mobile communications module150may receive an electromagnetic wave through the antenna1, perform processing such as filtering and amplification on the received electromagnetic wave, and transmit a processed electromagnetic wave to a modem processor for demodulation. The mobile communications module150may further amplify a signal modulated by the modem processor, and convert the signal into an electromagnetic wave for radiation through the antenna1. In some embodiments, at least some functional modules of the mobile communications module150may be disposed in the processor110. In some embodiments, at least some functional modules of the mobile communications module150may be disposed in a same device as at least some modules of the processor110. The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium-high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transmits the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. The low-frequency baseband signal is processed by the baseband processor, and then transmitted to the application processor. The application processor outputs a sound signal through an audio device (which is not limited to the speaker170A, the receiver170B, or the like), or displays an image or a video through the display194. In some embodiments, the modem processor may be an independent component. In some other embodiments, the modem processor may be independent of the processor110, and is disposed in a same device as the mobile communications module150or another functional module. 
The wireless communications module160may provide a wireless communication solution that includes a wireless local area network (wireless local area network, WLAN), Bluetooth, a global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, an infrared (infrared, IR) technology, or the like and that is applied to the electronic device100. The wireless communications module160may be one or more components integrating at least one communications processor module. The wireless communications module160receives an electromagnetic wave through the antenna2, performs frequency modulation and filtering processing on an electromagnetic wave signal, and sends a processed signal to the processor110. The wireless communications module160may further receive a to-be-sent signal from the processor110, perform frequency modulation and amplification on the signal, and convert a processed signal into an electromagnetic wave for radiation through the antenna2. In some embodiments, the antenna1and the mobile communications module150in the electronic device100are coupled, and the antenna2and the wireless communications module160in the electronic device100are coupled, so that the electronic device100can communicate with a network and another device by using a wireless communications technology. The wireless communications technology may include a GSM, a GPRS, CDMA, WCDMA, TD-SCDMA, LTE, a GNSS, a WLAN, NFC, FM, an IR technology, and/or the like. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a BeiDou navigation satellite system (BeiDou navigation satellite system, BDS), a quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite based augmentation system (satellite based augmentation system, SBAS). The electronic device100may implement a display function by using the GPU, the display194, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display194and the application processor. The GPU is configured to: perform mathematical and geometric calculation, and render an image. The processor110may include one or more GPUs that execute instructions to generate or change display information. The display194is configured to display an image, a video, and the like. The display194includes a display panel. The display panel may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a mini-LED, a micro-LED, a micro-OLED, quantum dot light emitting diodes (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device100may include one or N displays194, where N is a positive integer greater than 1. The electronic device100may implement a photographing function by using the ISP, one or more cameras193, the video codec, the GPU, one or more displays194, the application processor, and the like. The NPU is a neural-network (neural-network, NN) computing processor. The NPU quickly processes input information by referring to a structure of a biological neural network, for example, a transfer mode between human brain neurons, and may further continuously perform self-learning. 
The NPU can implement applications such as intelligent cognition of the electronic device100, for example, image recognition, facial recognition, speech recognition, and text understanding. The external memory interface120may be configured to connect to an external storage card, for example, a micro SD card, to extend a storage capability of the electronic device100. The external storage card communicates with the processor110through the external memory interface120, to implement a data storage function. For example, data files such as music, a photo, and a video are stored in the external storage card. The internal memory121may be configured to store one or more computer programs, and the one or more computer programs include instructions. The processor110may run the instructions stored in the internal memory121, so that the electronic device100performs the picture rendering method provided in some embodiments of this application, various function applications, data processing, and the like. The internal memory121may include a program storage area and a data storage area. The program storage area may store an operating system. The program storage area may further store one or more applications (for example, Gallery and Contacts), and the like. The data storage area may store data (such as photos and contacts) created during use of the electronic device100, and the like. In addition, the internal memory121may include a high-speed random access memory, and may further include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash storage device, or a universal flash storage (universal flash storage, UFS). In some embodiments, the processor110may run the instructions stored in the internal memory121and/or the instructions stored in the memory disposed in the processor110, to enable the electronic device100to perform the picture rendering method provided in embodiments of this application, various functional applications, and data processing. The electronic device100may implement audio functions, for example, music playing and recording, by using the audio module170, the speaker170A, the receiver170B, the microphone170C, the headset jack170D, the application processor, and the like. The audio module170is configured to convert digital audio information into analog audio signal output, and is also configured to convert analog audio input into a digital audio signal. The audio module170may be further configured to: code and decode an audio signal. In some embodiments, the audio module170may be disposed in the processor110, or some functional modules of the audio module170are disposed in the processor110. The speaker170A, also referred to as a "horn", is configured to convert an audio electrical signal into a sound signal. The electronic device100may be used to listen to music or answer a call in a hands-free mode over the speaker170A. The receiver170B, also referred to as an "earpiece", is configured to convert an audio electrical signal into a sound signal. When a call is answered or voice information is listened to by using the electronic device100, the receiver170B may be put close to a human ear to listen to a voice. The microphone170C, also referred to as a "mike" or a "mic", is configured to convert a sound signal into an electrical signal. When making a call or sending voice information, the user may make a sound by moving the mouth close to the microphone170C to input a sound signal to the microphone170C.
At least one microphone170C may be disposed in the electronic device100. In some other embodiments, two microphones170C may be disposed in the electronic device100, to collect a sound signal and implement a noise reduction function. In some other embodiments, three, four, or more microphones170C may alternatively be disposed in the electronic device100, to collect a sound signal, implement noise reduction, and identify a sound source, so as to implement a directional recording function and the like. The headset jack170D is configured to connect to a wired headset. The headset jack170D may be the USB port130, or may be a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface. The sensor180may include a pressure sensor180A, a gyro sensor180B, a barometric pressure sensor180C, a magnetic sensor180D, an acceleration sensor180E, a distance sensor180F, an optical proximity sensor180G, a fingerprint sensor180H, a temperature sensor180J, a touch sensor180K, an ambient light sensor180L, a bone conduction sensor180M, and the like. The pressure sensor180A is configured to sense a pressure signal, and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor180A may be disposed on the display194. There are a plurality of types of pressure sensors180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. The capacitive pressure sensor may include at least two parallel plates made of conductive materials. When a force is applied to the pressure sensor180A, capacitance between electrodes changes. The electronic device100determines pressure intensity based on a capacitance change. When a touch operation is performed on the display194, the electronic device100detects intensity of the touch operation by using the pressure sensor180A. The electronic device100may also calculate a touch location based on a detection signal of the pressure sensor180A. In some embodiments, touch operations that are performed at a same touch location but have different touch operation intensity may correspond to different operation instructions. For example, when a touch operation whose touch operation intensity is less than a first pressure threshold is performed on a Messages icon, an instruction for viewing an SMS message is executed. When a touch operation whose touch operation intensity is greater than or equal to the first pressure threshold is performed on a Messages icon, an instruction for creating a new SMS message is executed. The gyro sensor180B may be configured to determine a motion posture of the electronic device100. In some embodiments, an angular velocity of the electronic device100around three axes (that is, axes X, Y, and Z) may be determined by using the gyro sensor180B. The gyro sensor180B may be configured to perform image stabilization during photographing. For example, when a shutter is pressed, the gyro sensor180B detects an angle at which the electronic device100jitters, calculates, based on the angle, a distance for which a lens module needs to compensate, and allows the lens to cancel the jitter of the electronic device100through reverse motion, to implement image stabilization. The gyro sensor180B may be further used in a navigation scenario, a motion-sensing game scenario, and the like. 
The acceleration sensor180E may detect magnitudes of accelerations of the electronic device100in various directions (usually on three axes), and may detect a magnitude and a direction of gravity when the electronic device100is still. The acceleration sensor180E may be further configured to identify a posture of the electronic device, and is used in an application such as switching between landscape mode and portrait mode or a pedometer. The distance sensor180F is configured to measure a distance. The electronic device100may measure the distance in an infrared manner or a laser manner. In some embodiments, in a photographing scenario, the electronic device100may measure a distance by using the distance sensor180F to implement quick focusing. The optical proximity sensor180G may include, for example, a light-emitting diode (LED) and an optical detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device100emits infrared light by using the light-emitting diode. The electronic device100detects infrared reflected light from a nearby object by using the photodiode. When sufficient reflected light is detected, the electronic device100may determine that there is an object near the electronic device100. When insufficient reflected light is detected, the electronic device100may determine that there is no object near the electronic device100. The electronic device100may detect, by using the optical proximity sensor180G, that a user holds the electronic device100close to an ear for a call, to automatically turn off a screen for power saving. The optical proximity sensor180G may also be used in a leather case mode or a pocket mode to automatically unlock or lock the screen. The ambient light sensor180L is configured to sense ambient light brightness. The electronic device100may adaptively adjust brightness of the display194based on the sensed ambient light brightness. The ambient light sensor180L may also be configured to automatically adjust a white balance during photographing. The ambient light sensor180L may further cooperate with the optical proximity sensor180G to detect whether the electronic device100is in a pocket, to prevent an accidental touch. The fingerprint sensor180H (also referred to as a fingerprint recognizer) is configured to collect a fingerprint. The electronic device100may use a feature of the collected fingerprint to implement fingerprint-based unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and the like. In addition, for other descriptions of fingerprint sensors, refer to International Patent Application PCT/CN2017/082773, entitled "NOTIFICATION PROCESSING METHOD AND ELECTRONIC DEVICE", which is incorporated herein by reference in its entirety. The touch sensor180K may also be referred to as a touch panel or a touch-sensitive surface. The touch sensor180K may be disposed on the display194, and a touchscreen includes the touch sensor180K and the display194. The touch sensor180K is configured to detect a touch operation performed on or near the touch sensor180K. The touch sensor may transfer the detected touch operation to the application processor, to determine a type of a touch event. A visual output related to the touch operation may be provided on the display194. In some other embodiments, the touch sensor180K may alternatively be disposed on a surface of the electronic device100at a location different from that of the display194.
The bone conduction sensor180M may obtain a vibration signal. In some embodiments, the bone conduction sensor180M may obtain a vibration signal of a vibration bone of a human vocal part. The bone conduction sensor180M may also be in contact with a human pulse, to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor180M may alternatively be disposed in the headset, to obtain a bone conduction headset. The audio module170may obtain a voice signal through parsing based on the vibration signal that is of the vibration bone of the vocal part and that is obtained by the bone conduction sensor180M, to implement a voice function. The application processor may parse heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor180M, to implement a heart rate detection function. The button190includes a power button, a volume button, and the like. The button190may be a mechanical button, or may be a touch button. The electronic device100may receive a key input, and generate a key signal input related to a user setting and function control of the electronic device100. The SIM card interface195is configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface195or removed from the SIM card interface195, to implement contact with or separation from the electronic device100. The electronic device100may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface195can support a nano-SIM card, a micro-SIM card, a SIM card, and the like. A plurality of cards may be simultaneously inserted into a same SIM card interface195. The plurality of cards may be of a same type or of different types. The SIM card interface195may also be compatible with different types of SIM cards. The SIM card interface195may also be compatible with an external storage card. The electronic device100interacts with a network through the SIM card, to implement functions such as calling and data communication. In some embodiments, the electronic device100uses an eSIM, that is, an embedded SIM card. The eSIM card may be embedded into the electronic device100, and cannot be separated from the electronic device100. A person skilled in the art can appreciate that technologies in this application may be implemented in various apparatuses or devices, including a wireless handset, an integrated circuit (IC), or a group of ICs (for example, a chipset). Various components, modules, or units are described in this application to emphasize function aspects of the apparatuses configured to perform the disclosed technologies, but are not necessarily implemented by using different hardware units. Actually, as described above, various units may be combined into a codec hardware unit in combination with appropriate software and/or firmware, or may be provided by interoperable hardware units (including one or more processors described above). The foregoing descriptions are merely specific example implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
51,396
11861776
DETAILED DESCRIPTION Embodiments disclosed herein provide a method and system for providing personalized multimedia avatars that provide virtual “student companionship,” “tutoring,” and “librarian” modelization to a user that is studying alone. A virtual avatar production system generates the multimedia avatars and may stream one or more of the multimedia avatars to one or more display devices for rendering to the user. The avatar production system may also be coupled to an online education platform that provides one or more education services to the user. The user may use the one or more display devices or different user devices to conduct one or more studying activities on the online education platform, depending on the user's goals and mood. For example, if the user wants to study with companionship but without distraction, the user may use a first device (such as, a laptop) to conduct studying activities while the avatar is presented on a second device (such as, a connected television screen in the room in which the user is studying). On the other hand, if the user wants to study with a lot of interaction, the avatar may be presented on the same device (e.g., user's laptop) that the user is using for studying. In some embodiments, the avatar production system may obtain from the online education platform real-time information about the user's studying activities, create one or more of the avatars based on such information, and synchronize special effects of the avatars with such studying activities. The avatar production system may further receive environmental, physiology, or motion information associated with the user and create and synchronize the avatars further based on such information to direct support to the user. The environmental, physiology, or motion information may be captured by sensors of smart phones, wearable devices, or connected speakers of the user, such as microphones, cameras, motion trackers, physiology sensors, or environmental sensors. FIG.1illustrates an example system architecture100for providing personalized multimedia avatars for virtual studying companionship. In some embodiments, the system architecture100may comprise an avatar production system110, an online education platform130, one or more user devices140a, and one or more display devices140b. The one or more display devices140bmay receive a stream of one or more multimedia avatars from the avatar production system110and display the avatars to the user. The one or more display devices140bmay comprise a monitor, a speaker, a computer, a projector, a hologram projector, a smart phone, a smart tablet, a pair of virtual reality or augmented reality glasses, a wearable device, other suitable devices, or any combination thereof. In some embodiments, the user is a user, such as a registered user, of the online education platform130. In some embodiments, as described further in the instant specification, the user is a human user in the analog universe, who is performing study activities, e.g., reading, writing, taking an examination. In other embodiments, the user is a human user in the metaverse and is wearing a VR headset. That headset includes a display, a computing platform, motion sensors, front cameras, built-in speakers, a microphone, other sensors (eye tracking, for example), network connectivity, and the like, making it similar to a smartphone but much more immersive. 
The rendering and modelization of the 3D avatars would be the same for both the user in the analog world and the user in the metaverse, except that streaming of the avatars would be rendered only within the VR/metaverse display and not on separate display devices. In the metaverse, the user (engaged in study activities) is itself an avatar, among other computed avatars rendered in virtual computed backgrounds, with the other computed avatars reacting to the user's activities. The one or more user devices140amay comprise one or more devices paired to the avatar production system110and one or more devices paired with the online education platform130. One or more of the user devices140amay each comprise one or more sensors141for collecting environmental, physiology, or motion data associated with the user and provide such data to the avatar production system110. One or more of the user devices140amay be paired with the online education platform130through a network to allow the user to access educational content on the online education platform130. The one or more user devices140amay comprise a mobile phone, a speaker, a microphone, a camera, a wearable device, a motion tracking device, a hygrometer, a thermometer, other suitable devices, or any combination thereof. The one or more user devices140apaired with the online education platform130may or may not be the same as the one or more display devices140b. The same device may be used to display avatars streamed from the avatar production system110to the user as well as to display educational content from the online education platform130for the user to interact with. Alternatively, different devices may be used for these different purposes. In other words, the one or more display devices140band the one or more user devices140ashown inFIG.1may be implemented as one device or multiple different devices. The online education platform130may be implemented on one or more server-side computing devices. The online education platform130may be coupled to the avatar production system110and one or more user devices140avia one or more network connections. The online education platform130provides learning services to its registered users. The learning services may include passive learning services that provide content to be read, watched, or listened to by a learner, such as e-textbooks, flash cards, tutorial videos. The learning services may also include active learning services that provide content that is made for interaction with the learner, such as question & answers, quizzes, and interactive tutorials. The learning services may further include recall-type learning services that provide content used for testing the knowledge of the learner, such as tests. The avatar production system110is architected around the modelization, production and post-production of one, or more, personalized avatars, which are streamed to a registered user's paired multimedia capable device, when that user is studying using the online education platform services. The avatar production system110comprises various sub-systems implemented by software, hardware, or a combination thereof. Each sub-system may be implemented on one or more memories or other storage devices configured to store data and computer-executable instructions and one or more processors configured to execute the instructions to perform one or more operations of the sub-system. Different sub-systems may be combined into one sub-system. Each sub-system may be separated into multiple individual sub-systems. 
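As a minimal, non-limiting wiring sketch of the FIG. 1 architecture described above (an avatar production system coupled to an online education platform, sensor-bearing user devices, and display devices that receive the avatar stream), the following Python fragment uses illustrative class and field names only.

```python
# Illustrative-only data model for the system architecture 100; names are assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class UserDevice:            # 140a: phone, wearable, connected speaker, ...
    name: str
    sensors: List[str] = field(default_factory=list)  # 141: camera, microphone, ...


@dataclass
class DisplayDevice:         # 140b: monitor, connected TV, VR glasses, ...
    name: str


@dataclass
class SystemArchitecture:    # 100
    avatar_production_system: str = "avatar_production_system_110"
    online_education_platform: str = "online_education_platform_130"
    user_devices: List[UserDevice] = field(default_factory=list)
    display_devices: List[DisplayDevice] = field(default_factory=list)


arch = SystemArchitecture(
    user_devices=[UserDevice("smartwatch", ["heart_rate", "motion"])],
    display_devices=[DisplayDevice("connected_tv")],
)
print(arch.display_devices[0].name)  # connected_tv
```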
The avatar production system110may comprise a device management system111for managing paired devices140. The device management system111may store information associated with a plurality of paired devices140such as identification and network addresses for the paired devices140. A user may select two types of devices to be paired with the avatar production system110. First, the paired devices may comprise multimedia devices140b. Multimedia devices are connected devices with multimedia playback support, such as a mobile phone, a laptop, or a connected TV. These devices are configured to receive and display the multimedia stream of the virtual avatars constructed by the avatar production system110. The avatar production system110may store information associated with one or more multimedia devices of a user and stream the avatars to one of the one or more multimedia devices based on user preferences or instructions. The one or more display devices140bused by the user to view avatars constructed by the virtual avatar production system may be implemented as one or more of the multimedia devices. Second, the paired devices may comprise environmental, physiology, and motion devices such as smart phones, wearables or connected speakers. These devices can be paired by the avatar production system to detect environmental, physiology, and motion conditions which may impact the quality of a studying session. The devices may have sensors141including, for example, cameras, microphones, motion-tracking sensors, or ambient temperature sensors. The sensors141may, for example, capture certain types of body movements from the user, such as standing up, sitting, walking and stretching, as well as sudden noise levels, heart rate, ambient temperature and humidity, among others. The one or more user devices140amay be implemented as one or more of the environmental, physiology, and motion devices. The avatar production system110may comprise a preference management system112for managing user avatar preferences. The preferences may be set by the user by selecting one or more properties in a user interface provided by the avatar production system110for display on at least one of the display devices140b. The preference management system112may store a plurality of preferences for each user. The avatar production system110may create avatars for each user based on the preferences of the user. These preferences may include preferences for: role, number, duration, style, virtual background, content synchronicity, activities synchronicity, environmental, physiology & motion devices, and so on. Under the preferences for “role”, the avatar production system110may allow a user to select among a plurality of avatar roles, such as, “Student”, “Tutor”, and “Librarian”. While the disclosure describes these three roles, other roles can be added as well. Under the preferences for “number”, the avatar production system110may allow a user to select a number of avatars to be displayed by the avatar production system110. For example, the user may select a number ranging from one avatar up to a classroom full of avatars, represented together in the same stream. In some embodiments, the avatar production system110may have one or more restrictions on the number of certain types of avatars. For example, the restrictions may specify that there can only be one “Tutor” or one “Librarian” in a single studying session stream. On the other hand, multiple “Students” may be allowed for a single studying session stream. 
Under the preferences for “duration”, the avatar production system110may allow a user to set the length of a study session during which one or more avatars will be streamed. This preference may allow the avatar production system to set a time-constrained session by modeling time-based avatar actions. Under the preferences for “style”, the avatar production system110may allow a user to select from a library of real or fictitious persons or characters, or as uploaded by a user. Each avatar can be further personalized using school/university differentiators, such as logos, mascot, colors, or others. Under the preferences for “virtual background”, the avatar production system110may allow a user to select from a library of real or fictitious recorded backgrounds or as uploaded by a user. Each background can be further personalized using school/university differentiators, such as logos, mascot, colors, or others. Under the preferences for “content synchronicity”, the avatar production system110may allow a user to select the type of content to be used by an avatar. Given that the user is interacting with the online education platform130, the online education platform130may share the type of content accessed by that user at a given time with the avatar production system110. This category of preferences instructs the avatar production system110to model the content used by virtual avatars based on the content accessed by the user. Options under this category of preferences may include “Random” (any learning content), “Mirror” (the same content as the content currently accessed by the user), “Related” (similar type of content to that accessed by the user). For example, a user studying the textbook “Biology 101, Chapter 2, Page 22” may instruct the avatar production system110to either select random content (e.g., any book), mirror that textbook (e.g., Biology 101), or use a related one (e.g., another biology book) for the avatar modelization. Under the preferences for “activities synchronicity”, the avatar production system110may allow a user to select the style of synchronization activities between the avatar created by the avatar production system110and the user. Options under this category of preferences may include “Random” (that is, randomly synchronize any user activity with the avatar), “Slow Mirroring” (that is, the avatar mimics the user's activity at a slower pace), “Fast Mirroring” (that is, the avatar mimics the user's activity at a faster pace), “Asynchronous” (e.g., the avatar mimics the user's activity in an asynchronous manner), “None” (that is, do not synchronize activities between the user and avatar). Under the preferences for “environmental, physiology & motion devices”, the avatar production system110may allow a user to select the type of environmental, physiology and/or motion devices140aand associated sensors141to capture data associated with the user during studying sessions. The avatar production system110may comprise an avatar role and attributes management system113for managing the roles of available avatars and the actions associated with each of the roles. An avatar is modeled as a function of its learning-based role, such as “Student”, “Tutor” or “Librarian”. Each role defines a range of pre-determined actions by an avatar of the role. The modeled actions may correlate to activities captured from a user's studying session. Each action is part of a library of pre-defined actions, determined by role, which is modeled and visualized by the avatar production system. 
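Before the role-specific actions are elaborated below, the preference categories just described might be represented, purely as a non-limiting sketch with assumed field names and defaults, as follows.

```python
# Minimal sketch of a per-user preference record for the preference management
# system (112). Field names, defaults, and the validation rule's wording are
# assumptions for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class AvatarPreferences:
    roles: List[str] = field(default_factory=lambda: ["Student"])  # "Student", "Tutor", "Librarian"
    number: int = 1                       # one avatar up to a classroom full
    duration_minutes: int = 60            # length of the time-constrained session
    style: str = "default_character"      # from a library or a user upload
    virtual_background: str = "library"   # e.g., "library", "classroom", user upload
    content_synchronicity: str = "Mirror"        # "Random" | "Mirror" | "Related"
    activities_synchronicity: str = "Slow Mirroring"
    paired_sensor_devices: List[str] = field(default_factory=list)  # e.g., ["smartwatch"]

    def validate(self) -> None:
        # Restriction noted above: at most one Tutor and one Librarian per
        # studying-session stream; multiple Students are allowed.
        if self.roles.count("Tutor") > 1 or self.roles.count("Librarian") > 1:
            raise ValueError("Only one Tutor and one Librarian allowed per stream")


prefs = AvatarPreferences(roles=["Student", "Student", "Tutor"], number=3)
prefs.validate()
```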
Actions are modeled either by events mirroring the studying activities or modeled from events experienced by the user and the user's environment. The type of actions performed by an avatar depends on its designated role. In some embodiments, in the “Student” role, an avatar mimics a student who is studying. The avatar does so by rendering recorded learning actions that mimic the student's expected actions while the student is focused on learning activities. Actions of the role may be determined based on content and synchronicity activities preferences. The actions may comprise, for example, opening/closing a book, turning pages, writing notes, stretching neck or shoulders, flexing hands or fingers, looking around, and so on. Actions in this role may be considered non-disturbing to the other students, and include actions that will not impact the focus of others specifically. This role may be defined as supportive and unobtrusive. In some embodiments, there may be another role called the “Twin” or “Mirror Image” role, which is a special case of the “Student” role, and in which the avatar not only mimics the student's expected actions, but also mimics the student's appearance. In some embodiments, in the “Tutor” role, an avatar mimics a tutor who tutors a student. The avatar does so by rendering a set of recorded supporting learning actions that mimic the expected actions of a tutor who is helping a student to learn. In this role, the avatar actions may be created to provide positive or negative feedback and other forms of encouragement to the student during a study session, including by reacting to learning activities of the student that have been captured by the online education platform130. The captured learning activities may be reflective of the progress, or lack thereof, that a student makes during the study session and these activities can be translated into a set of recorded actions that can be modeled by the avatar production system110. Examples of captured learning activities include: a reading pace (e.g., a pace of reading pages of a particular section of a textbook), presence or absence of the writing of personal notes, number of correct or incorrect answers provided to a quiz or test, time remaining in the learning session, and so on. As the student performs one or more of these learning activities, the tutor avatar may perform such actions as a “thumbs-up” or a “thumbs-down”, or encourage the student with an audio message of “Keep going for another 15 minutes”, and so on. In some embodiments, in the “Librarian” role, the avatar mimics a librarian who is helping a student. The avatar does so by rendering a set of recorded actions that a librarian may be expected to perform to help a student to focus while studying. In this mode, the librarian is reacting to events captured by the paired environmental, physiology and motion sensors, corresponding to situations which would potentially disturb the learner or other students, such as sudden loud background noises, loud voices, high ambient temperature, an elevated heart rate, or fast motion movements, for example. The predefined actions a librarian may take as a response to these events would include “asking for calm”, “walking towards the student”, “staring at the student through the display”, “open a window because it is too hot here”, “drink a glass of water”, “take a deep breath” or “asking the student to stop creating a disturbance”. 
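As a hedged illustration of the role-based action libraries just described, the following sketch maps each role to a set of pre-defined actions and selects an action in response to a detected event; the event labels, action names, and mapping are assumptions only.

```python
# Toy role->action library and event-driven action selection; names are illustrative.

ROLE_ACTION_LIBRARY = {
    "Student": ["open_book", "turn_page", "write_notes", "stretch_neck", "look_around"],
    "Tutor": ["thumbs_up", "thumbs_down", "say_keep_going_15_minutes"],
    "Librarian": ["ask_for_calm", "walk_toward_student", "stare_through_display",
                  "open_window", "drink_glass_of_water", "take_deep_breath"],
}

# Illustrative mapping from detected events to (role, action) responses.
EVENT_RESPONSES = {
    "user_turned_page": ("Student", "turn_page"),
    "quiz_answered_correctly": ("Tutor", "thumbs_up"),
    "sudden_loud_noise": ("Librarian", "ask_for_calm"),
}


def select_action(event: str):
    """Return the (role, action) an avatar might perform for a detected event."""
    role, action = EVENT_RESPONSES.get(event, ("Student", "look_around"))
    assert action in ROLE_ACTION_LIBRARY[role]
    return role, action


print(select_action("sudden_loud_noise"))  # ('Librarian', 'ask_for_calm')
```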
Such actions can be modelized through the production of video-only avatars, a combination of video and audio, or a combination of video and text overlay content. The audio effects may be created to be non-disturbing, including, for example, the sound of flipping pages of a book, the sound of a pen writing on a piece of paper, the sound of a person's steps, the sound of a person drinking water, the sound of a person taking a deep breath, other suitable audio effects, or any combination thereof. The avatar production system110may comprise an avatar modelization and background system117for modeling avatars and their corresponding backgrounds. The modelization may be based on creating two-dimensional (2D) or three-dimensional (3D) graphical likenesses of real or fictitious characters that can be mapped dynamically into the selected virtual backgrounds by the production and post-production systems. The avatar modelization and background system117may retrieve 2D or 3D graphical models of avatars from an avatar database116. The 2D or 3D graphical models may be pre-rendered by the avatar production system110or received by the avatar production system110from another source and stored in the avatar database116. The avatar database116may comprise information associated with the look and feel of avatars, such as height, race, gender, fitness, clothes, etc. The avatar database116may further comprise additional information regarding, for example, the way avatars move and the sound of avatars' voices. In addition, the avatar database116may include a set of learning objects associated with the learning activities of a user, such as the front cover of the textbook(s) being read by the user, a pen for taking notes, a notebook, a school uniform, a logo, etc. The learning objects may be rendered along with the avatar modelization, as objects used by the avatars. Backgrounds are digitized into still frames or video clips, based on an existing background library, or as uploaded from the user to the avatar production system110. The background library may be stored in a background database121. The background library may comprise a collection of background images to be used as backgrounds with the avatars in the foreground. The background library may comprise pre-defined background images. It may also comprise background images customized from background pictures uploaded by one or more users. One or more background images in the background library may be customized or personalized based on one or more objects, such as a desk, a wall decoration item, plants, etc. The 2D or 3D graphical representation of the avatars may range from simple emoji (2D) and stick figures (2D) to 3D lifelike characters. A library of pre-rendered graphical actions may be applied to bring animations to the modelization of each avatar, which the post-production system119selects based on the learning, environment, physiology and motion activities detected from the user, in order to form a continuous stream of multimedia avatar content. The library of pre-rendered graphical actions may be implemented as a special effect library122. The special effects of avatars may be pre-rendered by the avatar production system110or received by the avatar production system110from another source. In some embodiments, the avatar production system110may render a two-dimensional or three-dimensional representation of an avatar. It may pre-render one or more special effects associated with the avatar by animating the representation of the avatar to follow one or more actions. 
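A minimal sketch of the render-then-store flow described above (and continued in the next paragraph) is given below; the rendering step is stood in for by a placeholder string, and all names are assumptions rather than the disclosed implementation.

```python
# Toy special-effect library keyed by (avatar_id, action); the "clip" is a
# placeholder for an actual pre-rendered animation.

from typing import Dict, Tuple

SpecialEffectKey = Tuple[str, str]  # (avatar_id, action)


class SpecialEffectLibrary:
    def __init__(self) -> None:
        self._effects: Dict[SpecialEffectKey, str] = {}

    def pre_render(self, avatar_id: str, action: str) -> None:
        # Stand-in for animating the 2D/3D avatar model to follow the action.
        clip = f"clip<{avatar_id}:{action}>"
        self._effects[(avatar_id, action)] = clip

    def lookup(self, avatar_id: str, action: str) -> str:
        return self._effects[(avatar_id, action)]


library = SpecialEffectLibrary()
for action in ("open_book", "turn_page", "stretch"):
    library.pre_render("student_avatar_01", action)

print(library.lookup("student_avatar_01", "turn_page"))  # clip<student_avatar_01:turn_page>
```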
Then it may store the one or more special effects in a library comprising the plurality of pre-computed special effects (e.g., the special effect library122). The avatar production system110may comprise a data analyzer114that may analyze data associated with a user's studying activities and environmental events. The data analyzer114may obtain data associated with a user's studying activities from the online education platform130. The online education platform130may comprise a studying data capturing system131that may be implemented as part of the online education platform130by software, hardware, or a combination thereof. The studying data capturing system131may be configured to monitor one or more studying activities of a user on the online education platform130and record data associated with the user's studying activities. In some embodiments, a user's learning/studying activities are captured in real-time from the online education platform130by the studying data capturing system131when that user is accessing learning services provided by the online education platform130. The online education platform130shares the captured data with the avatar production system110for analysis and modeling. In some embodiments, a user's learning/studying activities are generally classified as Passive, Active, Recall (PAR) with Passive defined as reading (no content gets created), Active defined as adding notes, asking questions (creating content) and Recall defined as testing (answering questions for the purpose of being tested). In some embodiments, the captured PAR activities, along with their related content references, may be used by the video production system118and the post-production system119to construct a virtual representation of avatars performing similar types of activities, with similar types of content, providing the user with virtual studying companionship. For instance, a user's Passive activities, such as reading the “Biology 101, Chapter 3” textbook, can be translated into having one or more avatars in the “Student” role reading the same, or different, textbook or chapter. Because the act of reading is typically associated with a quiet environment, having an avatar reading while the user is reading as well, specifically when projected into a typical reading environment, such as a virtual library background for example, provides direct support to the user while reading. As another example, a user's Active activities, such as taking notes or asking a question, can be translated by the video production system118and the post-production system119into one or more avatars in the “Student” role performing similar activities. Because the act of taking a note, or asking a question into a chat, is typically associated with using a pen, keyboard, or other type of input device, the “Student” role avatars may go through similar activities when projected into their virtual background, such as a classroom or library. As yet another example, a user's Recall activities, such as taking a SAT examination or other forms of tests, can be translated by the video production system118and the post-production system119into having avatars in “Student” roles performing the same type of tests in virtual backgrounds that duplicate an official SAT test location or university auditorium. The avatar production system110may further comprise an environmental data capturing system115for capturing environmental, physiology, or motion data associated with a user. 
These data are collectively captured using one or more user devices140aconnected to the avatar production system110. Such devices140amay comprise, for example, smartphones, smartwatches, connected speakers, or fitness bands. The user's environmental data, if present, may be captured in real-time to detect and monitor environmental conditions during a studying session. The environmental data capturing system115may leverage a combination of connected sensors141to capture environmental data including, for example, ambient temperature, humidity level, ambient noise level, and local conversations. In some embodiments, environmental data may provide background information to the avatar production system110for updating the modelization of the avatars to provide notifications to the user, or alternatively to make these avatars appear to react to the environmental conditions of the user. For example, in response to detecting a substantial increase in ambient noise level, an avatar of the “Librarian” role may be streamed to the user to remind the user to reduce the noise level and study in a quiet environment. The user's motion data, if present, may be captured in real-time to detect certain types of body movements from the user, such as standing up, sitting, walking or stretching. This type of motion tracking information may be used by the avatar production system110to adjust the position of the avatars in the virtual background, to make these avatars appear to react to the motion of the user. The avatar production system110may model and create motions of avatars based on the user's preferences in the “activities synchronicity” category. The user's physiology data, if present, may be captured in real-time to detect certain types of physiology information about the user while studying, including, for example, heart rate, oxygen level, blood pressure and dehydration. Based on such data, the avatar production system may make avatars appear to react to the captured physiology data of the user. The data analyzer114may aggregate and analyze data associated with the user's studying activities received from the online education platform130and the environmental, physiology, and motion data associated with the user received from the user devices140a. Based on the analysis, the data analyzer114may detect one or more events associated with the user in real time. The detected events may be fed into the video production system118and the post-production system119to inform their selection of appropriate avatars, backgrounds, and special effects for use in generating a multimedia stream to send to the one or more display devices140b. The avatar production system110may comprise a video production system118for modeling activities of avatars and producing videos displaying activities of avatars. In some embodiments, the avatars' possible actions or reactions are computed ahead of time because there are only a limited and predictable number of activities applicable to all avatars by the video production system118and the post-production system119. These actions or reactions may be referred to as special effects of the avatars. For instance, every 2D/3D representation of an avatar may be programmed to follow several predetermined actions. The action that an avatar is modeled to carry out for a particular use case may be selected based on the captured data from the user's studying session, including learning, environmental, motion, or physiology data. 
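As a hedged sketch of how the data analyzer (114) might combine the two data feeds just described, the fragment below classifies a captured learning activity as Passive, Active, or Recall and checks sensor readings against thresholds to flag events; the thresholds, labels, and field names are assumptions chosen only to illustrate the flow.

```python
# Illustrative PAR classification plus threshold-based event flagging.

PASSIVE = {"reading_textbook", "watching_tutorial_video"}
ACTIVE = {"taking_notes", "asking_question"}
RECALL = {"taking_quiz", "taking_test"}

SENSOR_THRESHOLDS = {
    "ambient_noise_db": 70,   # sudden loud environment -> a Librarian may react
    "heart_rate_bpm": 110,    # elevated heart rate
    "ambient_temp_c": 30,     # room too hot
}


def classify_par(activity: str) -> str:
    if activity in PASSIVE:
        return "Passive"
    if activity in ACTIVE:
        return "Active"
    return "Recall" if activity in RECALL else "Unknown"


def detect_sensor_events(readings: dict) -> list:
    """Return the metrics whose readings exceed their assumed thresholds."""
    return [metric for metric, limit in SENSOR_THRESHOLDS.items()
            if readings.get(metric, 0) > limit]


print(classify_par("reading_textbook"))                                   # Passive
print(detect_sensor_events({"ambient_noise_db": 82, "heart_rate_bpm": 95}))
# ['ambient_noise_db'] -> e.g., stream a Librarian avatar asking for calm
```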
The pre-determined actions may be stored in a library of special effects122. The computed special effects of avatars may include “studying” special effects that may include, for example, “standing up,” “sitting down,” “stretching,” “looking up,” “opening a book,” “flipping pages of a book,” “writing a question.” Such special effects are motion-related activities that can be pre-calculated and applied to any avatar during the streaming of that avatar. Additional special effects may include, for example, “the pace of reading pages of a particular section of a textbook,” “the presence or absence of the writing of personal notes,” “the number of correct or incorrect answers provided to a quiz or test,” “the time remaining in the learning session,” “thumbs-up,” “thumbs-down,” “keep going for another 15 minutes,” “asking for calm,” “walking towards the student,” “staring at the student through the display,” “open a window because it is too hot here,” “drink a glass of water,” “take a deep breath,” or “asking the student to stop creating a disturbance.” The above actions may be stored as part of the special effects library122, which are modelized through the production of video-only avatars, a combination of video and audio, or a combination of video and text overlay content. The video production system118may select a special effect to apply based on a user's preferences and events detected by the data analyzer114based on data associated with the user's studying activities on the online education platform130and the environmental, physiology, and motion data captured by the user devices140aand shared with the avatar production system110. The video production system118may combine one or more selected avatars and one or more selected special effects to generate a multimedia stream displaying animation of the avatars using the special effects. The avatar production system110may comprise a post-production system119for post-processing avatars and special effects. The post-production system119may take the already processed and animated avatars and merge them into a selected virtual background, or scenes. The post-production system119may merge the animated avatars with the virtual background using any appropriate techniques, such as the techniques of green screens. The resulting content may comprise video-only content, a combination of video and audio, or a combination of video and text overlay content. The avatar production system110may comprise an audio/video packaging and streaming system120for encoding and packaging the multimedia content generated by the post-production system119. The encoded and packaged multimedia content may be streamed via a content distribution network to the display devices140b. FIG.2illustrates an example workflow200for providing personalized multimedia avatars for virtual studying companionship. The workflow200may be carried out by one or more of the components of the system100as shown inFIG.1. Depending on the implementation, the workflow200may include additional, fewer, or alternative steps performed in various orders or in parallel. The devices or systems performing certain steps as illustrated in the workflow200may be substituted by other suitable devices or systems to perform the same steps. The suitable devices or systems may comprise sub-systems, parent systems, or counterpart systems with similar functionalities. The workflow200may start at step201, where it may be determined that a user's studying session has started or is in session. 
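Before walking through the remaining workflow steps below, the post-production merge described above might be illustrated, in a deliberately toy form, by compositing an avatar "frame" over a background with a chroma-key-style rule (the background shows through wherever the avatar frame is marked transparent); real systems would operate on video frames, and all names here are assumptions.

```python
# Toy compositing of an avatar grid over a background grid; "." marks transparency.

TRANSPARENT = "."


def composite(avatar_frame, background_frame):
    """Overlay non-transparent avatar cells onto the background."""
    merged = []
    for avatar_row, bg_row in zip(avatar_frame, background_frame):
        merged.append("".join(
            bg if av == TRANSPARENT else av
            for av, bg in zip(avatar_row, bg_row)
        ))
    return merged


avatar = ["..A..",
          ".AAA.",
          "..A.."]
library_background = ["#####",
                      "#####",
                      "#####"]

for row in composite(avatar, library_background):
    print(row)
# ##A##
# #AAA#
# ##A##
```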
At step202, the avatar production system110may load or access a user profile associated with the user and the user's preferences regarding avatars. The user preferences may be loaded by the preference management system112. After loading or accessing the user profile and user preferences, the avatar production system110may load one or more default avatars (e.g., tutor role, student role, librarian role) for the user according to the user preferences at step212. Specifically, the avatar production system110may load information associated with one or more roles of the one or more loaded avatars, 2D/3D modelization of the one or more avatars, and a background. The loaded avatars and background may be specified by one or more user preferences. Alternatively, the loaded avatars or background may be selected based on system settings associated with the avatar production system110that may or may not be customized by the user. At step203, the avatar production system110may capture real-time data associated with the user, including data associated with the user's studying or learning activities received from the online education platform130and environmental, physiology, and motion data associated with the user received from user devices140a. In some embodiments, the avatar production system110may capture one or more current online activities of the user carried out on the online education platform130that is providing learning services to the user. The learning services may comprise, for example, educational content services comprising electronic textbooks, flash cards, or tutorial videos, online question-and-answer services, online testing services, other suitable online learning services, or any combination thereof. Accordingly, the current online activities of the user may indicate that the user is viewing flash cards, or watching a tutorial video, and so on. In some embodiments, the avatar production system110may capture or receive sensor data from one or more electronic devices140aof the user regarding the user's environment, the user's physiology conditions, and/or the user's motions. As an example, the avatar production system110may capture or receive the user's heart rate from the user's watch. At step204, the avatar production system110may analyze the user's online data and/or environmental, physiology, and motion data in light of the role attributes of avatars available to be created by the avatar production system110. The analysis may be carried out by the data analyzer114and may comprise analysis of one or more captured ongoing or real-time activities of the user on the online education platform130, the sensor data received from the user's electronic devices140a, or any combination thereof. The role attributes may be managed and maintained by the avatar role and attribute management system113. In some embodiments, the role attributes of the avatars may specify detected events that would trigger the generation of avatars of pre-determined roles performing pre-determined actions corresponding to the detected events. At step205, the avatar production system110may determine, based on the analysis of the user's online data, whether an event specified by the avatars' role attributes is detected. In some embodiments, the event may comprise activities of the user on the online education platform130. Such activities may comprise, for example, reading a textbook, taking notes, asking a question, taking an examination, other suitable online learning activities, or any combination thereof. 
In some embodiments, the event may comprise a change in environmental conditions. A change in environmental conditions may comprise a change in an ambient temperature, a change in a humidity level, a change in an ambient noise level, other suitable environmental events, or any combination thereof. In some embodiments, the event may comprise a movement of the user. The movement of the user recognized by the avatar production system110may comprise, for example, standing up, sitting, walking, yawning, stretching, other suitable motions, or any combination thereof. In some embodiments, the event may comprise a change in a physiology condition of the user. The change in the physiology condition of the user may comprise, for example, a change of heart rate, a change of body temperature, a change of oxygen level, a change of blood pressure, dehydration of the user, other suitable physiology conditions, or any combination thereof. In some embodiments, events may further be determined based on a period of time elapsed for a certain activity or the lack of an activity. Such events may comprise, for example, reading a textbook for a period of time, viewing a particular question for a period of time, a lack of movement for a period of time, other suitable time-based events, or any combination thereof. The detected event may also comprise a combination of any of the aforementioned example events or other suitable events associated with the user. In some embodiments, if it is determined that a new event has been detected at step205, the workflow200may proceed to step206. Otherwise, the workflow200may proceed to step213. At step213, the avatar production system110may create multimedia content associated with the avatars loaded at step212. The multimedia content may be created by the video production system118. The multimedia content may comprise video, a combination of video and audio, a combination of video and text overlay content, other suitable multimedia content, or any combination thereof. The avatar production system110may create the multimedia content based on one or more role attributes of the one or more avatars loaded at step212. The one or more attributes of the one or more avatars may comprise one or more pre-determined rules mapping avatars and special effects to events. They may specify one or more default special effects when no new event is detected. The default special effects may be synchronized with detected learning activities of the user on the online education platform130. In some embodiments, the avatar production system110may create the multimedia content by applying the special effects to the modelization of the one or more loaded avatars. Then, the workflow may proceed to perform steps214-216to provide the multimedia content to the user. At step214, the avatar production system110may post-produce the multimedia content by projecting the avatars and the special effects on a virtual background. This step may be performed by the post-production system119. Here, the virtual background used may be a default background according to system settings or user preferences. At step215, the avatar production system may encode and package the multimedia content into a multimedia stream. It may then stream the multimedia content to the display device140bof the user. Step215may be performed by the audio/video packaging and streaming system120. At step216, the multimedia stream of the avatars may be displayed by the display device140b. 
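The branch just described (step 205 into either step 206 or the default path of steps 213-216) might be sketched as follows; the default-effect mapping and helper names are assumptions, and the event-handling branch is elaborated in the paragraphs that follow.

```python
# Illustrative branch: no new event -> default special effects synchronized with
# the user's current learning activity; new event -> proceed to role correlation.

DEFAULT_EFFECTS_BY_ACTIVITY = {
    "reading": "avatar_turns_page",
    "taking_notes": "avatar_writes_notes",
    "taking_quiz": "avatar_answers_on_paper",
}


def step_205_to_213(new_event, current_activity, loaded_avatars):
    if new_event is not None:
        return ("proceed_to_step_206", new_event)        # correlate event with roles
    effect = DEFAULT_EFFECTS_BY_ACTIVITY.get(current_activity, "avatar_looks_around")
    content = {"avatars": loaded_avatars, "special_effect": effect}
    return ("stream_default_content", content)           # steps 213-216


print(step_205_to_213(None, "reading", ["student_avatar_01"]))
print(step_205_to_213("high_ambient_noise", "reading", ["student_avatar_01"]))
```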
At step206, the avatar production system110may correlate the newly detected event with one or more avatar roles based on role attributes of the one or more avatars. The role attributes of the avatars available for production by the avatar production system110may comprise a plurality of rules mapping the avatars to detected events. For example, for each avatar, the role attributes of the avatar may comprise a list of events in response to which the avatar may be displayed. At step206, the avatar production system110may select the avatars from a library comprising a plurality of avatars. For example, it may search the avatar database116to identify one or more avatars whose attributes comprise the new event detected at step205. In some embodiments, the avatar production system110may select, based on the detected event, one or more roles from a plurality of available roles of avatars. It may then select avatars that are of the selected roles based on user preferences. The available roles of avatars may comprise a student, a tutor, a librarian, other suitable roles, or any combination thereof. At step207, the avatar production system110may determine if a role change is needed. For example, the avatar production system110may compare the roles of one or more avatars identified at step206with the roles of avatars loaded at step212. If the roles are different, the avatar production system110may determine that a role change is needed. Otherwise, if the roles are the same, the avatar production system110may determine that a role change is not needed. In some embodiments, if it is determined that a role change is needed, the workflow200may proceed to step208. Otherwise, the workflow200may proceed to step213. At step213, the avatar production system110may create multimedia content associated with the avatars loaded at step212. The avatar production system110may create the multimedia content based on one or more role attributes of the one or more loaded avatars. Here, because a new event was detected, the avatar production system110may identify one or more special effects matched to the new event. The avatar production system110may select the one or more special effects from a library (e.g., the special effect library122) comprising a plurality of pre-rendered special effects each corresponding to at least one of the plurality of avatars. The one or more special effects may be determined based on one or more rules included as part of the role attributes that map special effects to detected events. The special effects may be customized and synchronized with one or more detected activities of the user based on data from the online education platform130or sensor data from one or more user devices140a. In some embodiments, the avatar production system110may select one or more actions by at least one of the one or more avatars based on a detected activity of the user. The one or more actions of the one or more avatars may be the same as or similar to the detected activity of the user. For example, the avatar production system110may determine that the user is taking an examination. It may accordingly generate one or more avatars also taking an examination to provide the appearance that the user is taking an examination with a number of other students. In some embodiments, the avatar production system110may further determine one or more points in time associated with the one or more selected actions. 
The avatar production system110may then generate one or more special effects representing the at least one of the one or more avatars performing the selected one or more actions at the one or more points in time, respectively. For example, an avatar representing a studying student may be animated such that it stands up and walks around every thirty minutes, which may provide a reminder to the user to take a break from studying. In some embodiments, the avatar production system110may determine a duration of the detected activity of the user and trigger a special effect when the duration of the detected activity reaches a threshold. For example, the avatar production system110may determine that a student has not moved for more than an hour and trigger a special effect of an avatar drinking a glass of water to remind the user to get hydrated. In some embodiments, the avatar production system110may identify content accessed by the user on the online education platform and customize the one or more special effects based on the content accessed by the user. For example, the avatar production system110may identify that the user is reading the e-textbook Biology 101 on the online education platform130. The avatar production system110may create an avatar in the student role also reading Biology 101, thereby giving the user the appearance of studying the same subject with a classmate. The avatar production system110may create the multimedia content by applying the identified special effects on the modelization of the avatars. Then, the workflow may proceed to perform steps214-216to provide the multimedia content to the user. At step214, the avatar production system110may post-produce the multimedia content by projecting the avatars and the special effects on a virtual background. This step may be performed by the post-production system119. The virtual background may be selected based on one or more rules mapping backgrounds to events. The avatar production system110may identify a background that matches the detected event and map the avatars and special effects on the identified background. At step215, the avatar production system may encode and package the multimedia content into a multimedia stream. It may then stream the multimedia content to the display device140bof the user. Step215may be performed by the audio/video packaging and streaming system120. At step216, the multimedia stream of the avatars may be displayed by the display device140b. At step208, in response to determining that a role change is needed, the avatar production system110may switch the currently displayed avatars to avatars having new roles and corresponding role attributes. The workflow200may then proceed to the steps212-216. The steps212-216are performed in a way essentially the same as described above. The avatar production system110may load one or more new avatars and their role attributes and modelization, along with a virtual background selected based on the newly detected event. The avatar production system110may apply special effects selected based on the newly detected event to the avatars and project the avatars on the virtual background to create multimedia content, encode and package the multimedia content, and stream the multimedia content to the display device140bfor display to the user. At step209, the avatar production system110may determine whether a current studying session of the user has ended. The determination may be performed based on information received from the online education platform130. 
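Three of the customizations described above (a timed action at fixed points in the session, a trigger when a detected user state lasts past a threshold, and mirroring of the content the user is reading) are sketched below under assumed names and thresholds; the step-209 session-end determination introduced above is discussed further in the following paragraph.

```python
# Illustrative time-based and content-based special-effect customization.

NO_MOVEMENT_THRESHOLD_MIN = 60
STAND_UP_INTERVAL_MIN = 30


def timed_effect(elapsed_minutes: int):
    """Have the avatar stand up and walk around at fixed points in time."""
    if elapsed_minutes > 0 and elapsed_minutes % STAND_UP_INTERVAL_MIN == 0:
        return "avatar_stands_up_and_walks"      # reminder to take a break
    return None


def duration_triggered_effect(minutes_without_movement: int):
    """Remind the user to hydrate after a long motionless stretch."""
    if minutes_without_movement >= NO_MOVEMENT_THRESHOLD_MIN:
        return "avatar_drinks_glass_of_water"
    return None


def mirror_content_effect(user_content: str):
    """Show a student avatar reading the same content as the user."""
    return f"student_avatar_reading<{user_content}>"


print(timed_effect(30))                       # avatar_stands_up_and_walks
print(duration_triggered_effect(75))          # avatar_drinks_glass_of_water
print(mirror_content_effect("Biology 101"))   # student_avatar_reading<Biology 101>
```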
The determination may be performed periodically during a studying session. The frequency of the determination may be set by the avatar production system110or be controlled by one or more user preferences. Alternatively, the determination may be triggered by certain conditions. For example, when an event is detected, the avatar production system110may analyze data related to the user's studying activities and environmental conditions to determine if the event has ended. When a current event has ended, the avatar production system may analyze if the studying session has ended. If not, the avatar production system110further analyzes the data to determine the next piece of multimedia content to stream to the user given that the current event has ended. If it is determined at step209that the current studying session has not ended, the workflow may return to step203. The avatar production system110may repeat the steps203-209to determine if new events occur and if the currently streamed avatars, special effects, and background need to be changed. If so, the avatar production system110may perform some or all of steps212-216to implement the changes. As shown inFIG.2, the avatar production system110may perform one or more loops of at least part of the steps shown during a studying session. This looped process allows the avatar production system110to adjust the multimedia content streamed to the user based on real-time data related to the user's activities. Such a technique may provide the user the impression of a real-world scenario where the environment and other people's behavior are responsive to the user's own behavior. It may also facilitate providing the avatars and special effects most appropriate for the user's current studying needs. If it is determined at step209that the current studying session has ended, the workflow may proceed to step210, where the avatar production system110may end avatar streaming. For example, instructions may be sent to the audio/video packaging and streaming system120such that the audio/video packaging and streaming system120stops performing step215, thus terminating the streaming of the multimedia content to the display device140b. Step211then marks the end of the study session. FIG.3illustrates an example timeline300for providing a user experience with changing multimedia avatars responsive to detected user activities. The horizontal axis ofFIG.3illustrates a time period from the beginning to the end of a user's studying session. As shown by the time arrow301, multimedia content showing avatars may be streamed to display devices of a user throughout the studying session. Alternatively, the multimedia content may also be streamed to display devices of the user for part of the time period of a study session. During the studying session, an avatar production system (e.g., the avatar production system110) may constantly or periodically gather data associated with the user's online PAR (Passive, Active, Recall) learning activities302and data associated with the user's environmental, physiology, and motion activities303. The avatar production system may dynamically stream multimedia content showing avatars customized to the user and synchronized with the user's activities to one or more display devices of the user. The studying session may start at a time point311. Before any event is detected, the avatar production system may stream one or more default avatars to the user. The multimedia stream may be created based on one or more preferences of the user. 
For example, in this case, the avatar production system may stream the multimedia stream321showing two avatars having the role “Student” studying. While streaming the default avatars to the user, the avatar production system may continue to analyze data associated with user activities and determine if an event is detected. At a time point312, the avatar production system may detect and flag a recognized event about the user's learning activities. For example, the flagged event may include that the user has been studying a chapter of a textbook for a period of time longer than an expected period of time based on historical data of other students studying the same chapter. As another example, the flagged event may include that a student has achieved a below-average correct rate for a quiz. In response to flagging the event, the avatar production system may create multimedia content322comprising one or more avatars, one or more special effects, and a virtual background for streaming to the user. For example, in recognizing that the user may have encountered a difficulty in the current learning task, the avatar production system may display an avatar with the role “Tutor” in the multimedia stream. The tutor avatar may be displayed in addition to (as illustrated inFIG.3) or as an alternative to the two student avatars. This may signal to the user that it may be time to seek help using tools of the online education platform. At a time point313, the avatar production system may determine that the flagged event has ended and proceed to unflag the event. For example, the avatar production system may determine that the user has moved on to the next problem. At this time point313, the avatar production system may resume streaming of the default multimedia content323(here, as illustrated, two student avatars) to the user. At a time point314, the avatar production system may detect and flag a new event about the user's environment. For example, the flagged event may include that there is an abnormally high level of ambient noise in the user's environment. In response to flagging this event, the avatar production system may create multimedia content324for streaming to the user. For example, in recognizing the level of ambient noise in the user's environment, the avatar production system may display an avatar with the role “Librarian” in the multimedia stream. The avatar may be animated to walk toward the screen or the user. This avatar may give the user the appearance that the user is working in a quiet environment and that the user should reduce the noise in the environment to concentrate on studying. As illustrated inFIG.3, the librarian avatar is saying something to the student (e.g., telling the user to reduce noise levels), in addition to the two student avatars being present. At a time point315, the avatar production system may determine that the flagged event has ended and proceed to unflag the event. For example, the avatar production system may determine that the ambient noise in the user's environment has been reduced to a normal level. At this time point315, the avatar production system may resume streaming of the default multimedia content325(here, two student avatars) to the user. This content may be streamed until the end of the study session, marked by the time point316. FIGS.4A-4Fillustrate additional example avatars streamed for display to a user. 
In some embodiments, an avatar production system may determine the quantity of avatars to stream to a user depending on detected events and the user's preferences. For example, the avatar production system may create one student avatar to provide companionship for a user that is reading a book quietly. The avatar production system may create a classroom full of student avatars to provide the user an appearance of taking an examination along with a number of other students.FIG.4Aillustrates a multimedia stream having one student avatar410.FIG.4Billustrates a multimedia stream having a plurality of student avatars420. In some embodiments, the avatar production system may select a virtual background and project one or more avatars in the virtual background in rendering the multimedia stream. For example,FIG.4Cillustrates a student avatar430being projected in a virtual background435representing a classroom. Other examples of virtual backgrounds may include cafés, libraries, study halls, etc.FIG.4Dillustrates a student avatar440being projected in a virtual background445representing a library. In some embodiments, the avatar production system may create a special effect of an avatar interacting with educational content. The educational content interacted with by the avatar may be chosen based on the content studied by the user on the online education system. For example,FIG.4Eillustrates a student avatar450reading a “Biology 101” textbook455. This textbook may be chosen because the user is studying the same textbook or a related one. In some embodiments, the multimedia content may comprise avatars animated to perform activities or movements related to detected events. For example,FIG.4Fillustrates an avatar460stretching. Such a special effect may be displayed to a user after it is determined that the user has not moved for an extended period of time. Such a special effect may remind the user to take a break and relax a bit after intensive studying. FIG.5illustrates an example method500for providing personalized avatars for virtual companionship. The method500may be performed by a device, apparatus, or system illustrated inFIG.1or6, such as one or more components of the avatar production system110. Depending on the implementation, the method500may include additional, fewer, or alternative steps performed in various orders or in parallel. Block510includes capturing one or more current online activities of a user of an online education platform providing learning services to the user. The current online activities of a user may include the user's online activities in a current studying session and/or the user's online activities that occur as a multimedia stream is being presented to the user. While the user's current online activities may or may not be captured instantaneously as they occur, such current online activities exclude the user's online activities that occurred in previous studying sessions or that have been captured and stored as historical data. In some embodiments, the learning services may comprise educational content services comprising electronic textbooks, flash cards, or tutorial videos; online question-and-answer services; or online testing services. Block520includes receiving sensor data from one or more electronic devices of the user. Block530includes detecting an event by analyzing a combination of the one or more captured online activities of the user and the received sensor data. 
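In the spirit of FIGS. 4A-4D above, the choice of how many avatars to render and which virtual background to use could be sketched as follows; the counts, scene names, and the mapping itself are illustrative assumptions rather than the disclosed rules.

```python
# Toy planning of avatar quantity and virtual background from a detected activity.

def plan_scene(detected_activity: str, max_avatars: int = 20):
    if detected_activity == "taking_exam":
        return {"avatar_count": max_avatars, "background": "exam_classroom"}
    if detected_activity == "reading":
        return {"avatar_count": 1, "background": "library"}
    return {"avatar_count": 2, "background": "study_hall"}


print(plan_scene("reading"))      # {'avatar_count': 1, 'background': 'library'}
print(plan_scene("taking_exam"))  # {'avatar_count': 20, 'background': 'exam_classroom'}
```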
In some embodiments, the event may comprise a change in environmental conditions, the change in environmental conditions comprising: a change in an ambient temperature; a change in a humidity level; or a change in an ambient noise level. In some embodiments, the event may comprise a movement of the user, the movement of the user comprising: standing up; sitting; walking; yawning; or stretching. In some embodiments, the event may comprise a change in a physiological condition of the user, the change in the physiological condition of the user comprising: a change of heart rate; a change of body temperature; a change of oxygen level; a change of blood pressure; or dehydration. Block540includes determining one or more avatars and one or more special effects associated with the one or more avatars based on the detected event and one or more pre-determined rules mapping avatars and special effects to events. In some embodiments, the determining one or more avatars may comprise selecting, based on the detected event, one or more roles from a plurality of available roles of avatars, wherein each of the determined one or more avatars is of a role among the one or more selected roles. In some embodiments, the available roles of avatars comprise one or more of: a student; a tutor; or a librarian. In some embodiments, the determining one or more avatars may comprise: determining, based on the detected event, a quantity of avatars to present, wherein the determined one or more avatars consist of one or more avatars of the determined quantity. In some embodiments, the determining one or more avatars and one or more special effects may comprise: selecting the one or more avatars from a library comprising a plurality of avatars; and selecting the one or more special effects from a library comprising a plurality of pre-rendered special effects each corresponding to at least one of the plurality of avatars. Block550includes generating multimedia content comprising the one or more avatars and the one or more special effects. In some embodiments, the multimedia content may comprise: video; a combination of video and audio; or a combination of video and text overlay content. In some embodiments, the generating the multimedia content may comprise: detecting an activity of the user based on the one or more captured online activities and the received sensor data; customizing the one or more special effects based on the detected activity; and generating the multimedia content based on the one or more customized special effects. In some embodiments, the customizing the one or more special effects based on the detected activity may comprise: selecting one or more actions by at least one of the one or more avatars based on the detected activity of the user, wherein the one or more actions are the same as or similar to the detected activity; determining one or more points in time associated with the one or more selected actions; and generating one or more special effects representing the at least one of the one or more avatars performing the selected one or more actions at the one or more points in time, respectively. In some embodiments, the customizing the one or more special effects based on the detected activity may comprise: determining a duration of the detected activity; and triggering a special effect when the duration of the detected activity reaches a threshold. 
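The duration-based trigger described above may be pictured, for illustration only, with the following minimal Python sketch; the class name, the threshold value, and the use of a monotonic clock are assumptions.

    # Minimal sketch of triggering a special effect once the duration of a detected
    # activity reaches a threshold (e.g., an avatar stretching after the user has
    # not moved for an extended period). Names and values are assumptions.
    import time

    class DurationTrigger:
        def __init__(self, threshold_seconds):
            self.threshold = threshold_seconds
            self.started_at = None

        def update(self, activity_detected, now=None):
            """Return True once the detected activity has lasted at least the threshold."""
            now = time.monotonic() if now is None else now
            if not activity_detected:
                self.started_at = None      # activity ended; reset the timer
                return False
            if self.started_at is None:
                self.started_at = now
            return (now - self.started_at) >= self.threshold

    trigger = DurationTrigger(threshold_seconds=1800)     # e.g., 30 minutes without moving
    print(trigger.update(True, now=0.0))      # False: the timer has just started
    print(trigger.update(True, now=1800.0))   # True: stream the "stretching" special effect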
In some embodiments, the generating the multimedia content may comprise: determining a virtual background based on the detected event; and projecting the one or more avatars on the virtual background. In some embodiments, the generating the multimedia content may comprise: identifying content accessed by the user on the online education platform; customizing the one or more special effects based on the content accessed by the user; and generating the multimedia content based on the one or more customized special effects. In some embodiments, the customizing the one or more special effects based on the content accessed by the user comprises: selecting content to be interacted with by at least one of the one or more avatars based on the content accessed by the user; and generating a special effect representing the at least one of the one or more avatars interacting with the selected content, wherein selected content comprises the content accessed by the user, content related to the content accessed by the user, or random content. Block560includes streaming the generated multimedia content to a multimedia display device of the user, wherein the streamed multimedia content is synchronized with the one or more online activities of the user. The streamed multimedia content is synchronized with the one or more online activities of the user in that the multimedia content is streamed to the user for at least part of a studying session during which the user performs the online activities and that avatars, special effects, or virtual backgrounds in the multimedia content are dynamically adjusted or updated based on the user's online activities as the multimedia content is streamed to the user. In some embodiments, the method500may further comprise determining that the detected event has ended; and streaming default multimedia content to the multimedia display device of the user. In some embodiments, the method500may further comprise, prior to capturing one or more online activities of a user on an online education platform, for one of the plurality of avatars: rendering a two-dimensional or three-dimensional representation of the avatar; pre-rendering one or more special effects associated with the avatar by animating the representation of the avatar to follow one or more actions; and storing the one or more special effects in the library comprising the plurality of pre-computed special effects. The steps of method500may be repeated a plurality of times during a studying session. After performing the step in block560, the method may return to the step in510and repeat the method500. Execution of the method500may be terminated after it is determined that a studying session of a user has ended. FIG.6illustrates a block diagram of a computer system600in which any of the embodiments described herein may be implemented. For example, the computer system600may be used to implement at least part of one or more computing devices associated with the online education platform130, one or more computing devices associated with the avatar production system110, one or more display devices140b, and one or more user devices140aas shown inFIG.1. The computer system600may further execute the methods, workflows, and processes disclosed herein. Illustrated are at least one processor602coupled to a chipset604. The chipset604includes a memory controller hub620and an input/output (I/O) controller hub622. 
A memory606and a graphics adapter612are coupled to the memory controller hub620, and a display device618is coupled to the graphics adapter612. A storage device608, keyboard610, pointing device614, and network adapter616are coupled to the I/O controller hub622. Other embodiments of the computer600have different architectures. For example, the memory606is directly coupled to the processor602in some embodiments. The storage device608is a non-transitory computer-readable storage medium such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory606holds instructions and data used by the processor602. The pointing device614is a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard610to input data into the computer600. The graphics adapter612displays images and other information on the display device618. The network adapter616couples the computer600to a network. Some embodiments of the computer600have different and/or other components than those shown inFIG.6. The types of computer600can vary depending upon the embodiment and the desired processing power. Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computer systems or computer processors comprising computer hardware. The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this specification. In addition, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The examples of blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed embodiments. The examples of systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed embodiments. The various operations of methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented engines that operate to perform one or more operations or functions described herein. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented engines. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some embodiments, the processors or processor-implemented engines may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other embodiments, the processors or processor-implemented engines may be distributed across a number of geographic locations. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and the figures are not intended to require that the operations be performed in the order illustrated. Structures and functionality presented as separate components in configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Although an overview of the subject matter has been described with reference to specific embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the specification. The Detailed Description should not be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled. Furthermore, related terms (such as “first,” “second,” “third,” etc.) used herein do not denote any order, height, or importance, but rather are used to distinguish one element from another element. Furthermore, the terms “a,” “an,” and “plurality” do not denote a limitation of quantity herein, but rather denote the presence of at least one of the articles mentioned.
68,626
11861777
DESCRIPTION OF THE SPECIFIC EMBODIMENTS Although the following detailed description contains many specific details for the purposes of illustration, anyone of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the invention. Accordingly, the exemplary embodiments of the invention described below are set forth without any loss of generality to, and without imposing limitations upon, the claimed invention. Introduction Although motion capture works quite well with human performers, it is somewhat more difficult with animals, especially wild animals. Specifically, there are challenges to fitting animals with motion capture markers, and animals are less likely to act naturally with motion capture markers affixed to them. It would be much more advantageous to animate directly from video of animals, particularly wild animals. The frames of such video can be analyzed, e.g., with image analysis software, to determine the pose of the animal at each frame. According to aspects of the present disclosure, animation of characters such as animals may be derived from video frames. Specifically, segmentation masks of an animal can be generated from video frames of the animal and from a 3D model of an animal. The more the poses of a real animal and the 3D animal model differ, the more their segmentation masks differ. A quantitative representation of the difference may be intersection over union, for example. As is generally understood, Intersection over Union is an evaluation metric used to measure the accuracy of an object detector on a particular dataset. Intersection over Union is used to evaluate the performance of object detectors and Convolutional Neural Network detectors (R-CNN, Faster R-CNN, YOLO, etc.) independent of the algorithm used to generate the predictions. Any algorithm that provides predicted bounding boxes or segmentation masks for an object or character in an image as output can be evaluated using Intersection over Union (IoU). Applying Intersection over Union to evaluate an (arbitrary) object detector typically requires (1) ground-truth bounding boxes (e.g., hand labeled bounding boxes from a testing set that specify where in the image the object is) and (2) the predicted bounding boxes from a model. With these two sets of bounding boxes, Intersection over Union (IoU) can be determined as IoU=Area of Overlap/Area of Union. The closer this value is to 1, the better the prediction. As shown inFIG.1A, to generate a computer animation frame (target frame) of a source character SC from an input video frame (referred to herein as a source frame), an animation program generates a segmentation mask of a character C in the video image. Image segmentation creates a pixel-wise source mask102for the character in the video image. The animation program uses the source mask102to model the source character SC and generate a corresponding current animation frame with a corresponding current character CC in some predicted initial pose. A current segmentation mask104is then generated from the current animation frame. The computer animation may model the current character CC using three-dimensional data representing the locations and orientations of the current character's joints and extremities. The combination of locations and orientations of the character's joints and extremities is often referred to as the character's pose. 
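By way of illustration only, the IoU measure defined above (IoU=Area of Overlap/Area of Union) may be computed for two binary segmentation masks roughly as in the following Python sketch; the function name and the toy masks are assumptions.

    # Minimal sketch of Intersection over Union for two binary segmentation masks.
    import numpy as np

    def mask_iou(mask_a, mask_b):
        """IoU = area of overlap / area of union for two boolean masks of the same shape."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        union = np.logical_or(a, b).sum()
        if union == 0:
            return 0.0
        return float(np.logical_and(a, b).sum() / union)

    source = np.zeros((4, 4), dtype=bool); source[1:3, 1:3] = True
    current = np.zeros((4, 4), dtype=bool); current[1:3, 2:4] = True
    print(mask_iou(source, current))   # 0.333...; the closer to 1, the better the match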
The current pose may be optimized by iteratively comparing the current segmentation mask to the source segmentation mask102(e.g., by computing IoU) and adjusting the pose to generate an updated current animation frame and current segmentation mask. To determine the correct pose for the character C, a target segmentation mask106is generated for a corresponding target character TC in a known pose. The target segmentation mask106may be generated from a corresponding three-dimensional target data set representing the locations and orientations of the target character's joints and extremities when the character is in a known pose. The correct pose can be determined by determining the IoU between the current segmentation mask104and one or more different target segmentation masks106. According to some implementations, the current segmentation mask and/or one or more of the target segmentation masks may be edge masks that show only the outline of the pose of the character. A benefit of using edge masks for the segmentation masks is that the edge mask may provide a more accurate pose match. Use of edge masks as the segmentation masks may avoid cases where the virtual character is farther away and thus of a different scale. In such cases the virtual character may fit inside of the target segmentation mask and be obscured by the target mask. The foregoing process may be repeated for subsequent frames. In addition, physics-based simulations may simulate interactions between the character and its surroundings to evaluate the viability of a given candidate pose determined from the current segmentation mask104and/or the target segmentation masks106. Examples of non-viable poses include, e.g., poses that would result in the character falling. This pose viability evaluation process may be iteratively repeated prior to generating target segmentation masks106so that segmentation mask generation is limited to viable poses. A segmentation mask is a 2D projection of all body points onto an image plane. Because of this, it does not carry complete information about the original 3D pose. Consequently, there may be ambiguities in the pose of an object in a monocular image.FIG.1Billustrates an example of such an ambiguity. InFIG.1B, an image of a wolf W from a frame of video has been analyzed to generate a segmented image100as an input for computer animation. InFIG.1B, there is an ambiguity as to which of the wolf's front legs F1, F2, or hind legs H1, H2is closest to the camera. In order to better match poses in 3D, a few techniques can be used. Disambiguation of Poses According to aspects of the present disclosure, a computer animation method may use target segmentation masks for multiple camera views of a character to resolve ambiguities in pose. This can be done by minimizing differences between a current segmentation mask and different target segmentation masks for different poses to get the correct pose, e.g., by computing Intersection over Union. FIG.2AandFIG.2Bdepict a possible implementation for the computer animation method according to aspects of the present disclosure. As shown inFIG.2A, video frames201are analyzed to generate corresponding current segmentation masks203for two different contemporaneous views of a character CC from the video frames201. By way of example and not by way of limitation, the two video frames201showing the different contemporaneous views of the character CC may be generated using two different synchronized cameras. 
As used herein, the term “contemporaneous views” generally means that the views are obtained at approximately the same time, e.g., within one or two frame increments of each other for standard video frame rates. In some implementations, it may be possible to obtain two different images at different angles using a single camera that views the character CC via two or more angled mirrors. In such an implementation, two or more different contemporaneous images and corresponding segmentation masks could be derived from different portions of the same video frame that correspond to the different images. Corresponding target segmentation masks205may be generated by first generating three-dimensional animation data203from the source video frames201and using the animation data to generate the target segmentation masks205. By way of example, and not by way of limitation, the different views of the current character CC may include views oriented at +45° and −45° relative to a reference plane, e.g., an image plane of a virtual camera used to generate the target segmentation masks205. Likewise, the source masks207may be generated from simultaneous frames of video of the character CC taken with two cameras oriented at +45° and −45° relative to a corresponding reference plane. In the implementation shown inFIG.2B, the target segmentation masks205may be generated from the animation data203as follows. As indicated at202, the input frames201are analyzed by a computer animation program to generate the animation data203. The animation data203corresponds to a three-dimensional model TC of the character CC from the video frames201in a target pose. The animation program generates the target segmentation masks205through a process that involves projecting different views of the model TC from virtual cameras VC1, VC2. Orientations of the virtual cameras may correspond to orientations of real cameras that generated the video frames201. Source segmentation masks207are also generated from the input video frames201, as indicated at204. In some implementations, the source segmentation masks207may optionally be used in the process of generating or refining the animation data203. To determine whether the pose of the three-dimensional model TC corresponds to the pose of the character CC in the video frames201, the target segmentation masks205are compared to the corresponding source segmentation masks, as indicated at206. The results of the comparisons are then analyzed, as indicated at208. By way of example, and not by way of limitation, at206the IoU for each target/source mask comparison may be computed. Then, at208, the results of each of the IoU computations may be compared to some threshold to determine whether the pose of the model TC corresponds to the pose of the character CC. Depending on the results of the analysis at208, the animation data203may then be adjusted to adjust the pose of the model TC at202. New target masks may be generated at204and compared to the source masks at206. Adjusting the animation data may include, but is not limited to, adjusting one or more joint angles of the model TC, rotating the orientation of the virtual cameras VC1, VC2with respect to the reference plane, or some combination of joint angle adjustment and camera orientation adjustment. This process may iterate until the result of the analysis indicates a match between the pose of the model TC and the character CC in the video frames201. 
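A highly simplified Python sketch of such an iterative loop follows; render_mask and propose_adjustment are hypothetical stand-ins for the animation program's mask rendering and pose-adjustment steps, and the IoU threshold is an assumption.

    # Illustration-only sketch of iteratively adjusting a candidate pose until its
    # rendered masks match the source masks from every camera view.
    import numpy as np

    def mask_iou(a, b):
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        union = np.logical_or(a, b).sum()
        return float(np.logical_and(a, b).sum() / union) if union else 0.0

    def fit_pose(pose, source_masks, render_mask, propose_adjustment,
                 iou_threshold=0.9, max_iterations=100):
        """source_masks holds one boolean mask per view (e.g., +45 and -45 degrees)."""
        ious = []
        for _ in range(max_iterations):
            ious = [mask_iou(render_mask(pose, view), src)
                    for view, src in enumerate(source_masks)]
            if min(ious) >= iou_threshold:         # every view matches well enough
                break
            pose = propose_adjustment(pose, ious)  # e.g., tweak joint angles or camera orientation
        return pose, ious

    # Trivial demonstration with stand-in callables (real use would render the 3D model TC).
    demo_masks = [np.ones((2, 2), bool), np.ones((2, 2), bool)]
    _, ious = fit_pose(pose=0, source_masks=demo_masks,
                       render_mask=lambda p, view: np.ones((2, 2), bool),
                       propose_adjustment=lambda p, scores: p)
    print(ious)   # [1.0, 1.0]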
Once a match is obtained, final pose data209may then be used to generate an animation frame211, as indicated at210. By way of example, and not by way of limitation, the different views of the current character CC may include views oriented at +45° and −45° relative to a reference plane, e.g., an image plane of a virtual camera used to generate the current source mask205. As indicated at214, the current segmentation masks207may then be compared to each of the target segmentation masks213,215to determine final pose data217for the current character CC corresponding to a correct pose of the source character in the video frame201. By way of example, and not by way of limitation, comparing the current masks205to the target masks209,211may include computing an Intersection over Union (IoU) between each of the target segmentation masks213,215and the current segmentation mask207. The IoU values may be compared to a threshold, and the correct pose may be determined from the current masks, e.g., the current mask that has IoU values for each target mask that at least meet the threshold. In the event that none of the multiple current masks207meet the threshold, an error state may be determined and the target masks may be adjusted to correct the problem. For example, if neither IoU value is above an IoU threshold or the difference between the two IoU values is below a difference threshold, target data211may be adjusted to change the pose of the target character TC to a different pose and new target segmentation masks213,215may be generated as indicated at212. If the IoU values are above the threshold for a certain pose but not others, the animation program may generate final pose data217corresponding to the certain pose. The animation program may then use the final pose data217to generate a final frame of animation219depicting the current character CC in the correct pose, as indicated at216. The foregoing process may then be repeated for the next video frame, as indicated at218. Although the foregoing example uses two different views of the model TC and the character CC to generate two target segmentation masks and two corresponding source masks, three or more different views may be used to generate three or more corresponding different target and source segmentation masks. Furthermore, in alternative implementations, the target segmentation masks may be generated from two or more contemporaneous video frames of a target character from two or more corresponding different angles obtained using two or more different cameras. Pose Disambiguation Apparatus FIG.3depicts an apparatus for computer animation involving pose disambiguation as described, for example, with respect toFIG.2AandFIG.2B. The apparatus may include a computing device300coupled to a user input device302. The user input device302may be a controller, touch screen, microphone, keyboard, mouse, joystick or other device that allows the user to input information including sound data into the system. The user input device may be coupled to or include a haptic feedback device, e.g., a vibration motor, force feedback system, ultrasonic feedback system, or air pressure feedback system. Additionally, the system may include a controller301for a movable joint. For example and without limitation, the controller may control a motor or actuator for a joint on a robot in implementations involving physics-based animation for control of a physical robot. 
The computing device300may include one or more processor units303, which may be configured according to well-known architectures, such as, e.g., single-core, dual-core, quad-core, multi-core, processor-coprocessor, cell processor, and the like. The computing device may also include one or more memory units304(e.g., random access memory (RAM), dynamic random access memory (DRAM), read-only memory (ROM), and the like). The processor unit303may execute one or more programs317, portions of which may be stored in the memory304and the processor303may be operatively coupled to the memory, e.g., by accessing the memory via a data bus305. The programs317may also be stored in a mass storage315, such as a disk drive, CD-ROM drive, tape drive, flash memory, or the like. The programs may implement instructions that cause the processor unit to carry out an animation method, such as that described above with respect toFIG.2AandFIG.2B. The programs may additionally include machine learning algorithms configured to adjust the weights and transition values of Neural Networks (NNs)314for implementations involving NNs in a physics-based animation input control scheme as discussed elsewhere herein. Additionally, the Memory304may store video frame data308and animation data309that may be used to generate source segmentation masks310and target segmentation masks312, respectively, as described hereinabove. The video frame data308, animation data309, and segmentation masks310,312may also be stored as data318in the Mass Store315. The processor unit303is further configured to execute one or more programs317stored in the mass store315or in memory304which cause processor to carry out the one or more of the methods described above. The computing device300may also include well-known support circuits306, such as input/output (I/O) circuits307, power supplies (P/S)321, a clock (CLK)322, and cache323, which may communicate with other components of the system, e.g., via the bus305. The computing device300may include a network interface332to facilitate communication via an electronic communications network330. The network interface332may be configured to implement wired or wireless communication over local area networks and wide area networks such as the Internet. The computing device300may send and receive data and/or requests for files via one or more message packets over the network320. Message packets sent over the network320may temporarily be stored in a buffer in memory304. The animation frames308, video frames309and segmentation masks311,312,313may be obtained from remote computing or storage devices via the network330and stored partially in the memory304and/or mass storage device315for use by the computing device300. The processor unit303and network interface332may be configured to implement a local area network (LAN) or personal area network (PAN), via a suitable network protocol, e.g., Bluetooth, for a PAN. The computing device may also include a user interface316to facilitate interaction between the system and a user. The user interface may include a monitor, television screen, speakers, headphones or other devices that communicate information to the user. Monocular Pose Prediction According to alternative aspects of the present disclosure, consecutive animation frames can be analyzed as a single problem instead of analyzing each individual video frame independently. In such implementations, pose candidates are constructed for the very first animation frame. 
Each of the pose candidates has the same segmentation mask, but in 3D space the candidate poses for the model TC are distributed as far as possible from each other. Subsequently, a real-life actor-critic trained neural network (NN) analyzes the candidate poses. There are different methods for evaluating the candidate poses. These methods can be combined together in various implementations, which are discussed below. FIG.4Adepicts an example of a generalized method for monocular pose prediction in computer animation according to aspects of the present disclosure. The method may begin with an input video sequence of frames401. The input video frames may be obtained from a live feed or from archival footage. Any suitable type of video frame that shows a character may be used. Preferably, the input video frame sequence401is in the form of frames of digital video. Alternatively, a non-digital video frame or motion picture frame may be digitized to provide the input video frame sequence401. An animation program may generate a corresponding sequence of segmentation masks403of a character in each frame of the input video frame sequence401, as indicated at402. The segmentation mask403may be an edge mask. It is noted that, in some implementations, the animation program may receive the segmentation masks403from some external source, in which case generation of the segmentation mask is not necessary. The animation program may generate a three-dimensional animation model405, as indicated at404. The animation model405includes three-dimensional data representing joints and extremities of an animation character that corresponds to the character in the segmentation mask403. As indicated at406, the computer animation program then generates pose sequence data407corresponding to possible candidate pose sequences, each sequence containing two or more poses of the character represented by the animation model405at different time steps corresponding to consecutive frames of the video sequence. Each pose in each candidate pose sequence is generated in such a way that it has a segmentation mask that matches the segmentation mask of a corresponding frame in the video sequence401. By way of example, and not by way of limitation, the segmentation masks for different possible candidate poses may be edge masks. Due to the above-mentioned issue of pose ambiguity, it is desirable that the candidate poses are generated in such a way that a distance between candidate poses at each time step is maximized. By way of example, and not by way of limitation, each candidate pose may be represented as an N-dimensional vector of N joint angles in the three-dimensional model405, and the distance between poses may be calculated with an N-dimensional distance formula. The animation program determines an optimum pose sequence of the plurality of candidate pose sequences, as indicated at408. The animation program uses the resulting optimum pose sequence data409at410to generate an animation frame411. The animation program may then repeat the foregoing process for another input video frame, as indicated at412. As noted above, there are different ways of generating the pose sequences at406and determining the optimum pose sequence at408. According to one implementation, pairs of pose candidates from two or more consecutive animation frames in the video sequence401may be used as an input to a neural network that performs the pose optimization408. As part of the pose optimization, a value network (critic) may test the pose candidate sequences. 
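For illustration only, the joint-angle distance between candidate poses mentioned above, and the idea of keeping candidates far apart in pose space, might be sketched as follows; the greedy selection step and all names are assumptions rather than the disclosed algorithm.

    # Treat each candidate pose as a vector of N joint angles, measure how far apart
    # candidates are, and greedily keep a set of mutually distant candidates.
    import numpy as np

    def pose_distance(pose_a, pose_b):
        """Average absolute joint-angle difference (e.g., in radians) between two poses."""
        a, b = np.asarray(pose_a, float), np.asarray(pose_b, float)
        return float(np.mean(np.abs(a - b)))

    def spread_candidates(candidates, keep):
        """Greedily keep `keep` candidates that are as distant from each other as possible."""
        chosen = [candidates[0]]
        while len(chosen) < keep:
            chosen.append(max(candidates,
                              key=lambda c: min(pose_distance(c, s) for s in chosen)))
        return chosen

    rng = np.random.default_rng(0)
    candidates = [rng.uniform(-np.pi, np.pi, size=12) for _ in range(50)]   # 12 joint angles each
    print(len(spread_candidates(candidates, keep=5)))   # 5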
The sequence that gives the highest value is assumed to be the correct sequence. In such implementations, several three-dimensional (3D) poses of the animation model405are generated based on the segmentation mask of the first video frame of the sequence401. All of the poses have segmentation masks that match the segmentation mask of the first video frame in the sequence. The 3D poses are generated so as to be as distant from each other as possible. The distance between poses can be measured, e.g., as an average angle difference between the joints in the 3D model of the animation character. For each 3D pose derived from the first frame in the sequence401, the optimization process408adjusts the 3D pose over time in such a way that for each video frame the segmentation mask of the pose matches the segmentation mask of the corresponding video frame of the sequence401. During the optimization process the movement of the character represented by the model405is simulated by a physics simulation environment. The optimization process408makes the segmentation mask for a frame of the sequence401match a corresponding projection of a candidate pose of the model405and at the same time makes sure that the movement of the animated character is physically consistent, e.g., does not cause the animation character to fall or violate joint constraints. A genetic (evolutionary) algorithm can be used for this purpose. In an alternative implementation, several pose candidates may be generated as described above but for each animation frame. All pose candidates for a given animation frame have segmentation masks matching the segmentation mask of a corresponding video frame of the sequence401. During the optimization process408, pairs of pose candidates for consecutive video frames may be fed into a Neural Network which has been pre-trained to control the character in a physics simulation environment using similar animations. Pose candidate pairs are then evaluated by the Neural Network. The segmentation masks for the best pose candidate pair should provide the best match with the segmentation masks obtained from the corresponding video frames. At the same time, movement of the character in a simulated physics environment must not cause the character to fall or violate joint constraints. The solution consecutively progresses from the first frame pair to the end of the video sequence401. In some implementations, the animation program may use an output of the pose optimization process at408to generate robot control inputs413, as indicated at414. The animation program may supply the control inputs413to a robot controller415, which converts the control inputs to control signals that are transmitted to an articulated robot417. The robot controller415may be implemented in hardware or software. For hardware implementations, the optimization process408of the animation program provides inputs in a convenient form, and the robot controller can convert the inputs to robot commands. For software implementations, the robot controller415may be implemented by code running on the same computer system as the animation program. Such robot controller code may be a separate program from the animation program or may be incorporated into the animation program. As noted above, the pose optimization process408may be informed by a physics simulation to evaluate the viability of various pose combinations for sequences of poses of the robot417or corresponding animated character. 
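Conceptually, this amounts to scoring each candidate pose sequence on both segmentation-mask agreement and physical plausibility; the toy Python sketch below shows one such score, with mask_match, falls_over, and violates_joint_limits as hypothetical stand-ins and the penalty weights as assumptions.

    # Illustrative scoring of a candidate pose sequence: reward segmentation-mask
    # agreement with the video frames and penalize physically inconsistent motion
    # (e.g., the simulated character falling or violating joint limits).
    def score_sequence(pose_sequence, frame_masks,
                       mask_match, falls_over, violates_joint_limits,
                       fall_penalty=10.0, joint_penalty=5.0):
        score = 0.0
        for pose, frame_mask in zip(pose_sequence, frame_masks):
            score += mask_match(pose, frame_mask)       # e.g., IoU in [0, 1]
            if falls_over(pose):
                score -= fall_penalty
            if violates_joint_limits(pose):
                score -= joint_penalty
        return score

    # The candidate sequence with the highest score would be taken as the optimum,
    # whether the search is driven by a genetic algorithm or by a value network.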
By way of example, the pose optimization process408may limit movement of the animated character or robot417according to one or more physics-based constraints. Alternatively, the pose optimization process408may reject poses that would be inconsistent with operation of the robot417, e.g., poses that would cause the robot to fall or violate a joint constraint. In some implementations, the pose optimization process408may optionally include pose disambiguation using techniques such as those described above with respect toFIG.2AandFIG.2B. This could involve generating two or more different candidate target masks from different views of the 3D animation model405and comparing the target masks to corresponding masks403generated from different contemporaneous views of the input video frame401. FIG.4Bdepicts an example of pose optimization408for use in monocular pose prediction in computer animation according to aspects of the present disclosure. As shown inFIG.4B, the pose optimization process408may use Neural Networks420to fit candidate poses in pose sequences407to corresponding segmentation masks403and, optionally, generate the control inputs413. In the illustrated implementation, the inputs to the Neural Networks420are the segmentation masks403obtained from the video frame sequence401. In the example depicted inFIG.4B, the goals of the Neural Networks420are segmentation masks421,423corresponding to candidates for the next two poses. Specifically, the goals may be two consecutive poses taken from a target animation that the robot417mimics. The Neural Networks420transform the target animation in real time in such a way that it can run on the real robot417without causing it to fall over. The Neural Networks420may be trained to determine the next two poses from a current pose. Training of the Neural Networks420may include the use of a character model in a physics simulation. Motion capture or hand-animated poses may be used as a target, and the Neural Network420may be trained to replicate the target poses within the constraints of the physics simulation using a machine learning algorithm. The machine learning algorithm and/or Neural Network layout may be, for example and without limitation, a reinforcement learning algorithm, an imitation learning algorithm, or a supervised learning algorithm. The trained Neural Network may be used to output a score for each of the candidate poses. As a result of the training, the score represents the viability of the pose within the simulation. The pose is evaluated on such factors as stability over the next two frames (e.g., whether the character falls over in simulation), whether any of the joints violate their constraints (e.g., whether an elbow bends backwards), minimization of the distance all joints move, whether any of the extremities collide, whether the extremities are connected to their corresponding joints, etc. Some or all of these evaluation factors may be generated by the neural network and represented by the score, or, alternatively, some or all of these factors may be determined by the user and added to the score. From the candidate poses, the best set of poses is selected; this may be done by hand or within the Neural Network through the use of min-max layers. For more information on pose-determining Neural Networks, see concurrently filed application Ser. No. 17/095,586 (U.S. Patent Application Publication Number: 20220143820). From the chosen candidate poses, a robot may be controlled using the Neural Networks420. Outputs of the Neural Networks420include an action425and a value427. 
The action425corresponds to the control inputs to the robot controller415. The value427is an internal training algorithm quantity. It is needed only during the training step and is used to estimate the effect of random attempts at improvement. The robot controller415provides the commands based on the action425to motors in the robot417. In general, the robot417may include movable joints connected by structural elements and sensors. Each joint may be connected to a sensor that is configured to generate sensor values that relate to information about the state of the joint. Sensors for physical robots may include, for example and without limitation, encoders, potentiometers, linear variable differential transformers, pressure sensors, gyroscopes, gravimeters, accelerometers, resolvers, and velocity or speed sensors. The sensor values for such sensors would correspond to the outputs of such sensors or information derived therefrom. Examples of sensor values from sensors on a robot include, but are not limited to, a joint position, a joint velocity, a joint torque, a robot orientation, a robot linear velocity, a robot angular velocity, a foot contact point, a foot pressure, or two or more of these. For animation characters, the sensors may be virtual sensors and the sensor values may simply include data, e.g., position, velocity, acceleration data, related to the state of the movable joint. Examples of sensor values from a robot simulation include, but are not limited to, a joint position, a joint velocity, a joint torque, a model orientation, a model linear velocity, a model angular velocity, a foot contact point, a foot pressure, or two or more of these. Position data from the controller415or the animation program may be passed to a motion decision neural network and used as state data during reinforcement learning in conjunction with the pose optimization process408. The nature of the control inputs depends on the control parameterization used by the robot controller415to control the joints of the robot417. Commonly used control parameterizations for articulated robots include position control, velocity control, and torque control. One possible implementation employs a hybrid scheme in which a neural network outputs target joint velocities, which may be labeled as position derivatives v. An integrator block integrates the derivatives v into joint positions x according to x=∫vdt before being applied directly to either position derivative (PD) controllers in a simulation or animation or to the actuators of the robot417. The output of the integrator block may also be used as a feedback signal by routing it into the neural network as input. The integration step may advantageously suppress motor jitter in simulation and control of the robot417to visually unobservable levels by smoothing out the robot's reaction to noisy sensors and sensor spikes. The integration can also moderate the robot's movement when the network input enters out-of-distribution areas of the state space during failure scenarios. In the illustrated example, the Neural Networks that generate the action425and value427split policy and value functions into separate networks422,424, with no shared weights. The illustrated policy network422and the critic network424may each consist of three layers containing the same number of neurons in each layer. Each of the neurons may have the same activation function. By way of example, and not by way of limitation, each of these layers contains 128 neurons and uses softsign as its activation function. 
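As a purely illustrative sketch of such separate policy and critic networks, each with three equally sized softsign layers, one might write the following; the use of PyTorch, the layer sizes, and the observation and action dimensions are assumptions and do not reflect the disclosed implementation.

    # Illustration-only policy and value (critic) networks with no shared weights.
    import torch
    import torch.nn as nn

    def make_trunk(in_dim, hidden=128):
        """Three equally sized layers, each followed by a softsign activation."""
        return nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Softsign(),
            nn.Linear(hidden, hidden), nn.Softsign(),
            nn.Linear(hidden, hidden), nn.Softsign(),
        )

    obs_dim, act_dim = 64, 12                  # hypothetical observation and action sizes
    policy = nn.Sequential(make_trunk(obs_dim), nn.Linear(128, act_dim))   # joint position derivatives
    critic = nn.Sequential(make_trunk(obs_dim), nn.Linear(128, 1))         # value estimate

    obs = torch.randn(1, obs_dim)
    print(policy(obs).shape, critic(obs).shape)   # torch.Size([1, 12]) torch.Size([1, 1])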
The network input (observation) is subject to normalization using the running mean and standard deviation. The input may include any or all of the following features: goal orientations, joint sensor readings, action at previous time step, actuator inputs at previous time step, gravity vector in local reference frame, accelerometer readings, gyro readings, and foot pressure sensor readings. The goal orientations may be represented in axis-angle form and encoded into a latent representation using two encoding layers426,428. By way of example, each encoding layer may include a first layer containing 128 neurons coupled to a second layer containing 64 neurons. Each of the neurons may use leaky ReLU activation functions. The action425specifies the set of joint position derivatives output by the neural network. The actuator inputs indicate the updated joint positions calculated by integrating the position derivatives. Feeding the action and actuator inputs from the previous time step into the networks introduces a feedback signal. Exploration occurs during training by sampling the policy network output from the learned Gaussian distributions. Sampling in this manner introduces jitter during training that makes learning difficult as it induces falling. The integration scheme discussed above helps to alleviate the jitter. In addition, instead of sampling random actions from the Gaussian distribution at each time step, with fixed probability ε a random action may be sampled from the policy network422and with probability 1−ε the robot417executes a deterministic action specified by the mean of the Gaussian. Furthermore, updates may be performed using only samples where exploration noise is applied. Pose Prediction Apparatus FIG.5depicts an apparatus for computer animation involving monocular pose prediction as described, for example, with respect toFIG.4AandFIG.4B. The apparatus may include a computing device500coupled to a user input device502. The user input device502may be a controller, touch screen, microphone, keyboard, mouse, joystick or other device that allows the user to input information including sound data in to the system. The user input device may be coupled to or include a haptic feedback device, e.g., a vibration motor, force feedback system, ultrasonic feedback system, or air pressure feedback system. Additionally, the system may include a controller501for a movable joint for example and without limitation, the controller may control a motor or actuator for a joint on a robot in implementations involving physics-based animation for control of a physical robot. The computing device500may include one or more processor units503, which may be configured according to well-known architectures, such as, e.g., single-core, dual-core, quad-core, multi-core, processor-coprocessor, cell processor, and the like. The computing device may also include one or more memory units504(e.g., random access memory (RAM), dynamic random access memory (DRAM), read-only memory (ROM), and the like). The processor unit503may execute one or more programs517, portions of which may be stored in the memory504and the processor503may be operatively coupled to the memory, e.g., by accessing the memory via a data bus505. The programs517may also be stored in a mass storage515, such as a disk drive, CD-ROM drive, tape drive, flash memory, or the like. The programs may implement instructions that cause the processor unit to carry out an animation method, such as that described above with respect toFIG.4AandFIG.4B. 
The programs may additionally include machine learning algorithms configured to adjust the weights and transition values of Neural Networks (NNs)513for implementations involving NNs in a physics-based animation input control scheme as discussed elsewhere herein. Additionally, the Memory504may store video frame data508and animation frame data509. The video frame data508may be used to generate segmentation masks510for use in pose prediction as described above. Pose data511used in pose prediction may also be stored in the memory504. When used for control of a robot530, the memory may also store robot commands512and quality values514generated by the neural networks513, e.g., as discussed above. The video frame data508, animation data509, segmentation masks510, pose sequence data511, robot commands512and quality values514may also be stored as data518in the mass storage515. The computing device500may also include well-known support circuits506, such as input/output (I/O) circuits507, power supplies (P/S)521, a clock (CLK)522, and cache523, which may communicate with other components of the system, e.g., via the bus505. In implementations involving control of a robot530, the robot commands512may be relayed to the robot via the I/O circuits. The computing device500may include a network interface532to facilitate communication via an electronic communications network530. The network interface532may be configured to implement wired or wireless communication over local area networks and wide area networks such as the Internet. The computing device500may send and receive data and/or requests for files via one or more message packets over the network520. Message packets sent over the network520may temporarily be stored in a buffer in memory504. The animation frames508, video frames509and segmentation masks511may be obtained from remote computing or storage devices via the network520and stored partially in the memory504and/or mass storage device315for use by the computing device500. The processor unit503and network interface532may be configured to implement a local area network (LAN) or personal area network (PAN), via a suitable network protocol, e.g., Bluetooth, for a PAN. The computing device may also include a user interface516to facilitate interaction between the system and a user. The user interface may include a monitor, television screen, speakers, headphones or other devices that communicate information to the user. Although certain implementations are described herein in terms of computer animation for the purpose of controlling a robot, aspects of the present disclosure are not so limited. Pose disambiguation and monocular pose prediction are useful in many other applications. Furthermore, although certain implementations are described herein in terms of animation of animals, aspects of the present disclosure are not so limited. For example, the techniques described herein may be used to generate computer animation of human characters and/or robot characters or other moving objects from archival footage or other situations where motion capture is not practical or not possible. While the above is a complete description of the preferred embodiment of the present invention, it is possible to use various alternatives, modifications and equivalents. Therefore, the scope of the present invention should be determined not with reference to the above description but should, instead, be determined with reference to the appended claims, along with their full scope of equivalents. 
Any feature described herein, whether preferred or not, may be combined with any other feature described herein, whether preferred or not. In the claims that follow, the indefinite article “A”, or “An” refers to a quantity of one or more of the item following the article, except where expressly stated otherwise. The appended claims are not to be interpreted as including means-plus-function limitations, unless such a limitation is explicitly recited in a given claim using the phrase “means for.”
39,071
11861778
The drawings are not necessarily to scale and may be illustrated by phantom lines, diagrammatic representations and fragmentary views. In certain instances, details that are not necessary for an understanding of the embodiments or that render other details difficult to perceive may have been omitted. DETAILED DESCRIPTION At a high level, aspects of the present disclosure are directed to apparatuses and methods for generating virtual avatars. In an embodiment, generating virtual avatars may include generating a virtual avatar model. Aspects of the present disclosure can be used to modify virtual avatars. Aspects of the present disclosure can also be used to generate virtual avatar models from one or more images. This is so, at least in part, because an apparatus may generate virtual avatar models from image data using a machine vision process. Aspects of the present disclosure allow for modifying virtual avatars as a function of user input. Exemplary embodiments illustrating aspects of the present disclosure are described below in the context of several specific examples. Referring now toFIG.1, an exemplary embodiment of an apparatus100for generating a virtual avatar is presented. Apparatus100may include at least a processor and a memory communicatively connected to the at least a processor. A memory may contain instructions configuring the at least a processor to perform various tasks. As used in this disclosure, “communicatively connected” means connected by way of a connection, attachment or linkage between two or more relata which allows for reception and/or transmittance of information therebetween. For example, and without limitation, this connection may be wired or wireless, direct or indirect, and between two or more components, circuits, devices, systems, and the like, which allows for reception and/or transmittance of data and/or signal(s) therebetween. Data and/or signals therebetween may include, without limitation, electrical, electromagnetic, magnetic, video, audio, radio and microwave data and/or signals, combinations thereof, and the like, among others. A communicative connection may be achieved, for example and without limitation, through wired or wireless electronic, digital or analog, communication, either directly or by way of one or more intervening devices or components. Further, communicative connection may include electrically coupling or connecting at least an output of one device, component, or circuit to at least an input of another device, component, or circuit. For example, and without limitation, via a bus or other facility for intercommunication between elements of a computing device. Communicative connecting may also include indirect connections via, for example and without limitation, wireless connection, radio communication, low power wide area network, optical communication, magnetic, capacitive, or optical coupling, and the like. In some instances, the terminology “communicatively coupled” may be used in place of communicatively connected in this disclosure. Still referring toFIG.1, apparatus100may include a computing device. A computing device may include any computing device as described in this disclosure, including without limitation a microcontroller, microprocessor, digital signal processor (DSP) and/or system on a chip (SoC) as described in this disclosure. A computing device may include, be included in, and/or communicate with a mobile device such as a mobile telephone or smartphone. 
Apparatus100may include a single computing device operating independently, or may include two or more computing device operating in concert, in parallel, sequentially or the like; two or more computing devices may be included together in a single computing device or in two or more computing devices. Apparatus100may interface or communicate with one or more additional devices as described below in further detail via a network interface device. Network interface device may be utilized for connecting apparatus100to one or more of a variety of networks, and one or more devices. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof. Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof. A network may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software etc.) may be communicated to and/or from a computer and/or a computing device. Apparatus100may include but is not limited to, for example, a computing device or cluster of computing devices in a first location and a second computing device or cluster of computing devices in a second location. Apparatus100may include one or more computing devices dedicated to data storage, security, distribution of traffic for load balancing, and the like. Apparatus100may distribute one or more computing tasks as described below across a plurality of computing devices of computing device, which may operate in parallel, in series, redundantly, or in any other manner used for distribution of tasks or memory between computing devices. Apparatus100may be implemented using a “shared nothing” architecture in which data is cached at the worker, in an embodiment, this may enable scalability of system100and/or computing device. With continued reference toFIG.1, apparatus100may be designed and/or configured to perform any method, method step, or sequence of method steps in any embodiment described in this disclosure, in any order and with any degree of repetition. For instance, apparatus100may be configured to perform a single step or sequence repeatedly until a desired or commanded outcome is achieved; repetition of a step or a sequence of steps may be performed iteratively and/or recursively using outputs of previous repetitions as inputs to subsequent repetitions, aggregating inputs and/or outputs of repetitions to produce an aggregate result, reduction or decrement of one or more variables such as global variables, and/or division of a larger processing task into a set of iteratively addressed smaller processing tasks. Apparatus100may perform any step or sequence of steps as described in this disclosure in parallel, such as simultaneously and/or substantially simultaneously performing a step two or more times using two or more parallel threads, processor cores, or the like; division of tasks between parallel threads and/or processes may be performed according to any protocol suitable for division of tasks between iterations. 
Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various ways in which steps, sequences of steps, processing tasks, and/or data may be subdivided, shared, or otherwise dealt with using iteration, recursion, and/or parallel processing. Still referring toFIG.1, apparatus100may be configured to generate virtual avatar model104. A “virtual avatar model” as used in this disclosure is a computer process that simulates one or more digital characters. Apparatus100may generate virtual avatar model104through one or more modeling software applications, such as, but not limited to, Sketchup, Blender, ZBrush, AutoCAD, SolidWorks, 3Ds Max, Maya, Rhino3d, CATIA, and the like. In some embodiments, virtual avatar model104may include one or more formats, such as, but not limited to, “dwg”, “dxf”, “3ds”, “dae”, “dem”, “def”, “ifc”, “kmz”, “stl”, “3dxml”, “3dm”, “cd”, “vda”, “vrml”, and the like. Still referring toFIG.1, virtual avatar model104may include one or more parameters for generating virtual entity108. In some embodiments, parameters of virtual avatar model104may include, but are not limited to, avatar body type, avatar dimensions, avatar physics, and the like. An “avatar body type” as used in this disclosure is a category of a base model of a character. An avatar body type may include, but is not limited to, human, animal, robot, ethereal, amorphous, and the like. In some embodiments, an avatar body type may include a combination of two or more avatar body types, without limitation. As a non-limiting example, avatar body type may include robot animal. “Avatar dimensions” as used in this disclosure are digital character measurements. Avatar dimensions may include, but are not limited to, height, width, length, volume, and the like. In some embodiments, avatar dimensions may include one or more geometries of one or more parts of virtual entity108. Geometries may include surface areas, angles, diameters, radii, points, concavity, convexity, and the like. Avatar dimensions of virtual avatar model104may include one or more geometries of limbs, appendages, facial features, clothing, hair, fur, and/or other aspects of virtual entity108. For instance and without limitation, virtual avatar model104may include geometries of a hand structure of virtual entity108, such as finger length, finger circumference, palm shape, and the like. In some embodiments, avatar dimensions of virtual avatar model104may be represented as one or more polygons, such as, but not limited to, triangles, squares, hexagons, and the like. Virtual avatar model104may utilize one or more rendering techniques to generate virtual entity108, such as, but not limited to, shading, texturing, and the like. In some embodiments, virtual avatar model104may generate virtual entity108through mesh shading such as, but not limited to, flat-shading, smooth-shading, and the like. Avatar dimensions of virtual avatar model104may include one or more sets of coordinates for one or more parts of virtual entity108in a coordinate system. A “coordinate system” as used in this disclosure is a system that uses one or more numbers to determine position of one or more points. A coordinate system may include, without limitation, cartesian, polar, and the like. In some embodiments, a coordinate system may represent a real world plane in a digital reality. Generating a coordinate system in a digital reality representing a real world plane may include a machine vision process as described below.
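By way of illustration only, the following Python sketch shows one possible organization of such parameters (an avatar body type, avatar dimensions, a polygon mesh, and a shading mode); the class and field names are hypothetical and are not part of any particular implementation described above.

# A minimal, illustrative sketch; names and fields are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import List, Tuple

Vertex = Tuple[float, float, float]   # (x, y, z) in a cartesian coordinate system

@dataclass
class AvatarModelParameters:
    body_type: str                    # e.g., "penguin", "robot animal"
    height: float                     # avatar dimensions, arbitrary units
    width: float
    length: float
    mesh_vertices: List[Vertex] = field(default_factory=list)              # polygon mesh geometry
    mesh_faces: List[Tuple[int, int, int]] = field(default_factory=list)   # triangles by vertex index
    shading: str = "smooth"           # rendering technique: "flat" or "smooth"

# Example: a trivially small triangle mesh standing in for an avatar body part.
params = AvatarModelParameters(
    body_type="penguin",
    height=1.2, width=0.6, length=0.5,
    mesh_vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    mesh_faces=[(0, 1, 2)],
)
print(params.body_type, len(params.mesh_faces), "triangle(s)")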
In some embodiments, parameters of virtual avatar model104may include relative sizes of one or more parts of virtual entity108. A “relative size” as used in this disclosure is an apparent stature of an object and/or entity from a perspective view. A relative size of virtual entity108and/or other virtual objects may be calculated by apparatus100. Apparatus100may use a machine learning model, machine vision process, and/or other processing model described throughout this disclosure, without limitation, to generate a relative size of virtual entity108. In some embodiments, a relative size of virtual entity108may be updated as a function of user input120. Still referring toFIG.1, a “virtual entity” as used in this disclosure is a digital representation of a character. A character may include, but is not limited to, animals, humans, robots, inanimate objects, and/or any combination thereof, without limitation. For instance, and without limitation, virtual entity108may include a digital representation of a penguin character. Virtual entity108may include, but is not limited to, two-dimensional characters, three-dimensional characters, and the like. Apparatus100may generate virtual entity108in an augmented reality (AR) space, a virtual reality (VR) space, and/or any other digital reality. Still referring toFIG.1, in some embodiments, virtual avatar model104may include operational model116. An “operational model” as used in this disclosure is a computer process that dictates animations and/or interactions of one or more virtual entities. Operational model116may be programmed to configure virtual entity108to perform one or more tasks, movements, conversations, and the like. In some embodiments, operational model116may comprise behavioral parameters corresponding to animations of virtual entity108. “Behavioral parameters” as used in this disclosure are metrics associated with interactions of a virtual entity. Behavioral parameters may include, but are not limited to, facial animations, responsiveness, interaction with an environment, and the like. Facial animations may include, but are not limited to, grinding teeth, smirking, crying, laughing, clenching, showing surprise, and the like. In some embodiments, behavioral parameters may be tuned as a function of an avatar body of virtual entity108. For instance and without limitation, virtual entity108may include a shark character, which may have corresponding behavioral parameters of a more serious demeanor. Virtual entity108may include a monkey character, which may have corresponding behavioral parameters of a light-hearted, energized demeanor. In other embodiments, behavioral parameters may be consistent throughout multiple varying avatar models. In some embodiments, facial animations of behavioral parameters may be tuned to an avatar body. For instance and without limitation, virtual entity108may include a shark character. A facial animation of a grin for a shark character may include an overextended, dramatic teeth-baring smile, whereas a facial animation of a grin for a bee may include a closed-mouth smile. Still referring toFIG.1, operational model116may include one or more animations and/or triggers of animations of virtual entity108. Animations may include, but are not limited to, walking, running, jumping, hiding, celebrating, nodding, and the like. Triggers of animations may include, but are not limited to, geographical positions, user input, engagement with virtual objects, and the like.
For instance and without limitation, operational model116may include an animation of jumping for joy, which may have a trigger including a proximity of a user to virtual entity108. Animations and triggers of animations of operational model116may be based on avatar models, user profiles, and/or other factors. In some embodiments, apparatus100may include a behavioral machine learning model. In some embodiments, operational model116may include the behavioral machine learning model. A behavioral machine learning model may be trained with training data correlating user data to behavioral parameters. In some embodiments, the processor may be configured to train the behavioral machine learning model. Training data may be received through user input, external computing devices, and/or through previous iterations of processing. In some embodiments, training data may be received from a database, such as a training data database. In some embodiments, the behavioral machine learning model may be configured to receive user data as input and output one or more behavioral parameters. Operational model116may use the behavioral machine learning model to determine behavioral parameters of virtual entity108based on user input120. “User input” as used throughout this disclosure is information received from an individual. User input120may include, but is not limited to, text entries, voice input, images, videos, and the like. In some embodiments, apparatus100may receive user input120from one or more computing devices and/or software, such as, but not limited to, cloud-computing networks, web applications, mobile applications, and the like. For instance and without limitation, apparatus100may receive photographic images through a web camera of a laptop that may be connected to apparatus100through a wireless and/or wired connection. In other embodiments, apparatus100may receive user input120directly, such as through, but not limited to, keyboards, mouse input, camera input, microphone input, and the like. Still referring toFIG.1, in some embodiments, apparatus100may utilize the behavioral machine learning model to mimic and/or replicate a user's emotions and/or behavioral patterns. Virtual entity108may appear to “learn” certain behaviors and/or patterns. User input120may include user data showing a user is highly engaged, happy, and energetic. Apparatus100may determine, using a behavioral machine learning model, that one or more behavioral parameters of operational model116should include high engagement and/or happy behaviors. In other embodiments, apparatus100may determine one or more behavioral parameters to be different than that of one or more behaviors of user input120. For instance and without limitation, a user may exhibit signs of solemnity. Apparatus100may determine, in some embodiments, through the behavioral machine learning model, that one or more behavioral parameters of operational model116should include happy behavioral patterns. Apparatus100may compare user behaviors and/or patterns of user input120to a behavioral threshold. A “behavioral threshold” as used in this disclosure is a value or values constraining a triggering of a change of one or more behavioral parameters. A behavioral threshold may include, but is not limited to, one or more numbers, percentages, and the like, which may correspond to one or more behaviors. Apparatus100may compare behaviors of user input120to a behavioral threshold corresponding to happiness.
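As an illustrative sketch only, a behavioral machine learning model of the kind described above might be stood up with a simple off-the-shelf classifier; the numeric user-data features, the behavioral-parameter labels, and the use of scikit-learn are hypothetical assumptions, and the threshold comparison discussion continues below.

# Minimal sketch: a classifier standing in for the behavioral machine learning model.
from sklearn.tree import DecisionTreeClassifier

# Each row of user data: [engagement_score, speech_rate, smile_count] (hypothetical features).
X_train = [
    [0.9, 1.2, 5],   # highly engaged, energetic user
    [0.8, 1.1, 4],
    [0.2, 0.6, 0],   # subdued user
    [0.1, 0.5, 1],
]
# Behavioral parameter label output for the operational model (hypothetical labels).
y_train = ["high_engagement", "high_engagement", "calm_supportive", "calm_supportive"]

behavioral_model = DecisionTreeClassifier().fit(X_train, y_train)

new_user_data = [[0.85, 1.3, 6]]
print(behavioral_model.predict(new_user_data))   # -> ['high_engagement']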
If behaviors of user input120meet a behavioral threshold of happiness, operational model116may adjust one or more behavioral parameters of virtual entity108to increase an engagement of virtual entity108with a user, without limitation. In some embodiments, animation may be generated using stored rules for representation and/or modification of static images. Stored rules may include, without limitation, rules associating an event as detected by sensing devices with an image and/or sound representing a reaction thereto by an animated character. For instance, a given event and/or input may be associated with an endpoint image, such as a “surprising” event with an image of an avatar with a surprised expression. Similar associations may be made between expressions and/or poses indicating simulated reactions to pleasing events, exciting events, annoying events, humorous events, and the like. Animated sequences may be stored transitioning from a first pose representing a first simulated emotional state and/or response to a second pose representing a second simulated emotional state and/or response. Alternatively or additionally, stored rules may indicate modifications to images and/or creation of transitional images that can be used to generate an animated sequence of images from one simulated emotional state and/or response to another. Emotional states and/or responses may be regulated, without limitation, using a finite state machine directing transition from one emotional state and/or response to another. Still referring toFIG.1, stored rules, modified images, and/or modifications to images may be entered and/or defined manually; alternatively or additionally, modified images and/or modifications to images may be generated using a machine-learning process that may be trained using manually generated images, modifications thereto, and/or sequences of such images and/or modifications, and/or manually identified examples of such training examples in existing animated and/or live-action stills and/or sequences. Machine-learning models may include models trained to recognize features in a picture of a character, models trained to modify identified features and/or entire images, models trained to identify and/or generate transitional images traversing from one static image to another static image in a sequence, or the like. Static images and/or modifications may be associated with responses to particular inputs by additional models. Still referring toFIG.1, in some embodiments, apparatus100may generate a chatbot. A “chatbot” as used in this disclosure is a program that communicates semantic information between an individual and a computing device. A chatbot may be communicative with apparatus100. Apparatus100may be configured to operate a chatbot. In some cases, a chatbot may be local to apparatus100. Alternatively or additionally, in some cases, a chatbot may be remote to apparatus100and communicative with apparatus100, by way of one or more networks, such as without limitation the internet. Alternatively or additionally, a chatbot may communicate with apparatus100using telephonic devices and networks, such as without limitation fax machines, short message service (SMS), or multimedia message service (MMS). In some embodiments, a chatbot may communicate with apparatus100using text-based communication, for example without limitation using a character encoding protocol, such as American Standard Code for Information Interchange (ASCII).
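For illustration only, the stored-rule and finite state machine approach to simulated emotional states described above might be sketched as follows; the state, event, and animation names are hypothetical placeholders, and the chatbot interaction flow continues below.

# Minimal sketch: stored rules regulated by a finite state machine that maps
# (current simulated emotional state, detected event) to (next state, animation).
TRANSITIONS = {
    ("neutral",   "surprising_event"): ("surprised", "raise_eyebrows"),
    ("neutral",   "pleasing_event"):   ("happy",     "smile"),
    ("happy",     "annoying_event"):   ("annoyed",   "frown"),
    ("surprised", "humorous_event"):   ("happy",     "laugh"),
}

def step(state, event):
    # Fall back to the current state with an idle animation when no rule matches.
    return TRANSITIONS.get((state, event), (state, "idle"))

state = "neutral"
for event in ["surprising_event", "humorous_event", "annoying_event"]:
    state, animation = step(state, event)
    print(event, "->", state, "/", animation)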
Apparatus100may interface with a chatbot, by way of at least a submission from a user, such as through the chatbot, and a response from the chatbot. In many cases, one or both of submissions and responses may be text-based communication. Alternatively or additionally, in some cases, one or both of submissions and responses may be audio-based communication. Continuing in reference toFIG.1, a submission, once received by apparatus100operating a chatbot, may be processed by apparatus100. In some embodiments, apparatus100may process a submission using one or more of keyword recognition, pattern matching, natural language processing, machine learning models, and the like. In some embodiments, apparatus100may employ real-time learning with evolutionary algorithms. In some cases, apparatus100may retrieve a pre-prepared response from a storage component, based upon a submission. Alternatively or additionally, in some embodiments, apparatus100may communicate a response without first receiving a submission, which may initiate a conversation. In some cases, apparatus100may communicate an inquiry to a chatbot. Apparatus100may be configured to process an answer to the inquiry in a following submission from a chatbot. In some cases, an answer to an inquiry present within a submission from a user through a chatbot may be used by apparatus100as an input to another function, for example without limitation a feature or a preference input. Still referring toFIG.1, in some embodiments, apparatus100may determine user data of user input120. “User data” as used throughout this disclosure is information pertaining to an individual. User data may include, but is not limited to, engagement data, preferences, biographical data, locational data, and the like. In some embodiments, apparatus100may determine user data of user input120, such as a textual entry, using a language processing module. Language processing module may include any hardware and/or software module. Language processing module may be configured to extract, from the one or more documents, one or more words. One or more words may include, without limitation, strings of one or more characters, including without limitation any sequence or sequences of letters, numbers, punctuation, diacritic marks, engineering symbols, geometric dimensioning and tolerancing (GD&T) symbols, chemical symbols and formulas, spaces, whitespace, and other symbols, including any symbols usable as textual data as described above. Textual data may be parsed into tokens, which may include a simple word (sequence of letters separated by whitespace) or more generally a sequence of characters as described previously. The term “token,” as used herein, refers to any smaller, individual groupings of text from a larger source of text; tokens may be broken up by word, pair of words, sentence, or other delimitation. These tokens may in turn be parsed in various ways. Textual data may be parsed into words or sequences of words, which may be considered words as well. Textual data may be parsed into “n-grams”, where all sequences of n consecutive characters are considered. Any or all possible sequences of tokens or words may be stored as “chains”, for example for use as a Markov chain or Hidden Markov Model. Still referring toFIG.1, language processing module may operate to produce a language processing model.
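Before turning to the language processing model itself, a minimal sketch of the tokenization, n-gram, and chain parsing described above is shown below; the whitespace tokenizer and toy text are simplifying assumptions rather than a required implementation.

# Minimal sketch: whitespace tokenization, n-gram extraction, and token chains.
from collections import defaultdict

text = "show me a purple penguin show me a red penguin"
tokens = text.split()                      # simple whitespace tokenization

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

bigrams = ngrams(tokens, 2)

# Chains of tokens usable, e.g., as observations for a Markov model.
chain = defaultdict(list)
for first, second in bigrams:
    chain[first].append(second)

print(bigrams[:3])
print(dict(chain)["show"])                 # -> ['me', 'me']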
Language processing model may include a program automatically generated by computing device and/or language processing module to produce associations between one or more words extracted from at least a document and detect associations, including without limitation mathematical associations, between such words. Associations between language elements, where language elements include for purposes herein extracted words, and relationships of such categories to other such terms, may include, without limitation, mathematical associations, including without limitation statistical correlations between any language element and any other language element and/or language elements. Statistical correlations and/or mathematical associations may include probabilistic formulas or relationships indicating, for instance, a likelihood that a given extracted word indicates a given category of semantic meaning. As a further example, statistical correlations and/or mathematical associations may include probabilistic formulas or relationships indicating a positive and/or negative association between at least an extracted word and/or a given semantic meaning; positive or negative indication may include an indication that a given document is or is not indicating a category of semantic meaning. Whether a phrase, sentence, word, or other textual element in a document or corpus of documents constitutes a positive or negative indicator may be determined, in an embodiment, by mathematical associations between detected words, comparisons to phrases and/or words indicating positive and/or negative indicators that are stored in memory at computing device, or the like. Still referring toFIG.1, language processing module and/or diagnostic engine may generate the language processing model by any suitable method, including without limitation a natural language processing classification algorithm; language processing model may include a natural language processing classification model that enumerates and/or derives statistical relationships between input terms and output terms. An algorithm to generate language processing model may include a stochastic gradient descent algorithm, which may include a method that iteratively optimizes an objective function, such as an objective function representing a statistical estimation of relationships between terms, including relationships between input terms and output terms, in the form of a sum of relationships to be estimated. In an alternative or additional approach, sequential tokens may be modeled as chains, serving as the observations in a Hidden Markov Model (HMM). HMMs as used herein are statistical models with inference algorithms that may be applied to the models. In such models, a hidden state to be estimated may include an association between extracted words, phrases, and/or other semantic units. There may be a finite number of categories to which an extracted word may pertain; an HMM inference algorithm, such as the forward-backward algorithm or the Viterbi algorithm, may be used to estimate the most likely discrete state given a word or sequence of words. Language processing module may combine two or more approaches. For instance, and without limitation, machine-learning program may use a combination of Naive-Bayes (NB), Stochastic Gradient Descent (SGD), and parameter grid-searching classification techniques; the result may include a classification algorithm that returns ranked associations.
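As an illustrative sketch only, a classification pipeline in the spirit of the combination described above (a term-frequency representation, stochastic gradient descent training, a parameter grid search, and ranked category associations) might look as follows; the corpus, labels, and parameter grid are hypothetical, and the "log_loss" option assumes a recent scikit-learn release.

# Minimal sketch: TF-IDF features + SGD training + grid search, returning ranked associations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

docs = ["happy excited great fun", "fun joyful happy play",
        "sad tired slow quiet", "quiet gloomy sad rest",
        "angry annoyed upset loud", "loud furious angry shout"]
labels = ["positive", "positive", "negative", "negative", "hostile", "hostile"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("sgd", SGDClassifier(loss="log_loss", random_state=0)),
])
search = GridSearchCV(pipeline, {"sgd__alpha": [1e-4, 1e-3]}, cv=2)
search.fit(docs, labels)

# Ranked associations between a new document and the candidate categories.
probs = search.predict_proba(["joyful play time"])[0]
ranked = sorted(zip(search.classes_, probs), key=lambda pair: -pair[1])
print(ranked)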
Continuing to refer toFIG.1, generating language processing model may include generating a vector space, which may be a collection of vectors, defined as a set of mathematical objects that can be added together under an operation of addition following properties of associativity, commutativity, existence of an identity element, and existence of an inverse element for each vector, and can be multiplied by scalar values under an operation of scalar multiplication that is compatible with field multiplication, has an identity element, is distributive with respect to vector addition, and is distributive with respect to field addition. Each vector in an n-dimensional vector space may be represented by an n-tuple of numerical values. Each unique extracted word and/or language element as described above may be represented by a vector of the vector space. In an embodiment, each unique extracted word and/or other language element may be represented by a dimension of vector space; as a non-limiting example, each element of a vector may include a number representing an enumeration of co-occurrences of the word and/or language element represented by the vector with another word and/or language element. Vectors may be normalized, scaled according to relative frequencies of appearance and/or file sizes. In an embodiment, associating language elements to one another as described above may include computing a degree of vector similarity between a vector representing each language element and a vector representing another language element; vector similarity may be measured according to any norm for proximity and/or similarity of two vectors, including without limitation cosine similarity, which measures the similarity of two vectors by evaluating the cosine of the angle between the vectors, which can be computed using a dot product of the two vectors divided by the lengths of the two vectors. Degree of similarity may include any other geometric measure of distance between vectors. Still referring toFIG.1, language processing module may use a corpus of documents to generate associations between language elements in a language processing module, and diagnostic engine may then use such associations to analyze words extracted from one or more documents and determine that the one or more documents indicate significance of a category. In an embodiment, language module and/or apparatus100may perform this analysis using a selected set of significant documents, such as documents identified by one or more experts as representing good information; experts may identify or enter such documents via graphical user interface, or may communicate identities of significant documents according to any other suitable method of electronic communication, or by providing such identity to other persons who may enter such identifications into apparatus100. Documents may be entered into a computing device by being uploaded by an expert or other persons using, without limitation, file transfer protocol (FTP) or other suitable methods for transmission and/or upload of documents; alternatively or additionally, where a document is identified by a citation, a uniform resource identifier (URI), uniform resource locator (URL) or other datum permitting unambiguous identification of the document, diagnostic engine may automatically obtain the document using such an identifier, for instance by submitting a request to a database or compendium of documents such as JSTOR as provided by Ithaka Harbors, Inc. of New York.
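A brief sketch of the co-occurrence vectors and cosine similarity measure described above follows; the toy co-occurrence counts are hypothetical and the dimensionality is kept artificially small.

# Minimal sketch: word co-occurrence vectors compared by cosine similarity.
import numpy as np

# Rows: words; columns: enumerated co-occurrences with other language elements.
vectors = {
    "penguin": np.array([4.0, 1.0, 0.0, 2.0]),
    "bird":    np.array([3.0, 1.0, 1.0, 2.0]),
    "wrench":  np.array([0.0, 3.0, 4.0, 0.0]),
}

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector lengths (norms).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(vectors["penguin"], vectors["bird"]))    # close to 1
print(cosine_similarity(vectors["penguin"], vectors["wrench"]))  # much smaller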
Still referring toFIG.1, in some embodiments, apparatus100may utilize an automatic speech recognition model. An automatic speech recognition model may require training (i.e., enrollment). In some cases, training an automatic speech recognition model may require an individual speaker to read text or isolated vocabulary. In some cases, a solicitation video may include an audio component having an audible verbal content, the contents of which are known a priori by apparatus100. Apparatus100may then train an automatic speech recognition model according to training data which includes audible verbal content correlated to known content. In this way, apparatus100may analyze a person's specific voice and train an automatic speech recognition model to the person's speech, resulting in increased accuracy. Alternatively or additionally, in some cases, apparatus100may include an automatic speech recognition model that is speaker-independent. As used in this disclosure, a “speaker independent” automatic speech recognition process does not require training for each individual speaker. Conversely, as used in this disclosure, automatic speech recognition processes that employ individual speaker-specific training are “speaker dependent.” Still referring toFIG.1, in some embodiments, an automatic speech recognition process may perform voice recognition or speaker identification. As used in this disclosure, “voice recognition” refers to identifying a speaker, from audio content, rather than what the speaker is saying. In some cases, apparatus100may first recognize a speaker of verbal audio content and then automatically recognize speech of the speaker, for example by way of a speaker dependent automatic speech recognition model or process. In some embodiments, an automatic speech recognition process can be used to authenticate or verify an identity of a speaker. In some cases, a speaker may or may not include a subject. For example, a subject may speak within a solicitation video, but others may speak as well. Still referring toFIG.1, in some embodiments, an automatic speech recognition process may include one or all of acoustic modeling, language modeling, and statistically-based speech recognition algorithms. In some cases, an automatic speech recognition process may employ hidden Markov models (HMMs). As discussed in greater detail below, language modeling such as that employed in natural language processing applications like document classification or statistical machine translation, may also be employed by an automatic speech recognition process. Still referring toFIG.1, an exemplary algorithm employed in automatic speech recognition may include or even be based upon hidden Markov models. Hidden Markov models (HMMs) may include statistical models that output a sequence of symbols or quantities. HMMs can be used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal. For example, over a short time scale (e.g., 10 milliseconds), speech can be approximated as a stationary process. Speech (i.e., audible verbal content) can be understood as a Markov model for many stochastic purposes. Still referring toFIG.1, in some embodiments HMMs can be trained automatically and may be relatively simple and computationally feasible to use.
In an exemplary automatic speech recognition process, a hidden Markov model may output a sequence of n-dimensional real-valued vectors (with n being a small integer, such as 10), at a rate of about one vector every 10 milliseconds. Vectors may consist of cepstral coefficients. Computing a cepstral coefficient requires using a spectral domain. Cepstral coefficients may be obtained by taking a Fourier transform of a short time window of speech yielding a spectrum, decorrelating the spectrum using a cosine transform, and taking first (i.e., most significant) coefficients. In some cases, an HMM may have in each state a statistical distribution that is a mixture of diagonal covariance Gaussians, yielding a likelihood for each observed vector. In some cases, each word, or phoneme, may have a different output distribution; an HMM for a sequence of words or phonemes may be made by concatenating HMMs for separate words and phonemes. Still referring toFIG.1, in some embodiments, an automatic speech recognition process may use various combinations of a number of techniques in order to improve results. In some cases, a large-vocabulary automatic speech recognition process may include context dependency for phonemes. For example, in some cases, phonemes with different left and right context may have different realizations as HMM states. In some cases, an automatic speech recognition process may use cepstral normalization to normalize for different speakers and recording conditions. In some cases, an automatic speech recognition process may use vocal tract length normalization (VTLN) for male-female normalization and maximum likelihood linear regression (MLLR) for more general speaker adaptation. In some cases, an automatic speech recognition process may determine so-called delta and delta-delta coefficients to capture speech dynamics and might use heteroscedastic linear discriminant analysis (HLDA). In some cases, an automatic speech recognition process may use splicing and a linear discriminant analysis (LDA)-based projection, which may include heteroscedastic linear discriminant analysis or a global semi-tied covariance transform (also known as maximum likelihood linear transform [MLLT]). In some cases, an automatic speech recognition process may use discriminative training techniques, which may dispense with a purely statistical approach to HMM parameter estimation and instead optimize some classification-related measure of training data; examples may include maximum mutual information (MMI), minimum classification error (MCE), and minimum phone error (MPE).
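For illustration, a simplified cepstral-coefficient computation along the lines described above (Fourier transform of a short speech window, log-magnitude spectrum, cosine transform, first coefficients) might be sketched as follows; a production front end would typically add pre-emphasis, framing, and mel filtering, and the synthetic signal below is only a placeholder.

# Minimal sketch: cepstral coefficients from a single short frame of audio.
import numpy as np
from scipy.fftpack import dct

def cepstral_coefficients(frame, num_coeffs=13):
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    log_spectrum = np.log(spectrum + 1e-10)          # avoid log(0)
    return dct(log_spectrum, norm="ortho")[:num_coeffs]

# 10 ms of a synthetic 16 kHz signal standing in for a speech frame.
t = np.arange(160) / 16000.0
frame = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
print(cepstral_coefficients(frame).round(2))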
A Viterbi algorithm may be used to find a best path, for example using a dynamically created combination hidden Markov model, having both acoustic and language model information, or using a statically created combination hidden Markov model (e.g., finite state transducer [FST] approach). Still referring toFIG.1, in some embodiments, speech (i.e., audible verbal content) decoding may include considering a set of good candidates and not only a best candidate, when presented with a new utterance. In some cases, a better scoring function (i.e., re-scoring) may be used to rate each of a set of good candidates, allowing selection of a best candidate according to this refined score. In some cases, a set of candidates can be kept either as a list (i.e., N-best list approach) or as a subset of models (i.e., a lattice). In some cases, re-scoring may be performed by optimizing Bayes risk (or an approximation thereof). In some cases, re-scoring may include optimizing for a sentence (including keywords) that minimizes an expectancy of a given loss function with regards to all possible transcriptions. For example, re-scoring may allow selection of a sentence that minimizes an average distance to other possible sentences weighted by their estimated probability. In some cases, an employed loss function may include Levenshtein distance, although different distance calculations may be performed, for instance for specific tasks. In some cases, a set of candidates may be pruned to maintain tractability. Still referring toFIG.1, in some embodiments, an automatic speech recognition process may employ dynamic time warping (DTW)-based approaches. Dynamic time warping may include algorithms for measuring similarity between two sequences, which may vary in time or speed. For instance, similarities in walking patterns would be detected, even if in one video the person was walking slowly and if in another he or she were walking more quickly, or even if there were accelerations and decelerations during the course of one observation. DTW has been applied to video, audio, and graphics; indeed, any data that can be turned into a linear representation can be analyzed with DTW. In some cases, DTW may be used by an automatic speech recognition process to cope with different speaking (i.e., audible verbal content) speeds. In some cases, DTW may allow a computing device to find an optimal match between two given sequences (e.g., time series) with certain restrictions. That is, in some cases, sequences can be “warped” non-linearly to match each other. In some cases, a DTW-based sequence alignment method may be used in context of hidden Markov models. Still referring toFIG.1, in some embodiments, an automatic speech recognition process may include a neural network. In some cases, neural networks may be used for automatic speech recognition, including phoneme classification, phoneme classification through multi-objective evolutionary algorithms, isolated word recognition, audiovisual speech recognition, audiovisual speaker recognition, and speaker adaptation. In some cases, neural networks employed in automatic speech recognition may make fewer explicit assumptions about feature statistical properties than HMMs and therefore may have several qualities making them attractive recognition models for speech recognition. When used to estimate the probabilities of a speech feature segment, neural networks may allow discriminative training in a natural and efficient manner.
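A compact, illustrative dynamic time warping sketch corresponding to the description above follows; the two toy sequences stand in for features produced at different speaking speeds, and the neural-network discussion continues below.

# Minimal sketch: classic dynamic-programming DTW distance between two sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match from the previous cells.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

fast = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
slow = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 2.5, 2.0, 1.5, 1.0, 0.5, 0.0]
print(dtw_distance(fast, slow))   # small value despite the different lengths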
In some cases, neural networks may be used to effectively classify audible verbal content over short-time interval, for instance such as individual phonemes and isolated words. In some embodiments, a neural network may be employed by automatic speech recognition processes for pre-processing, feature transformation and/or dimensionality reduction, for example prior to HMM-based recognition. In some embodiments, long short-term memory (LSTM) and related recurrent neural networks (RNNs) and Time Delay Neural Networks (TDNN's) may be used for automatic speech recognition, for example over longer time intervals for continuous speech recognition. Apparatus100may utilize an automatic speech recognition process to determine one or more voice inputs of user data116. For instance and without limitation, a user may say the phrase “show me a purple penguin!” to which apparatus100may utilize an automatic speech recognition process and generate virtual avatar model104to include a purple penguin. Still referring toFIG.1, in some embodiments, user input120may include avatar modifier124. An “avatar modifier” as used in this disclosure is data representing digital character attributes. Avatar modifier124may include digital character attributes, such as attributes of virtual entity108, which may include, without limitation, clothing, avatar body models, hair colors, skin colors, appendages, eye colors, voices, behaviors, and the like. Avatar modifier124may be received at apparatus100through a graphical user interface (GUI). A GUI may include a two-dimensional GUI that may be displayed on a monitor, laptop, and/or other screen. A GUI may include a three-dimensional GUI that may be displayed within a virtual reality, augmented reality, and the like. A GUI may include one or more sliders, buttons, drop-down menus, tables, and the like, which may be responsive to user input120. For instance and without limitation, user input120may include a selection of a box of a GUI, where the box includes a representation of a giraffe avatar body model. Avatar modifier124may include a giraffe avatar body model. Apparatus100may modify at least a portion of virtual avatar model104as a function of user input120and/or avatar modifier124. As a non-limiting example, apparatus may modify the height of virtual avatar model104as a function of user input120and/or avatar modifier124. Apparatus100may display virtual entity108and/or virtual avatar model104through display device128. A “display device” as used in this disclosure is a device having a screen. Display device128may include, but is not limited to, VR headsets, monitors, smartphones, laptops, mixed-reality headsets, smart glasses, and the like. In some embodiments, apparatus100may be connected to display device128through a wired and/or wireless connection. In some embodiments, apparatus100may be connected to display device128locally. In other embodiments, apparatus100may communicate virtual avatar model104to display device128through one or more computing devices, networks, and the like. Still referring toFIG.1, in some embodiments, apparatus100may generate virtual avatar model104through one or more images of user input120. Apparatus100may be in communication with and/or may include one or more cameras. As used in this disclosure, a “camera” is a device that is configured to sense electromagnetic radiation, such as without limitation visible light, and generate an image representing the electromagnetic radiation. In some cases, a camera may include one or more optics. 
Exemplary non-limiting optics include spherical lenses, aspherical lenses, reflectors, polarizers, filters, windows, aperture stops, and the like. In some cases, at least a camera may include an image sensor. Exemplary non-limiting image sensors include digital image sensors, such as without limitation charge-coupled device (CCD) sensors and complementary metal-oxide-semiconductor (CMOS) sensors, chemical image sensors, and analog image sensors, such as without limitation film. In some cases, a camera may be sensitive within a non-visible range of electromagnetic radiation, such as without limitation infrared. As used in this disclosure, “image data” is information representing at least a physical scene, space, and/or object. In some cases, image data may be generated by a camera. “Image data” may be used interchangeably through this disclosure with “image,” where image is used as a noun. An image may be optical, such as without limitation where at least an optic is used to generate an image of an object. An image may be material, such as without limitation when film is used to capture an image. An image may be digital, such as without limitation when represented as a bitmap. Alternatively, an image may be comprised of any media capable of representing a physical scene, space, and/or object. Alternatively where “image” is used as a verb, in this disclosure, it refers to generation and/or formation of an image. Still referring toFIG.1, in some embodiments, apparatus100may include a machine vision system that includes at least a camera. A machine vision system may use images from at least a camera to make a determination about a scene, space, and/or object. For example, in some cases a machine vision system may be used for world modeling or registration of objects within a space. In some cases, registration may include image processing, such as without limitation object recognition, feature detection, edge/corner detection, and the like. Non-limiting examples of feature detection may include scale invariant feature transform (SIFT), Canny edge detection, Shi Tomasi corner detection, and the like. In some cases, registration may include one or more transformations to orient a camera frame (or an image or video stream) relative to a three-dimensional coordinate system; exemplary transformations include without limitation homography transforms and affine transforms. In an embodiment, registration of first frame to a coordinate system may be verified and/or corrected using object identification and/or computer vision, as described above. For instance, and without limitation, an initial registration to two dimensions, represented for instance as registration to the x and y coordinates, may be performed using a two-dimensional projection of points in three dimensions onto a first frame. A third dimension of registration, representing depth and/or a z axis, may be detected by comparison of two frames; for instance, where first frame includes a pair of frames captured using a pair of cameras (e.g., stereoscopic camera also referred to in this disclosure as stereo-camera), image recognition and/or edge detection software may be used to detect a pair of stereoscopic views of images of an object; two stereoscopic views may be compared to derive z-axis values of points on object permitting, for instance, derivation of further z-axis points within and/or around the object using interpolation.
This may be repeated with multiple objects in field of view, including without limitation environmental features of interest identified by object classifier and/or indicated by an operator. In an embodiment, x and y axes may be chosen to span a plane common to two cameras used for stereoscopic image capturing and/or an xy plane of a first frame; as a result, x and y translational components and ϕ may be pre-populated in translational and rotational matrices, for affine transformation of coordinates of object, also as described above. Initial x and y coordinates and/or guesses at transformational matrices may alternatively or additionally be performed between first frame and second frame, as described above. For each point of a plurality of points on object and/or edge and/or edges of object as described above, x and y coordinates of a first stereoscopic frame may be populated, with an initial estimate of z coordinates based, for instance, on assumptions about object, such as an assumption that ground is substantially parallel to an xy plane as selected above. Z coordinates, and/or x, y, and z coordinates, registered using image capturing and/or object identification processes as described above may then be compared to coordinates predicted using an initial guess at transformation matrices; an error function may be computed by comparing the two sets of points, and new x, y, and/or z coordinates may be iteratively estimated and compared until the error function drops below a threshold level. In some cases, a machine vision system may use a classifier, such as any classifier described throughout this disclosure. Still referring toFIG.1, an exemplary range-imaging camera that may be included is Intel® RealSense™ D430 Module, from Intel® of Mountain View, California, U.S.A. D430 Module comprises active infrared (IR) illumination and a stereoscopic camera, having global shutters and a frame rate of up to 90 fps. D430 Module provides a field of view (FOV) of 85.2° (horizontal) by 58° (vertical) and an image resolution of 1280×720. Range-sensing camera may be operated independently by dedicated hardware or, in some cases, range-sensing camera may be operated by a computing device. In some cases, range-sensing camera may include software and firmware resources (for execution on hardware, such as without limitation dedicated hardware or a computing device). D430 Module may be operated using software resources including Intel® RealSense™ SDK 2.0, which includes open-source cross-platform libraries. Still referring toFIG.1, an exemplary machine vision camera may include an OpenMV Cam H7 from OpenMV, LLC of Atlanta, Georgia, U.S.A. OpenMV Cam comprises a small, low-power microcontroller which allows execution of machine vision applications. OpenMV Cam comprises an ARM Cortex M7 processor and a 640×480 image sensor operating at a frame rate up to 150 fps. OpenMV Cam may be programmed with Python using a Remote Python/Procedure Call (RPC) library. OpenMV Cam may be used to operate image classification and segmentation models, such as without limitation by way of TensorFlow Lite; motion detection, for example by way of frame differencing algorithms; marker detection, for example blob detection; object detection, for example face detection; eye tracking; person detection, for example by way of a trained machine learning model; camera motion detection, for example by way of optical flow detection; code (barcode) detection and decoding; image capture; and video recording.
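For illustration only, the feature detection, matching, and homography-based registration steps described above might be sketched with OpenCV as follows; the synthetic test pattern, the ORB detector (used here in place of SIFT), and the parameter values are assumptions rather than requirements of the disclosure.

# Minimal sketch: detect and match features in two frames, then estimate a homography.
import cv2
import numpy as np

# Synthetic test pattern and a translated copy stand in for two captured frames.
rng = np.random.default_rng(0)
img1 = np.zeros((480, 640), dtype=np.uint8)
for _ in range(40):
    x, y = int(rng.integers(0, 600)), int(rng.integers(0, 440))
    cv2.rectangle(img1, (x, y), (x + 30, y + 30), int(rng.integers(60, 255)), -1)
img2 = np.roll(img1, (10, 25), axis=(0, 1))      # simulate a small camera motion

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("estimated homography (approximately a 25 px / 10 px translation):")
print(H)

edges = cv2.Canny(img1, 100, 200)                # Canny edge detection, as mentioned above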
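Similarly, recovering z-axis (depth) values from a stereoscopic pair, as outlined above, might be sketched as follows; the synthetic image pair, focal length, and baseline are hypothetical values, and a real system would use calibrated, rectified frames.

# Minimal sketch: block-matching disparity, then depth = focal_length * baseline / disparity.
import cv2
import numpy as np

rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
right = np.roll(left, -8, axis=1)                 # right view shifted by a known disparity of 8 px

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # fixed-point to pixels

focal_length_px = 700.0   # assumed focal length in pixels
baseline_m = 0.06         # assumed distance between the two cameras in meters

valid = disparity > 0
depth_m = focal_length_px * baseline_m / disparity[valid]
print("median recovered disparity (px):", float(np.median(disparity[valid])))
print("median recovered depth (m):", float(np.median(depth_m)))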
Still referring toFIG.1, apparatus100may be in communication with and/or may include a stereo-camera. As used in this disclosure, a “stereo-camera” is a camera that senses two or more images from two or more vantages. As used in this disclosure, a “vantage” is a location of a camera relative a scene, space and/or object which the camera is configured to sense. In some cases, a stereo-camera may determine depth of an object in a scene as a function of parallax. As used in this disclosure, “parallax” is a difference in perceived location of a corresponding object in two or more images. An exemplary stereo-camera is TaraXL from e-con Systems, Inc of San Jose, California. TaraXL is a USB 3.0 stereo-camera which is optimized for NVIDIA® Jetson AGX Xavier™/Jetson™ TX2 and NVIDIA GPU Cards. TaraXL's accelerated Software Development Kit (TaraXL SDK) is capable of doing high quality 3D depth mapping of WVGA at a rate of up to 60 frames per second. TaraXL is based on MT9V024 stereo sensor from ON Semiconductor. Additionally, TaraXL includes a global shutter, houses 6 inertial measurement units (IMUs), and allows mounting of optics by way of an S-mount lens holder. TaraXL may operate at depth ranges of about 50 cm to about 300 cm. With continued reference toFIG.1, apparatus100may include at least an eye sensor. As used in this disclosure, an “eye sensor” is any system or device that is configured or adapted to detect an eye parameter as a function of an eye phenomenon. In some cases, at least an eye sensor may be configured to detect at least an eye parameter as a function of at least an eye phenomenon. As used in this disclosure, an “eye parameter” is an element of information associated with an eye. Exemplary non-limiting eye parameters may include blink rate, eye-tracking parameters, pupil location, gaze directions, pupil dilation, and the like. Exemplary eye parameters are described in greater detail below. In some cases, an eye parameter may be transmitted or represented by an eye signal. An eye signal may include any signal described in this disclosure. As used in this disclosure, an “eye phenomenon” may include any observable phenomenon associated with an eye, including without limitation focusing, blinking, eye-movement, and the like. In some embodiments, at least an eye sensor may include an electromyography sensor. Electromyography sensor may be configured to detect at least an eye parameter as a function of at least an eye phenomenon. Still referring toFIG.1, in some embodiments, an eye sensor may include an optical eye sensor. Optical eye sensor may be configured to detect at least an eye parameter as a function of at least an eye phenomenon. In some cases, an optical eye sensor may include a camera directed toward one or both of person's eyes. In some cases, optical eye sensor may include a light source, likewise directed to person's eyes. Light source may have a non-visible wavelength, for instance infrared or near-infrared. In some cases, a wavelength may be selected which reflects at an eye's pupil (e.g., infrared). Light that selectively reflects at an eye's pupil may be detected, for instance by camera. Images of eyes may be captured by camera. As used in this disclosure, a “camera” is a device that is configured to sense electromagnetic radiation, such as without limitation visible light, and generate an image representing the electromagnetic radiation. In some cases, a camera may include one or more optics. 
Exemplary non-limiting optics include spherical lenses, aspherical lenses, reflectors, polarizers, filters, windows, aperture stops, and the like. In some cases, at least a camera may include an image sensor. Exemplary non-limiting image sensors include digital image sensors, such as without limitation charge-coupled device (CCD) sensors and complementary metal-oxide-semiconductor (CMOS) sensors, chemical image sensors, and analog image sensors, such as without limitation film. In some cases, a camera may be sensitive within a non-visible range of electromagnetic radiation, such as without limitation infrared. As used in this disclosure, “image data” is information representing at least a physical scene, space, and/or object (e.g., a person or a person's eyes). In some cases, image data may be generated by a camera. “Image data” may be used interchangeably through this disclosure with “image,” where image is used as a noun. An image may be optical, such as without limitation where at least an optic is used to generate an image of an object. An image may be material, such as without limitation when film is used to capture an image. An image may be digital, such as without limitation when represented as a bitmap. Alternatively, an image may be comprised of any media capable of representing a physical scene, space, and/or object. Alternatively where “image” is used as a verb, in this disclosure, it refers to generation and/or formation of an image. Still referring toFIG.1, an exemplary camera is an OpenMV Cam H7 from OpenMV, LLC of Atlanta, Georgia, U.S.A. OpenMV Cam includes a small, low-power microcontroller which allows execution of processes. OpenMV Cam comprises an ARM Cortex M7 processor and a 640×480 image sensor operating at a frame rate up to 150 fps. OpenMV Cam may be programmed with Python using a Remote Python/Procedure Call (RPC) library. OpenMV Cam may be used to operate image classification and segmentation models, such as without limitation by way of TensorFlow Lite; detect motion, for example by way of frame differencing algorithms; detect markers, for example blob detection; detect objects, for example face detection; track eyes; detect persons, for example by way of a trained machine learning model; detect camera motion, for example by way of optical flow detection; detect and decode barcodes; capture images; and record video. Still referring toFIG.1, in some cases, a camera may be used to determine eye patterns (e.g., track eye movements). For instance, a camera may capture images and a processor (internal or external to the camera) may process images to track eye movements. In some embodiments, a video-based eye tracker may use corneal reflection (e.g., first Purkinje image) and a center of pupil as features to track over time. A more sensitive type of eye-tracker, a dual-Purkinje eye tracker, may use reflections from a front of cornea (i.e., first Purkinje image) and back of lens (i.e., fourth Purkinje image) as features to track. A still more sensitive method of tracking may include use of image features from inside eye, such as retinal blood vessels, and follow these features as the eye rotates. In some cases, optical methods, particularly those based on video recording, may be used for gaze-tracking and may be non-invasive and inexpensive. For instance, in some cases a relative position between a camera and a person may be known or estimable. Pupil location may be determined through analysis of images (either visible or infrared images).
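For illustration, mapping the pupil-center-to-corneal-reflection vector described above and in the calibration discussion that follows to a point of regard on a screen might be sketched with a simple calibrated linear fit; the calibration samples and screen coordinates below are hypothetical.

# Minimal sketch: affine calibration from pupil/corneal-reflection vectors to screen points.
import numpy as np

# Each row: (dx, dy) pupil-minus-corneal-reflection vector recorded during calibration.
pupil_cr_vectors = np.array([[-0.8, 0.6], [0.0, 0.6], [0.8, 0.6],
                             [-0.8, -0.6], [0.0, -0.6], [0.8, -0.6]])
# Known on-screen gaze targets (pixels) shown during calibration.
screen_points = np.array([[160, 120], [800, 120], [1440, 120],
                          [160, 780], [800, 780], [1440, 780]])

# Least-squares affine fit: [dx, dy, 1] @ A ≈ [x, y].
A, *_ = np.linalg.lstsq(
    np.hstack([pupil_cr_vectors, np.ones((6, 1))]), screen_points, rcond=None)

new_vector = np.array([0.4, 0.0, 1.0])
print("estimated point of regard:", new_vector @ A)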
In some cases, a camera may focus on one or both eyes and record eye movement as a viewer looks. In some cases, an eye-tracker may use center of pupil and infrared/near-infrared non-collimated light to create corneal reflections (CR). A vector between pupil center and corneal reflections can be used to compute a point of regard on a surface (i.e., a gaze direction). In some cases, a simple calibration procedure with an individual person may be needed before using an optical eye tracker. In some cases, two general types of infrared/near-infrared (also known as active light) eye-tracking techniques can be used: bright-pupil (light reflected by pupil) and dark-pupil (light not reflected by pupil). Difference between bright-pupil and dark-pupil images may be based on a location of illumination source with respect to optics. For instance, if illumination is coaxial with optical path, then eye may act as a retroreflector as the light reflects off retina creating a bright pupil effect similar to red eye. If illumination source is offset from optical path, then pupil may appear dark because reflection from retina is directed away from camera. In some cases, bright-pupil tracking creates greater iris/pupil contrast, allowing more robust eye-tracking with all iris pigmentation, and greatly reduces interference caused by eyelashes and other obscuring features. In some cases, bright-pupil tracking may also allow tracking in lighting conditions ranging from total darkness to very bright. Still referring toFIG.1, alternatively, in some cases, a passive light optical eye tracking method may be employed. Passive light optical eye tracking may use visible light to illuminate. In some cases, passive light optical tracking yields less contrast of pupil than with active light methods; therefore, in some cases, a center of iris may be used for calculating a gaze vector. In some cases, a center of iris determination requires detection of a boundary of iris and sclera (e.g., limbus tracking). In some cases, eyelid obstruction of iris and/or sclera may challenge calculations of an iris center. Still referring toFIG.1, some optical eye tracking systems may be head-mounted, some may require the head to be stable, and some may function remotely and automatically track the head during motion. Optical eye tracking systems may capture images at a frame rate. Exemplary frame rates include 15, 30, 60, 120, 240, 350, 1000, and 1250 Hz. In some embodiments, apparatus100may utilize an eye tracking method to determine which parts of virtual entity108a user may be looking at. Apparatus100may update virtual avatar model104as a function of an eye tracking method. For instance and without limitation, apparatus100may determine a user is looking at a hat of virtual entity108and generate one or more alternative hat choices for a user to select for virtual entity108. Still referring toFIG.1, apparatus100may use a machine vision process to generate virtual entity108. In some embodiments, user input120may include one or more images and/or videos. Images and/or videos may be captured through, but not limited to, smartphone cameras, web cameras, and/or other camera systems. In some embodiments, images and/or videos may include, but are not limited to, selfies, photos of entities, photos of inanimate objects, and the like. Apparatus100may generate virtual entity108from image data of user input120. For instance and without limitation, user input120may include a photo of a banana.
Apparatus100may generate virtual avatar model104to include an avatar body model of a banana. In some embodiments, user input120may include a selfie. Apparatus100may generate virtual avatar model104to represent a likeness of a selfie of a user. Still referring toFIG.1, user input120may include one or more task commands. A “task command” as used in this disclosure is an order for a digital character to perform an action. Task commands may include, but are not limited to, reminders, notifications, and the like. Apparatus100may utilize a task machine learning model to determine one or more task commands. A task machine learning model may be trained with training data correlating user data to one or more task commands. Training data may be received through user input, external computing devices, and/or previous iterations of processing. A task machine learning model may input user input and output one or more task commands. In some embodiments, a user may enter one or more task commands for virtual entity108to perform through apparatus100. In other embodiments, apparatus100may determine one or more task commands as a function of historical data of a user, such as timing of tasks, types of tasks, importance of tasks, and the like. Referring now toFIG.2, an exemplary embodiment of virtual entity200is illustrated. Virtual entity200may be consistent with virtual entity108as described above with reference toFIG.1. In some embodiments, virtual entity200may include body model204. Body model204may include a base model for virtual entity200to be generated from. Body model204may include, but is not limited to, animals, humans, robots, inanimate objects, and the like. For instance and without limitation, virtual entity200may include a penguin model. A user may select one or more body models204from a plurality of body models204. Selection may include, but is not limited to, clicking on one or more icons of a GUI, moving one or more sliding icons of a GUI, voice entries, and the like. In some embodiments, a user may select one or more avatar dimensions212of virtual entity200. Avatar dimensions200may include dimensions as described above. In some embodiments, avatar dimensions212may include, but are not limited to, heights, widths, lengths, appendages, and the like. In some embodiments, a user may select one or more avatar dimensions212through a GUI. In some embodiments, virtual entity200may include one or more facial features208. Facial features208may include, but are not limited to, eye spacing, eye size, eye color, eyebrow details, nose size, nose position, mouth size, mouth position, and the like. In some embodiments, facial features208may include one or more facial animations. Facial animations may include, but are not limited to, smiling, grinning, smirking, pouting, yelling, laughing, and the like. A user may select one or more facial animations208through a GUI, textual entries, and/or voice input. In some embodiments, virtual entity200may include apparel, such as, without limitation, shoes, socks, hats, shirts, jackets, bathing suits, helmets, backpacks, watches, glasses, bicycles, roller blades, skis, and the like. A user may select various apparel of virtual entity200through a GUI, textual entries, and/or voice input. Still referring toFIG.2, in some embodiments, a user may interact with virtual entity200. Interaction may include, but is not limited to, textual interaction, verbal interaction, physical interaction, and/or other interactions. 
Interactions may include speaking to virtual entity200, sending messages to virtual entity200, and the like. A user may interact with virtual entity200through AR, VR, and/or other virtual realities. In some embodiments, interaction may include performing a task with virtual entity200. A task may include, without limitation, retrieving one or more digital objects, manipulating one or more real world and/or virtual objects, and the like. A user may provide one or more digital objects to virtual entity200, such as, without limitation, digital foods, apparel, sports equipment, tools, and the like. Virtual entity200may provide one or more digital objects to a user through a GUI, VR, AR, and/or other display method. Apparatus100may determine user data as a function of user interaction with virtual entity200. A user may engage in a virtual game of catch with virtual entity200. Apparatus100may determine user data of a user to include an energetic behavioral pattern. Apparatus100may modify parameters of virtual avatar model204and/or operational model116as described above with reference toFIG.1. Referring now toFIG.3, an exemplary embodiment of a machine-learning module300that may perform one or more machine-learning processes as described in this disclosure is illustrated. Machine-learning module may perform determinations, classification, and/or analysis steps, methods, processes, or the like as described in this disclosure using machine learning processes. A “machine learning process,” as used in this disclosure, is a process that automatedly uses training data304to generate an algorithm that will be performed by a computing device/module to produce outputs308given data provided as inputs312; this is in contrast to a non-machine learning software program where the commands to be executed are determined in advance by a user and written in a programming language. Still referring toFIG.3, “training data,” as used herein, is data containing correlations that a machine-learning process may use to model relationships between two or more categories of data elements. For instance, and without limitation, training data304may include a plurality of data entries, each entry representing a set of data elements that were recorded, received, and/or generated together; data elements may be correlated by shared existence in a given data entry, by proximity in a given data entry, or the like. Multiple data entries in training data304may evince one or more trends in correlations between categories of data elements; for instance, and without limitation, a higher value of a first data element belonging to a first category of data element may tend to correlate to a higher value of a second data element belonging to a second category of data element, indicating a possible proportional or other mathematical relationship linking values belonging to the two categories. Multiple categories of data elements may be related in training data304according to various correlations; correlations may indicate causative and/or predictive links between categories of data elements, which may be modeled as relationships such as mathematical relationships by machine-learning processes as described in further detail below. Training data304may be formatted and/or organized by categories of data elements, for instance by associating data elements with one or more descriptors corresponding to categories of data elements. 
As a non-limiting example, training data304may include data entered in standardized forms by persons or processes, such that entry of a given data element in a given field in a form may be mapped to one or more descriptors of categories. Elements in training data304may be linked to descriptors of categories by tags, tokens, or other data elements; for instance, and without limitation, training data304may be provided in fixed-length formats, formats linking positions of data to categories such as comma-separated value (CSV) formats and/or self-describing formats such as extensible markup language (XML), JavaScript Object Notation (JSON), or the like, enabling processes or devices to detect categories of data. Alternatively or additionally, and continuing to refer toFIG.3, training data304may include one or more elements that are not categorized; that is, training data304may not be formatted or contain descriptors for some elements of data. Machine-learning algorithms and/or other processes may sort training data304according to one or more categorizations using, for instance, natural language processing algorithms, tokenization, detection of correlated values in raw data and the like; categories may be generated using correlation and/or other processing algorithms. As a non-limiting example, in a corpus of text, phrases making up a number “n” of compound words, such as nouns modified by other nouns, may be identified according to a statistically significant prevalence of n-grams containing such words in a particular order; such an n-gram may be categorized as an element of language such as a “word” to be tracked similarly to single words, generating a new category as a result of statistical analysis. Similarly, in a data entry including some textual data, a person's name may be identified by reference to a list, dictionary, or other compendium of terms, permitting ad-hoc categorization by machine-learning algorithms, and/or automated association of data in the data entry with descriptors or into a given format. The ability to categorize data entries automatedly may enable the same training data304to be made applicable for two or more distinct machine-learning algorithms as described in further detail below. Training data304used by machine-learning module300may correlate any input data as described in this disclosure to any output data as described in this disclosure. As a non-limiting illustrative example, inputs may include user data and outputs may include behavioral parameters. Further referring toFIG.3, training data may be filtered, sorted, and/or selected using one or more supervised and/or unsupervised machine-learning processes and/or models as described in further detail below; such models may include without limitation a training data classifier316. Training data classifier316may include a “classifier,” which as used in this disclosure is a machine-learning model as defined below, such as a mathematical model, neural net, or program generated by a machine learning algorithm known as a “classification algorithm,” as described in further detail below, that sorts inputs into categories or bins of data, outputting the categories or bins of data and/or labels associated therewith. A classifier may be configured to output at least a datum that labels or otherwise identifies a set of data that are clustered together, found to be close under a distance metric as described below, or the like. 
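As a hedged, minimal sketch of a training data classifier of the kind described above, the following Python fragment correlates toy user-data feature vectors with behavioral-parameter bins and fits a k-nearest-neighbors classifier, one of the classification algorithms named below. The feature encoding and label names are illustrative assumptions rather than details taken from this disclosure.

```python
# Hedged sketch of a training data classifier. The numeric encoding of user
# data (messages per hour, average session minutes, emoji rate) and the
# behavioral-parameter labels are illustrative assumptions only.
from sklearn.neighbors import KNeighborsClassifier

# Each training entry correlates user data with a behavioral-parameter bin.
training_entries = [
    ([12.0, 45.0, 0.30], "energetic"),
    ([2.0, 10.0, 0.05], "reserved"),
    ([9.0, 38.0, 0.25], "energetic"),
    ([1.0, 6.0, 0.02], "reserved"),
]
X = [features for features, _label in training_entries]
y = [label for _features, label in training_entries]

# Any classification algorithm could be substituted for k-nearest neighbors.
classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(X, y)

new_user_data = [[10.0, 40.0, 0.28]]
print(classifier.predict(new_user_data))  # e.g., ['energetic']
```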
Machine-learning module300may generate a classifier using a classification algorithm, defined as a process whereby a computing device and/or any module and/or component operating thereon derives a classifier from training data304. Classification may be performed using, without limitation, linear classifiers such as logistic regression and/or naive Bayes classifiers, nearest neighbor classifiers such as k-nearest neighbors classifiers, support vector machines, least squares support vector machines, Fisher's linear discriminant, quadratic classifiers, decision trees, boosted trees, random forest classifiers, learning vector quantization, and/or neural network-based classifiers. As a non-limiting example, training data classifier316may classify elements of training data to behavioral parameters. Still referring toFIG.3, machine-learning module300may be configured to perform a lazy-learning process320and/or protocol, which may alternatively be referred to as a "lazy loading" or "call-when-needed" process and/or protocol, and which may be a process whereby machine learning is conducted upon receipt of an input to be converted to an output, by combining the input and training set to derive the algorithm to be used to produce the output on demand. For instance, an initial set of simulations may be performed to cover an initial heuristic and/or "first guess" at an output and/or relationship. As a non-limiting example, an initial heuristic may include a ranking of associations between inputs and elements of training data304. A heuristic may include selecting some number of highest-ranking associations and/or training data304elements. Lazy learning may implement any suitable lazy learning algorithm, including without limitation a K-nearest neighbors algorithm, a lazy naïve Bayes algorithm, or the like; persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various lazy-learning algorithms that may be applied to generate outputs as described in this disclosure, including without limitation lazy learning applications of machine-learning algorithms as described in further detail below. Alternatively or additionally, and with continued reference toFIG.3, machine-learning processes as described in this disclosure may be used to generate machine-learning models324. A "machine-learning model," as used in this disclosure, is a mathematical and/or algorithmic representation of a relationship between inputs and outputs, as generated using any machine-learning process including without limitation any process as described above, and stored in memory; an input is submitted to a machine-learning model324once created, which generates an output based on the relationship that was derived. For instance, and without limitation, a linear regression model, generated using a linear regression algorithm, may compute a linear combination of input data using coefficients derived during machine-learning processes to calculate an output datum. As a further non-limiting example, a machine-learning model324may be generated by creating an artificial neural network, such as a convolutional neural network comprising an input layer of nodes, one or more intermediate layers, and an output layer of nodes. 
Connections between nodes may be created via the process of "training" the network, in which elements from a training data304set are applied to the input nodes, and a suitable training algorithm (such as Levenberg-Marquardt, conjugate gradient, simulated annealing, or other algorithms) is then used to adjust the connections and weights between nodes in adjacent layers of the neural network to produce the desired values at the output nodes. This process is sometimes referred to as deep learning. Still referring toFIG.3, machine-learning algorithms may include at least a supervised machine-learning process328. At least a supervised machine-learning process328, as defined herein, includes algorithms that receive a training set relating a number of inputs to a number of outputs, and seek to find one or more mathematical relations relating inputs to outputs, where each of the one or more mathematical relations is optimal according to some criterion specified to the algorithm using some scoring function. For instance, a supervised learning algorithm may include user input as described above as inputs, virtual avatar models as outputs, and a scoring function representing a desired form of relationship to be detected between inputs and outputs; a scoring function may, for instance, seek to maximize the probability that a given input and/or combination of input elements is associated with a given output, and/or to minimize the probability that a given input is not associated with a given output. A scoring function may be expressed as a risk function representing an "expected loss" of an algorithm relating inputs to outputs, where loss is computed as an error function representing a degree to which a prediction generated by the relation is incorrect when compared to a given input-output pair provided in training data304. Persons skilled in the art, upon reviewing the entirety of this disclosure, will be aware of various possible variations of at least a supervised machine-learning process328that may be used to determine a relation between inputs and outputs. Supervised machine-learning processes may include classification algorithms as defined above. Further referring toFIG.3, machine learning processes may include at least an unsupervised machine-learning process332. An unsupervised machine-learning process, as used herein, is a process that derives inferences in datasets without regard to labels; as a result, an unsupervised machine-learning process may be free to discover any structure, relationship, and/or correlation provided in the data. Unsupervised processes may not require a response variable; unsupervised processes may be used to find interesting patterns and/or inferences between variables, to determine a degree of correlation between two or more variables, or the like. Still referring toFIG.3, machine-learning module300may be designed and configured to create a machine-learning model324using techniques for development of linear regression models. Linear regression models may include ordinary least squares regression, which aims to minimize the square of the difference between predicted outcomes and actual outcomes according to an appropriate norm for measuring such a difference (e.g. a vector-space distance norm); coefficients of the resulting linear equation may be modified to improve minimization. 
Linear regression models may include ridge regression methods, where the function to be minimized includes the least-squares function plus a term multiplying the square of each coefficient by a scalar amount to penalize large coefficients. Linear regression models may include least absolute shrinkage and selection operator (LASSO) models, in which ridge regression is combined with multiplying the least-squares term by a factor of 1 divided by double the number of samples. Linear regression models may include a multi-task lasso model wherein the norm applied in the least-squares term of the lasso model is the Frobenius norm amounting to the square root of the sum of squares of all terms. Linear regression models may include the elastic net model, a multi-task elastic net model, a least angle regression model, a LARS lasso model, an orthogonal matching pursuit model, a Bayesian regression model, a logistic regression model, a stochastic gradient descent model, a perceptron model, a passive aggressive algorithm, a robustness regression model, a Huber regression model, or any other suitable model that may occur to persons skilled in the art upon reviewing the entirety of this disclosure. Linear regression models may be generalized in an embodiment to polynomial regression models, whereby a polynomial equation (e.g. a quadratic, cubic or higher-order equation) providing a best predicted output/actual output fit is sought; similar methods to those described above may be applied to minimize error functions, as will be apparent to persons skilled in the art upon reviewing the entirety of this disclosure. Continuing to refer toFIG.3, machine-learning algorithms may include, without limitation, linear discriminant analysis. Machine-learning algorithms may include quadratic discriminant analysis. Machine-learning algorithms may include kernel ridge regression. Machine-learning algorithms may include support vector machines, including without limitation support vector classification-based regression processes. Machine-learning algorithms may include stochastic gradient descent algorithms, including classification and regression algorithms based on stochastic gradient descent. Machine-learning algorithms may include nearest neighbors algorithms. Machine-learning algorithms may include various forms of latent space regularization such as variational regularization. Machine-learning algorithms may include Gaussian processes such as Gaussian Process Regression. Machine-learning algorithms may include cross-decomposition algorithms, including partial least squares and/or canonical correlation analysis. Machine-learning algorithms may include naïve Bayes methods. Machine-learning algorithms may include algorithms based on decision trees, such as decision tree classification or regression algorithms. Machine-learning algorithms may include ensemble methods such as bagging meta-estimator, forest of randomized trees, AdaBoost, gradient tree boosting, and/or voting classifier methods. Machine-learning algorithms may include neural net algorithms, including convolutional neural net processes. Referring now toFIG.4, a flowchart of method400of generating a virtual avatar is illustrated. At step405, method400includes generating a virtual avatar model. Generating a virtual avatar model may include generating a virtual avatar model through a machine vision process. This step may be implemented as described above with reference toFIGS.1-3, without limitation. 
Still referring toFIG.4, at step410, method400includes receiving user input. User input may be received at an apparatus locally, remotely, and/or a combination thereof. This step may be implemented as described above with reference toFIGS.1-3, without limitation. Still referring toFIG.4, at step415, method400includes modifying at least a portion of a virtual avatar model. Modification may include altering a visual, behavioral, and/or other aspect of a virtual entity of a virtual avatar model. This step may be implemented as described above with reference toFIGS.1-3, without limitation. Still referring toFIG.4, at step420, method400includes displaying a virtual avatar model. A virtual avatar model may be displayed in AR, VR, and/or other virtual realities. Displaying a virtual avatar model may include displaying the virtual avatar model through one or more screens, such as, without limitation, smartphones, laptops, monitors, tablets, and the like. This step may be implemented as described above with reference toFIGS.1-3, without limitation. With continued reference toFIG.4, in some embodiments, method400may include a step of determining image data from the user data. This step may be implemented as described above with reference toFIGS.1-3, without limitation. In some embodiments, generating the virtual avatar model comprises generating the virtual avatar model through a machine vision process as a function of the image data. In some embodiments, method400may include a step of generating a chatbot, wherein the chatbot is configured to communicate textual data from user input to the at least a processor. This step may be implemented as described above with reference toFIGS.1-3, without limitation. In some embodiments, modifying the at least a portion of the virtual avatar model of step415may be a function of textual data received from the chatbot. In some embodiments, method400may include a step of determining behavioral patterns as a function of engagement of the virtual entity with a user. This step may be implemented as described above with reference toFIGS.1-3, without limitation. In some embodiments, method400may include a step of determining objectives of the virtual entity as a function of the user input. This step may be implemented as described above with reference toFIGS.1-3, without limitation. In some embodiments, method400may include a step of determining, by the processor, a behavioral status of a user. This step may be implemented as described above with reference toFIGS.1-3, without limitation. In some embodiments, modifying the at least a portion of the virtual avatar model of step415may be a function of the behavioral status of the user. In some embodiments, method400may further include a step of receiving training data correlating user data to virtual avatar model parameters. This step may be implemented as described above with reference toFIGS.1-3, without limitation. In some embodiments, method400may include a step of training a virtual avatar machine learning model with the training data. This step may be implemented as described above with reference toFIGS.1-3, without limitation. In some embodiments the virtual avatar machine learning model may be configured to input user data and output virtual avatar model parameters. In some embodiments, modifying at least a portion of the virtual avatar model of step415may include modifying at least a portion of the virtual avatar model as a function of the virtual avatar machine learning model. 
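A hedged, high-level sketch of the flow of method 400 (steps 405 through 420) is shown below. Every function and data structure name is a hypothetical placeholder standing in for the modules described above and is not an implementation taken from this disclosure.

```python
# Hedged sketch of the method 400 flow; all names are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class VirtualAvatarModel:
    body_model: str = "body_model_humanoid"
    parameters: dict = field(default_factory=dict)


def generate_virtual_avatar_model(image_data: Optional[bytes]) -> VirtualAvatarModel:
    # Step 405: generate a virtual avatar model, e.g., through a machine vision
    # process over user-supplied image data.
    return VirtualAvatarModel()


def receive_user_input() -> dict:
    # Step 410: receive user input locally and/or remotely (text, voice, images,
    # GUI events); a fixed textual entry keeps the sketch runnable.
    return {"text": "give the avatar a red hat"}


def modify_model(model: VirtualAvatarModel, user_input: dict) -> VirtualAvatarModel:
    # Step 415: modify at least a portion of the model as a function of the
    # user input (e.g., textual data routed through a chatbot).
    if "hat" in user_input.get("text", ""):
        model.parameters["hat"] = "red"
    return model


def display_model(model: VirtualAvatarModel) -> None:
    # Step 420: display the model in AR, VR, or on a conventional screen.
    print(f"Rendering avatar: {model}")


if __name__ == "__main__":
    avatar = generate_virtual_avatar_model(image_data=None)
    avatar = modify_model(avatar, receive_user_input())
    display_model(avatar)
```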
It is to be noted that any one or more of the aspects and embodiments described herein may be conveniently implemented using one or more machines (e.g., one or more computing devices that are utilized as a user computing device for an electronic document, one or more server devices, such as a document server, etc.) programmed according to the teachings of the present specification, as will be apparent to those of ordinary skill in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those of ordinary skill in the software art. Aspects and implementations discussed above employing software and/or software modules may also include appropriate hardware for assisting in the implementation of the machine executable instructions of the software and/or software module. Such software may be a computer program product that employs a machine-readable storage medium. A machine-readable storage medium may be any medium that is capable of storing and/or encoding a sequence of instructions for execution by a machine (e.g., a computing device) and that causes the machine to perform any one of the methodologies and/or embodiments described herein. Examples of a machine-readable storage medium include, but are not limited to, a magnetic disk, an optical disc (e.g., CD, CD-R, DVD, DVD-R, etc.), a magneto-optical disk, a read-only memory “ROM” device, a random access memory “RAM” device, a magnetic card, an optical card, a solid-state memory device, an EPROM, an EEPROM, and any combinations thereof. A machine-readable medium, as used herein, is intended to include a single medium as well as a collection of physically separate media, such as, for example, a collection of compact discs or one or more hard disk drives in combination with a computer memory. As used herein, a machine-readable storage medium does not include transitory forms of signal transmission. Such software may also include information (e.g., data) carried as a data signal on a data carrier, such as a carrier wave. For example, machine-executable information may be included as a data-carrying signal embodied in a data carrier in which the signal encodes a sequence of instruction, or portion thereof, for execution by a machine (e.g., a computing device) and any related information (e.g., data structures and data) that causes the machine to perform any one of the methodologies and/or embodiments described herein. Examples of a computing device include, but are not limited to, an electronic book reading device, a computer workstation, a terminal computer, a server computer, a handheld device (e.g., a tablet computer, a smartphone, etc.), a web appliance, a network router, a network switch, a network bridge, any machine capable of executing a sequence of instructions that specify an action to be taken by that machine, and any combinations thereof. In one example, a computing device may include and/or be included in a kiosk. FIG.5shows a diagrammatic representation of one embodiment of a computing device in the exemplary form of a computer system500within which a set of instructions for causing a control system to perform any one or more of the aspects and/or methodologies of the present disclosure may be executed. 
It is also contemplated that multiple computing devices may be utilized to implement a specially configured set of instructions for causing one or more of the devices to perform any one or more of the aspects and/or methodologies of the present disclosure. Computer system500includes a processor504and a memory508that communicate with each other, and with other components, via a bus512. Bus512may include any of several types of bus structures including, but not limited to, a memory bus, a memory controller, a peripheral bus, a local bus, and any combinations thereof, using any of a variety of bus architectures. Still referring toFIG.5, processor504may include any suitable processor, such as without limitation a processor incorporating logical circuitry for performing arithmetic and logical operations, such as an arithmetic and logic unit (ALU), which may be regulated with a state machine and directed by operational inputs from memory and/or sensors; processor504may be organized according to Von Neumann and/or Harvard architecture as a non-limiting example. Processor504may include, incorporate, and/or be incorporated in, without limitation, a microcontroller, microprocessor, digital signal processor (DSP), Field Programmable Gate Array (FPGA), Complex Programmable Logic Device (CPLD), Graphical Processing Unit (GPU), general purpose GPU, Tensor Processing Unit (TPU), analog or mixed signal processor, Trusted Platform Module (TPM), a floating point unit (FPU), and/or system on a chip (SoC). Still referring toFIG.5, memory508may include various components (e.g., machine-readable media) including, but not limited to, a random-access memory component, a read only component, and any combinations thereof. In one example, a basic input/output system516(BIOS), including basic routines that help to transfer information between elements within computer system500, such as during start-up, may be stored in memory508. Memory508may also include (e.g., stored on one or more machine-readable media) instructions (e.g., software)520embodying any one or more of the aspects and/or methodologies of the present disclosure. In another example, memory508may further include any number of program modules including, but not limited to, an operating system, one or more application programs, other program modules, program data, and any combinations thereof. Still referring toFIG.5, computer system500may also include a storage device524. Examples of a storage device (e.g., storage device524) include, but are not limited to, a hard disk drive, a magnetic disk drive, an optical disc drive in combination with an optical medium, a solid-state memory device, and any combinations thereof. Storage device524may be connected to bus512by an appropriate interface (not shown). Example interfaces include, but are not limited to, SCSI, advanced technology attachment (ATA), serial ATA, universal serial bus (USB), IEEE 1394 (FIREWIRE), and any combinations thereof. In one example, storage device524(or one or more components thereof) may be removably interfaced with computer system500(e.g., via an external port connector (not shown)). Particularly, storage device524and an associated machine-readable medium528may provide nonvolatile and/or volatile storage of machine-readable instructions, data structures, program modules, and/or other data for computer system500. In one example, software520may reside, completely or partially, within machine-readable medium528. 
In another example, software520may reside, completely or partially, within processor504. Still referring toFIG.5, computer system500may also include an input device532. In one example, a user of computer system500may enter commands and/or other information into computer system500via input device532. Examples of an input device532include, but are not limited to, an alpha-numeric input device (e.g., a keyboard), a pointing device, a joystick, a gamepad, an audio input device (e.g., a microphone, a voice response system, etc.), a cursor control device (e.g., a mouse), a touchpad, an optical scanner, a video capture device (e.g., a still camera, a video camera), a touchscreen, and any combinations thereof. Input device532may be interfaced to bus512via any of a variety of interfaces (not shown) including, but not limited to, a serial interface, a parallel interface, a game port, a USB interface, a FIREWIRE interface, a direct interface to bus512, and any combinations thereof. Input device532may include a touch screen interface that may be a part of or separate from display536, discussed further below. Input device532may be utilized as a user selection device for selecting one or more graphical representations in a graphical interface as described above. Still referring toFIG.5, a user may also input commands and/or other information to computer system500via storage device524(e.g., a removable disk drive, a flash drive, etc.) and/or network interface device540. A network interface device, such as network interface device540, may be utilized for connecting computer system500to one or more of a variety of networks, such as network544, and one or more remote devices548connected thereto. Examples of a network interface device include, but are not limited to, a network interface card (e.g., a mobile network interface card, a LAN card), a modem, and any combination thereof. Examples of a network include, but are not limited to, a wide area network (e.g., the Internet, an enterprise network), a local area network (e.g., a network associated with an office, a building, a campus or other relatively small geographic space), a telephone network, a data network associated with a telephone/voice provider (e.g., a mobile communications provider data and/or voice network), a direct connection between two computing devices, and any combinations thereof. A network, such as network544, may employ a wired and/or a wireless mode of communication. In general, any network topology may be used. Information (e.g., data, software520, etc.) may be communicated to and/or from computer system500via network interface device540. Still referring toFIG.5, computer system500may further include a video display adapter552for communicating a displayable image to a display device, such as display device536. Examples of a display device include, but are not limited to, a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof. Display adapter552and display device536may be utilized in combination with processor504to provide graphical representations of aspects of the present disclosure. In addition to a display device, computer system500may include one or more other peripheral output devices including, but not limited to, an audio speaker, a printer, and any combinations thereof. Such peripheral output devices may be connected to bus512via a peripheral interface556. 
Examples of a peripheral interface include, but are not limited to, a serial port, a USB connection, a FIREWIRE connection, a parallel connection, and any combinations thereof. The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments, what has been described herein is merely illustrative of the application of the principles of the present invention. Additionally, although particular methods herein may be illustrated and/or described as being performed in a specific order, the ordering is highly variable within ordinary skill to achieve methods, systems, and software according to the present disclosure. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention. Exemplary embodiments have been disclosed above and illustrated in the accompanying drawings. It will be understood by those skilled in the art that various changes, omissions and additions may be made to that which is specifically disclosed herein without departing from the spirit and scope of the present invention.
11861779
DETAILED DESCRIPTION Overview Animations that depict movement of a digital object that mimics movement of a subject captured from digital images are found in social media platforms, prime-time television programs, and so on. In one example, a digital camera is used to capture digital images of a user, e.g., as part of a digital video stream. Motion tracking is then used to generate frames of an animation that exhibit movement that follows the user's movement captured in the digital images. Conventional techniques to do so, however, face numerous technical challenges. In one conventional example, movement of the digital object is achieved by rotations to "bones" specified for corresponding portions of the digital object. However, in some instances this causes visual artifacts, e.g., when rotations used to address changes in a z-axis in a two-dimensional digital object result in unnatural bends to portions of the object. In another example, the digital object appears to float over a surface as errors in generating the digital object accumulate over time, thereby giving an unnatural appearance that is often hidden in practice by avoiding display of lower portions of the object. Accordingly, digital object animation techniques are described that overcome these technical challenges to improve accuracy and efficiency in computational resource consumption. In a first example, translation-based animation of the digital object operates using control points (e.g., warp handles) of the digital object. Calibration data is generated that defines positional offsets of the control points of a digital object with respect to feature positions of a subject captured in a calibration digital image. This is used to define a baseline that indicates correspondence between features (e.g., eyes, shoulders, elbows, hands, etc.) and scaling of these features of the subject with respect to control points of the digital object. Frames of the animation are then generated by scaling the positional offsets based on changes to the feature positions captured in subsequent digital images. In this way, the techniques described herein overcome the challenges and inaccuracies caused by conventional use of rotation-based techniques that introduce errors and are resource intensive, thus improving operation of computing devices that implement these techniques. Additional functionality is also implemented by the techniques and systems described herein to improve digital object animation generation. In a second example, the animation system is configured to minimize a number of feature positions that are used to generate the animation. This supports generation of animations for a "close in" subject (e.g., when sitting at a desk) in which an entirety of the subject is not visible by estimating proportions and scaling positional offsets for features that are not visible based on the features that are, e.g., by sequentially progressing through a control point hierarchy. In a third example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. This is performed, for instance, by detecting a scale of feature positions included as part of a face or shoulders of the subject, which is then compared with positional offsets of control points from the calibration data that correspond to these feature positions to generate a global scale factor. 
This global scale factor is thus usable to "factor out" distance from the digital camera to improve consistency in animation of the digital object and reduce artifacts. In the above example, the global scale factor is based on detection of feature positions in a face of the subject. Because of this, accuracy of this detection has a direct effect on overall accuracy in the generation of the animation as a whole. In order to improve this accuracy, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. This improves accuracy and overcomes issues encountered in real-world scenarios in which face tracking drifts away from the subject as captured by the digital images, e.g., due to background textures, inclusion of faces on other articles such as depicted on a T-shirt of the subject, and so forth. Digital object animation is employed in a wide range of scenarios, including scenarios involving different positioning of the subject in relation to a digital camera, e.g., distance from the digital camera. Changes in this distance cause differences in scale between the features as described above as well as differences in feature visibility that serve as a basis to form the animation of the digital object in successive frames. In order to address these different scenarios, the animation system supports a plurality of modes used to generate changes to the digital object. A full-body mode, for instance, is usable by the animation system when portions of a full body of the subject are visible. This causes the animation system to scale the positional offsets in a hierarchy starting at a middle of the digital object and proceeding outward, e.g., from the waist and proceeding outward to the head and the feet of the digital object. This minimizes accumulation of error in animation of the digital object as opposed to conventional techniques that are limited to a top/down approach that causes "float" as error accumulates over time. On the other hand, when a middle of the subject is not visible, an upper body mode is employed in which the positional offsets are scaled in a hierarchy beginning at a top (e.g., head) of the digital object and progressing downward. In this way, the animation system adapts to differences in visibility of the subject to improve accuracy in generation of the animation of the digital object and overcome inaccuracies encountered in conventional techniques. As also described above, conventional techniques are challenged with inaccuracies that cause an animation of a digital object to "float" over a surface over time. This is caused, typically, by error accumulation over time and use of a top/down hierarchy. Accordingly, the animation system employs techniques in which vertical offsets of a base of the digital object (e.g., feet) are defined directly from positional offsets scaled by a global scale factor as described above. The animation system is also configured to employ techniques using a friction term that limits movement of feature positions based on contact with a ground plane and constrains these positions to occur above the ground plane. This promotes realism and reduces artifacts such as "foot skating" caused by conventional techniques. Further discussion of these and other examples is included in the following sections and shown in corresponding figures. 
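Before turning to that discussion, the ground-contact idea outlined above can be illustrated with a hedged sketch: the snippet below constrains a base feature position (e.g., a foot) to lie on or above a ground plane and applies a simple friction-like damping to horizontal movement while the point is in contact. The threshold and friction values and the blending scheme are illustrative assumptions, not parameters taken from this disclosure.

```python
# Hedged sketch of ground-plane contact handling for a base control point.
# Values for the contact threshold and friction term are illustrative only.

def constrain_to_ground(prev_xy, new_xy, ground_y=0.0,
                        contact_threshold=0.02, friction=0.8):
    """Return an adjusted (x, y) position for a base point such as a foot.

    Coordinates assume y increases upward and ground_y is the ground plane;
    prev_xy is the point's position in the previous frame.
    """
    x_prev, _y_prev = prev_xy
    x_new, y_new = new_xy

    # Constrain the base to occur on or above the ground plane.
    y_new = max(y_new, ground_y)

    # Within a small threshold of the plane, treat the point as in contact:
    # damp horizontal movement (reduces "foot skating" and jitter) and snap
    # the vertical position to the plane.
    if y_new - ground_y <= contact_threshold:
        x_new = x_prev + (1.0 - friction) * (x_new - x_prev)
        y_new = ground_y

    return (x_new, y_new)


# A foot pushed slightly below the plane by tracking noise stays planted
# instead of sliding: the result is approximately (1.01, 0.0).
print(constrain_to_ground(prev_xy=(1.00, 0.0), new_xy=(1.05, -0.01)))
```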
In the following discussion, an example environment is described that employs the techniques described herein. Example procedures are also described that are performable in the example environment as well as other environments. Consequently, performance of the example procedures is not limited to the example environment and the example environment is not limited to performance of the example procedures. Example Environment FIG.1is an illustration of a digital medium environment100in an example implementation that is operable to employ digital object techniques described herein. The illustrated environment100includes a computing device102, which is configurable in a variety of ways. The computing device102, for instance, is configurable as a desktop computer, a laptop computer, a mobile device (e.g., assuming a handheld configuration such as a tablet or mobile phone as illustrated), and so forth. Thus, the computing device102ranges from full resource devices with substantial memory and processor resources (e.g., personal computers, game consoles) to a low-resource device with limited memory and/or processing resources (e.g., mobile devices). Additionally, although a single computing device102is shown, the computing device102is also representative of a plurality of different devices, such as multiple servers utilized by a business to perform operations "over the cloud" as described inFIG.14. The computing device102is illustrated as including a digital camera104that is configured to capture a digital image106. The digital camera104, for instance, includes a charge-coupled device (CCD) as a sensor that is configured to generate the digital image106as a collection of pixels. The digital image106is illustrated as being communicated to an image processing system108. The image processing system108is implemented at least partially in hardware of the computing device102to process and transform the digital image106. Such processing includes creation of the digital image106, modification of the digital image106, and rendering of the digital image106in a user interface110for output, e.g., by a display device112. Although illustrated as implemented locally at the computing device102, functionality of the image processing system108is also configurable in whole or in part via functionality available via the network114, such as part of a web service or "in the cloud." An example of functionality incorporated by the image processing system108to process the digital image106is illustrated as an animation system116. The animation system116is representative of functionality to generate an animation118by processing the digital image106to configure digital objects120for respective frames122of the animation, which is illustrated as stored in a storage device124. The digital camera104, for instance, captures a digital image106of a subject126in a physical environment128, e.g., the "real world." From this, the animation system116configures a digital object120based on the subject126, e.g., to configure corresponding portions of a body to mimic a pose of the subject126. This is performed by retargeting control points of the digital object120to form a retargeted digital object140based on correspondence with feature positions of the subject126. Feature positions of the subject126(e.g., a chin, shoulders, elbows, corners of the mouth, and so on), for instance, are mapped to corresponding control points of the digital object120. 
Changes to positions of these features (i.e., feature positions) over successive digital images106are then used to retarget the control points of the digital object120(e.g., through translation) to generate respective frames122of the animation118. Although humanoid subjects126and digital objects120that are generally humanoid are described in the following discussion, subjects126and digital objects120are each configurable as a variety of non-humanoid objects, e.g., a beach ball, automobile, dog, and so forth. Further discussion of these and other examples is included in the following section and shown in corresponding figures. In general, functionality, features, and concepts described in relation to the examples above and below are employed in the context of the example procedures described in this section. Further, functionality, features, and concepts described in relation to different figures and examples in this document are interchangeable among one another and are not limited to implementation in the context of a particular figure or procedure. Moreover, blocks associated with different representative procedures and corresponding figures herein are applicable together and/or combinable in different ways. Thus, individual functionality, features, and concepts described in relation to different example environments, devices, components, figures, and procedures herein are usable in any suitable combinations and are not limited to the particular combinations represented by the enumerated examples in this description. Digital Object Animation The following discussion describes digital object animation techniques that are implementable utilizing the previously described systems and devices. Aspects of each of the procedures are implemented in hardware, firmware, software, or a combination thereof. The procedures are shown as a set of blocks that specify operations performed by one or more devices and are not necessarily limited to the orders shown for performing the operations by the respective blocks. In portions of the following discussion, reference will be made toFIGS.1-12. During the discussion, reference is also made in parallel to an example procedure500ofFIG.5. FIG.2depicts a system200in an example implementation showing operation of the animation system116in greater detail as generating calibration data (block502). This example begins by capturing a digital image106of a calibration pose202of a subject126using a digital camera104. The calibration pose202is specified as having a sufficient number of features of the subject126viewable to infer an overall structure of the subject126as well as positional offsets between the features of the subject126and control points of the digital object120. The calibration pose202, for instance, may include a threshold number of features that are usable to infer other features and distances of these features with respect to each other that are not currently viewable in the digital image106. As shown in an example300ofFIG.3, a digital object120is illustrated in which control points are represented using black circles. The control points correspond to joints and facial features of the digital object that are moveable to mimic movement detected in the subject. The warp handles, for instance are included within a mesh such that changes to a location of the warp handles cause a corresponding change to vertices of the mesh. 
Therefore, generation of positional offsets based on the calibration feature positions206is usable to translate the control points210to cause the digital object120to mimic the subject, and more particularly a calibration pose202of the subject. The digital image106, for instance, is provided as an input to a motion detection system204of the image processing system108. The motion detection system204is configured to identify features and positions of the features for use in calibration, which is represented as calibration feature positions206inFIG.2. Features of the subject126include features of a body, such as joints (e.g., elbow, knee), bottoms of feet, top of head, and so forth. The body feature positions are usable to construct a skeleton that joins these features together. The calibration feature positions206also include facial features, such as corners of the mouth, eyes, tip of nose, ears, chin, and so on. The calibration feature positions206are then provided as an output to the animation system116. The animation system116includes a digital object input module208that is configured to obtain a digital object120(e.g., from a storage device124) and from this identify control points210included in the digital object120. The control points210are usable to define an overall shape, position, and scale of parts of the digital object120. In one example, the identified control points210are configured as warp handles of a mesh and the digital object120is configured as a puppet. Warp handles are moveable to manipulate the shape of the digital object120. Movement of the warp handles is used, for instance, as a basis to warp an underlying mesh of the digital object120. This is usable to warp particular portions of the digital object120, the digital object120as a whole, and so forth. Identification of the control points210includes examining data for predefined control points210included as part of the digital object120, output of the digital object120in the user interface110for manual specification via user inputs, and so forth. A calibration module212is then employed to generate calibration data214based on the control points210and calibration feature positions206. To do so, an offset determination module216is utilized to generate positional offsets218for the identified control points210based on the calibration feature positions206of the calibration pose202. This operates as a mapping between the identified control points210of the digital object120to be animated and the calibration feature positions206of the subject126as positional offsets. In this way, subsequent movement of the subject126in subsequent digital images is transferred to the digital object120by retargeting the control points210by scaling the positional offsets218, further discussion of which is included below. FIG.4depicts a system400in an example implementation showing operation of the animation system116in greater detail as generating a frame122of an animation118based on a subject126captured in a digital image106. This example continues by receiving calibration data214defining positional offsets218between calibration feature positions206of a subject and control points210of a digital object120to be animated (block504). A digital camera104is used to capture a digital image106of a subject126(block506), e.g., subsequent to the capture of the digital image106used for the calibration techniques described above. 
The motion detection system204is employed to generate input feature positions402detected from the subject126captured in the digital image106, which are then received by the animation system116(block508). The motion detection system204, for instance, is configured to differentiate the subject126from a background of the digital image106, and then identify features from the subject126. This includes body features as described above such as joints, shoulders, knees, elbows, hands, ankles, head, and so on. This also includes facial features such as a corner of a mouth, tip of nose, eyebrows, chin, jawline, and so forth. The input feature positions402thus represent both the features and identify a position of the features, e.g., in two-dimensional or three-dimensional space. The digital object input module208is also utilized to obtain control points210(e.g., to identify warp handles) from the digital object120as described above. The control points210and input feature positions402are then passed as inputs to a retargeting module404. The retargeting module404is configured to retarget the control points210to generate retargeted control points406by scaling the positional offsets218of the calibration data214based on the input feature positions402(block510). The retargeted control points406, for instance, are configurable as warp handles associated with an underlying mesh. Movement of the warp handles thus causes a corresponding warp to the mesh, e.g., as continuing across vertices of the mesh until another control point is reached. Therefore, in this example the retargeting module404is configured to generate the retargeted control points406by scaling the positional offsets218for the control points210based on respective input feature positions402. The approach supports an ability to "hallucinate" three-dimensional movement and is robust to various digital object styles having extreme body proportions, which is not possible in conventional techniques. As shown in an example implementation600ofFIG.6, a first usage scenario602is depicted of an input digital image604and a frame606of an animation generated using a conventional technique. The first usage scenario602is an example of a rotation-based approach. As previously described, rotation-based approaches rely on "bones" that are joined together to form a skeleton, which are then rotated. However, this rotation may cause visual artifacts as shown in an expanded view608, e.g., when feature positions are located close together in order to give an appearance of depth in a two-dimensional digital image. In the illustrated example, a forearm of the digital object includes an unnatural bend resulting from the rotation of a forearm toward the digital camera104. In a second usage scenario610in which the described translation and scaling based retargeting techniques are employed, these artifacts are avoided. Forearms pointed towards a digital camera104in a subject captured by a digital image612, for instance, result in a frame614of an animation having a natural appearance that supports an appearance of depth of respective portions of the digital object120in relation to each other. In this way, the techniques described herein overcome the technical challenges of conventional techniques to avoid inclusion of visual artifacts and support a realistic appearance. Returning again toFIG.4, the retargeted control points406are received as an input by a frame generation module408to generate a frame122of the animation118that includes a retargeted digital object410(block512). 
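The translation-based retargeting just described can be summarized with a hedged sketch: positional offsets are recorded at calibration and, for each subsequent frame, scaled and added to the newly detected feature positions. The two helper functions and the single uniform scale used here are illustrative simplifications and are not an implementation taken from this disclosure.

```python
# Hedged sketch of translation-based retargeting of control points (e.g.,
# warp handles). A single uniform scale is used for simplicity; per-segment
# scales could be substituted.

def compute_positional_offsets(control_points, calibration_features):
    """control_points and calibration_features map a name to an (x, y) pair."""
    return {
        name: (cp[0] - calibration_features[name][0],
               cp[1] - calibration_features[name][1])
        for name, cp in control_points.items()
    }


def retarget_control_points(input_features, offsets, scale=1.0):
    """Translate each control point to its scaled offset from the new feature."""
    return {
        name: (feat[0] + scale * offsets[name][0],
               feat[1] + scale * offsets[name][1])
        for name, feat in input_features.items()
    }


# Toy example with two features of a subject and two warp handles.
calibration = {"left_shoulder": (0.40, 1.20), "left_elbow": (0.55, 0.95)}
handles = {"left_shoulder": (0.42, 1.25), "left_elbow": (0.58, 1.00)}
offsets = compute_positional_offsets(handles, calibration)

frame_features = {"left_shoulder": (0.45, 1.18), "left_elbow": (0.62, 0.90)}
print(retarget_control_points(frame_features, offsets, scale=1.0))
```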
As previously described, the control points210are used to control an underlying structure (e.g., mesh) of the digital object120. Therefore, changes to locations of the control points210as retargeted control points406, when implemented as warp handles, cause generation of a retargeted digital object410having a configuration (e.g., pose) that mimics the configuration of the subject126as captured in the digital image106. The frame122of the animation118is then displayed in a user interface110(block514) by a display module412in the illustrated example. In this way, the animation system116overcomes conventional challenges to improve accuracy in generation of the animation118and computational resource utilization. FIG.7depicts an example700of additional functionality incorporated as part of the retargeting module404. The above example described a scenario involving configuration of a digital object120using scale and translation. Additional functionality is also incorporated as part of the retargeting module404to overcome conventional technical challenges. Examples of this functionality are represented as a normalization module702, a mode management module704, a vertical offset control module706, and a contact management module708. Functionality of each of these features is described in relation to corresponding figures in the following discussion. FIG.8depicts an example800showing operation of the normalization module702ofFIG.7in greater detail. In some usage scenarios, animation of the digital object120is performed to be invariant to a distance at which a subject is positioned from a digital camera104. In order to do this, the normalization module702computes a global scale factor802for digital images as received from the digital camera104. This is illustrated inFIG.8through use of first and second stages804,806. At the first stage804, feature positions are illustrated as dots and lines that track a subject's126shoulders and face when the subject126is positioned close to the digital camera104. This results in generation of the digital object120to appear at a set distance in the frame122of the animation. At the second stage806, the subject126is positioned further away from the digital camera104such that feature positions corresponding to the shoulders, face, and arms are visible. The digital object120is generated to mimic this pose at the same set distance in the frame122as in the first stage804. To achieve this, the global scale factor802is calculated to "factor out" changes in depth by detecting a scale between feature positions that typically have a set distance, e.g., eyes, shoulders, edges of a head, etc. In this way, the digital object120appears at a same depth regardless of movement of the subject along a z-axis. In the illustrated example, a global scale factor is computed between the shoulders of the subject126in the digital image106in the first stage804that is used to address a scenario in which the subject126is positioned close to the digital camera104. Likewise, another global scale factor is computed between the shoulders of the subject in the digital image106at the second stage806to "factor out" the change of the subject126as positioned further away from the digital camera104. In an implementation, an option is configured as selectable via the user interface110(e.g., as a button) to turn this functionality "off" or "on," e.g., to enable changes in a z-axis defined in relation to the digital camera104by not employing the global scale factor802. 
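A hedged sketch of the global scale factor computation described in relation to FIG. 8 follows; it uses the distance between shoulder feature positions as the reference measurement, which is an illustrative choice, and eyes or head width could be used similarly.

```python
# Hedged sketch of a global scale factor that "factors out" changes in the
# subject's distance from the digital camera. Shoulder span is used as the
# reference distance purely for illustration.
import math


def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])


def global_scale_factor(calibration_features, input_features):
    calib_span = distance(calibration_features["left_shoulder"],
                          calibration_features["right_shoulder"])
    current_span = distance(input_features["left_shoulder"],
                            input_features["right_shoulder"])
    return calib_span / current_span if current_span > 0 else 1.0


calibration = {"left_shoulder": (0.40, 1.20), "right_shoulder": (0.80, 1.20)}
# The subject stepped closer to the camera, so the measured shoulder span grew.
current_frame = {"left_shoulder": (0.30, 1.18), "right_shoulder": (0.90, 1.18)}

print(round(global_scale_factor(calibration, current_frame), 3))
# ~0.667: positional offsets are scaled down to cancel out the zoom-in
```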
FIGS.9and10depict examples900,1000showing operation of the mode management module704ofFIG.7in greater detail. As described above, a subject may be positioned at different depths from the digital camera104. As such, this introduces challenges for retargeting control points because in some instances feature positions corresponding to those control points are not visible in a current digital image106. Accordingly, in these examples a mode management module704is utilized to select modes used to control an order in which input feature positions and corresponding control points are retargeted. This is performed by detecting whether input feature positions include a particular portion of the subject and based on this selecting a mode from a plurality of modes that define an order using a hierarchy to retarget the control points based on the corresponding feature positions. In the example900ofFIG.9, for instance, the subject126is positioned close to the digital camera104such that an upper body is visible in the digital image106. However, feature positions corresponding to a lower portion of the subject126are not visible. Therefore, the mode management module704selects an upper-body hierarchy902to specify an order for processing control points beginning at a head and/or shoulders of the digital object120using corresponding feature positions from the subject126. The example upper-body hierarchy902starts at head control points904and proceeds to shoulder control points906. The hierarchy then branches outward to arm control points908and hand control points910down one branch and waist control points912, knee control points914, and feet control points916down another branch. In this way, the retargeting module404"walks" the upper-body hierarchy902to set scale factors between control points based on correspondence to portions of the subject126. On the other hand, in the example1000ofFIG.10, the particular portion of the subject126is visible in the digital image106, e.g., includes input feature positions corresponding to a waist of the subject126. In response, the mode management module704selects a full-body hierarchy1002to process control points and corresponding feature positions starting at a middle (e.g., waist) and progressing "outward." This helps to minimize float by reducing an amount of error accumulated between the waist and the feet. The full-body hierarchy1002, for instance, begins at the waist control points912and proceeds outward to a first branch that includes knee control points914and feet control points916. A second branch includes shoulder control points906and then branches between head control points904and arm control points908, which are followed by hand control points910. As a result, different roots of the different hierarchies define the processing orders, which also control how error accumulates across retargeting the digital object120. This minimizes retargeting error at a base of the digital object120, e.g., the feet, and thus reduces float such that the digital object120appears grounded at a ground plane in the frame122. 
Conventional techniques that rely solely on face tracking, for instance, can fail in instances involving erroneous detection that causes the face to “come off” the body of the subject126due to textures or other objects included in the digital image106, e.g., a T-shirt worn by the subject126that also includes an image of a face. To overcome this, the motion detection system204includes a body tracker module1102that is configured to detect initial feature positions1104as global feature positions of an overall body of the subject126. The initial feature positions1104are then used to initialize a face tracker module1106to generate the input feature positions402, e.g., based on a position of a head of the subject126indicated by the initial feature positions1104. Improvements in accuracy are especially notable in full body real world scenarios in which a relative scale of the face of the subject126is small. In this way, the motion detection system204improves accuracy in the generation of the input feature positions402, overcomes conventional challenges, and improves operation of the computing device102. FIG.12depicts another example1200of techniques usable to overcome conventional challenges involving float of a digital object120in a frame122of an animation. Accurate alignment of a digital object120with a ground plane1202(e.g., surface) is one of the primary ways to support a realistic appearance of a frame122of an animation. However, conventional techniques to do so often fail and result in “float” of the digital object120. This is due to a variety of factors, including use of a top/down hierarchy such that errors accumulate when configuring the digital object120for a single frame122, which is further exacerbated over time across multiple frames. This is addressed inFIGS.9and10through use of different modes based on visibility of particular feature points, which improves accuracy by limiting accumulation of errors. Additional techniques are also usable to improve alignment of a base of the digital object120to the ground plane1202, functionality of which is represented as a vertical offset control module706and a contact management module708. The vertical offset control module706is configured such that vertical offsets of control points associated with a base of the digital object120are taken directly from feature points associated with a base of the subject126, e.g., the feet, as scaled by the global scale factor802. Thus, this is performed “outside” of the previously described hierarchies for the vertical offsets. Horizontal offsets of the base, on the other hand, are retargeted through the hierarchy. Otherwise, foot retargeting would depend on how the digital object120places the feet, and not on tracked poses from the digital image106. Even with the above-described functionality, float may be encountered in instances due to errors in detection of feature positions by the motion detection system204. To address this, the contact management module708is configured to incorporate a term that limits movement within a threshold1204vertical distance from the ground plane1202. In this way, the threshold1204acts to dampen movement away from the ground plane1202. A friction term is also usable to limit horizontal movement of the base (e.g., the feet) of the digital object120, and thus reduces errors viewable as horizontal jitters. In another instance, initial positions of the base of the digital object120are used by the contact management module708to define the ground plane1202.
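A small sketch of the contact handling just described is shown below. Only the overall behavior follows the text (damping vertical motion within a threshold of the ground plane and limiting horizontal foot movement); the image-space convention with y increasing downward, the snapping rule, and the friction blend are assumptions for illustration.

```python
# Illustrative sketch only: clamp a base (foot) control point near the ground
# plane and damp its horizontal motion. Assumes image coordinates where y
# increases downward, so "below the ground plane" means y > ground_y.

def clamp_to_ground(y: float, ground_y: float, threshold: float) -> float:
    """Keep the base from sinking below the ground plane and damp drift within the contact band."""
    if y >= ground_y:                 # at or below the plane: pin to the plane
        return ground_y
    if ground_y - y < threshold:      # within the threshold band: treat as in contact
        return ground_y
    return y                          # well above the plane: leave unchanged

def apply_friction(previous_x: float, tracked_x: float, friction: float = 0.8) -> float:
    """Blend toward the previous horizontal position to suppress jitter; friction=1.0 pins the foot."""
    return previous_x + (1.0 - friction) * (tracked_x - previous_x)
```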
The contact management module708then sets a base of the digital object120to be positioned above the ground plane1202, and thus limits unnatural movement of the digital object120below the ground plane1202. In this way, the animation118improves accuracy over conventional techniques in the generation of the digital object120for the frame122of the animation118. FIGS.13A-13Gdepict examples of translation-based retargeting through use of a hierarchy and positional offsets. In a first example1300ofFIG.13A, a face scale1302is determined for a subject126in a digital image106and is used to set a body scale in relation to the digital object1304. In the second example1310ofFIG.13B, a scale factor is set for a segment1312between a neck and shoulder of the subject126in the digital image106and a corresponding segment1314between a neck and shoulder of the digital object120. This process continues “down” the hierarchy in a third example1320ofFIG.13Cin which a scale factor is set for a segment1322between a shoulder and elbow of the subject126in the digital image106and a corresponding segment1324between a shoulder and elbow of the digital object120. In the fourth example1330ofFIG.13D, a scale factor is set for a segment1332between an elbow and wrist of the subject126in the digital image106and a corresponding segment1334between an elbow and wrist of the digital object120. In the fifth example1340ofFIG.13E, a face scale1342is determined and applied to a segment1344between a neck and shoulder of the subject126in the digital image106to factor out distance in order to scale a corresponding segment1346of the digital object120. In the sixth example1350ofFIG.13F, this process continues to factor out body scale, which is applied to a segment1352between a shoulder and elbow of the subject in the digital image106and a corresponding segment1354of the digital object120. In the seventh example1360ofFIG.13G, body scale is factored out for a segment1362between an elbow and wrist of the subject126in the digital image106and a corresponding segment1364in the digital object120. Example System and Device FIG.14illustrates an example system generally at1400that includes an example computing device1402that is representative of one or more computing systems and/or devices that implement the various techniques described herein. This is illustrated through inclusion of the animation system116. The computing device1402is configurable, for example, as a server of a service provider, a device associated with a client (e.g., a client device), an on-chip system, and/or any other suitable computing device or computing system. The example computing device1402as illustrated includes a processing system1404, one or more computer-readable media1406, and one or more I/O interfaces1408that are communicatively coupled, one to another. Although not shown, the computing device1402further includes a system bus or other data and command transfer system that couples the various components, one to another. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. A variety of other examples are also contemplated, such as control and data lines. The processing system1404is representative of functionality to perform one or more operations using hardware.
Accordingly, the processing system1404is illustrated as including hardware element1410that is configurable as processors, functional blocks, and so forth. This includes implementation in hardware as an application specific integrated circuit or other logic device formed using one or more semiconductors. The hardware elements1410are not limited by the materials from which they are formed or the processing mechanisms employed therein. For example, processors are configurable as semiconductor(s) and/or transistors (e.g., electronic integrated circuits (ICs)). In such a context, processor-executable instructions are electronically-executable instructions. The computer-readable storage media1406is illustrated as including memory/storage1412. The memory/storage1412represents memory/storage capacity associated with one or more computer-readable media. The memory/storage1412includes volatile media (such as random access memory (RAM)) and/or nonvolatile media (such as read only memory (ROM), Flash memory, optical disks, magnetic disks, and so forth). The memory/storage1412includes fixed media (e.g., RAM, ROM, a fixed hard drive, and so on) as well as removable media (e.g., Flash memory, a removable hard drive, an optical disc, and so forth). The computer-readable media1406is configurable in a variety of other ways as further described below. Input/output interface(s)1408are representative of functionality to allow a user to enter commands and information to computing device1402, and also allow information to be presented to the user and/or other components or devices using various input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., capacitive or other sensors that are configured to detect physical touch), a camera (e.g., employing visible or non-visible wavelengths such as infrared frequencies to recognize movement as gestures that do not involve touch), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, tactile-response device, and so forth. Thus, the computing device1402is configurable in a variety of ways as further described below to support user interaction. Various techniques are described herein in the general context of software, hardware elements, or program modules. Generally, such modules include routines, programs, objects, elements, components, data structures, and so forth that perform particular tasks or implement particular abstract data types. The terms “module,” “functionality,” and “component” as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques are configurable on a variety of commercial computing platforms having a variety of processors. An implementation of the described modules and techniques is stored on or transmitted across some form of computer-readable media. The computer-readable media includes a variety of media that is accessed by the computing device1402. By way of example, and not limitation, computer-readable media includes “computer-readable storage media” and “computer-readable signal media.” “Computer-readable storage media” refers to media and/or devices that enable persistent and/or non-transitory storage of information in contrast to mere signal transmission, carrier waves, or signals per se. 
Thus, computer-readable storage media refers to non-signal bearing media. The computer-readable storage media includes hardware such as volatile and non-volatile, removable and non-removable media and/or storage devices implemented in a method or technology suitable for storage of information such as computer readable instructions, data structures, program modules, logic elements/circuits, or other data. Examples of computer-readable storage media include but are not limited to RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage device, tangible media, or article of manufacture suitable to store the desired information and that are accessible by a computer. “Computer-readable signal media” refers to a signal-bearing medium that is configured to transmit instructions to the hardware of the computing device1402, such as via a network. Signal media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as carrier waves, data signals, or other transport mechanism. Signal media also include any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. As previously described, hardware elements1410and computer-readable media1406are representative of modules, programmable device logic and/or fixed device logic implemented in a hardware form that are employed in some embodiments to implement at least some aspects of the techniques described herein, such as to perform one or more instructions. Hardware includes components of an integrated circuit or on-chip system, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), and other implementations in silicon or other hardware. In this context, hardware operates as a processing device that performs program tasks defined by instructions and/or logic embodied by the hardware as well as hardware utilized to store instructions for execution, e.g., the computer-readable storage media described previously. Combinations of the foregoing are also employed to implement various techniques described herein. Accordingly, software, hardware, or executable modules are implemented as one or more instructions and/or logic embodied on some form of computer-readable storage media and/or by one or more hardware elements1410. The computing device1402is configured to implement particular instructions and/or functions corresponding to the software and/or hardware modules. Accordingly, implementation of a module that is executable by the computing device1402as software is achieved at least partially in hardware, e.g., through use of computer-readable storage media and/or hardware elements1410of the processing system1404. The instructions and/or functions are executable/operable by one or more articles of manufacture (for example, one or more computing devices1402and/or processing systems1404) to implement techniques, modules, and examples described herein.
The techniques described herein are supported by various configurations of the computing device1402and are not limited to the specific examples of the techniques described herein. This functionality is also implementable all or in part through use of a distributed system, such as over a “cloud”1414via a platform1416as described below. The cloud1414includes and/or is representative of a platform1416for resources1418. The platform1416abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud1414. The resources1418include applications and/or data that can be utilized while computer processing is executed on servers that are remote from the computing device1402. Resources1418can also include services provided over the Internet and/or through a subscriber network, such as a cellular or Wi-Fi network. The platform1416abstracts resources and functions to connect the computing device1402with other computing devices. The platform1416also serves to abstract scaling of resources to provide a corresponding level of scale to encountered demand for the resources1418that are implemented via the platform1416. Accordingly, in an interconnected device embodiment, implementation of functionality described herein is distributable throughout the system1400. For example, the functionality is implementable in part on the computing device1402as well as via the platform1416that abstracts the functionality of the cloud1414. CONCLUSION Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed invention.
43,657
11861780
DETAILED DESCRIPTION The present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. 
In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. 
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. The illustrative embodiments recognize and take into account a number of different considerations. For example, the illustrative embodiments recognize and take into account that the data points in point clouds can be classified. In other words, each point can be assigned a classification. For example, a point in a point cloud can be classified as ground, a building, a road surface, water, wire conductor, a rail, low vegetation, medium vegetation, high vegetation, a cell tower, or other type of point. Storing these point clouds can take more space than desired. For example, files for point cloud data at 132 points per square meter (ppsm) generated for a geographic area of 13.15 square kilometers of rural land can use 52.2 Gb of storage. The illustrative embodiments recognize and take into account that one manner in which the amount of data can be reduced is to generate a rasterized layer for each class of data points. The illustrative embodiments recognize and take into account that the resulting files for these rasterized layers can provide space savings over storing the point clouds by themselves. For example, the illustrative embodiments recognize and take into account that converting the point cloud for the geographic area into rasterized layers can result in files that use 32.2 Gb of storage. The illustrative embodiments recognize and take into account that a rasterized layer is an image in which null values are included for pixels in which data points are not present for the particular class represented by the rasterized layer. In other words, empty portions of the image are represented by null values. The illustrative embodiments recognize and take into account that the amount of space savings may not be as great as desired. Further, the illustrative embodiments also recognize and take into account that further storage savings can be obtained by converting the rasterized layers into key value pairs. The illustrative embodiments recognize and take into account that only key value pairs are stored for the points in the class. As a result, portions of a rasterized layer that do not include data for the class are not stored. For example, the storage of the key value pairs can use 0.59 Gb of storage space as compared to 32.2 Gb of storage for rasterized layers and 52.2 Gb of storage for a point cloud for the same geographic area. The illustrative embodiments also recognize and take into account that by converting the rasterized layers into key value pairs for storage, the point cloud data, which is normally difficult to query, can be easily queried when converted into key value pairs. The illustrative embodiments also recognize and take into account that by converting the rasterized layers into key value pairs for storage, the point cloud data can be easily combined with other geospatial layers. As a result, the illustrative embodiments recognize and take into account that this conversion can make point cloud data that is unsearchable and difficult to visualize into a form that can be more easily queried.
Thus, the illustrative embodiments provide a method, apparatus, computer system, and computer program product for rasterizing point cloud data and storing the rasterized point cloud data as key value pairs. A computer implemented method rasterizes point cloud data. A number of processor units rasterizes the point cloud data into rasterized layers based on classes in which each rasterized layer in the rasterized layers corresponds to a class in the classes. The number of processor units creates key value pairs from the rasterized layers. The number of processor units stores the key value pairs in a key value store. According to other illustrative embodiments, a computer system and a computer program product for rasterizing point cloud data are provided. As a result, the storage of key value pairs derived from the point cloud data can use less storage. Additionally, the key value pairs can be searched in response to receiving queries. With reference now to the figures and, in particular, with reference toFIG.1, a pictorial representation of a network of data processing systems is depicted in which illustrative embodiments may be implemented. Network data processing system100is a network of computers in which the illustrative embodiments may be implemented. Network data processing system100contains network102, which is the medium used to provide communications links between various devices and computers connected together within network data processing system100. Network102may include connections, such as wire, wireless communication links, or fiber optic cables. In the depicted example, server computer104and server computer106connect to network102along with storage unit108. In addition, client devices110connect to network102. As depicted, client devices110include client computer112, client computer114, and client computer116. Client devices110can be, for example, computers, workstations, or network computers. In the depicted example, server computer104provides information, such as boot files, operating system images, and applications to client devices110. Further, client devices110can also include other types of client devices such as drone118, tablet computer120, and smart glasses122. In this illustrative example, server computer104, server computer106, storage unit108, and client devices110are network devices that connect to network102in which network102is the communications media for these network devices. Some or all of client devices110may form an Internet of things (IoT) in which these physical devices can connect to network102and exchange information with each other over network102. Client devices110are clients to server computer104in this example. Network data processing system100may include additional server computers, client computers, and other devices not shown. Client devices110connect to network102utilizing at least one of wired, optical fiber, or wireless connections. Program instructions located in network data processing system100can be stored on computer-recordable storage media and downloaded to a data processing system or other device for use. For example, program instructions can be stored on computer-recordable storage media on server computer104and downloaded to client devices110over network102for use on client devices110.
In the depicted example, network data processing system100is the Internet with network102representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers consisting of thousands of commercial, governmental, educational, and other computer systems that route data and messages. Of course, network data processing system100also may be implemented using a number of different types of networks. For example, network102can be comprised of at least one of the Internet, an intranet, a local area network (LAN), a metropolitan area network (MAN), or a wide area network (WAN).FIG.1is intended as an example, and not as an architectural limitation for the different illustrative embodiments. As used herein, “a number of” when used with reference to items, means one or more items. For example, “a number of different types of networks” is one or more different types of networks. Further, the phrase “at least one of,” when used with a list of items, means different combinations of one or more of the listed items can be used, and only one of each item in the list may be needed. In other words, “at least one of” means any combination of items and number of items may be used from the list, but not all of the items in the list are required. The item can be a particular object, a thing, or a category. For example, without limitation, “at least one of item A, item B, or item C” may include item A, item A and item B, or item B. This example also may include item A, item B, and item C or item B and item C. Of course, any combinations of these items can be present. In some illustrative examples, “at least one of” can be, for example, without limitation, two of item A; one of item B; and ten of item C; four of item B and seven of item C; or other suitable combinations. In this illustrative example, drone118having a lidar sensor system can generate point cloud data130by scanning terrain131. Drone118can transmit point cloud data130in files133to data manager134in server computer104. Files133can be LAS (laser) files in this example. As depicted, data manager134can process point cloud data130received in files133on a class-by-class basis to create rasterized layers136. The classes can be, for example, unclassified, ground, low vegetation, medium vegetation, high vegetation, road, and building. As a result, each rasterized layer in rasterized layers136contains information for a class. In creating a rasterization layer, data manager134creates a matrix representation of data points for a class in point cloud data130. The creation of rasterized layers136reduces the amount of data for storage as compared to point cloud data130. Data manager134processes rasterized layers136created from point cloud data130to generate key value pairs138. Key value pairs138are stored in key value store140. The amount of data for key value pairs138is less than the amount of data for rasterized layers136. In this illustrative example, key value pairs138are generated only for portions of rasterized layers136that contain data. Rasterized layers136are matrix representations in which some cells or entries are empty and represented by null values. In other words, values are absent for some portions because data points are not present for the class corresponding to the rasterization layer.
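A compact sketch of class-by-class rasterization follows. It is not taken from the implementation of data manager 134; it assumes points arrive as (longitude, latitude, height, class) tuples, uses a NumPy grid per class over the area's bounding box, uses NaN as the null value for cells with no data for that class, and averages multiple points that fall in the same cell.

```python
# Illustrative sketch only: rasterize classified point cloud data into one
# layer per class. NaN marks cells with no data for the class.

import numpy as np
from collections import defaultdict

def rasterize_by_class(points, min_lon, min_lat, cell_size, n_rows, n_cols):
    """Return {class_name: 2D array of mean heights}, one rasterized layer per class."""
    sums = defaultdict(lambda: np.zeros((n_rows, n_cols)))
    counts = defaultdict(lambda: np.zeros((n_rows, n_cols)))
    for lon, lat, height, cls in points:
        col = int((lon - min_lon) / cell_size)   # nearest-cell assignment
        row = int((lat - min_lat) / cell_size)
        if 0 <= row < n_rows and 0 <= col < n_cols:
            sums[cls][row, col] += height
            counts[cls][row, col] += 1
    layers = {}
    for cls in sums:
        with np.errstate(invalid="ignore", divide="ignore"):
            layers[cls] = np.where(counts[cls] > 0, sums[cls] / counts[cls], np.nan)
    return layers

# Example with three classified points in a 2x2 grid of 1-unit cells:
points = [(0.2, 0.3, 5.0, "ground"), (1.4, 0.6, 12.0, "building"), (1.6, 0.5, 14.0, "building")]
layers = rasterize_by_class(points, min_lon=0.0, min_lat=0.0, cell_size=1.0, n_rows=2, n_cols=2)
# layers["building"] holds 13.0 (the average) at row 0, col 1 and NaN elsewhere.
```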
These portions can be considered empty portions in rasterized layers136. For example, for a rasterization layer representing the class roads in point cloud data130, values are not present in the rasterization layer for data points in point cloud data130where roads are absent. Instead, null values are used to indicate that no data is present for the roads in those portions of the rasterization layer. In this case, key value pairs138are not generated for those empty portions. As a result, the amount of data is further reduced by only generating key value pairs138for the portions of the rasterization layer containing roads. Thus, the amount of data needed to be stored in key value store140can be greatly reduced as compared to storing point cloud data130. In this illustrative example, key value store140can be a database. With the storage of key value pairs138in key value store140, point cloud data130represented as key value pairs138can be searched. For example, user142and client computer112can send query144to data manager134to search key value store140. For example, the query can be to return information about buildings having an elevation greater than 50 feet. Data manager134searches key value pairs138in key value store140and returns search result146to user142and client computer112. In other illustrative examples, user142can take forms other than a person. In some illustrative examples, a user can be a program or a process running on a computing device. With reference now toFIG.2, a block diagram of a point cloud environment is depicted in accordance with an illustrative embodiment. In this illustrative example, point cloud environment200includes components that can be implemented in hardware such as the hardware shown in network data processing system100inFIG.1. In this illustrative example, point cloud processing system202in point cloud environment200can process point cloud data204in point cloud206for area208. In this illustrative example, area208is a geographic area. Point cloud data204in point cloud206can represent information about characteristics of area208. Point cloud data204can be stored in a set of files209. Point cloud data204and files209can be in a number of different formats. For example, the formats can be LAS (laser), FLS (faro), PCD (point cloud data), and other suitable formats for point cloud data204. As used herein, a “set of” when used with reference to items means one or more items. For example, a set of files209is one or more of files209. As depicted, point cloud processing system202comprises computer system210and data manager212. Data manager212is located in computer system210. Data manager212can be implemented in software, hardware, firmware or a combination thereof. When software is used, the operations performed by data manager212can be implemented in program instructions configured to run on hardware, such as a processor unit. When firmware is used, the operations performed by data manager212can be implemented in program instructions and data and stored in persistent memory to run on a processor unit. When hardware is employed, the hardware can include circuits that operate to perform the operations in data manager212. In the illustrative examples, the hardware can take a form selected from at least one of a circuit system, an integrated circuit, an application specific integrated circuit (ASIC), a programmable logic device, or some other suitable type of hardware configured to perform a number of operations.
With a programmable logic device, the device can be configured to perform the number of operations. The device can be reconfigured at a later time or can be permanently configured to perform the number of operations. Programmable logic devices include, for example, a programmable logic array, a programmable array logic, a field programmable logic array, a field programmable gate array, and other suitable hardware devices. Additionally, the processes can be implemented in organic components integrated with inorganic components and can be comprised entirely of organic components excluding a human being. For example, the processes can be implemented as circuits in organic semiconductors. Computer system210is a physical hardware system and includes one or more data processing systems. When more than one data processing system is present in computer system210, those data processing systems are in communication with each other using a communications medium. The communications medium can be a network. The data processing systems can be selected from at least one of a computer, a server computer, a tablet computer, or some other suitable data processing system. As depicted, computer system210includes a number of processor units214that are capable of executing program instructions216implementing processes in the illustrative examples. As used herein, a processor unit in the number of processor units214is a hardware device and is comprised of hardware circuits such as those on an integrated circuit that respond and process instructions and program code that operate a computer. When a number of processor units214execute program instructions216for a process, the number of processor units214is one or more processor units that can be on the same computer or on different computers. In other words, the process can be distributed between processor units on the same or different computers in a computer system. Further, the number of processor units214can be of the same type or different type of processor units. For example, a number of processor units can be selected from at least one of a single core processor, a dual-core processor, a multi-processor core, a general-purpose central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or some other type of processor unit. As depicted, data manager212can provide a process for rasterizing point cloud data204. In one illustrative example, data manager212rasterizes point cloud data204into rasterized layers218based on classes220. Each rasterized layer in rasterized layers218corresponds to a class in classes220. A rasterized layer in rasterized layers218can be stored as an image comprising a matrix of pixels. In this illustrative example, in rasterizing point cloud data204, data manager212identifies a set of files209containing point cloud data204. Data manager212rasterizes point cloud data204in each file in the set of files209into a set of rasterized layers in rasterized layers218based on classes220for point cloud data204in which each rasterized layer in rasterized layers218corresponds to a class in classes220. In other words, responsive to point cloud data204in a file in the set of files209having more than one class, the rasterization of that file results in more than one rasterization layer in which each rasterization layer corresponds to a class in classes220. Further, a class in classes220can have more than one rasterized layer in rasterized layers218when the rasterization is performed on the set of files209.
For example, two files in the set of files209can have point cloud data204of the same class in classes220. The rasterization of these two files results in two rasterized layers218being created for that class. In another illustrative example, the set of files can be combined into a single grouping of point cloud data204for rasterization. In this example, each class in classes220can have a single rasterization layer in rasterized layers218. Further, rasterized layers218have resolutions222. In one illustrative example, resolutions222can be different for different rasterized layers in rasterized layers218. In other words, resolutions222can be multiple resolutions. As a result, point cloud data204of different classes in classes220can be stored at different resolutions. For example, street signs can be stored at a higher resolution as compared to a body of water. Data manager212creates key value pairs224from rasterized layers218; data manager212stores key value pairs224in key value store226. In this illustrative example, the creation of key value pairs224is performed such that only portions of a rasterization layer having data points for that class are used to create key value pairs224. Other portions of the rasterization layer in which data points are absent for that class will have null values or similar indicators indicating that a particular portion of the rasterization layer does not contain data for the class. As a result, this conversion of rasterized layers218into key value pairs224can reduce the amount of data stored in key value store226as compared to storing rasterized layers218. With the storage of key value pairs224in key value store226, this representation of rasterized layers218generated from the rasterization of point cloud data204can also be queried or searched in addition to using less storage space. For example, data manager212can receive query228from requestor230. In response to receiving query228, data manager212can search key value pairs224in key value store226using query228. Data manager212can return result232from searching key value pairs224in key value store226to requestor230. In this example, data manager212can implement database management system processes that enable querying key value pairs224and key value store226. As a result, the search capability can enable combining data from key value pairs224for different rasterized layers in rasterized layers218when searching key value pairs224to generate result232. For example, result232generated in response to query228can include a combination of data from key value pairs224. These results can be, for example, used to generate a digital terrain model (DTM), a canopy height model (CHM), a digital surface model (DSM), or other suitable model depending on classes220. In this illustrative example, data manager212can receive updated data234for selected class236in classes220. Data manager212can update key value pairs224having selected class236using updated data234. In this illustrative example, updated data234can take a number of different forms. For example, updated data234can be selected from at least one of new point cloud data, a street map, a vegetation index, satellite imagery, satellite data, or data from other sources for area208. As a result, updated data234can enhance key value pairs224generated from rasterized layers218from rasterization of point cloud data204in its original form.
Further, this updating can be performed selectively for different classes allowing for increasing the resolution or information for various classes in classes220for key value pairs224. For example, the increased resolution can come from using different LIDAR technologies or sensor settings that can increase the resolution for data and key value pairs224. Turning next toFIG.3, an illustration of key value pair creation for a rasterized layer is depicted in accordance with an illustrative embodiment. In this illustrative example, rasterized layer300is an example of a rasterized layer in rasterized layers218inFIG.2. Rasterized layer300takes the form of matrix302with indices304and values306for class308. Matrix302comprises cells303that can represent data points in a point cloud. These data points from the point cloud are converted to rasters or pixels. In other words, rasterizing involves converting the point cloud data into a matrix representation in matrix302in this example. In this illustrative example, indices304comprise longitudes310and latitudes312for x and y coordinates. Values306take the form of heights314for z coordinates. Heights314can also be referred to as elevations. In this illustrative example, null values316are present in values306for heights314when the particular longitudes and latitudes do not have a value for class308. For example, if class308is a building and the point cloud value at that particular longitude and latitude is for a road or vegetation, the value is a null value indicating that the data point does not represent a building. As a result, matrix302can comprise values306in the form of heights314in which these values include null values316to indicate that data is not present for class308. As depicted, rasterized layer300can be converted into key value pairs320. In this illustrative example, a one-to-one correspondence between values306and key value pairs320is absent when null values316are present in values306. As a result, a given cell in cells303in matrix302is converted into a key value pair in key value pairs320only when a null value in null values316is not present for that cell. In this manner, sparse storage of values306can be achieved by storing key value pairs320generated from matrix302. In other words, cells303containing null values316can be eliminated from conversion into key value pairs320, resulting in increased efficiency in the amount of data stored. Further, rasterized layer300does not have to be stored in a file in storage if sufficient memory is present for processing and converting rasterized layer300into key value pairs320. In this illustrative example, selection of the matrix size can depend on the point density of the data points in the point cloud. The coordinates of the data points may not precisely match cells303in matrix302. In that case, a nearest neighbor approach or other interpolation approaches can be applied. In case multiple data points per cell are present, the average value of the data points can represent the height and respective class. In the matrix representation, most values are empty and can be represented using a null value. If matrix302for rasterized layer300is stored as a file, the amount of overhead can be great because of the number of empty cells. Although rasterization of point cloud data can be useful, the overhead of empty cells makes this type of class-by-class raster approach less desirable. As a result, matrix302for rasterization layer300can be converted into key value pairs.
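The conversion from the matrix form to key value pairs can be sketched as follows. The in-memory dictionary stands in for the key value store, the (longitude, latitude, class) key layout follows the description of the keys, and the specific layer contents, coordinates, and the query helper at the end are hypothetical.

```python
# Illustrative sketch only: convert a rasterized layer into key value pairs,
# skipping null (NaN or None) cells so empty portions are never stored.

import math

def layer_to_key_value_pairs(layer, min_lon, min_lat, cell_size, class_id):
    """Yield ((longitude, latitude, class_id), height) for every non-null cell."""
    for row, cells in enumerate(layer):
        for col, height in enumerate(cells):
            if height is None or (isinstance(height, float) and math.isnan(height)):
                continue                          # empty portion: no pair stored
            longitude = min_lon + col * cell_size
            latitude = min_lat + row * cell_size
            yield (longitude, latitude, class_id), height

# Hypothetical building layer with two occupied cells out of four:
building_layer = [
    [float("nan"), 12.0],
    [55.5, float("nan")],
]
store = dict(layer_to_key_value_pairs(building_layer, min_lon=0.0, min_lat=0.0,
                                      cell_size=1.0, class_id="building"))
# Query in the spirit of the earlier example: buildings taller than 50
# (units are whatever the source data uses).
tall_buildings = {key: height for key, height in store.items()
                  if key[2] == "building" and height > 50.0}
```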
In this illustrative example, key value pairs320comprise keys322and values324. Keys322and values324in key value pairs320can take a number of different forms. For example, key326in keys322can be longitude328and latitude330. As another example, key326can be longitude328, latitude330, and class identifier309. In other words, key326does not have to be just longitude328and latitude330. As depicted, values324take the form of heights314in this illustrative example. In one illustrative example, one or more illustrative examples are present that overcome an issue with at least one of storing or searching point cloud data. As a result, one or more illustrative examples can enable reducing the amount of storage needed to store point cloud data and can enable searching point cloud data. In one or more illustrative examples, the point cloud data is processed to form rasterized layers based on classes. These rasterized layers are converted to key value pairs in which key value pairs are only generated for portions of the rasterized layers that have data and are not empty or represented by null values. Computer system210inFIG.2can be configured to perform at least one of the steps, operations, or actions described in the different illustrative examples using software, hardware, firmware or a combination thereof. As a result, computer system210operates as a special purpose computer system in which data manager212in computer system210enables rasterizing data points and point clouds into rasterized layers based on class and further creating key value pairs for the rasterized layers. As a result, the key value pairs224can represent point cloud data204in point cloud206. In particular, data manager212transforms computer system210into a special purpose computer system as compared to currently available general computer systems that do not have data manager212. In the illustrative example, the use of data manager212in computer system210integrates processes into a practical application for rasterizing point cloud data that increases the performance of computer system210. In other words, data manager212in computer system210is directed to a practical application of processes integrated into data manager212in computer system210that rasterizes the point cloud data into rasterized layers based on classes in which each rasterized layer in the rasterized layers corresponds to a class in the classes; creates key value pairs from the rasterized layers; and stores the key value pairs in a key value store. In this illustrative example, data manager212in computer system210performs one or more steps that result in an improvement in at least one of reducing storage space needed or enabling searching of the data. The illustration of point cloud environment200in the different components inFIGS.2-3is not meant to imply physical or architectural limitations to the manner in which an illustrative embodiment can be implemented. Other components in addition to or in place of the ones illustrated may be used. Some components may be unnecessary. Also, the blocks are presented to illustrate some functional components. One or more of these blocks may be combined, divided, or combined and divided into different blocks when implemented in an illustrative embodiment. For example, values306in rasterized layer300and values324in key value pairs320can take forms other than heights314.
For example, a value in rasterized layer300and in a key value pair in key value pairs320can be selected from at least one of a height, a temperature, a surface moisture, a reflectivity value, or some other value. In other words, the value in a key value pair can include more than one parameter. Turning next toFIG.4, a dataflow diagram for processing point cloud data is depicted in accordance with an illustrative embodiment. This dataflow can be implemented using point cloud processing system202inFIG.2. As depicted, rasterization program400receives point cloud data402in.las file1404, .las file2406through.las file N408. Rasterization program400can be implemented in data manager212in point cloud processing system202inFIG.2. Rasterization program400converts point cloud data402in these files into rasterized layers410. Rasterization program400performs the rasterization on a class-by-class basis. In other words, rasterization program400creates a rasterization layer for each class in the files containing point cloud data402. In this illustrative example, rasterized layers410are located in class1.tiff file1412, class2.tiff file2414, through class m.tiff file k416. In this depicted example, m classes are present resulting in k files being generated for rasterized layers410. Rasterized layers410are then converted by key value pair generator420into key value pairs422and stored in key value store424. In this illustrative example, key value pairs422can be organized in key value store424based on classes. For example, class1key value pairs426through class m key value pairs428are present in key value pairs422. In this illustrative example, m groups of key value pairs422are present in which each group of key value pairs422is for a particular class from class1through class m. Turning next toFIG.5, a dataflow diagram for querying a key value store is depicted in accordance with an illustrative embodiment. In the illustrative examples, the same reference numeral may be used in more than one figure. This reuse of a reference numeral in different figures represents the same element in the different figures. In this illustrative example, key value store424generated by the dataflow shown inFIG.4can be searched using search engine500. Search engine500can be implemented in data manager212inFIG.2. As depicted, requester502sends user query504to search engine500. User query504can be used by search engine500to search key value pairs422in key value store424. The search can identify results506from key value pairs422in key value store424, which are returned to requester502. With reference now toFIG.6, a dataflow diagram for updating point cloud data in key value pairs is depicted in accordance with an illustrative embodiment. In this illustrative example, key value store600stores key value pairs602in groups of key value pairs that correspond to classes in this example. As depicted, key value pairs602include m sets of key value pairs for m classes from class1key value pairs604through class m key value pairs606. Each set of key value pairs corresponds to a class in this example. In this illustrative example, other types of information can also be stored in key value store600in addition to key value pairs602. As depicted, key value store600also stores pre-existing layer1(vegetation map)610and pre-existing layer2(rasterized street map)612.
In this illustrative example, pre-existing layer1(vegetation map)610can be a hyperspectral satellite map in which vegetation is identified by its greenness. Pre-existing layer2(rasterized street map)612can be a map in which the pixels represent a classification of roads, buildings, or other structures. As depicted, analyzer614can identify classes of information corresponding to classes1-mfrom pre-existing layer1(vegetation map)610and pre-existing layer2(rasterized street map)612. This information can be analyzed with class1key value pairs604through class m key value pairs606to generate revised class1layer620through revised class m layer622. These revised class layers can have enhanced quality as compared to class1key value pairs604through class m key value pairs606. For example, if class1key value pairs604is for vegetation, analyzer614can create revised class1layer620that is an improved vegetation layer. Revised class1layer620can include points where both the satellite image in pre-existing layer1(vegetation map)610and the point cloud data in class1key value pairs604indicate the presence of vegetation. In other words, revised class1layer620will indicate the presence of vegetation for locations in which both of these sources also indicate the presence of vegetation. In another example, if class m is roadways, the point cloud data generated for class m key value pairs606can have high spatial accuracy. However, data can be missing when the Lidar does not penetrate to the ground. As a result, pre-existing layer2(rasterized street map)612can be used to fill in gaps in the point cloud data represented in class m key value pairs606to form revised class m layer622. These revised class layers can contain previously missing data and can be used to update the key value pairs for different classes in key value store600. The updating of key value pairs for different classes can increase the resolution for one or more classes. In other illustrative examples, these revised class layers can be revised class layers in a rasterized form that can be converted into key value pairs and replace the corresponding class key value pairs. As a result, increased quality of key value pairs602can be created by analyzer614. Analyzer614can be implemented in data manager212inFIG.2. Analyzer614can be implemented using a number of different analytic, inference, and model analysis processes. In some illustrative examples, analyzer614can be an artificial intelligence system in the form of a machine learning model. Turning now toFIG.7, an illustration of point cloud data processing is depicted in accordance with an illustrative embodiment. In this illustrative example, point cloud data700comprises data points in which each data point has a longitude, a latitude, and an elevation value. When point cloud data700is rasterized into rasterized layers, each rasterized layer corresponds to a class, and these layers can be converted into key value pairs704. As depicted, key value pairs704comprise low vegetation key value pairs706, medium vegetation key value pairs708, high vegetation key value pairs710, buildings key value pairs712, and road key value pairs714. Key value pairs704are used to form derived layers716. Key value pairs704can be queried or analyzed in a number of different ways. For example, different classes in key value pairs704can be analyzed to identify a mean, a maximum, a minimum, or other statistical features.
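The kind of per-class analysis described above can be sketched briefly: a statistic over one class's key value pairs and a cell-wise arithmetic combination of two classes to form a derived layer. The dictionary layout keyed by (longitude, latitude) and the function names are assumptions, and the canopy-height style difference is only one illustration of the derived layers mentioned here.

```python
# Illustrative sketch only: simple statistics over one class of key value
# pairs and a cell-wise difference between two classes to form a derived layer.

def class_statistics(pairs: dict) -> dict:
    """Mean, maximum, and minimum of the values for one class."""
    values = list(pairs.values())
    return {"mean": sum(values) / len(values), "max": max(values), "min": min(values)}

def difference_layer(minuend: dict, subtrahend: dict) -> dict:
    """Cell-wise subtraction for keys present in both classes, e.g. surface
    heights minus terrain heights for a canopy-height style layer."""
    return {key: minuend[key] - subtrahend[key]
            for key in minuend.keys() & subtrahend.keys()}

# Hypothetical (longitude, latitude) keyed pairs:
surface = {(0.0, 0.0): 30.0, (0.0, 1.0): 12.0}
terrain = {(0.0, 0.0): 10.0, (0.0, 1.0): 11.0}
canopy = difference_layer(surface, terrain)   # {(0.0, 0.0): 20.0, (0.0, 1.0): 1.0}
stats = class_statistics(surface)             # {'mean': 21.0, 'max': 30.0, 'min': 12.0}
```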
Further, functions such as addition, subtraction, division, multiplication, and other operations can be performed on key value pairs704. These and other functions can be performed on different classes of key value pairs704to create different types of models in derived layers716. In this example, derived layers716includes digital terrain model (DTM)718, digital surface model (DSM)720, and canopy height model minus road722. The illustration of point cloud data processing inFIG.7is provided as an example of one implementation and not meant to limit the manner in which other illustrative examples can be implemented. In another illustrative example, other classes can be used in addition to or in place of the classes depicted. For example, unclassified, never classified, utility pole, transmission tower, and other types of classes can be used. Turning next toFIG.8, a flowchart of a process for rasterizing point cloud data is depicted in accordance with an illustrative embodiment. The process inFIG.8can be implemented in hardware, software, or both. When implemented in software, the process can take the form of program instructions that are run by one or more processor units located in one or more hardware devices in one or more computer systems. For example, the process can be implemented in data manager212in computer system210inFIG.2. The process begins by rasterizing the point cloud data into rasterized layers based on classes in which each rasterized layer in the rasterized layers corresponds to a class in the classes (step800). The process creates key value pairs from the rasterized layers (step802). The process stores the key value pairs in a key value store (step804). The process terminates thereafter. With reference toFIG.9, a flowchart of a process for searching a key value store is depicted in accordance with an illustrative embodiment. The steps in this flowchart are examples of additional steps that can be performed with the process inFIG.8. The process receives a query (step900). The process searches the key value pairs in the key value store using the query (step902). The process returns a result from searching the key value pairs in the key value store (step904). The process terminates thereafter. Turning toFIG.10, a flowchart of a process for updating key value pairs is depicted in accordance with an illustrative embodiment. The steps in this flowchart are examples of additional steps that can be performed with the process inFIG.8. The process identifies updated data for a selected class (step1000). The process updates the key value pairs having the selected class using the updated data (step1002). The process terminates thereafter. InFIG.11, a flowchart of a process for rasterizing point cloud data is depicted in accordance with an illustrative embodiment. The steps in this flowchart are an example of an implementation of step800inFIG.8. The process identifies a set of files containing the point cloud data (step1100). The process rasterizes the point cloud data in each file into a set of rasterized layers in the rasterized layers based on the classes for the point cloud data in which each rasterized layer in the rasterized layers corresponds to a class in the classes (step1102). The process terminates thereafter. The flowcharts and block diagrams in the different depicted embodiments illustrate the architecture, functionality, and operation of some possible implementations of apparatuses and methods in an illustrative embodiment. 
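To make the statistics and per-class arithmetic described above forFIG.7concrete, the following Python sketch computes simple statistics for one class of key value pairs and derives a canopy-height-style layer by subtracting a terrain class from a surface class cell by cell. The class contents and cell keys are assumed toy values; the subtraction is only meant to suggest how a derived layer such as a DTM, DSM, or canopy height model could be produced from key value pairs.

def class_statistics(pairs):
    """Mean, maximum, and minimum of the values in one class of key value pairs."""
    values = list(pairs.values())
    return {"mean": sum(values) / len(values),
            "max": max(values),
            "min": min(values)}

def derive_canopy_height(surface_pairs, terrain_pairs):
    """Subtract terrain height from surface height wherever both classes have a value."""
    return {cell: surface_pairs[cell] - terrain_pairs[cell]
            for cell in surface_pairs.keys() & terrain_pairs.keys()}

surface = {(0, 0): 12.0, (0, 1): 15.5}     # e.g. high-vegetation returns
terrain = {(0, 0): 2.0, (0, 1): 2.5}       # e.g. ground returns
print(class_statistics(surface))               # {'mean': 13.75, 'max': 15.5, 'min': 12.0}
print(derive_canopy_height(surface, terrain))  # {(0, 0): 10.0, (0, 1): 13.0}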
In this regard, each block in the flowcharts or block diagrams may represent at least one of a module, a segment, a function, or a portion of an operation or step. For example, one or more of the blocks can be implemented as program instructions, hardware, or a combination of the program instructions and hardware. When implemented in hardware, the hardware may, for example, take the form of integrated circuits that are manufactured or configured to perform one or more operations in the flowcharts or block diagrams. When implemented as a combination of program instructions and hardware, the implementation may take the form of firmware. Each block in the flowcharts or the block diagrams can be implemented using special purpose hardware systems that perform the different operations or combinations of special purpose hardware and program instructions run by the special purpose hardware. In some alternative implementations of an illustrative embodiment, the function or functions noted in the blocks may occur out of the order noted in the figures. For example, in some cases, two blocks shown in succession can be performed substantially concurrently, or the blocks may sometimes be performed in the reverse order, depending upon the functionality involved. Also, other blocks can be added in addition to the illustrated blocks in a flowchart or block diagram. Turning now toFIG.12, a block diagram of a data processing system is depicted in accordance with an illustrative embodiment. Data processing system1200can be used to implement server computer104, server computer106, client devices110, inFIG.1. Data processing system1200can also be used to implement computer system210inFIG.2. In this illustrative example, data processing system1200includes communications framework1202, which provides communications between processor unit1204, memory1206, persistent storage1208, communications unit1210, input/output (I/O) unit1212, and display1214. In this example, communications framework1202takes the form of a bus system. Processor unit1204serves to execute instructions for software that can be loaded into memory1206. Processor unit1204includes one or more processors. For example, processor unit1204can be selected from at least one of a multicore processor, a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a network processor, or some other suitable type of processor. Further, processor unit1204can be implemented using one or more heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit1204can be a symmetric multi-processor system containing multiple processors of the same type on a single chip. Memory1206and persistent storage1208are examples of storage devices1216. A storage device is any piece of hardware that is capable of storing information, such as, for example, without limitation, at least one of data, program instructions in functional form, or other suitable information either on a temporary basis, a permanent basis, or both on a temporary basis and a permanent basis. Storage devices1216may also be referred to as computer-readable storage devices in these illustrative examples. Memory1206, in these examples, can be, for example, a random-access memory or any other suitable volatile or non-volatile storage device. Persistent storage1208may take various forms, depending on the particular implementation. 
For example, persistent storage1208may contain one or more components or devices. For example, persistent storage1208can be a hard drive, a solid-state drive (SSD), a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage1208also can be removable. For example, a removable hard drive can be used for persistent storage1208. Communications unit1210, in these illustrative examples, provides for communications with other data processing systems or devices. In these illustrative examples, communications unit1210is a network interface card. Input/output unit1212allows for input and output of data with other devices that can be connected to data processing system1200. For example, input/output unit1212may provide a connection for user input through at least one of a keyboard, a mouse, or some other suitable input device. Further, input/output unit1212may send output to a printer. Display1214provides a mechanism to display information to a user. Instructions for at least one of the operating system, applications, or programs can be located in storage devices1216, which are in communication with processor unit1204through communications framework1202. The processes of the different embodiments can be performed by processor unit1204using computer-implemented instructions, which may be located in a memory, such as memory1206. These instructions are referred to as program instructions, computer usable program instructions, or computer-readable program instructions that can be read and executed by a processor in processor unit1204. The program instructions in the different embodiments can be embodied on different physical or computer-readable storage media, such as memory1206or persistent storage1208. Program instructions1218is located in a functional form on computer-readable media1220that is selectively removable and can be loaded onto or transferred to data processing system1200for execution by processor unit1204. Program instructions1218and computer-readable media1220form computer program product1222in these illustrative examples. In the illustrative example, computer-readable media1220is computer-readable storage media1224. Computer-readable storage media1224is a physical or tangible storage device used to store program instructions1218rather than a medium that propagates or transmits program instructions1218. Computer readable storage media1224, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Alternatively, program instructions1218can be transferred to data processing system1200using a computer-readable signal media. The computer-readable signal media are signals and can be, for example, a propagated data signal containing program instructions1218. For example, the computer-readable signal media can be at least one of an electromagnetic signal, an optical signal, or any other suitable type of signal. These signals can be transmitted over connections, such as wireless connections, optical fiber cable, coaxial cable, a wire, or any other suitable type of connection. Further, as used herein, “computer-readable media1220” can be singular or plural. 
For example, program instructions1218can be located in computer-readable media1220in the form of a single storage device or system. In another example, program instructions1218can be located in computer-readable media1220that is distributed in multiple data processing systems. In other words, some instructions in program instructions1218can be located in one data processing system while other instructions in program instructions1218can be located in another data processing system. For example, a portion of program instructions1218can be located in computer-readable media1220in a server computer while another portion of program instructions1218can be located in computer-readable media1220located in a set of client computers. The different components illustrated for data processing system1200are not meant to provide architectural limitations to the manner in which different embodiments can be implemented. In some illustrative examples, one or more of the components may be incorporated in, or otherwise form a portion of, another component. For example, memory1206, or portions thereof, may be incorporated in processor unit1204in some illustrative examples. The different illustrative embodiments can be implemented in a data processing system including components in addition to or in place of those illustrated for data processing system1200. Other components shown inFIG.12can be varied from the illustrative examples shown. The different embodiments can be implemented using any hardware device or system capable of running program instructions1218. Thus, illustrative embodiments provide a computer implemented method, computer system, and computer program product for rasterizing point cloud data. One or more illustrative examples can rasterize point cloud data based on the class of the data points to generate rasterized layers in which each layer corresponds to a class. The rasterized layers can be matrices with indices that represent the longitude and latitude of the data points, with a pixel value that can represent a height of the point in response to the point cloud data being generated using a lidar laser system. These rasterized layers can then be converted into key value pairs. In creating key value pairs, empty portions of the rasterized layers are not converted into key value pairs in the illustrative examples, resulting in a reduction of the amount of data that is stored. Further, the use of key value pairs also enables searching and analysis of the point cloud data represented by the key value pairs. As a result, new layers or models can be created from the key value pairs through at least one of querying or analyzing the point cloud data represented in the key value pairs. The description of the different illustrative embodiments has been presented for purposes of illustration and description and is not intended to be exhaustive or limited to the embodiments in the form disclosed. The different illustrative examples describe components that perform actions or operations. In an illustrative embodiment, a component can be configured to perform the action or operation described. For example, the component can have a configuration or design for a structure that provides the component an ability to perform the action or operation that is described in the illustrative examples as being performed by the component. 
Further, to the extent that terms “includes”, “including”, “has”, “contains”, and variants thereof are used herein, such terms are intended to be inclusive in a manner similar to the term “comprises” as an open transition word without precluding any additional or other elements. The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Not all embodiments will include all of the features described in the illustrative examples. Further, different illustrative embodiments may provide different features as compared to other illustrative embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiment. The terminology used herein was chosen to best explain the principles of the embodiment, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed here.
56,462
11861781
DETAILED DESCRIPTION Embodiments of the present disclosure relate to graphics processing units (GPUs) configured to transition from an active state to a low-power state between frame rendering operations to save power. While in the low-power state, GPU state information is stored in retention hardware such as retention random-access memories (RAMs) or retention flip-flops. A small amount of power is applied to the retention hardware, sufficient to allow the retention hardware to retain the data stored thereon while the GPU is in the low-power state. By using such retention hardware for the storage and restoration of GPU state information while the GPU is in the low-power state between rendering of frames, the GPU can transition between its low-power state and its active state relatively quickly, thereby reducing overall latency at the GPU and improving device performance. To further illustrate, a conventional GPU typically remains in an active (or "on") state throughout the time period in which image frame rendering is performed and, consequently, consumes a relatively high amount of power throughout this time period. An electronic device incorporating the GPU (e.g., a mobile phone) implements a power management scheme, wherein the GPU is configured to transition into a low-power state and remain in the low-power state during the time period between rendering consecutive image frames (sometimes referred to herein more simply as "frames"). In some embodiments, the GPU transitions to the low-power state upon completion of rendering a first frame and transitions back into the active state (sometimes referred to herein as the GPU being "woken up") when the next consecutive frame is ready to be processed by the GPU. In some embodiments, a driver associated with the GPU queues up a graphics workload for the next consecutive frame and, in response, the GPU initiates its transition from the low-power state to the active state, where the driver internally maintains a timer to recognize frame boundaries. This method of transitioning the GPU to the low-power state between frames is referred to herein as Inter Frame Power Off (IFPO). In some embodiments, IFPO is used to achieve better sustained power usage (e.g., less power consumption, longer effective battery life, etc.) during regular usage, instances of which are sometimes referred to as Day-Of-Use (DOU) applications, of a mobile device such as a smartphone or tablet computing device. Two aspects of IFPO that govern overall power reduction are the transition time from the active state to the low-power state (i.e., the low-power state transition time) and the transition time from the low-power state to the active state (i.e., the active state transition time) of the GPU. In some cases, restoring the GPU to its active state within the timing limitations of a frame is challenging. For example, frame times tend to be around 8.3 ms for a 120 Hz refresh rate. As another example, during the low-power state transition, various information (referred to herein as "GPU state information") regarding the state of the GPU and its constituent components is stored. For non-retention IFPO systems, the low-power state transition contributes about 610 μs of latency for an 8.3 ms frame with a 120 Hz refresh rate. It should be understood that any latencies described herein are intended to be illustrative and not limiting. 
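The following Python sketch is a purely behavioral toy of the IFPO scheme described above: the model drops to the low-power state when a frame finishes and is woken when the driver queues the next workload, with a timestamp standing in for the driver's internal frame-boundary timer. The class names and the absence of any real power sequencing or state saving are assumptions of the sketch.

import enum
import time

class GpuPowerState(enum.Enum):
    ACTIVE = "active"
    LOW_POWER = "low_power"

class IfpoDriverModel:
    """Toy model of the driver-side IFPO decision: enter the low-power state after a
    frame finishes and wake the GPU when the next frame's workload is queued."""

    def __init__(self):
        self.state = GpuPowerState.ACTIVE
        self.frame_done_at = None

    def on_frame_rendered(self):
        self.frame_done_at = time.monotonic()   # marks the frame boundary
        self.state = GpuPowerState.LOW_POWER    # state saving and power-down not modeled

    def on_workload_queued(self):
        idle_time = time.monotonic() - self.frame_done_at
        self.state = GpuPowerState.ACTIVE       # state restore and power-up not modeled
        return idle_time

driver = IfpoDriverModel()
driver.on_frame_rendered()
print(driver.state)                             # GpuPowerState.LOW_POWER
time.sleep(0.002)                               # idle gap between consecutive frames
idle = driver.on_workload_queued()
print(driver.state, f"idle for ~{idle * 1e3:.1f} ms")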
In practice, system level latencies can be significantly higher than those described herein and are generally dependent on the particular implementation of the system. During the active state transition, the previously stored GPU state information is restored and the GPU components are initialized to process the next frame. For non-retention IFPO systems, the active state transition contributes about 520 μs of latency for an 8.3 ms frame with 120 Hz refresh rate. Thus, more than one-eighth of the frame time is taken up by non-retention IFPO low-power state transition and non-retention active state transition. The retention IFPO processes described herein desirably reduce these latencies by storing GPU state information on retention hardware that is kept in a retention state when the GPU is in its low-power state. By employing the retention hardware in this way, a portion of the system-level latencies typically encountered in non-retention IFPO processes are instead confined to the GPU in the retention IFPO processes described herein. According to aspects of the present disclosure, some or all of the GPU state information is stored in retention hardware that is coupled to or included in the GPU in order to increase the speed with which the state information is stored and restored, and therefore increase the speed with which the GPU can transition between the active state and the low-power state and between the low-power state and the active state during the IFPO process. In some embodiments, the retention hardware is included as part of a system-on-a-chip (SoC) that includes the GPU. Alternatively, in some embodiments, the retention hardware is included on a die separate from that of the GPU. In some embodiments, the retention hardware includes one or more retention RAMs or retention flip-flops. In some embodiments, microcode and/or register settings of the GPU are stored in the retention hardware, as the IFPO steps of microcode loading and saving and restoring register settings typically tend to consume more time than other steps of the IFPO process. By saving and loading the microcode and register settings to and from retention hardware rather than having to perform direct memory accesses (DMAs) to save and load the microcode and register settings to and from dynamic RAM (DRAM) of the device memory106, the latency associated with these operations is reduced (e.g., from milliseconds to microseconds), since such DMAs have comparatively high latency. Generally, by using an IFPO technique in combination with retention hardware, as described herein, the GPU achieves around 240 mW of static power consumption reduction compared to that achieved by non-IFPO techniques, and around 200 mW of static power consumption reduction compared to that achieved by non-retention IFPO techniques. In some embodiments the retention hardware requires additional substrate area compared to non-retention variants (e.g., non-retention RAM, non-retention flip-flops). FIG.1illustrates a processing system100that includes a GPU102configured to perform improved IFPO operations that utilize retention memory hardware130. The processing system100includes a GPU102, a host CPU104, a device memory106utilized by the GPU102, and a system memory108shared by the GPU102and the host CPU104. The memories106,108include any of a variety of random access memories or combinations thereof, such as a double-data-rate dynamic random access memory (DDR DRAM), a graphics DDR DRAM (GDDR DRAM), and the like. 
In the depicted embodiment, the GPU102includes a command processor (CP)110, a dispatch processor (DP)112, a plurality of compute units (CU)114(numbered114-1,114-2,114-3, and114-N), and a memory management subsystem that includes a memory controller122for managing address translation operations for one or all of the memories106and108and the retention hardware130. The memory management subsystem further includes a cache hierarchy including one or more levels of data and instruction caching, such as a private level 0 (L0) cache124at each compute unit114, a shared level 1 (L1) cache126, and a shared level 2 (L2) cache128. In some embodiments, the retention hardware130is separate from the system memory108and the device memory106. In some embodiments, the retention hardware130is included as part of the device memory106. For some embodiments in which the retention hardware130is separate from the device memory106, the system memory108and the device memory106are both non-retention memories, meaning that they are operable in active and inactive states, but are not operable in a retention state in which they retain stored data without being capable of read/write functionality. For some embodiments in which the retention hardware is included in the device memory106, a first subset of RAM of the device memory106is non-retention RAM and a second subset of RAM is retention RAM that is included in the retention hardware130and is operable in the retention state. In some embodiments, the retention hardware130is included as part of a system-on-a-chip (SoC) that includes the GPU102. Alternatively, in some embodiments, the retention hardware130is included on a die separate from that of the GPU102. In some embodiments, the retention hardware130is dedicated for use by only the GPU102(i.e., it is not directly accessible by the host CPU104). The retention hardware130includes one or more types of data storage devices operable in respective retention modes, such as retention RAM or retention flip-flops. For example, in some embodiments the retention hardware130employs a retention RAM that is configured to be operable in two or more modes, including a normal/active mode and a retention mode, sometimes referred to as a standby mode or a sleep mode. In the retention mode, the RAM is placed in a retention state in which the power supply voltages applied to the memory cells of an array of memory cells of the RAM are reduced to voltages below that necessary for access, but above the minimum power supply voltage required for each cell to retain its stored data state, which is sometimes referred to as its data-state retention voltage (DRV). In some embodiments, the retention RAM is powered using a secondary power supply voltage when in the retention state, which allows the data stored on the retention RAM to be retained throughout the time that the retention RAM is in the retention state and to subsequently be retrieved when the main power supply voltage is switched back on (e.g., upon initiating the IFPO active state transition for the GPU102for embodiments in which the retention hardware includes retention RAM). For example, to protect the data stored on the memory cells of a RAM, the RAM is biased, in the retention state, to a secondary power supply voltage that is above the DRV for the memory cell in the array of the retention RAM having the highest (e.g., worst) DRV. 
For example, the secondary power supply voltage allows the retention RAM to retain the data stored thereon, but is not high enough for read and write operations to be performed at the retention RAM. In some embodiments, the retention RAM is coupled to a single power supply voltage rail that is dynamically biased to the main power supply voltage when the retention RAM is in the normal/active state and biased to the secondary power supply voltage when the retention RAM is in the retention state. In other embodiments, the retention hardware130includes one or more retention flip-flops, wherein each retention flip-flop is a volatile latch circuit that is configured to be operable in two or more modes including a normal/active mode and a retention mode, which is sometimes referred to as a standby mode or a sleep mode. All portions of the retention flip-flop receive power during normal/active modes, and the retention flip-flop functions substantially the same as a normal flip-flop to receive and temporarily store data bits during logic operations performed by its host circuit (e.g., the GPU102). When the retention flip-flop is instructed to switch from the normal/active mode into the retention mode, the retention flip-flop retains the last-received data bit value in a way that facilitates switching off a main power supply voltage to selected portions of the retention flip-flop in order to conserve power during the retention mode, and that allows the last-received data bit value to be output by the retention flip-flop when the main power supply voltage is switched back on (e.g., upon initiating the IFPO active state transition for the GPU102for embodiments in which the retention hardware130includes retention flip-flops). Specifically, a portion of the retention flip-flop utilizes a secondary power supply voltage to remain active while the GPU102is in the powered-down state in order to retain the last-received data value while the main supply voltage is turned off, while other portions of the retention flip-flop are inactive, thereby facilitating both lower power consumption during standby/sleep modes, and also resumption of operations using last-received data values when normal operations are resumed. For example, the secondary power supply voltage allows the retention flip-flop to retain the data stored thereon, but is not high enough for read and write operations to be performed at the retention flip-flop. As an illustrative example, during normal operation the GPU102is tasked by the host CPU104with rendering a set of frames to be displayed at a screen of the mobile device that includes the GPU102. The GPU102acts in accordance with an IFPO procedure (e.g., as illustrated inFIG.2), such that after rendering each frame of the set of frames, the GPU102transitions into a low-power state (i.e., sometimes referred to herein as the "IFPO low-power state transition") and, upon receiving a subsequent frame for rendering, the GPU102transitions back into an active state (sometimes referred to herein as the "IFPO active state transition"). 
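A minimal sketch of the supply-voltage selection described above: the retention-state (secondary) supply is chosen at or above the worst-case DRV of any memory cell or flip-flop and below the main supply. The per-cell DRVs, the main supply level, and the safety margin below are assumed values used only for illustration.

def select_secondary_supply_voltage(cell_drvs_mv, margin_mv=0.0):
    """Pick a retention-state supply at or above the highest (worst) DRV of any cell
    or flip-flop, optionally with a safety margin."""
    return max(cell_drvs_mv) + margin_mv

cell_drvs_mv = [310.0, 325.0, 298.0, 340.0]   # assumed per-cell DRVs in millivolts
main_supply_mv = 750.0                         # assumed main power supply voltage
retention_mv = select_secondary_supply_voltage(cell_drvs_mv, margin_mv=10.0)
assert retention_mv < main_supply_mv           # retention rail stays below the main rail
print(f"retention rail: {retention_mv} mV (main rail: {main_supply_mv} mV)")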
During the IFPO low-power state transition, GPU state information132, which defines various aspects of the current state of the GPU and its constituent components, is stored on the retention hardware130. According to various embodiments, the GPU state information132includes state information for one or more microcontrollers, pipelines, queues, state machines, GPU registers, program counters, and/or the like and, in some instances, includes data stored in the GPU registers and/or program counters. In some embodiments, the GPU registers for which data is included in the GPU state information132include configuration registers and/or control registers that store information for configuring or controlling the shaders, the first-in-first-out (FIFO) buffers, the virtual memory, the L0, L1, and/or L2 caches, and/or other applicable components or aspects of the GPU102. As an example, the GPU state information132stored at the retention hardware130includes microcode (i.e., hardware-level instructions that implement higher-level machine code instructions or internal state machine sequencing, which sometimes involves implementing logic such as scheduling logic) corresponding to one or more microcontrollers of the GPU (e.g., the command processor110), GPU register settings (e.g., of one or more of the various types of configuration registers and/or control registers of the GPU described previously), addresses of the GPU registers, and/or other applicable data. In some embodiments, in the IFPO low-power state transition period, all components of the GPU are turned off, except for the retention hardware130, which is held in its retention state via application of a secondary power supply voltage. Generally, the secondary power supply voltage is sufficient to ensure retention of the data stored on the retention hardware130, but is less than the main power supply voltage required to write data to and read data from the retention hardware130. The retention hardware130generally remains in its retention state for as long as the GPU102remains in the low-power state. Upon receipt of a subsequent frame for processing, the IFPO active state transition is triggered, in response to which the GPU102restores power to its constituent components and switches the power supplied to the retention hardware130from the secondary power supply voltage to the main power supply voltage. In some embodiments, the secondary power supply voltage supplied to the retention hardware130in its retention state is set to the minimum voltage required to retain the data stored in the retention hardware130. In some embodiments, this minimum voltage is selected based on the DRV of the memory cell or flip-flop of the retention hardware130having the highest DRV. In some embodiments, the minimum voltage is directly set to the DRV of this memory cell or flip-flop. Compared to a non-retention IFPO process, embodiments of the retention IFPO process provided herein reduce latencies associated with the low-power state transition and active state transition of the GPU102when entering and exiting the low-power state between rendering consecutive image frames. 
In some examples, given an 8.3 ms frame period, a non-retention IFPO process generally has a low-power state transition latency of about 610 μs due to GPU quiescence, Performance Monitoring Unit (PMU) handshaking, saving register settings to the speculative register map (SRM), saving microcode and the register settings to the DRAM after saving the register settings to the SRM, and ramping down the power rail. In some examples, given an 8.3 ms frame period, the active state transition latency of the non-retention IFPO process is about 520 μs due to power rail ramp up, run list controller (RLC) microcode loading, loading register settings from DRAM to the SRM, GPU restoration from SRM, and GPU initialization. In contrast, in some embodiments the retention IFPO process described herein has a low-power state transition latency of about 510 μs and an active state transition latency of about 400 μs since the retention hardware130obviates the need for the step of saving GPU state information to the DRAM (e.g., saving about 99 μs) during the low-power state transition and the steps of microcode loading and SRM loading from the DRAM (e.g., saving about 12 μs and about 110 μs, respectively), which translates to a total latency reduction of about 220 μs in such embodiments. FIG.2shows a timing diagram200illustrating IFPO operations of a GPU, described here in the context of the processing system100ofFIG.1. In the present example, a frame period214is depicted, which spans a time period between the receipt of a first image frame for rendering by the GPU102and the receipt of a second image frame for rendering by the GPU102. When rendering a sequence of image frames, the operations of the GPU performed during the frame period214are substantially repeated for each image frame of the sequence. At time period202, the GPU102is in a low-power state in which power is not supplied to most or all components of the GPU102(e.g., the command processor110, the dispatch processor112, the compute units114, the caches L1 and L2, and the memory controller122). In some embodiments, the GPU102is in the low-power state at the time period202due to the GPU102transitioning to the low-power state upon rendering a preceding frame. During an active state transition time period204, the GPU102executes an active state transition sequence to transition from the low-power state to the active state. In some embodiments, the active state transition sequence includes restoring power to the components of the GPU to which power was not supplied while the GPU102was in the low-power state, performing initialization processes at the GPU102, and transitioning the retention hardware130from the retention state to the active state to make the GPU state information132stored thereon, if any, available for use by the GPU102. During an active time period206, the GPU renders the first image frame. For example, the GPU receives instructions and raw image data (e.g., raw vertices and primitives) from the host CPU104and processes the raw image data (e.g., via shading, primitive setup, rasterization, tessellation, pixel processing, and/or the like) according to the instructions using the compute units114to render the first image frame. Once the first image frame is rendered, the GPU102transitions into a low-power state during a low-power state transition time period208. During the low-power state transition time period208, the GPU102stores GPU state information132at the retention hardware130(e.g., as the GPU state information is generated). 
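Using the approximate transition latencies quoted above, the following sketch tallies how much of an 8.3 ms frame period the IFPO transitions consume with and without retention hardware. The arithmetic only restates the figures in the text; the split into two dictionary entries per configuration is an assumption of the sketch.

FRAME_PERIOD_US = 8300.0                      # ~8.3 ms frame at a 120 Hz refresh rate

# Approximate per-frame transition latencies quoted in the text.
non_retention = {"low_power_entry": 610.0, "active_exit": 520.0}
retention = {"low_power_entry": 510.0, "active_exit": 400.0}

def transition_overhead(latencies_us):
    total = sum(latencies_us.values())
    return total, total / FRAME_PERIOD_US

for name, latencies in (("non-retention IFPO", non_retention),
                        ("retention IFPO", retention)):
    total, fraction = transition_overhead(latencies)
    print(f"{name}: {total:.0f} us per frame ({fraction:.1%} of the frame period)")

saved = sum(non_retention.values()) - sum(retention.values())
print(f"retention hardware removes about {saved:.0f} us of transition time per frame")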
In some embodiments, the GPU state information132stored at the retention hardware130includes microcode, GPU register settings, and/or other applicable data. During the low-power state transition time period208, the GPU102stops supplying power to most or all of its constituent components, and the retention hardware130transitions into a retention state in which power supplied to the retention hardware is decreased to a level that is sufficient for data retention at the retention hardware130, but that is not sufficient for read/write operations to be performed at the retention hardware130. In some embodiments, the retention hardware130is transitioned into the retention state by changing a power supply voltage supplied to the retention hardware130from a main power supply voltage (e.g., which provides sufficient power to the retention hardware130for read/write operations to be performed) to a secondary power supply voltage, where the secondary power supply voltage is lower than the main power supply voltage. In some embodiments, the retention hardware130includes retention RAMs, and the secondary power supply voltage corresponds to the DRV of a memory cell of the plurality of retention RAMs having the highest DRV of all memory cells of the retention RAMs. During time period210, the GPU102remains in the low-power state in which power is not supplied to most or all components of the GPU, as indicated above, and the retention hardware130remains in the retention state and continues to store the GPU state information132. At the beginning of the next active state transition time period212, the GPU102receives the next (second) image frame for rendering, which triggers the GPU102to perform its active state process and which marks the start of the next frame period. As during the time period204, the GPU102restores power to its constituent components to transition back into the active state of the GPU and retrieves and restores the GPU state information132from the retention hardware130as part of the active state transition process. The retention hardware130transitions from the retention state to its active state during the active state transition time period212, so that the GPU state information132can be retrieved by the GPU102. In some embodiments, the retention hardware130transitions from the retention state to the active state by switching the power supply voltage supplied to the retention hardware130from the secondary power supply voltage to the main power supply voltage. The main power supply voltage and the secondary power supply voltage levels are generally dependent on the operational and hardware specifications of the processing system100, but it should be understood that the secondary power supply voltage level is less than the main power supply voltage level. FIG.3shows a chart300illustrating instantaneous power consumption over time for a GPU that is not configured for IFPO, a GPU that is configured for IFPO without retention hardware (i.e., "non-retention IFPO"), and a GPU that is configured for IFPO with retention hardware (i.e., "retention IFPO"). As shown, the chart300includes plots310,320, and330that share a common time axis for the purpose of comparison, but that respectively correspond to different GPU configurations. The non-IFPO plot310represents the instantaneous power consumption over time for a GPU that is not configured for IFPO. During an active time period312, the GPU renders an image frame and has a relatively high instantaneous power consumption. 
During a time period314(i.e., spanning the end of the time period312to the beginning of the time period316), the GPU transitions into an idle state in which its constituent components are still supplied with power, but are not actively rendering an image frame. The GPU remains in the idle state throughout an idle time period316. During the idle state, the GPU continues to have an instantaneous power consumption of about 240 mW, for example. During a time period318, the GPU transitions from the idle state back into the active state upon receiving the next image frame for rendering. The non-retention IFPO plot320represents the instantaneous power consumption over time for a GPU that is configured for IFPO, but that does not include any retention hardware and instead utilizes non-retention DRAM of the device memory via DMA to store and retrieve GPU state information (e.g., microcode, register settings, and/or the like). During an active time period322, the GPU renders an image frame and has a relatively high instantaneous power consumption. At a low-power state transition time period324(i.e., spanning the end of the time period322to the beginning of the time period326), the GPU transitions into a non-retention low-power state in which components (e.g., compute units, microprocessors, caches, controllers, memory modules, and/or the like) of the GPU are no longer supplied with power. During the low-power state transition time period324, the GPU stores GPU state information (e.g., register settings) for rendering the next frame in the DRAM of the device memory coupled to the GPU. In some examples, storing the GPU state information in the DRAM takes about 99 μs due to latencies associated with DRAM DMA. The GPU remains in the non-retention low-power state throughout a low-power state time period326, so the power consumption of the GPU is substantially zero during the low-power state time period326. During the low-power state, the GPU continues to supply no power to the components mentioned above. During an active state transition time period328, the GPU transitions from the non-retention low-power state back into the active state upon receiving the next image frame for rendering. For example, during the active state transition time period328, the GPU power rail is ramped up, microcode is loaded to the GPU by the RLC, the SRM of the GPU is loaded with GPU state information (e.g., register settings) from the DRAM that were stored there during the low-power state transition time period324, the GPU is restored from the SRM, and the GPU is initialized. The retention IFPO plot330represents the instantaneous power consumption over time for a GPU that is configured for IFPO and that includes retention hardware on which GPU state information is stored. The GPU represented in the plot330is described here in the context of the GPU102and the processing system100ofFIG.1. During an active time period332, the GPU102renders an image frame and has a relatively high instantaneous power consumption. At a low-power state transition time period334(i.e., spanning the end of the time period332to the beginning of the time period336), the GPU102transitions into a low-power state (i.e., "retention low-power state") in which components (e.g., compute units, microprocessors, caches, controllers, memory modules, and/or the like) of the GPU102are no longer supplied with power, so instantaneous power consumption decreases during the low-power state transition time period334. 
During the low-power state transition time period334, the GPU state information132(e.g., register settings and microcode) for the GPU102is stored at the retention hardware130, so there is no need to add latency by storing the GPU state information132in DRAM, which, in some embodiments, reduces latency by about 99 μs compared to the non-retention IFPO example of the plot320. Additionally, during the low-power state transition time period334, the retention hardware130transitions from an active state to a retention state in which the power supply voltage supplied to the retention hardware130is changed from a higher main power supply voltage to a lower secondary power supply voltage. In some embodiments, the secondary power supply voltage is set to a maximum DRV among the DRVs of all memory cells of the retention hardware130. The GPU102remains in the low-power state throughout a retention low-power state time period336. During the retention low-power state, the GPU continues to supply no power to the components mentioned above, but some voltage leakage (e.g., contributing about 0.32 mW of power consumption in the present example) is expected to occur at the retention hardware130in its retention state. During an active state transition time period338, the GPU102transitions from the retention low-power state back into the active state upon receiving the next image frame for rendering. For example, during the active state transition time period338, the GPU power rail is ramped up, the retention hardware130transitions to its active state (e.g., by switching from receiving the secondary power supply voltage to receiving the main power supply voltage), the GPU102is restored from GPU state information132(e.g., register settings and microcode) stored at the retention hardware130, and the GPU is initialized. Since the GPU102does not need to access the DRAM to retrieve the GPU state information132, the latency attributable to the active state transition time period338is reduced (e.g., by about 120 μs in some embodiments). As shown, the low-power state and active state transition times are significantly longer in the non-retention IFPO example of the plot320than in the retention IFPO example of the plot330due to latencies associated with storing and retrieving GPU state information to/from DRAM during these transition time periods, which are not performed in the retention IFPO example of the plot330due to the inclusion and utilization of the retention hardware130. It should be noted that the roughly 0.32 mW power consumption attributable to the retention hardware130in the retention IFPO example of the plot330is significantly offset by the power consumption reduction achieved by the reduction in latency achieved over the non-retention IFPO example of the plot320. 
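The offset noted above can be made concrete with a rough per-frame energy budget. Only the 0.32 mW retention leakage and the roughly 220 μs of removed transition time come from the description; the idle duration and the active-state power draw used below are assumed values chosen purely to illustrate the comparison.

# Rough per-frame energy bookkeeping (assumed values are marked as such).
FRAME_PERIOD_MS = 8.3
IDLE_MS = 6.0                    # assumed idle gap within the frame period
RETENTION_LEAKAGE_MW = 0.32      # leakage of the retention hardware in its retention state
TRANSITION_SAVING_MS = 0.220     # ~220 us of transition time removed per frame
P_ACTIVE_MW = 1000.0             # assumed GPU power draw while transitioning (not from the text)

leakage_energy_uj = RETENTION_LEAKAGE_MW * IDLE_MS            # mW x ms = microjoules
transition_energy_saved_uj = P_ACTIVE_MW * TRANSITION_SAVING_MS
print(f"retention leakage per frame: {leakage_energy_uj:.2f} uJ")
print(f"energy saved by shorter transitions: {transition_energy_saved_uj:.0f} uJ")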
At block406, the GPU102transitions from an active state to a low-power state in which components of the GPU102(e.g., compute units, microprocessors, caches, controllers, memory modules, and/or the like) are no longer supplied with power. Additionally, the retention hardware130transitions into a retention state. In some embodiments, transitioning the retention hardware130into the retention state includes switching a power supply voltage supplied to the retention hardware130from a main power supply voltage to a secondary power supply voltage, where the secondary power supply voltage is lower than the main power supply voltage. In some embodiments, the secondary power supply voltage is set to a maximum DRV among the DRVs of memory cells of the retention hardware130. In some embodiments, the GPU102instructs or otherwise causes the retention hardware130to transition into the retention state. At block408, the GPU102receives an indication that a second image frame is ready for rendering. In some embodiments, the host CPU104sends the indication to the GPU102when raw vertex data and primitives for the second frame are ready for rendering by the GPU102, for example. At block410, the GPU102transitions from the low-power state to the active state (e.g., restoring power to the components mentioned above) and the retention hardware130transitions from the retention state to the active state (e.g., switching from the secondary power supply voltage to the main power supply voltage). At block412, the GPU102is restored using the GPU state information132stored on the retention hardware130upon transitioning the retention hardware to the active state. In some embodiments, the GPU102transitions to the active state by restoring microcode from the retention hardware130(e.g., using the RLC of the GPU) and restoring register settings from the retention hardware130. At block414, upon restoration of the GPU102using the GPU state information132, the GPU102renders the second image frame. In some embodiments, the apparatus and techniques described above are implemented in a system including one or more integrated circuit (IC) devices (also referred to as integrated circuit packages or microchips), such as the processing system100described above with reference toFIG.1. Electronic design automation (EDA) and computer aided design (CAD) software tools may be used in the design and fabrication of these IC devices. These design tools typically are represented as one or more software programs. The one or more software programs include code executable by a computer system to manipulate the computer system to operate on code representative of circuitry of one or more IC devices so as to perform at least a portion of a process to design or adapt a manufacturing system to fabricate the circuitry. This code can include instructions, data, or a combination of instructions and data. The software instructions representing a design tool or fabrication tool typically are stored in a computer readable storage medium accessible to the computing system. Likewise, the code representative of one or more phases of the design or fabrication of an IC device may be stored in and accessed from the same computer readable storage medium or a different computer readable storage medium. A computer readable storage medium may include any non-transitory storage medium, or combination of non-transitory storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. 
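The following Python sketch is a behavioral toy of the sequence of blocks402-414of method400described above: GPU state is stored to a retention-hardware stand-in, the model powers down, and the state is restored on wake for the next frame. The state contents, class names, and mode strings are assumptions made for the sketch; no actual hardware or timing behavior is modeled.

class RetentionHardwareModel:
    """Toy stand-in for retention RAM/flip-flops: keeps its contents while in retention."""
    def __init__(self):
        self.contents = None
        self.mode = "active"
    def store(self, state):
        assert self.mode == "active"
        self.contents = dict(state)
    def enter_retention(self):
        self.mode = "retention"        # data retained at the secondary supply voltage
    def exit_retention(self):
        self.mode = "active"
    def load(self):
        assert self.mode == "active"
        return dict(self.contents)

class GpuModel:
    """Follows the block order of method 400: render, save state, power down,
    wake on the next frame, restore state, render again."""
    def __init__(self, retention):
        self.retention = retention
        self.state = {"microcode": "ucode-v1", "registers": {"cfg": 0x1A}}
        self.powered = True
    def render(self, frame):
        assert self.powered
        return f"rendered {frame}"
    def enter_low_power(self):          # blocks 404 and 406
        self.retention.store(self.state)
        self.retention.enter_retention()
        self.state, self.powered = None, False
    def wake(self):                     # blocks 408 through 412
        self.powered = True
        self.retention.exit_retention()
        self.state = self.retention.load()

gpu = GpuModel(RetentionHardwareModel())
print(gpu.render("frame 1"))
gpu.enter_low_power()
gpu.wake()
print(gpu.render("frame 2"), gpu.state["registers"])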
Such storage media can include, but are not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium may be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)). In some embodiments, certain aspects of the techniques described above may be implemented by one or more processors of a processing system executing software. The software includes one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like. The executable instructions stored on the non-transitory computer readable storage medium may be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors. Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. 
It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.
37,903
11861782
The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features. DETAILED DESCRIPTION The following description is presented by way of example to enable a person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein and various modifications to the disclosed embodiments will be apparent to those skilled in the art. Embodiments are described by way of example only. As described above, when transformed primitives (e.g. the transformed geometry data related thereto) are stored in primitive blocks, the display list for a tile may comprise an entry for each primitive block that comprises at least one primitive that falls, at least partially, within the bounds of that tile. Since a primitive block may comprise primitives that fall, at least partially, within the bounds of multiple tiles, there may be a primitive block entry in multiple tiles for the same primitive block. To reduce this repetition of primitive block data, the tiles may be divided into groups of N×M tiles, wherein N and M are integers greater than or equal to 1, and a per tile group control stream is generated that identifies the primitive blocks (and the primitives thereof) that are relevant to each tile in the group. For example, UK Patent No. 2466576 describes storing for each group of tiles a control stream that comprises a primitive block entry for each primitive block that comprises at least one primitive that falls, at least partially, within the bounds of at least one tile in the group. Each primitive block entry comprises: (i) information identifying which tiles are valid for that primitive block, (ii) information identifying the location of the primitive block in memory, and (iii) for each valid tile, information identifying the primitives of that primitive block that fall, at least partially, within the bounds of that tile. A tile is said to be valid for a primitive block if there is at least one primitive in the primitive block that falls, at least partially, within the bounds of that tile. A primitive that falls, at least partially, within the bounds of a tile may alternatively be described as a primitive that intersects that tile. FIG.3shows an example of the control stream302of UK Patent No. 2466576 for a tile group comprising four tiles. The control stream302comprises a primitive block entry304,306for each primitive block that is valid for at least one tile in the tile group. Each primitive block entry304,306comprises a primitive block header308and a primitive block pointer310. A primitive block entry304,306may optionally comprise primitive mask data312. The primitive block header308comprises information identifying which tiles in the tile group are valid for the primitive block. For example, as shown inFIG.3the primitive block header308may comprise a primitive mask format field314,316,318and320for each tile in the tile group that indicates whether or not the tile is valid for the primitive block. 
For example, each primitive mask format field314,316,318,320may comprise two bits and '00' may indicate that the tile is invalid for the primitive block; '01' may indicate that the tile has a full primitive mask (i.e. all primitives in the primitive block are valid for the tile); '10' may indicate that the primitive mask is compressed; and '11' may indicate that the primitive mask is uncompressed. The primitive block header308may also comprise other information322such as, but not limited to, the number of vertices in the primitive block, whether or not all the tiles in the tile group are valid and have the full primitive mask (i.e. all the primitives in the primitive block are valid for the tile), and whether or not all of the tiles have the same primitive mask. The primitive block pointer310comprises the address of the primitive block in memory324. In some cases, the primitive block pointer310may also comprise other information326, such as, but not limited to, the number of primitives in the primitive block. The primitive mask data312comprises one or more primitive masks that identify the primitives of the primitive block that fall, at least partially, within the bounds of each of the valid tiles. Each primitive mask may comprise a bit for each primitive in the primitive block that indicates whether that primitive falls, at least partially, within the bounds of the corresponding tile(s). In some cases (e.g. when each valid tile has a different primitive mask), the primitive mask data312may comprise a primitive mask for each valid tile. In other cases (e.g. when all the valid tiles have the same primitive mask), the primitive mask data312may comprise only one mask which applies to all of the valid tiles. Each primitive mask may be in a compressed or uncompressed form. One of the issues with this control stream structure described in UK Patent No. 2466576 is that the primitive mask data is variable in length. This is because the size of the primitive mask data depends on the number of valid tiles, the number of primitives in the primitive block, and whether or not the primitive masks are compressed. As the primitive mask data is variable in length so are the primitive block entries. This means that for the rasterization logic to determine which primitive blocks are relevant to a tile, the rasterization logic has to process each primitive block entry in the control stream. Specifically, the rasterization logic has to process each primitive block entry to determine (i) whether the corresponding primitive block is relevant to the tile being processed; and (ii) where the next primitive block entry begins. Where there are large gaps between relevant primitive block entries in the control stream, this may affect the performance of the graphics processing system. For example, if there are 2000 primitive block entries and a tile is valid only for the first and last primitive block entries, there will be a significant gap between when the data related to the first primitive block entry is output and when the data related to the last primitive block entry is output. Accordingly, described herein is a tile group control structure, and methods and tiling engines for generating such a control structure, in which the variable length fields or portions of the primitive block entries are removed therefrom and stored separately so that the primitive block entries have a fixed length. 
Specifically, the tile group control structures described herein comprise a control stream that comprises a fixed length primitive block entry for each primitive block that comprises at least one primitive that falls, at least partially, within the bounds of at least one tile in the tile group. Like the primitive block entries of UK Patent No. 2466576, each primitive block entry comprises information (e.g. a valid tile mask) identifying the valid tiles in the tile group for that primitive block. However, instead of the variable length control data (e.g. primitive masks) for a primitive block being included in the primitive block entry, the variable length control data is stored elsewhere in memory (e.g. in another page) and the primitive block entry comprises a pointer or link to the variable length control data. The fixed length primitive block entries allow the rasterization logic to quickly identify the information (e.g. valid tile mask) identifying the valid tiles for each primitive block without having to process each primitive block entry. Specifically, the rasterization logic no longer has to process each primitive block entry to identify where the next primitive block entry starts. Instead, the rasterization logic can quickly pull out the information (e.g. valid tile mask) identifying the valid tiles to determine which primitive blocks are relevant to a particular tile. This allows the rasterization logic to quickly skip over primitive block entries that are not relevant to a particular tile. Testing has shown this can significantly decrease the time for the rasterization logic to skip over invalid entries (i.e. entries not valid for a tile). Specifically, testing has shown that this can, in some cases, double the rate at which the rasterization logic can skip invalid primitive block entries. Furthermore, the rasterization logic only has to read the variable length control data for a primitive block from memory if that primitive block is relevant to the tile. Reference is now made toFIG.4which illustrates a first example tile group control structure400which comprises a control stream402that includes one or more fixed length control entries which identify primitive blocks that are relevant to the tile group, and a control data block404,406for each relevant primitive block that is stored separately from the control stream402. Specifically, the control stream402comprises a primitive block entry408,410for each primitive block that comprises at least one primitive that falls, at least partially, within the bounds of at least one tile in the tile group. In the example ofFIG.4the control stream402comprises two primitive block entries408,410indicating that there are two primitive blocks that comprise primitives that fall, at least partially, within the bounds of at least one tile in the group. However, it will be evident to a person of skill in the art that this is an example only and that there may be any number of primitive block entries. Each primitive block entry408,410comprises valid tile information412which indicates which tiles in the tile group are valid for the corresponding primitive block, and a data pointer414that identifies the location of the corresponding control data block in memory. Each primitive block entry408,410may also, optionally, include a primitive block header422which comprises information about the primitive block and/or its relationship to the tiles of the tile group.
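By way of illustration only, the separation between a fixed-length primitive block entry and its separately stored, variable-length control data might be captured in C roughly as follows. The field widths, the 2×2 tile group size and all of the type and field names are assumptions made for this sketch and are not taken from the structures described above.

    #include <stdint.h>

    /* Illustrative only: a fixed-length primitive block entry for a tile group
     * of four tiles (2x2). Every entry occupies one 32-bit word, so a decoder
     * can step from entry to entry without parsing any variable-length data. */
    typedef struct {
        unsigned int valid_tile_mask : 4;  /* one bit per tile in the tile group   */
        unsigned int header          : 10; /* optional primitive block header bits */
        unsigned int data_pointer    : 18; /* address/offset of the control data   */
    } primitive_block_entry_t;

    /* Illustrative only: the variable-length control data stored elsewhere in
     * memory (e.g. in another page) and referenced by data_pointer. */
    typedef struct {
        uint32_t primitive_block_address;  /* location of the primitive block      */
        uint32_t num_primitives;           /* e.g. taken from a header field       */
        uint8_t  primitive_masks[];        /* zero or more (possibly compressed)
                                              per-tile primitive masks             */
    } control_data_block_t;

Because every primitive_block_entry_t in this sketch has the same size, a decoder can index or skip entries directly, which is the property the fixed-length entries described above are intended to provide.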
The valid tile information412may comprise a valid tile mask that comprises a bit for each tile in the tile group that indicates whether or not that tile is valid for the primitive block. For example, the valid tile mask for a tile group comprising four tiles may comprise four bits. In some cases, a ‘0’ may indicate that the tile is not valid for the primitive block and a ‘1’ may indicate that the tile is valid for the primitive block. However, it will be evident to a person of skill in the art that this is an example only and that a ‘0’ may indicate the corresponding tile is valid. As described above, a tile is valid for a primitive block if the primitive block comprises at least one primitive that falls, at least partially, within the bounds of that tile. Reference is now made toFIG.5which illustrates example valid tile masks for an example set of primitive blocks for a tile group500comprising four tiles (i.e. a tile group comprising a 2×2 block of tiles): a first tile502, a second tile504, a third tile506and a fourth tile508. In this example, there are two primitives: primitive A and primitive B, which fall within the bounds of the tiles in the tile group as shown inFIG.5. Primitive A forms part of a first primitive block510, and primitive B forms part of a second primitive block512. As primitive A falls, at least partially, within the bounds of the first, second and third tiles502,504,506, the valid tile mask for the first primitive block510may be ‘1 1 1 0’ wherein a ‘1’ indicates that the tile is valid for the primitive block. Similarly, as primitive B falls, at least partially, within the bounds of the second, third and fourth tiles504,506and508, the valid tile mask for the second primitive block512may be ‘0 1 1 1’. It will be evident to a person of skill in the art that this is a simple example where each primitive block comprises only a single primitive; however, in other examples primitive blocks may comprise a plurality of primitives. Returning toFIG.4, as the control data for a primitive block is stored separately from the primitive block entry for that primitive block, the data pointer414comprises information identifying the location of the corresponding control data block in memory. In some cases, the data pointer414may comprise the address of the control data block in memory. In other cases, as described in more detail below, the data pointer414may comprise an offset, and the address of the control data block for a particular primitive block may be determined from the offset and a control data base address. The control data404,406for a primitive block is defined herein as data that allows the primitives (e.g. the transformed geometry data related thereto) in the primitive block that are relevant to the rendering of a particular tile in the tile group to be obtained. The control data404,406for a primitive block may comprise a primitive block pointer416which identifies the location of the primitive block in memory. In some cases, the primitive block pointer416may comprise an address in memory at which the primitive block is stored. The control data404,406may also, optionally, comprise information418identifying the primitives of the corresponding primitive block that are relevant to each of the valid tiles. The information418identifying the primitives of the corresponding primitive block that are relevant to each of the valid tiles may comprise one or more primitive masks.
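As a minimal sketch of how the valid tile information relates to the per-tile tiling results, the following C fragment derives the valid tile masks of theFIG.5example (primitive A in the first three tiles, primitive B in the last three tiles) from per-tile primitive masks. The 2×2 tile group size, the use of 32-bit primitive masks and the helper names are assumptions made purely for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define TILES_PER_GROUP 4u  /* assumed 2x2 tile group */

    /* A tile is valid for a primitive block if at least one primitive of the
     * block falls, at least partially, within the bounds of that tile, i.e. if
     * the per-tile primitive mask for that tile is non-zero. Bit t of the
     * returned mask corresponds to the (t+1)-th tile of the group. */
    static uint32_t derive_valid_tile_mask(const uint32_t per_tile_masks[TILES_PER_GROUP])
    {
        uint32_t valid_tile_mask = 0;
        for (uint32_t tile = 0; tile < TILES_PER_GROUP; tile++) {
            if (per_tile_masks[tile] != 0)
                valid_tile_mask |= 1u << tile;
        }
        return valid_tile_mask;
    }

    int main(void)
    {
        /* Mirrors FIG.5: each block contains a single primitive (bit 0). */
        const uint32_t block_a_masks[TILES_PER_GROUP] = { 0x1, 0x1, 0x1, 0x0 };
        const uint32_t block_b_masks[TILES_PER_GROUP] = { 0x0, 0x1, 0x1, 0x1 };

        /* Prints 0x7: tiles one to three, i.e. '1 1 1 0' in the notation above. */
        printf("valid tile mask A = 0x%X\n", (unsigned)derive_valid_tile_mask(block_a_masks));
        /* Prints 0xE: tiles two to four, i.e. '0 1 1 1' in the notation above. */
        printf("valid tile mask B = 0x%X\n", (unsigned)derive_valid_tile_mask(block_b_masks));
        return 0;
    }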
In some cases, there may be one primitive mask for each valid tile that identifies the primitives in the primitive block that are relevant to that tile. In other cases, there may be a primitive mask that is shared between one or more tiles. For example, where the valid tiles of the tile group all have the same primitive mask, only a single copy of the primitive mask may be stored in the control data block404,406and the control data block404,406may comprise information (e.g. in a primitive block header as described below) indicating that all of the valid tiles have the same primitive mask. In some cases, no primitive mask may be stored in the control data block for a primitive block. For example, in some cases, if all of the primitives in the primitive block are relevant to all of the valid tiles, which may be referred to herein as the valid tiles having a full primitive mask, then the control data block404,406may simply comprise information (e.g. in a primitive block header as described below) indicating that each of the valid tiles has a full primitive mask. Each primitive mask may comprise a bit for each primitive in the primitive block that indicates whether that primitive is relevant for rendering the corresponding tile(s) (i.e. whether that primitive falls, at least partially, within the bounds of the corresponding tile(s)). For example, if there are fifty primitives in the primitive block then the primitive mask may comprise fifty bits. In some cases, a ‘1’ may indicate that the primitive falls, at least partially, within the bounds of the corresponding tile(s) and a ‘0’ may indicate that the primitive does not fall, at least partially, within the bounds of the corresponding tile(s). However, it will be evident to a person of skill in the art that this is an example only and that in other cases a ‘0’ may indicate that a primitive is relevant to the corresponding tile(s) and a ‘1’ may indicate that the primitive is not relevant to the corresponding tile(s). The primitive mask(s) may be stored in the control data block in compressed or uncompressed form. Any suitable compression technique or algorithm may be used to compress a primitive mask. The primitive block entries408,410and/or control data block404,406may optionally comprise a primitive block header420or422that provides additional information about the primitive block and/or the associated control data. Either or both of the primitive block headers420,422may comprise one or more of the following:
Full Mask Information—(e.g. a bit) that indicates that all of the tiles in the tile group have the full primitive mask and thus no primitive masks are stored in the control data block.
Primitive Counter Information—(e.g. seven bits for up to 128 primitives) that indicates the number of primitives in the primitive block. This can help the rasterization logic determine the size of a primitive mask stored in the control data block.
Same Mask Information—(e.g. a bit) that indicates all the tiles in the tile group have the same primitive mask, so only one primitive mask is stored in the control data block.
Primitive Mask Format Information (per tile in the tile group)—(e.g. two bits per tile for up to 4 primitive mask formats) that indicates the format of the primitive mask for that tile.
For example, where the Primitive Mask Format Information comprises two bits per tile, ‘01’ may indicate that the tile has a full primitive mask and thus a primitive mask for the tile is not included in the control data block, ‘10’ may indicate that the primitive mask for the tile is stored in the control data block in compressed form, and ‘11’ may indicate that the primitive mask for the tile is stored in the control data block in uncompressed form.
Primitive Mask Start Information (per tile in the tile group)—that indicates the starting location of the primitive mask in the control data block for that tile.
It will be evident to a person of skill in the art that this is only an example list of information that may be included in a primitive block header and other example primitive block headers may comprise additional and/or different information. The information that is included in the primitive block header of a primitive block entry408,410may be based on (i) the size or length of the primitive block entry; and (ii) the number of tiles in the tile group. Specifically, the number of tiles in the tile group determines the number of bits required for the valid tile mask and thus dictates how many of the remaining bits are available for the data pointer information and the primitive block header. In one example, each primitive block entry408,410may comprise a primitive block header422that comprises Full Mask Information and/or Same Mask Information, and optionally Primitive Counter Information depending on the number of tiles in the tile group; and each control data block may comprise a primitive block header420that comprises the information listed above that is not included in the primitive block header422of the primitive block entry. It will be evident to a person of skill in the art that this is an example division of the primitive block header information between the primitive block entry and the control data block and the primitive block header information may be divided between the two in any suitable manner.
Multiple Control Stream Entry Types
In some cases, in addition to the fixed-sized primitive block entries, the control stream may comprise one or more other types of fixed-sized entries. Each of the other control stream entry types may be used to convey different information. In some cases, each control stream entry may be a 32-bit dword. Reference is now made toFIG.6which illustrates a second example control stream structure600which comprises a control stream602,604and a plurality of control data blocks606,608,610,612,614. In this example the control stream602,604comprises a fixed-length primitive block entry616,618,620,622,624for each primitive block that comprises at least one primitive that falls, at least partially, within the bounds of at least one tile in the tile group. Each primitive block entry616,618,620,622,624generally corresponds to the primitive block entries408,410ofFIG.4. Specifically, each primitive block entry616,618,620,622,624comprises valid tile information412that identifies which tiles are valid for the corresponding primitive block; a data pointer414which comprises information that identifies the location of the corresponding control data block in memory; and, optionally, a primitive block header422. However, in this example the control stream602,604also comprises other types of entries.
Specifically, in this example the control stream602,604may comprise one or more control data base address entries626,628, one or more link entries630and/or a termination entry632. Each of these control stream entry types will be described below. It will be evident to a person of skill in the art that these are examples of other control stream entry types and that in other examples the control stream may comprise: only a subset of these control stream entry types; additional control stream entry types; and/or different types of control stream entries. Where, as inFIG.6, the control stream602,604may comprise multiple types of fixed-sized entries, each entry may have a dedicated field (e.g. a dedicated number of bits) which is used to identify the type of entry. For example, in some cases, K bits of each entry (which may be referred to herein as the entry type bits) may be used to identify the type of entry wherein K is based on the number of different types. For example, in some cases K may be equal to ⌈log2(H)⌉ wherein H is the number of different control stream entry types. In some cases, the entry type bits may be the first K bits of each entry. Table 1 illustrates an example of how a primitive block entry, a control data base address entry, a link pointer entry and a termination entry may be identified using two entry type bits. Specifically, if the two entry type bits are set to ‘00’ this may identify the entry as a primitive block entry, if the two entry type bits are set to ‘01’ this may identify the entry as a control data base address entry, if the two entry type bits are set to ‘10’ this may identify the entry as a link pointer entry, and if the two entry type bits are set to ‘11’ this may identify the entry as a termination entry.
TABLE 1
Control Stream Entry Type            Entry Type Bits
Primitive Block Entry                00
Control Data Base Address Entry      01
Link Pointer Entry                   10
Termination Entry                    11
Where, as inFIG.6, the control stream602,604can comprise multiple types of control stream entries, each entry may comprise valid tile information (e.g. a valid tile mask) that mimics the valid tile information412in the primitive block entries; however, the valid tile information for all of the entries, except the primitive block entries, may be configured to indicate that none of the tiles are valid. For example, where each primitive block entry comprises a valid tile mask which comprises a bit for each tile that indicates whether or not that tile is valid for the primitive block, then each of the other control stream entries may comprise a valid tile mask which is set to all zeros. While this reduces the number of bits of the other control stream entries that are available for other information, this may allow the rasterization logic to quickly determine from the valid tile information of a plurality of control stream entries whether the primitive block entries thereof can be skipped for a particular tile. Even if it is determined that the primitive block entries in a set can be skipped for a particular tile, the other entries (e.g. control data base address entries, link entries and termination entries) in the set may be processed as normal. For example, as described in more detail below, the rasterization logic may be configured to receive a set of control stream entries (e.g. a control stream block) and may be able to quickly determine whether it needs to process any of the primitive block entries in the set by OR-ing the relevant bits of each valid tile information/field.
Accordingly, by setting the valid tile mask in all of the non-primitive block entries to all zeros the rasterization logic can quickly and easily ignore or skip over a set or group of control stream entries that does not comprise a primitive block entry that is valid for a particular tile without having to analyse or decode the entries. For example,FIG.7illustrates an example set of eight control stream entries700for a tile group comprising four tiles. It can be seen that each control stream entry comprises valid tile information (e.g. a 4-bit valid tile mask wherein a ‘1’ indicates the corresponding tile is valid for the corresponding primitive block). However, only the valid tile information of the primitive block entries indicates a tile as being valid (e.g. only the valid tile masks of the primitive block entries comprise a ‘1’). The valid tile information (e.g. valid tile mask) for each other entry type indicates that none of the tiles are valid (e.g. the valid tile mask is set to all zeros). When the rasterization logic receives this set of control stream entries it may extract the bits of the valid tile information/mask of each entry that relate to the tile of interest (i.e. the tile that the rasterization logic is rendering). For example, when the first tile in the tile group is the relevant tile, the rasterization logic may select the 3rd bit of each entry (i.e. the first bit of each valid tile information/mask); when the second tile in the tile group is the relevant tile, the rasterization logic may select the 4th bit of each entry (i.e. the second bit of each valid tile information/mask); when the third tile in the tile group is the relevant tile, the rasterization logic may select the 5th bit of each entry (i.e. the third bit of each valid tile information/mask); and when the fourth tile in the tile group is the relevant tile, the rasterization logic may select the 6th bit of each entry (i.e. the fourth bit of each valid tile information/mask). The rasterization logic may then OR (i.e. perform an OR operation on) the selected bits to determine whether any of the primitive block entries in the set are valid for the relevant tile. Where, as shown inFIG.7, a ‘1’ in the valid tile mask indicates that the corresponding tile is valid for the primitive block and a ‘0’ in the valid tile mask indicates that the corresponding tile is not valid for the primitive block, then if the result of the OR operation is ‘0’ this indicates that none of the primitive block entries in the set are valid for the relevant tile, and if the result of the OR operation is ‘1’ this indicates that at least one of the primitive block entries is valid for the relevant tile. If the result of OR-ing the selected bits indicates that none of the primitive block entries in the set are valid for the tile of interest, then the rasterization logic can quickly disregard the primitive block entries in the set without further processing them. Even if it is determined that none of the primitive block entries in a set of entries are valid for a tile, the other entries (e.g. control data base address entries, link entries and termination entries) in the set may be processed as normal. For example, since all of the bits of the valid tile masks that correspond to the first tile are zero, OR-ing these bits will result in a ‘0’, which indicates that none of the primitive block entries in the set are valid for the tile.
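Purely to illustrate the skip check just described, the following C sketch ORs together the valid-tile bit for one tile across a fetched set of 32-bit entries. The placement of the entry type bits and the valid tile mask within an entry, and the mapping of mask bit 0 to the first tile of the group, are assumptions of this sketch rather than a defined encoding; only the entry type codes follow Table 1 above.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed 32-bit entry layout for this sketch: bits [31:30] hold the entry
     * type (per Table 1), bits [29:26] hold the 4-bit valid tile mask, and the
     * remaining bits hold the other fields. Non-primitive-block entries carry
     * an all-zero valid tile mask. */
    #define ENTRY_TYPE_SHIFT            30u
    #define VALID_TILE_MASK_SHIFT       26u
    #define VALID_TILE_MASK_BITS        4u

    #define ENTRY_TYPE_PRIMITIVE_BLOCK  0u /* '00' */
    #define ENTRY_TYPE_BASE_ADDRESS     1u /* '01' */
    #define ENTRY_TYPE_LINK             2u /* '10' */
    #define ENTRY_TYPE_TERMINATION      3u /* '11' */

    /* Returns true if at least one primitive block entry in the set is valid
     * for the given tile, i.e. if OR-ing the selected bits yields 1. */
    static bool set_has_entry_valid_for_tile(const uint32_t *entries, size_t count, unsigned tile)
    {
        uint32_t accumulated = 0;
        for (size_t i = 0; i < count; i++) {
            uint32_t mask = (entries[i] >> VALID_TILE_MASK_SHIFT) &
                            ((1u << VALID_TILE_MASK_BITS) - 1u);
            accumulated |= (mask >> tile) & 1u;  /* bit 0 = first tile (assumed) */
        }
        return accumulated != 0;
    }

    int main(void)
    {
        /* A primitive block entry valid only for the second tile, followed by a
         * termination entry whose valid tile mask is all zeros. */
        const uint32_t entries[] = {
            (ENTRY_TYPE_PRIMITIVE_BLOCK << ENTRY_TYPE_SHIFT) | (0x2u << VALID_TILE_MASK_SHIFT),
            (ENTRY_TYPE_TERMINATION << ENTRY_TYPE_SHIFT),
        };
        printf("first tile:  %d\n", set_has_entry_valid_for_tile(entries, 2, 0)); /* 0: skip    */
        printf("second tile: %d\n", set_has_entry_valid_for_tile(entries, 2, 1)); /* 1: process */
        return 0;
    }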
If, however, the result of the OR operation on the selected bits indicates that at least one of the primitive block entries in the set is valid for the relevant tile then the rasterization logic may analyse the entry type of each entry to see what, if any, further processing needs to be performed on the control stream entries. For example, if the rasterization logic determines that an entry is a primitive block entry, the rasterization logic may analyse the bit of the valid tile mask corresponding to the relevant tile to determine whether the relevant tile is valid for the primitive block. If the relevant tile is valid for the primitive block, then the rasterization logic may read the data pointer to determine the address for the corresponding control data block in memory; and read the control data block from that address in memory. If the relevant tile is not valid for the primitive block, then the rasterization logic may skip to the next entry. For example, for the set of entries shown inFIG.7, when the second tile in the tile group is the relevant tile the rasterization logic can skip primitive block entries 3, 4 and 6; when the third tile in the tile group is the relevant tile the rasterization logic can skip primitive block entries 1, 3 and 5; and when the fourth tile in the tile group is the relevant tile the rasterization logic can skip primitive block entries 4, 5 and 6. If, however, the rasterization logic determines from the entry type bits that an entry is not a primitive block entry then it may perform an action based on the type of entry. For example, as described in more detail below, where the entry is a control data base address entry the rasterization logic may store the control data base address; where the entry is a link entry the rasterization logic may read the next control stream block from memory; and where the entry is a termination entry the rasterization logic may complete the processing of the relevant tile and begin processing the next tile. The processing of the control stream by the rasterization logic will be described in more detail below.
Control Data Base Address Entry
As described above, the data pointer414of a primitive block entry may not comprise the full address of the corresponding control data block in memory, but may comprise an offset which, in combination with a base address, can be used to generate the full address of the control data block. For example, the base address may specify the X most significant bits of the address and the offset may specify the Y least significant bits of the address, wherein the full address comprises X+Y bits and X and Y are integers greater than or equal to 1. In these cases, the control stream may comprise one or more control data base address entries which specify the control data base address (e.g. the X most significant bits of the control data block address). In some cases, the base address may be specified by a single control data base address entry. In other words, in some cases the X MSBs may be specified in a single control data base address entry.FIG.8illustrates an example configuration of a primitive block entry802and a control data base address entry804in which the X MSBs of the address of the control data block are specified in a single control data base address entry. In this example the address of the control data block comprises 32 bits, each tile group comprises four tiles, and each control stream entry is 32 bits.
The example primitive block entry802comprises a two-bit entry type field806which specifies the entry type, a four-bit valid tile information field808(e.g. a four-bit valid tile mask) which specifies which tiles are valid for the corresponding primitive block, an eight-bit primitive block header field810which may specify information about the primitive block and/or primitive masks as described above, and an eighteen-bit data pointer field812which specifies the eighteen LSBs of the address of the corresponding control data block. In this example a control data base address entry804comprises a two-bit entry type field814, a four-bit valid tile information field816(e.g. a four-bit valid tile mask) which is set to indicate that none of the tiles are valid, and a fourteen-bit control data base address field818which specifies the fourteen MSBs of the address of the control data block corresponding to any following entry. The remaining twelve bits820of the control data base address entry804may not be used. The address of the control data block822may be generated by using the fourteen bits from the control data base address field818as the MSBs and the eighteen bits from the data pointer field812as the LSBs. However, depending on the size of the control stream entries and the number of tiles in a tile group (and thus the number of bits in the valid tile mask), it may not be possible to specify all X MSBs in a single control data base address entry. Accordingly, the X MSBs may be specified over several control data base address entries. In these cases, there may be multiple types of control data base address entries, each of which specifies a different portion of the X MSBs. For example,FIG.9illustrates an example configuration of a primitive block entry902and control data base address entries904and906in which the X MSBs of the address of the control data block are specified over two control data base address entries. In this example a control data block address comprises 32 bits, each tile group comprises sixteen tiles and each control stream entry is 32 bits. The example primitive block entry902comprises a two-bit entry type field908which specifies the entry type, a sixteen-bit valid tile information field910(e.g. a sixteen-bit valid tile mask) which specifies which tiles are valid for the corresponding primitive block, a two-bit primitive block header field912which may specify information about the primitive block and/or primitive masks as described above, and a twelve-bit data pointer field914which specifies the twelve LSBs of the address of the corresponding control data block. In the example ofFIG.9there are two types of control data base address entries—a high control data base address entry904which specifies the highest bits of the X MSBs and a low control data base address entry906which specifies the lowest bits of the X MSBs. Each control data base address entry904,906comprises a two-bit entry type field916,918which specifies the entry type, a sixteen-bit valid tile information field920,922(e.g. a sixteen-bit valid tile mask) which indicates that none of the tiles are valid, and a ten-bit control data base address field924,926which is used to specify ten bits of the base address.
The only difference between the two control data base address entries904,906is that in the high control data base address entry the 19th bit930is set to ‘1’ to indicate that the highest ten bits are being specified, and in the low control data base address entry the 19th bit928is set to ‘0’ to indicate that the lower ten bits are being specified. In this example the final three bits932,934of the control data base address entries may not be used. The address of the control data block936for a primitive block entry may be generated by using the bits of the control data base address field926of the high control data base address entry904as the first ten bits, using the bits of the control data base address field924of the low control data base address entry906as the next ten bits, and using the twelve bits from the data pointer field914of the primitive block entry902as the last twelve bits. In the example ofFIG.9each control data base address entry specifies the same number of bits of the base address; however, in other examples the different control data base address entries may specify a different number of bits of the base address. For example, the high control data base address entry may specify the top 12 bits of the address and the low control data base address entry may specify the next 8 bits of the address. In some cases, the control data blocks corresponding to the primitive block entries in a control stream may be packed into memory (e.g. they may be placed back to back in the memory). For example, the control stream for a group of tiles may be allocated a page of memory for storing the related control data blocks. The control data blocks may be written to the allocated page one after another (e.g. back to back) until the page is full. Once the allocated page is full, a new page may be allocated and the subsequent control data blocks for the control stream are written to the new page until that page is full, and so on. This packing of the control data blocks in memory allows the same base address to be used to calculate the address of multiple control data blocks. Accordingly, once the base address is set by a control data base address entry (or a set of control data base address entries) then that base address may be used to calculate the address of the control data block for each subsequent primitive block entry until the base address is updated by a subsequent control data base address entry (or entries). In some cases, as shown inFIG.6, the base address may be updated when a new page is allocated to the control stream. FIG.10shows an example set of control stream entries1000wherein a single control data base address entry specifies the entire base address (e.g. as described with respect toFIG.8). In this example, control data base address entry 1 sets the control data base address to ‘Base Address 1’ and the control data base address entry 2 sets the control data base address to ‘Base Address 2’. ‘Base Address 1’ may point to a first page in memory and ‘Base Address 2’ may point to a second page in memory.
In this example, the address of the control data blocks for primitive block entries 1, 2 and 3 will be calculated from ‘Base Address 1’ because at the time these primitive block entries are processed by the rasterization logic the base address will be set to ‘Base Address 1’; and the address of the control data blocks for primitive block entries 4 and 5 will be calculated from ‘Base Address 2’ because at the time these primitive block entries are processed by the rasterization logic the base address will be set to ‘Base Address 2’. Where the base address is specified by multiple control data base address entries (e.g. as described with respect toFIG.9) then when the base address needs to be updated (e.g. when a new page is allocated to the control stream for control data blocks) only a portion of the address may need to be updated. This may mean that only one type of control data base address entry may be added to the control stream to update the base address. For example, if the new page is close to the previous page only the lower bits of the base address may need to be updated. FIG.11shows an example set of control stream entries1100wherein the base address is specified by two control data base address entries (e.g. as described with respect toFIG.9). In this example, the high control data base address entry 1 sets the top (e.g. MSB) bits of the base address to ‘High Base Address 1’, the low control data base address entry 1 sets the bottom (e.g. LSB) bits of the base address to ‘Low Base Address 1’, and the low control data base address entry 2 sets the bottom (e.g. LSB) bits of the base address to ‘Low Base Address 2’. The combination of ‘High Base Address 1’ and ‘Low Base Address 1’ may specify the address of a first page in memory and the combination of ‘High Base Address 1’ and ‘Low Base Address 2’ may specify the address of a second page in memory. In this example, the address of the control data blocks for primitive block entries 1 and 2 will be calculated from ‘High Base Address 1’+‘Low Base Address 1’ because at the time these primitive block entries are processed by the rasterization logic the base address will be set to ‘High Base Address 1’+‘Low Base Address 1’; and the address of the control data blocks for primitive block entries 3 and 4 will be calculated from ‘High Base Address 1’+‘Low Base Address 2’ because at the time these primitive block entries are processed by the rasterization logic the base address will be set to ‘High Base Address 1’+‘Low Base Address 2’.
Link Entry
In some cases, as shown inFIG.6, the control stream602,604may be divided into a sequence of control stream blocks602,604which are stored separately in memory. Each control stream block may have a maximum size. In some cases, the maximum size of a control stream block may be an integer multiple of the control stream entry size. For example, in some cases, where each control stream entry is a dword, each control stream block may have a maximum size of 32 dwords. Where the control stream is divided into a sequence of control stream blocks, the control stream may comprise one or more link entries630which link the control stream blocks together. Specifically, each link entry may specify an address of, or a pointer to, the next control stream block in the sequence. For example, inFIG.6the link entry630in the first control stream block602(Control Stream Block 0) would include the address of, or a pointer to, the second control stream block604(Control Stream Block 1).
When the rasterization logic encounters a link entry630in a control stream block, this may trigger the rasterization logic to read, using the specified address or pointer, the next control stream block from memory. In some cases, as shown inFIG.6, the link entry630may be the last entry in each control stream block, other than the last control stream block. In some cases, the rasterization logic may be able to read a whole control stream block at a time. For example, in some cases where a control stream block is 32 dwords, the memory may support 4-beat burst reads wherein each burst is 256 bits or 32 bytes. In other cases, the rasterization logic may only be able to read a portion of a control stream block at a time.
Termination Entry
In some cases, the last entry of a control stream may be a termination entry632which signals to the rasterization logic the end of the control stream. Specifically, when the rasterization logic encounters a termination entry632it may complete the processing of the current control stream for the relevant tile and start processing another control stream for another tile.
Tiling Engine
Reference is now made toFIG.12which illustrates an example tiling engine1200which is configured to generate a control structure, as described above, for a group of tiles which identifies which primitives fall within each tile of the tile group. The tile group may comprise any set of N×M tiles in the render space wherein N and M are integers greater than or equal to one. As described above, the control structure comprises a control stream and one or more control data blocks which are linked to the control stream. The tiling engine1200comprises tiling logic1202, a control data block generator1204and a control stream generator1206. The tiling logic1202is configured to (i) receive a plurality of primitive blocks as described above wherein each primitive block comprises one or more primitives (e.g. the transformed geometry data related thereto); (ii) determine, for each received primitive block, which primitives of that primitive block fall, at least partially, within the bounds of each of the tiles in a tile group (this may be referred to herein as tiling the primitives); and (iii) output the results of the determination. In some cases, the output may be in the form of a set of primitive masks for each primitive block in which the set of primitive masks comprises a primitive mask for each tile in the tile group. As described above, each primitive mask may comprise a bit for each primitive in the primitive block that indicates whether or not that primitive falls, at least partially, within the bounds of the tile. The tiling logic1202may use any suitable method for determining whether a primitive falls, at least partially, within the bounds of a tile. For example, in some cases the tiling logic1202may use a simple, less accurate, method, such as a simple bounding box tiling method, to determine whether a primitive, at least partially, falls within a tile so as to quickly sort the primitives into tiles. As is known to those of skill in the art, in a bounding box method a bounding box that encompasses the primitive is identified (e.g. the smallest axis-aligned bounding box that encompasses the vertices of the primitive). The bounding box may be generated using any suitable method. For example, the tiling logic1202may generate a bounding box by finding the minimum and maximum X and Y coordinates of the vertices of the primitive and forming an axis-aligned bounding box from those coordinates.
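A simple axis-aligned bounding box test of the kind just described might look like the following C sketch. The vertex and tile representations, the use of triangles and the function names are assumptions made for illustration; the test is deliberately conservative, as discussed below.

    #include <stdbool.h>

    typedef struct { float x, y; } vertex_t;
    typedef struct { float min_x, min_y, max_x, max_y; } aabb_t;

    /* Axis-aligned bounding box formed from the minimum and maximum X and Y
     * coordinates of a triangle's vertices. */
    static aabb_t primitive_bounding_box(const vertex_t v[3])
    {
        aabb_t box = { v[0].x, v[0].y, v[0].x, v[0].y };
        for (int i = 1; i < 3; i++) {
            if (v[i].x < box.min_x) box.min_x = v[i].x;
            if (v[i].y < box.min_y) box.min_y = v[i].y;
            if (v[i].x > box.max_x) box.max_x = v[i].x;
            if (v[i].y > box.max_y) box.max_y = v[i].y;
        }
        return box;
    }

    /* Conservative test: the primitive is treated as falling, at least
     * partially, within the tile if its bounding box overlaps the tile. This
     * may over-include primitives whose bounding box, but not the primitive
     * itself, overlaps the tile, but it never misses a primitive. */
    static bool primitive_may_fall_within_tile(const vertex_t v[3], aabb_t tile)
    {
        aabb_t box = primitive_bounding_box(v);
        return box.min_x <= tile.max_x && box.max_x >= tile.min_x &&
               box.min_y <= tile.max_y && box.max_y >= tile.min_y;
    }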
The bounding box may be generated at any granularity or resolution. For example, in some cases the bounding box may be at the X and Y coordinate resolution (i.e. the bounding box may be defined by the maximum and minimum X and Y coordinates of the vertices). In other cases, the bounding box may be at the tile resolution (i.e. the closest tile edges that encompass the primitive). Once the tiling logic1202has identified a bounding box for a primitive, the tiling logic1202may determine that the primitive, at least partially, falls within a tile if the bounding box at least partially overlaps with the tile. In other words, a primitive may be determined to, at least partially, fall within a tile if the bounding box for that primitive, at least partially, falls within the bounds of the tile. While a bounding box method can be used to quickly and efficiently determine whether a primitive, at least partially, falls within a tile, it is not ‘perfect’ tiling as the bounding box is often larger than the primitive, which may result in a primitive being determined to be in a tile when in fact it is not in the tile. For example,FIG.13shows an example tile group1300comprising four tiles1302,1304,1306, and1308. If a simple axis-aligned bounding box method is used to determine which of these tiles1302,1304,1306,1308a primitive1310, at least partially, falls within, then a bounding box1312around the primitive1310is generated. Since the bounding box1312at least partially overlaps with all of the tiles1302,1304,1306,1308it may be determined that the primitive1310falls, at least partially, within each of the four tiles1302,1304,1306,1308even though it actually only falls within, or overlaps with, three of the tiles1304,1306,1308. However, determining that a primitive falls within a tile when it does not actually fall within the tile will not cause an error and the primitive will simply be discarded in the rasterization phase. In contrast, determining that a primitive does not fall within a tile that it does fall within may cause an error in the rasterization phase. Accordingly, it is advantageous for the tiling to be conservative. In other words, it is better to indicate that a primitive falls within a tile even though the primitive does not actually fall within the tile than to not include a primitive that actually does fall within the tile. In other cases, however, the tiling logic1202may use a more complicated and/or more accurate method, such as a perfect tiling or near perfect tiling method, to determine whether a primitive falls within a tile. An example perfect tiling method, which may be used by the tiling logic1202, is described in the Applicant's Published GB Patent Application No. 2549789 which is herein incorporated by reference in its entirety. The control data block generator1204receives the results (e.g. primitive masks) output by the tiling logic1202and the address of each primitive block in memory. Then the control data block generator1204is configured to, for each primitive block that comprises at least one primitive that falls, at least partially, within the bounds of at least one tile in the tile group: (i) generate a control data block for that primitive block (e.g. the control data block404,406described above with respect toFIG.4); (ii) store the generated control data block in memory1208; and (iii) output the address of the control data block in memory.
As described above, the control data block for a primitive block comprises the address of the primitive block in memory and may comprise one or more of the primitive masks generated by the tiling logic1202. In some cases, the control data block for a primitive block may also comprise a primitive block header that may include additional information about the primitive block and/or the primitive masks such as, but not limited to, the number of primitives in the primitive block, the format of the primitive masks in the control data block, etc. As described above, the control data block generator1204may initially be allocated a page of memory to store the control data blocks for the group of tiles and may pack the control data blocks in the allocated page in memory, and once the page of memory is full a new page may be allocated to the group of tiles for storing the control data blocks. The control stream generator1206receives the results (e.g. primitive masks) output by the tiling logic1202and the address of each control data block. Then the control stream generator1206is configured to, for each primitive block that comprises at least one primitive that falls, at least partially, within the bounds of at least one tile in the tile group: (i) generate a fixed-sized primitive block entry (e.g. a primitive block entry408,410described above with respect toFIG.4); and (ii) store the generated primitive block entry in memory1210as part of a control stream for the group of tiles. As described above, the primitive block entry for a primitive block comprises valid tile information (e.g. a valid tile mask with a bit per tile in the tile group) identifying which of the tiles in the tile group are valid for the primitive block; and a data pointer that points to the corresponding control data block in memory1208. A primitive block entry may also comprise a primitive block header which may include additional information about the primitive block and/or the corresponding control data block. In some cases, the control stream generator1206may also be configured to interleave other types of entries amongst the primitive block entries in memory1210. For example, as described above, in some cases the data pointer of the primitive block entries may only comprise an offset which can be combined with a base address to generate the address of the corresponding control data block in memory. In these cases the control stream generator1206may be configured to generate and store a control data base address entry (as described above) in the memory1210when it is determined that the base address for the control data blocks has changed (e.g. when the control data blocks are being written to a new page of memory). Furthermore, as described above, in some cases the control stream may be stored in memory in control stream entry blocks where each block has a maximum number of entries. In these cases, the control stream generator1206may be configured to build a control stream block by adding entries to the control stream block. Once the maximum number of entries less one has been reached, the control stream generator1206may be configured to add a link entry to the control stream block indicating where the next control stream block will be stored in memory, and then write the control stream block to memory. The control stream generator1206may be configured to continue to build control stream blocks until the last primitive block entry has been generated.
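As a rough sketch of the packing behaviour just described (and expanded on in the method ofFIG.15below), the following C fragment appends entries to a fixed-capacity control stream block and, once the block is one entry short of full, emits a link entry referring to a newly allocated block. The 32-entry block size, the in-memory representation and the encoding of the link entry are assumptions of this sketch; error handling and the writing of completed blocks to their final location in memory are omitted.

    #include <stdint.h>
    #include <stdlib.h>

    #define ENTRIES_PER_BLOCK 32u  /* assumed maximum block size in entries */

    typedef struct control_stream_block {
        uint32_t entries[ENTRIES_PER_BLOCK];
        uint32_t count;
        struct control_stream_block *next;  /* stands in for the linked address */
    } control_stream_block_t;

    typedef struct {
        control_stream_block_t *current;
    } control_stream_writer_t;

    /* Hypothetical encoder: a real link entry would carry the address of the
     * next block alongside the entry type bits; the payload is omitted here. */
    static uint32_t encode_link_entry(const control_stream_block_t *next_block)
    {
        (void)next_block;
        return 2u << 30;  /* entry type '10' = link entry */
    }

    static void writer_init(control_stream_writer_t *w)
    {
        w->current = calloc(1, sizeof(*w->current));
    }

    /* Append one entry; when the block reaches the maximum number of entries
     * less one, add a link entry to a freshly allocated block and make that
     * block the current one. */
    static void writer_add_entry(control_stream_writer_t *w, uint32_t entry)
    {
        w->current->entries[w->current->count++] = entry;
        if (w->current->count == ENTRIES_PER_BLOCK - 1u) {
            control_stream_block_t *next = calloc(1, sizeof(*next));
            w->current->entries[w->current->count++] = encode_link_entry(next);
            w->current->next = next;  /* the previous block is now complete   */
            w->current = next;        /* subsequent entries go to the new one */
        }
    }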
In some cases, the control stream generator1206may be configured to, once it has generated the last primitive block entry for the tile group, store a termination entry (as described above) in memory to indicate the end of the control stream for that tile group. Reference is now made toFIG.14which illustrates an example method1400for generating a control structure for a group of tiles which may be implemented by the tiling engine1200ofFIG.12. The method1400begins at step1402where the tiling engine receives a primitive block. As described above, a primitive block comprises one or more primitives (e.g. the transformed geometry related thereto). Once the primitive block has been received at the tiling engine, the method1400proceeds to step1404where the tiling engine determines, for each tile in the tile group, which of the primitives of the primitive block fall, at least partially, within the bounds of the tile. In other words, it is determined for each tile in the tile group which of the primitives in the primitive block overlap or intersect with the tile. Any tiling method can be used to determine whether a primitive falls, at least partially, within the bounds of a tile. Example tiling methods which may be used to determine whether a primitive falls, at least partially, within the bounds of a tile were described above. Once it has been determined, for each tile of the tile group, which primitives fall, at least partially, within the bounds of the tile, the method1400proceeds to step1406. At step1406, it is determined at the tiling engine, based on the determinations of step1404, whether there is at least one primitive of the primitive block that falls, at least partially, within the bounds of at least one of the tiles of the tile group. If none of the primitives of the primitive block fall, at least partially, within the bounds of at least one tile in the tile group, then the primitive block is not relevant to the rendering of the tiles in the tile group and the method1400proceeds to step1412. If, however, at least one of the primitives of the primitive block falls, at least partially, within the bounds of at least one tile in the tile group then the method1400proceeds to step1408. At step1408, the tiling engine generates a control data block for the primitive block and stores the control data block in a section of memory designated for control data blocks. As described above, the control data block for a primitive block comprises the address of the primitive block in memory and may comprise one or more primitive masks. In some cases, the control data block for a primitive block may also comprise a primitive block header that may include additional information about the primitive block and/or the primitive masks such as, but not limited to, the number of primitives in the primitive block, the format of the primitive masks in the control data block etc. As described above, the tile group may initially be allocated a page of memory to store the control data blocks for the tile group and the control data blocks may be packed in the allocated page in memory until the page is full. Once the page of memory is full a new page may be allocated to the tile group. The method1400then proceeds to step1410. At step1410, the tiling engine generates a primitive block entry for the primitive block and stores the primitive block entry in memory as part of a control stream for the tile group. As described above, the primitive block entry for a primitive block comprises valid tile information (e.g.
a valid tile mask with a bit per tile in the tile group) identifying which of the tiles in the tile group are valid for the primitive block; and a data pointer that points to the corresponding control data block in memory. A primitive block entry may also comprise a primitive block header which may include additional information about the primitive block and/or the corresponding control data block. The method1400then proceeds to step1412. At step1412, it is determined whether there are any more primitive blocks. If there is at least one more primitive block, then the method1400proceeds back to step1402where the next primitive block is received. If, however, there are no more primitive blocks then the method1400ends. Although in the method1400ofFIG.14the primitive block entry for the current primitive block is stored in memory before the next primitive block is processed, in other examples all or a portion of the primitive block entries may be generated before they are stored in memory. For example, in some cases the primitive block entries may be packed into control stream blocks and it is the control stream blocks that are stored in memory. Reference is now made toFIG.15which illustrates an example method to implement step1410wherein the primitive block entries are packed into control stream blocks, each control stream block having a maximum number of entries. The method1410may be implemented by the tiling engine1200ofFIG.12. The method1410begins at step1502where the tiling engine generates a primitive block entry (as described above) for the primitive block. The method1410then proceeds to step1504where the tiling engine adds the primitive block entry to the current control stream block. The method1410then proceeds to step1506where the tiling engine determines whether this is the last primitive block entry. If it is determined that this is the last primitive block entry then the method1410proceeds to steps1508,1510and1512where the tiling engine generates a termination entry (as described above) to indicate the end of the control stream, adds the termination entry to the current control stream block, and stores the current control stream block in memory. For example, inFIG.6, after the tiling engine generates the last primitive block entry624of the control stream, the tiling engine adds the primitive block entry624to the current control stream block604; generates a termination entry632; adds the termination entry632to the current control stream block604; and stores the current control stream block604in memory. If, however, the tiling engine determines at step1506that this is not the last primitive block entry in the control stream, the method1410proceeds to step1514where the tiling engine determines whether the current control stream block has the maximum number of entries less one. For example, if each control stream block can have a maximum of 32 entries (including the link entry to the next control stream block, if necessary) then the tiling engine determines whether there are now 31 entries in the current control stream block. If the tiling engine determines that there are fewer than the maximum number of entries less one in the current control stream block, then the method1410ends. If, however, the tiling engine determines at step1514that the current control stream block comprises the maximum number of entries less one (e.g. 31 entries) then the method1410proceeds to step1516. At step1516, the tiling engine generates a link entry.
As described above, a link entry comprises information identifying the location of the next control stream block in memory. The information identifying the location of the next control stream block may be an address of the next control stream block in memory. Generating a link entry may comprise determining the location of the next control stream block in memory by requesting a new chunk of memory for storing the next control stream block. Once the link entry has been generated, the method proceeds to steps1518and1520where the link entry is added to the control stream block and the control stream block is stored in memory. The method1410then proceeds to block1522where a new control stream block is generated, the new control stream block becomes the current control stream block, and the method1410ends. For example, inFIG.6after the tiling engine generates primitive block entry622and adds the primitive block entry622to the first control stream block602, the tiling engine may determine that the control stream block now comprises the maximum number of entries less one. The tiling engine then generates a link entry630which identifies the location at which the next control stream block604is to be stored in memory, adds the link entry630to the first control stream block602, and stores the control stream block602in memory.
Control Stream Decoder
Reference is now made toFIG.16which illustrates an example control stream decoder1600for decoding the control stream ofFIG.4orFIG.6for a group of tiles to identify the primitives to be used to render a current tile of the tile group. The control stream decoder1600comprises a fetch module1602which is configured to fetch a set of control stream entries (e.g. a control stream block) of the control stream from memory1604and a primitive block entry analyser1606which is configured to (i) analyse each primitive block entry thereof to determine whether the corresponding primitive block is relevant to the current tile, and (ii) if it is determined that the corresponding primitive block is relevant to the current tile, fetch the corresponding control data block from memory. For example, the primitive block entry analyser1606may be configured to receive a primitive block entry and examine the bit of the valid tile mask corresponding to the current tile to determine if the current tile is valid for the corresponding primitive block. If it is determined that the current tile is valid for the corresponding primitive block then the primitive block entry analyser1606may identify the address of the control data block for that primitive block from the data pointer information (and optionally from the control data base address) as described above. Once the primitive block entry analyser1606has identified the address of the control data block, the primitive block entry analyser retrieves the control data block from that address in memory1608. An example method for analysing a primitive block entry which may be implemented by the primitive block entry analyser1606is described below with respect toFIG.17. In some cases, where the control stream can comprise different types of entries (e.g. control data base address entries, link entries and/or termination entries), the control stream decoder1600may also comprise an entry type analyser1610, and/or a link entry analyser1612and a control data base address entry analyser1614.
The entry type analyser is configured to receive the control stream entries fetched by the fetch module, determine the type of each entry, and forward the entry to the appropriate analyser for processing. For example, if the entry type analyser determines from, for example, the entry type bits of the control stream entry that the control stream entry is a primitive block entry then the entry type analyser1610may forward the control stream entry to the primitive block entry analyser1606, which, as described above, determines, from the primitive block entry, whether the current tile is valid for the corresponding primitive block and, if so, retrieves the corresponding control data block from memory. If, however, the entry type analyser1610determines from, for example, the entry type bits of the control stream entry that the control stream entry is a link entry then the entry type analyser1610may forward the control stream entry to the link entry analyser1612. If, however, the entry type analyser1610determines, from, for example, the entry type bits of the control stream entry that the control stream entry is a control data base address entry then the entry type analyser1610may forward the control stream entry to the control data base address entry analyser1614. If, though, the entry type analyser determines from, for example, the entry type bits of the control stream entry that the control stream entry is a termination entry then the entry type analyser may determine that the end of the control stream has been reached. The control stream decoder1600may then start processing another tile by retrieving the control stream entries for the tile group comprising that tile. An example method which may be implemented by the entry type analyser is described below with respect toFIG.18. The control data base address entry analyser1614is configured to receive control data base address entries from the entry type analyser1610and extract the new control data base address, or the new portion of the control data base address, identified therein. For example, as described above with respect toFIGS.8-11in some cases each control data base address entry may identify a complete control data base address (e.g.FIGS.8and10); and in other cases each control data base address entry may only specify a portion (e.g. the top K bits or the bottom K bits) of a control data base address and thus a complete control data base address is specified by multiple control data base address entries. In the former case the control data base address entry analyser1614may be configured to extract the new control data base address identified therein. In the latter case the control data base address entry analyser1614may be configured to determine which bits of the base address are specified therein and extract the new part of the control data base address identified therein. The control data base address entry analyser1614may then update the stored current control data base address1616to reflect the new base address, or the new part of the base address. Then, when the primitive block entry analyser determines that the current tile is valid for a primitive block corresponding to a primitive block entry, the primitive block entry analyser1606may be configured to determine the address of the corresponding control data block in memory based on the data pointer portion of the primitive block entry and the current control data base address1616.
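For illustration only, composing a control data block address from the stored current control data base address and the data pointer of a primitive block entry might look like the following C fragment. The 14-bit/18-bit split mirrors the example ofFIG.8and is an assumption of this sketch.

    #include <stdint.h>

    #define DATA_POINTER_BITS 18u  /* LSBs carried in the primitive block entry */

    /* current_base holds the 14 MSBs taken from the most recent control data
     * base address entry; data_pointer holds the 18 LSBs taken from the data
     * pointer field of the primitive block entry (as in the example of FIG.8). */
    static uint32_t control_data_block_address(uint32_t current_base, uint32_t data_pointer)
    {
        return (current_base << DATA_POINTER_BITS) |
               (data_pointer & ((1u << DATA_POINTER_BITS) - 1u));
    }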
The link entry analyser1612is configured to receive link entries from the entry type analyser1610and extract the address in memory of the next control stream block therefrom. The link entry analyser1612may then transmit the address of the next control stream block to the fetch module1602which may then retrieve the next control stream block from memory1604using the identified address. In some cases, where each entry comprises valid tile information (e.g. as described above with respect toFIG.7), the control stream decoder1600may further comprise a block skip module1618which is configured to determine from the valid tile information of each entry whether any of the primitive block entries in the group are relevant to the current tile. For example, as described above with respect toFIG.7, where the valid tile information comprises a valid tile mask, the block skip module1618may be configured to select the bit of each valid tile mask that corresponds to the current tile, perform an OR operation on the selected bits, and determine whether any of the primitive block entries in the block or group are relevant to the current tile based on the outcome of the OR operation. If the block skip module1618determines that none of the primitive block entries in the block or group are relevant to the current tile then the block skip module1618may be configured to provide the control stream entries in the group or block to the entry type analyser1610along with a notification that none of the primitive block entries need to be passed to the primitive block analyser. If, however, the block skip module1618determines that at least one primitive block entry in the block or group is relevant to the current tile then the block skip module1618may be configured to provide the control stream entries in the group or block to the entry type analyser1610to be processed as normal. An example method which may be implemented by the block skip module1618is described below with respect toFIG.19. Reference is now made toFIG.17which illustrates an example method1700for processing a primitive block entry which may be implemented by the control stream decoder1600, and specifically the primitive block entry analyser1606thereof. The method1700begins at step1702where the control stream decoder (e.g. primitive block entry analyser1606) receives a primitive block entry. At step1704the control stream decoder (e.g. primitive block entry analyser1606) determines, from the valid tile information in the primitive block entry, whether the current tile is valid for the corresponding primitive block (i.e. whether there are any primitives in the corresponding primitive block that fall, at least partially, within the bounds of the current tile). For example, where the valid tile information comprises a valid tile mask this may comprise identifying the bit of the valid tile mask that corresponds to the current tile and determining from the identified bit whether the current tile is valid for the corresponding primitive block. If it is determined at step1704that the current tile is not valid for the corresponding primitive block, then the primitive block entry is not further processed, and the method ends. The method may then be repeated for the next primitive block entry. If, however, it is determined at step1704that the current tile is valid for the corresponding primitive block then the method1700proceeds to step1706. At step1706, the control stream decoder1600(e.g. 
the primitive block entry analyser1606) identifies, from the data pointer portion of the primitive block entry, the address of the corresponding control data block in memory. In some cases the data pointer portion of a primitive block entry may specify the whole address. In these cases the address may be extracted from the primitive block entry. However, in other cases, the data pointer portion of the primitive block entry may specify only an offset and the complete address is generated by combining the offset specified in the data pointer portion of the primitive block entry and the current base address. Once the address of the corresponding control data block in memory has been identified the method1700proceeds to step1708where the control stream decoder1600(e.g. the primitive block entry analyser1606) retrieves the corresponding control data block from memory using the identified address. Once the control data block has been retrieved from memory the method1700proceeds to step1710where the control stream decoder determines, from the control data block, (i) the address of the corresponding primitive block and (ii) the primitives of that primitive block that are relevant to the current tile. Identifying the primitives of that primitive block that are relevant to the current tile may comprise reading the primitive mask corresponding to the current tile from the control data block or reading other information from the control data block. For example, as described above the control data block may comprise information indicating whether a tile in the tile group has a full valid mask. At step1712the control stream decoder1600(e.g. the primitive block entry analyser1606) outputs the address of the primitive block in memory and information identifying the primitives of that primitive block that are relevant to the current tile. The method1700then ends. Reference is now made toFIG.18which illustrates an example method1800for processing the entries of a control stream where there are multiple control stream entry types which may be implemented by the control stream decoder1600, and specifically, the entry type analyser thereof. The method1800begins at step1802where the control stream decoder1600(e.g. entry type analyser1610) receives a control stream entry. At step1804the control stream decoder1600(e.g. entry type analyser1610) analyses the entry type information/bits of the control stream entry to identify the type of control stream entry. If the control stream decoder1600(e.g. entry type analyser1610) determines (step1806) that the control stream entry is a primitive block entry then at step1808the primitive block entry is processed to determine whether the current tile is valid for the corresponding primitive block and, if so, to retrieve the control data block for that primitive block from memory (e.g. method1700may be executed). If, however, the control stream decoder1600determines (step1806) that the control stream entry is not a primitive block entry then the method1800proceeds to1810. If the control stream decoder1600(e.g. entry type analyser1610) determines (step1810) that the control stream entry is a control data base address entry then the method1800proceeds to step1812where the new control data base address or the new portion of the control data base address is extracted therefrom and the current base address is updated to reflect the new base address or the new portion thereof.
If the control stream decoder1600(at step1810) determines that the control stream entry is not a control data base address entry, then the method1800proceeds to1814. If the control stream decoder1600(e.g. entry type analyser1610) determines (step1814) that the control stream entry is a link entry then the method1800proceeds to step1816where the address of the next control stream block is extracted therefrom and the next control stream block is retrieved from the identified address. If the control stream decoder1600determines (step1814) that the control stream entry is not a link entry, then the method1800proceeds to1818. If the control stream decoder1600(e.g. entry type analyser1610) determines (step1818) that the control stream entry is a termination entry then the end of the control stream has been reached and the method1800proceeds to step1820where the control stream decoder1600terminates processing of the control stream. AlthoughFIG.18shows the steps1806,1810,1814and1818in a specific order it will be evident to a person of skill in the art that the steps1806,1810,1814and1818may be performed in any order. Reference is now made toFIG.19which illustrates an example method1900of skipping a group or block of control stream entries wherein each control stream entry comprises valid tile information and the valid tile information of any non-primitive block entry indicates that none of the tiles in the tile group are valid. The method1900may be implemented by the control stream decoder1600, and specifically the block skip module1618thereof. The method1900begins at step1902where the control stream decoder1600(e.g. block skip module1618) receives a group or block of control stream entries. Where the control stream entries are divided into control stream blocks the group of entries may form a control stream block. At step1904, the control stream decoder1600(e.g. block skip module1618) may select the portion (e.g. bits) of the valid tile information of each entry that pertains to the current tile. In some cases, the valid tile information may comprise a valid tile mask that comprises a bit for each tile that indicates whether or not that tile is valid for the corresponding primitive block. For example, as described above with respect toFIG.7if the tile group comprises four tiles the valid tile mask comprises a bit for each tile wherein the first bit corresponds to the first tile, the second bit corresponds to the second tile, the third bit corresponds to the third tile and the fourth bit corresponds to the fourth tile. In this example, if the current tile is the first tile in the group then the control stream decoder1600(e.g. block skip module1618) may select the first bit of the valid tile mask of each of the control stream entries. Then at step1906the control stream decoder1600(e.g. block skip module1618) may perform an operation on, or combine, the selected bits to determine whether the current tile is valid for any primitive block entries in the group. For example, as described above, the control stream decoder1600(e.g. block skip module1618) may perform an OR operation on the selected bits (e.g. may OR all of the selected bits together) to determine whether the current tile is valid for any primitive block entries in the group. Then at step1908it is determined from the result of the operation, or combination, whether the current tile is valid for any of the primitive block entries in the group. 
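The block-skip test of steps1904to1908may be illustrated with the following Python sketch, in which the bit of each entry's valid tile information corresponding to the current tile is selected, the selected bits are combined with an OR operation, and the primitive block entries are only analysed when the result is non-zero. The sketch relies on non-primitive block entries carrying an all-zero valid tile mask, as described above; the entry representation is illustrative only.

from collections import namedtuple

# Illustrative entry representation; a real implementation would operate on
# packed control stream words.
Entry = namedtuple("Entry", "entry_type valid_tile_mask payload")

def any_primitive_block_relevant(entries, tile_index):
    # Select the bit of each valid tile mask corresponding to the current tile
    # and OR the selected bits together.
    selected = 0
    for entry in entries:
        selected |= (entry.valid_tile_mask >> tile_index) & 1
    return selected != 0

def process_block(entries, tile_index, process_entry):
    skip_primitive_blocks = not any_primitive_block_relevant(entries, tile_index)
    for entry in entries:
        if skip_primitive_blocks and entry.entry_type == "primitive_block":
            continue  # the primitive block entries are not passed on for analysis
        process_entry(entry)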
If it is determined at step1908that the current tile is not valid for any of the primitive block entries in the group, then the method1900proceeds to step1910where all of the entries in the group except the primitive block entries are processed. This may comprise executing or implementing a modified version of the method1800ofFIG.18for each entry where instead of executing step1808for a primitive block entry, the method1800simply ends. If, however, it is determined at step1908that the current tile is valid for at least one primitive block entry in the group then the method1900proceeds to step1912where all of the entries in the group are processed. This may comprise, for example, executing the method1800ofFIG.18for each control stream entry in the group. Graphics Processing System Reference is now made toFIG.20which illustrates an example tile-based graphics processing system2000which comprises the tiling engine1200ofFIG.12and the control stream decoder1600ofFIG.16. The graphics processing system2000ofFIG.20is similar to the graphics processing system100ofFIG.1in that it comprises geometry processing logic2004and rasterization logic2006; the geometry processing logic2004comprises transformation logic2008and a primitive block generator2010(each of which function as the corresponding components ofFIG.1); and the rasterization logic2006comprises a rasterizer2014, HSR logic2016and texturing/shading logic2018(each of which function as the corresponding components ofFIG.1described above). However, instead of the geometry processing logic comprising a tiling engine that is configured to store a display list per tile, the geometry processing logic2004comprises a tiling engine1200configured to group the tiles into tile groups and store, for each tile group, a control structure that comprises a control stream and one or more control data blocks that are linked to the control stream as described above. The rasterization logic2006ofFIG.20also comprises a control stream decoder1600which is configured to generate a display list for each tile by decoding the corresponding control stream stored in memory2002. Test Results Testing has shown that in most cases storing the variable length control data separate from the control stream reduces the total bandwidth to read and write the tiling data compared to storing a display list per tile (e.g. as described above with respect toFIG.2). For example,FIG.21shows the bandwidth to store a control stream for a group of tiles where the variable control data blocks are stored separately therefrom as a percentage of the bandwidth to store a display list per tile for number of graphics benchmarks. It can be seen that only for the TRex and PUBG benchmarks the total bandwidth is increased and even in these cases the increase is minimal. This increase is due to the two tiered control stream wherein a control stream decoder has to read the control stream and, if necessary, read the control data block. Testing has shown that in most cases storing the variable length control data separate from the control stream reduces the amount of memory used to store the tiling data compared to storing a display list per tile (e.g. as described above with respect toFIG.2) because it allows the control stream itself to be packed in memory more efficiently. The only memory wastage occurs when the control stream is stored in control stream blocks as a whole memory page may be allocated to a control stream block, but the whole page may not be used (e.g. 
because there are not enough entries to fill the page). For example,FIG.22shows the total pages to store the tiling data when the variable length control data is stored separately from the control stream as a percentage of the total pages to store a display list per tile. It can be seen fromFIG.22that storing the tiling data as described herein (e.g. storing a control stream for each group of tiles wherein the variable length control data is stored separately from the control stream) reduced the total number of pages to store the tiling data, and in some cases, such as Angry Birds, quite significantly (i.e. more than 50%). Testing has also shown that the number of masked writes by the tiling engine and the number of bursts produced by the tiling engine are reduced when storing the tiling data as described herein (e.g. storing a control stream for each group of tiles wherein the variable length control data is stored separately from the control stream) compared to storing a display list per tile (e.g. as described above with respect toFIG.2). Specifically,FIG.23shows the number of bursts produced by the tiling engine when the tiling data is stored as described herein as a percentage of the number of bursts produced by the tiling engine when a display list is stored per tile; andFIG.24shows the number of masked writes performed for the tiling engine when tiling data is stored as described herein as a percentage of the number of masked writes by the tiling engine when a display list is stored per tile for a number of graphics benchmarks. As is known to those of skill in the art, some memories may be configured such that data may only be written to the memory in burst-sized chunks, or memory interface width chunks (e.g. 32 bytes at a time). It may, however, be desirable to write to only a portion of a burst-sized chunk. In these cases, the portion of a burst-sized chunk that is to be written to may be identified by a mask. This is referred to as a masked write. Testing has also shown that the number of bursts produced by the tiling engine are reduced when storing the tiling data as described herein (e.g. storing a control stream for each group of tiles wherein the variable length control data is stored separately from the control stream) compared to storing a control stream per tile group wherein the variable length control data is stored as part of the control stream (e.g. as described above with respect toFIG.3).FIG.25shows the number of bursts produced by the tiling engine when the tiling data is stored as described herein as a percentage of the number of bursts produced by the tiling engine when a display list is stored per tile group wherein the variable length control data is stored as part of the control stream (e.g. as described above with respect toFIG.3). FIG.26shows a computer system in which the tiling engines, control stream decoders and/or graphics processing systems described herein may be implemented. The computer system comprises a CPU2602, a GPU2604, a memory2606and other devices2614, such as a display2616, speakers2618and a camera2620. A processing block2610(which may correspond to a tiling engine, a control stream decoder and/or graphics processing system described herein) is implemented on the GPU2604. In other examples, the processing block2610may be implemented on the CPU2602. The components of the computer system can communicate with each other via a communications bus2622. 
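A toy Python illustration of the masked write concept mentioned above is given below: only the bytes of a burst-sized chunk that are selected by the mask are updated in memory. The 32-byte burst size follows the example given above; the bytearray representation and the byte-granular mask are assumptions of the illustration.

BURST_SIZE = 32  # burst-sized chunk from the example above

def masked_write(memory, burst_offset, new_data, byte_mask):
    # memory: bytearray; new_data: BURST_SIZE bytes; byte_mask: BURST_SIZE booleans.
    # Only the bytes selected by the mask are written.
    for i in range(BURST_SIZE):
        if byte_mask[i]:
            memory[burst_offset + i] = new_data[i]

memory = bytearray(64)
data = bytes(range(BURST_SIZE))
mask = [i < 8 for i in range(BURST_SIZE)]  # update only the first 8 bytes of the burst
masked_write(memory, 0, data, mask)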
The tiling engines, control stream decoders, and graphics processing systems ofFIGS.1,12and16are shown as comprising a number of functional blocks. This is schematic only and is not intended to define a strict division between different logic elements of such entities. Each functional block may be provided in any suitable manner. It is to be understood that intermediate values described herein as being formed by the tiling engine, the control stream decoder or the graphics processing system, need not be physically generated by the tiling engine, the control stream decoder or the graphics processing system at any point and may merely represent logical values which conveniently describe the processing performed by the tiling engine, the control stream decoder or graphics processing system between its input and output. The tiling engines, control stream decoders and graphics processing systems described herein may be embodied in hardware on an integrated circuit. The tiling engines, control stream decoders and graphics processing systems described herein may be configured to perform any of the methods described herein. Generally, any of the functions, methods, techniques or components described above can be implemented in software, firmware, hardware (e.g., fixed logic circuitry), or any combination thereof. The terms “module,” “functionality,” “component”, “element”, “unit”, “block” and “logic” may be used herein to generally represent software, firmware, hardware, or any combination thereof. In the case of a software implementation, the module, functionality, component, element, unit, block or logic represents program code that performs the specified tasks when executed on a processor. The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine. The terms computer program code and computer readable instructions as used herein refer to any kind of executable code for processors, including code expressed in a machine language, an interpreted language or a scripting language. Executable code includes binary code, machine code, bytecode, code defining an integrated circuit (such as a hardware description language or netlist), and code expressed in a programming language code such as C, Java or OpenCL. Executable code may be, for example, any kind of software, firmware, script, module or library which, when suitably executed, processed, interpreted, compiled, executed at a virtual machine or other software environment, cause a processor of the computer system at which the executable code is supported to perform the tasks specified by the code. A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions. A processor may be any kind of general purpose or dedicated processor, such as a CPU, GPU, System-on-chip, state machine, media processor, an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), or the like. A computer or computer system may comprise one or more processors. 
It is also intended to encompass software which defines a configuration of hardware as described herein, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code in the form of an integrated circuit definition dataset that when processed (i.e. run) in an integrated circuit manufacturing system configures the system to manufacture a tiling engine, a control stream decoder or graphics processing system configured to perform any of the methods described herein, or to manufacture a tiling engine, a control stream decoder or graphics processing systems comprising any apparatus described herein. An integrated circuit definition dataset may be, for example, an integrated circuit description. Therefore, there may be provided a method of manufacturing, at an integrated circuit manufacturing system, a tiling engine, a control stream decoder or a graphics processing system as described herein. Furthermore, there may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, causes the method of manufacturing a tiling engine, a control stream decoder or a graphics processing system to be performed. An integrated circuit definition dataset may be in the form of computer code, for example as a netlist, code for configuring a programmable chip, as a hardware description language defining hardware suitable for manufacture in an integrated circuit at any level, including as register transfer level (RTL) code, as high-level circuit representations such as Verilog or VHDL, and as low-level circuit representations such as OASIS (RTM) and GDSII. Higher level representations which logically define hardware suitable for manufacture in an integrated circuit (such as RTL) may be processed at a computer system configured for generating a manufacturing definition of an integrated circuit in the context of a software environment comprising definitions of circuit elements and rules for combining those elements in order to generate the manufacturing definition of an integrated circuit so defined by the representation. As is typically the case with software executing at a computer system so as to define a machine, one or more intermediate user steps (e.g. providing commands, variables etc.) may be required in order for a computer system configured for generating a manufacturing definition of an integrated circuit to execute code defining an integrated circuit so as to generate the manufacturing definition of that integrated circuit. An example of processing an integrated circuit definition dataset at an integrated circuit manufacturing system so as to configure the system to manufacture a tiling engine, a control stream decoder or a graphics processing system will now be described with respect toFIG.27. FIG.27shows an example of an integrated circuit (IC) manufacturing system2702which is configured to manufacture a tiling engine, a control stream decoder or a graphics processing system as described in any of the examples herein. In particular, the IC manufacturing system2702comprises a layout processing system2704and an integrated circuit generation system2706. The IC manufacturing system2702is configured to receive an IC definition dataset (e.g. 
defining a tiling engine, a control stream decoder or a graphics processing system as described in any of the examples herein), process the IC definition dataset, and generate an IC according to the IC definition dataset (e.g. which embodies a tiling engine, a control stream decoder or a graphics processing system as described in any of the examples herein). The processing of the IC definition dataset configures the IC manufacturing system2702to manufacture an integrated circuit embodying a tiling engine, a control stream decoder or a graphics processing system as described in any of the examples herein. The layout processing system2704is configured to receive and process the IC definition dataset to determine a circuit layout. Methods of determining a circuit layout from an IC definition dataset are known in the art, and for example may involve synthesising RTL code to determine a gate level representation of a circuit to be generated, e.g. in terms of logical components (e.g. NAND, NOR, AND, OR, MUX and FLIP-FLOP components). A circuit layout can be determined from the gate level representation of the circuit by determining positional information for the logical components. This may be done automatically or with user involvement in order to optimise the circuit layout. When the layout processing system2704has determined the circuit layout it may output a circuit layout definition to the IC generation system2706. A circuit layout definition may be, for example, a circuit layout description. The IC generation system2706generates an IC according to the circuit layout definition, as is known in the art. For example, the IC generation system2706may implement a semiconductor device fabrication process to generate the IC, which may involve a multiple-step sequence of photo lithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of semiconducting material. The circuit layout definition may be in the form of a mask which can be used in a lithographic process for generating an IC according to the circuit definition. Alternatively, the circuit layout definition provided to the IC generation system2706may be in the form of computer-readable code which the IC generation system2706can use to form a suitable mask for use in generating an IC. The different processes performed by the IC manufacturing system2702may be implemented all in one location, e.g. by one party. Alternatively, the IC manufacturing system2702may be a distributed system such that some of the processes may be performed at different locations, and may be performed by different parties. For example, some of the stages of: (i) synthesising RTL code representing the IC definition dataset to form a gate level representation of a circuit to be generated, (ii) generating a circuit layout based on the gate level representation, (iii) forming a mask in accordance with the circuit layout, and (iv) fabricating an integrated circuit using the mask, may be performed in different locations and/or by different parties. In other examples, processing of the integrated circuit definition dataset at an integrated circuit manufacturing system may configure the system to manufacture a tiling engine, a control stream decoder or a graphics processing system without the IC definition dataset being processed so as to determine a circuit layout. 
For instance, an integrated circuit definition dataset may define the configuration of a reconfigurable processor, such as an FPGA, and the processing of that dataset may configure an IC manufacturing system to generate a reconfigurable processor having that defined configuration (e.g. by loading configuration data to the FPGA). In some embodiments, an integrated circuit manufacturing definition dataset, when processed in an integrated circuit manufacturing system, may cause an integrated circuit manufacturing system to generate a device as described herein. For example, the configuration of an integrated circuit manufacturing system in the manner described above with respect toFIG.27by an integrated circuit manufacturing definition dataset may cause a device as described herein to be manufactured. In some examples, an integrated circuit definition dataset could include software which runs on hardware defined at the dataset or in combination with hardware defined at the dataset. In the example shown inFIG.27, the IC generation system may further be configured by an integrated circuit definition dataset to, on manufacturing an integrated circuit, load firmware onto that integrated circuit in accordance with program code defined at the integrated circuit definition dataset or otherwise provide program code with the integrated circuit for use with the integrated circuit. The implementation of concepts set forth in this application in devices, apparatus, modules, and/or systems (as well as in methods implemented herein) may give rise to performance improvements when compared with known implementations. The performance improvements may include one or more of increased computational performance, reduced latency, increased throughput, and/or reduced power consumption. During manufacture of such devices, apparatus, modules, and systems (e.g. in integrated circuits) performance improvements can be traded-off against the physical implementation, thereby improving the method of manufacture. For example, a performance improvement may be traded against layout area, thereby matching the performance of a known implementation but using less silicon. This may be done, for example, by reusing functional blocks in a serialised fashion or sharing functional blocks between elements of the devices, apparatus, modules and/or systems. Conversely, concepts set forth in this application that give rise to improvements in the physical implementation of the devices, apparatus, modules, and systems (such as reduced silicon area) may be traded for improved performance. This may be done, for example, by manufacturing multiple instances of a module within a predefined area budget. The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
96,093
11861783
DETAILED DESCRIPTION Systems and methods for calculating motion vectors for use in frame interpolation, framerate conversion, or other actions are described herein. As explained previously, motion vectors may be generated which track the difference in position of objects between a current frame (CF) and a previous frame (PF). As explained herein, two types of motion vectors may be utilized to interpolate frames, 1-phase MVs (MV1) and 0-phase MVs (MV0). MV0 represents motion from the PF to the CF and MV1 represents motion from the CF to the PF. The MVs are generated for each pixel (or group of pixels) on the screen, forming a texture, or collection of MVs for the pixels on the screen. As used herein, a texture is defined to be a map from the collection of pixels in a frame to a collection of one or more numbers (e.g. components of a vector or single numbers). If motion vectors are used in a framerate conversion process, typical rendering engines output only the MV1 texture in two dimensions. As such, the texture contains no depth content, and only includes information about changes in the relative screen positions as viewed in the reference frame of the virtual camera. Utilizing depth for the pixelwise motion vectors may inform how to compute the 2D components of block motion vectors. Block motion vectors may represent an average of the motion vectors for a block of pixels (e.g., a five by five block of pixels) and may be utilized for frame interpolation or other image processing tasks in order to reduce processing demands, for example. Areas of the scene within certain ranges of depth are called foreground (close to the camera), background (far from the camera), or mid-range (between foreground and background). It may be desirable in image processing to determine which depth range dominates each block of pixels: either foreground, background, or mid-range. As an example, two objects may be positioned at different distances from a (virtual) camera or viewpoint. If the two objects move in the same direction, in equal world-space distances, the object which is farther away may appear to move a smaller distance in the eye space, creating a parallax effect where objects which are farther away from the viewpoint appear to move less than objects that are closer to the viewpoint. In the case that a majority of pixels in the block are in the background, the majority of pixels will have small MVs, since MVs are evaluated from the perspective of the camera/viewpoint. If a small amount of the pixels in the block are, for example, in the foreground, the foreground pixels will have motion vectors with larger magnitudes. If all motion vectors within the block were to be averaged, the (relatively few) MVs of the foreground would dominate the average MV. This may misrepresent the relatively small apparent motion of the background pixels, favoring the MVs of the foreground pixels instead. By including the depth information in the pixel MVs, the dominant depth range of each block may be resolved: either foreground, background, or mid-range. Motion vector values within the block which do not fall into the dominant range may then be disregarded in favor of evaluating the average of only the pixels within the dominant range. In the case of a block dominated by background pixels, the resulting motion vector may more closely match the motion occurring within the frame. Added depth information may also offer additional flexibility for the image processing module. 
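As a rough illustration of the block motion vector calculation outlined above, the following Python sketch classifies the pixels of a block into foreground, mid-range and background by depth, identifies the dominant depth range, and averages only the motion vectors of pixels within that range. The depth thresholds and the use of normalized depth values are assumptions of the sketch; any suitable classification may be used.

FOREGROUND_MAX = 0.33  # assumed normalized depth thresholds
BACKGROUND_MIN = 0.66

def depth_range(depth):
    if depth < FOREGROUND_MAX:
        return "foreground"
    if depth > BACKGROUND_MIN:
        return "background"
    return "mid-range"

def block_motion_vector(pixel_mvs):
    # pixel_mvs: list of (dx, dy, depth) tuples for one block (e.g. a 5x5 block).
    groups = {"foreground": [], "mid-range": [], "background": []}
    for dx, dy, depth in pixel_mvs:
        groups[depth_range(depth)].append((dx, dy))
    dominant = max(groups, key=lambda name: len(groups[name]))  # dominant depth range
    mvs = groups[dominant]
    # Average only the motion vectors of pixels in the dominant depth range.
    return (sum(v[0] for v in mvs) / len(mvs), sum(v[1] for v in mvs) / len(mvs))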
Depth components of the MVs may be used, for example, as an input to a weighting function which may, in turn, be used to apply varying levels of image correction to areas of different depth. Thus, according to embodiments described herein, a depth texture may be attached to MV1, which allows for more accurate frame interpolation by taking changes in depth into account. In addition, an MV0 texture with depth may similarly be generated. Both MV0 and MV1 can be used as inputs to framerate conversion algorithms, helping in the interpolation step. In addition, each frame may be composed of two kinds of objects: those with motion vectors and those without. Objects featuring motion vectors may include moving characters or other objects, the view of the user, and parts of an in-game map. Objects without motion vectors may include, for example, smoke effects, full- or partial-screen scene transitions (e.g. fades and wipes), and/or particle effects. By separating objects with motion vectors from objects without motion vectors, improved image processing can be performed. Traditionally, algorithms may attempt to exclude screen regions which feature objects without motion vectors. However, this approach is imperfect and may lead to the blending of nearby objects during the process of framerate conversion. Separation of objects with and without motion vectors before transmission to an image processor may then reduce the artifacts caused by the traditionally-known method of exclusion. Traditionally, motion vectors are also sampled on a pixel-by-pixel basis, such that each pixel on the screen has an associated MV0 and MV1. However, the sampling resolution for motion vectors can be dynamically reduced or increased. Reducing the resolution may also reduce the computational power required for MV calculation. Since many devices (e.g. smartphones) have limited computational resources and battery life, reductions in computational cost may save on processing power and battery life. As described herein, “pixelwise” or “pixel-by-pixel” may not refer to individual pixels, but may instead refer to collections of pixels in the context of evaluating motion vectors. Low-resolution MV generation may be performed, in some embodiments, by lowering the sampling resolution when calculating the MVs. For example, MVs may only be computed for every fourth pixel in the x-direction and every fourth pixel in the y-direction. The motion vectors described here are first generated on a pixel-by-pixel basis, then translated to a block form with a depth texture. The block motion vectors may then be split into separate channels for objects with motion vectors and objects without, and transmitted to an image processing module. The image processing module may then perform visual enhancements using the block motion vectors, such as framerate conversion, for example. By separating objects with MVs and objects without MVs, exclusion algorithms may not be necessary. Separation may therefore allow for the generation of interpolated frame data, even in regions obscured by particle effects. Including depth information in the pixel MV may allow for more accurate block MV calculation, since blocks dominated by pixels in the background may be better represented than they would be by taking the average over the whole block. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to the embodiments disclosed herein.
Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those of skill in the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by computer readable instructions using a wide range of hardware, software, firmware, or virtually any combination thereof. The described systems are exemplary in nature, and may include additional elements and/or omit elements. The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various systems and configurations, and other features, functions, and/or properties disclosed. Thus, the methods may be performed by executing stored instructions on machine readable storage media with one or more logic devices (e.g., processors) in combination with one or more additional hardware elements, such as storage devices, memory, hardware network interfaces/antennas, switches, actuators, clock circuits, etc. The described methods and associated actions may also be performed in various orders in addition to the order described in this application, in parallel, and/or simultaneously. Processors of the logic subsystem may be single core or multicore, and the programs executed thereon may be configured for parallel or distributed processing. The logic subsystem may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of the logic subsystem may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration. FIG.1schematically depicts an example of a computer system100which may include one or more processors110(e.g. central processing units (CPUs) and/or graphics processing units (GPUs)), volatile and/or nonvolatile memory120(e.g. random-access memory (RAM) and/or one or more hard disk drives (HDDs)). The computer system may also include one or more displays, such as display130, which may comprise any number of visual interface technologies. In addition, example embodiments may include a user interface140, e.g. keyboards, computer mice, touch screens, controllers, etc. to allow users to provide input to the computer system. In some embodiments, the computer system may be a mobile phone or tablet. FIG.2shows a block diagram200schematically illustrating a pipeline for generating and using MVs, including the components that may generate and process MVs. Block diagram200includes a game engine202which outputs one or more 3D models. The game engine202may be configured to generate and output 3D models204to be rendered, which may specify the desired locations for objects, and possibly any particle effects. The game engine202produces a plurality of image frame data in a sequence, which may include CF data208and PF data206. Ultimately, one or more interpolated frames may be generated between the CF and PF. The 3D models204are output to a 3D motion vector module, which in the current example may be a software development kit and thus referred to as a game SDK210, which uses internal rendering matrices and the information from the 3D models to generate a 3D pixel MV0 texture212and a 3D pixel MV1 texture214. The 3D pixel MV1 texture214may include a plurality of 3D MV1s, one for each pixel or group of pixels.
Each 3D MV1 (e.g., for a respective pixel) may include a change in a vertical position, a change in a horizontal position, and a change in a depth position of an object at that pixel from a current frame to a previous frame. The 3D pixel MV0 texture212may include a plurality of 3D MV0s, one for each pixel or group of pixels. Each MV0 (e.g., for a respective pixel) may include a change in a vertical position, a change in a horizontal position, and a change in a depth position of an object at that pixel from the previous frame to the current frame. The process of generating the 3D pixel MV0 texture212and the 3D pixel MV1 texture214may include generating a plurality of possible MVs for one or more pixels, due to the non-exclusive projection of the object from PF to the CF. In the 2D domain, one pixel in PF may be projected to a plurality of pixels in the CF. The plurality of possible MVs are then compared to the depth textures and other objects within the 3D scene in order to double-confirm the MVs. Double-confirmation is a process of selecting the correct MV from the plurality of MVs. One example method is to compare the depth buffers of the 3D pixel MV0212and the 3D pixel MV1214to the depth textures of PF and the CF, respectively. The (double-confirmed) 3D pixel MV0 texture212and the (double-confirmed) 3D pixel MV1 texture214may then be output to a block motion vector module, which in the current example may be a software development kit and thus may be referred to as an algorithm SDK216, which may process the input to generate 2.5D block texture MV0218and a 2.5D block MV1 texture220, which are textures that may be utilized in generating interpolated frames. The block textures are generated from a process which averages the motion vector values within each of a plurality of blocks (as explained below with respect toFIG.5). The 2.5D block MV0 texture218and the 2.5D block MV1 texture220are output to an image processing module222, which may or may not be a piece of hardware separate from the other processors110, which also receives the CF data208and the PF data206. During the transfer of the 2.5D block MV0 texture218and the 2.5D block MV1 texture220, objects with MVs are separated from objects without MVs. The separated data may be transmitted on two physically separated or two logically separated channels to the image processing module222. The image processing module222may then perform an interpolation step or a framerate conversion, using the 2.5D block MV0 texture218and the 2.5D block MV1 texture220as inputs, as well as the PF data206and CF data208. The image processing module222may output PF data206, one or more interpolated frames224, and CF data208, which may then be visually displayed (in the order listed) on display130. Generating the interpolated frame224therefore allows for the framerate to be increased. The game engine202, the game SDK210, and the algorithm SDK216may each execute on the same processors110of the computer system100or on different processors according to instructions stored in volatile and/or nonvolatile memory120. The image processing module222may be a separate piece of hardware than the game engine202, the game SDK210, and the algorithm SDK216, at least in some examples. As used herein, the terms “system” or “module” may include a hardware and/or software system that operates to perform one or more functions. 
For example, a module or system may include a computer processor, controller, or other logic-based device that performs operations based on instructions stored on a tangible and non-transitory computer readable storage medium, such as a computer memory. Alternatively, a module or system may include a hard-wired device that performs operations based on hard-wired logic of the device. Various modules or units shown in the attached figures may represent the hardware that operates based on software or hardwired instructions, the software that directs hardware to perform the operations, or a combination thereof. It should be noted that the techniques discussed herein apply not only to games, but to any animated renderings of 3D models, though the advantages offered by this method may be most noticeable in real-time rendering situations. FIG.3shows a 3D coordinate system300with two three-dimensional points Q_PF302and Q_CF304. Both points have x, y, and z coordinates, with Q_PF302having coordinates of (x0,y0,z0) and Q_CF having coordinates of (x1,y1,z1). The z value is also known as the depth, which, in some examples, is provided by a separate texture or may need to be calculated using, for example a method involving a ray going from the camera location to the points' locations. From these two points in space, two 3D motion vectors may be calculated using, for example, MV0=(x0−x1,y0−y1,z0−z1)=(Δx0,Δy0,Δz0), MV1=(x1−x0,y1−y0,z1−z0)=(Δx1,Δy1,Δz1)=−MV0. According to these definitions, MV0 represents the 3D change experienced by an object going from the Q_PF302to Q_CF304. On the other hand, MV1 represents a change in the opposite direction: from Q_CF304to Q_PF302. Although the 3D motion vectors ofFIG.3show points in 3D space, these points are projected onto a 2D display in order to be viewed. In conventional examples, the motion vectors are generated as a 2D texture, which is then used to generate visual enhancements and create interpolated frames. The embodiments described herein use a depth texture in addition to the conventional MV information, but in a distinct way from a purely 3D MV. This depth texture can be calculated, for example, by projecting a ray from the near clip of the viewing space to the object of each pixel, allowing depth to be evaluated for each point on the screen. The near clip, as defined herein, is the closest plane of the 3D space in which objects are still visible. Objects closer than the near clip are not visible in the scene. Thus, the method of calculating 3D MV0s and 3D MV1s may be applied to generate the 3D pixel MVs discussed above with respect toFIG.2, which are then double-confirmed according to a process described in more detail below. The double-confirmed 3D pixel MV0s and double-confirmed 3D pixel MV1s may be converted to 2.5D block MV0s and 2.5D block MV1s which are then output to an image processing module for use in framerate conversion, for example. FIG.4illustrates how 2.5D motion vectors may be calculated and thus shows a 2D coordinate system400with the projection of Q_PF402and the projection of Q_CF404. These projected points have only x- and y-coordinates. Projections may be performed through the use of rendering matrices applied to points in 3D space, such as the rendering matrices discussed above with respect toFIG.2. The 2D coordinate system400may represent, for example, the apparent positions of pixels on a 2D screen, as viewed from the perspective of a virtual camera. 
Using the projected coordinates and the depth values of the two points, the 2.5D motion vectors can be calculated. Note that these are distinct from the 3D motion vectors. The 2.5D motion vectors may be computed as, for example, MV02.5D=(Δx0,Δy0,z0), MV12.5D=(Δx1,Δy1,z1)≠−MV02.5D. Note that the change in depth between the frames is not recorded in either case, and that the raw depth values z1and z0are used. FIG.5shows a flowchart for a method500for generating 2.5D block motion vectors. Method500may be carried out according to instructions stored in computer memory, such as volatile and/or nonvolatile memory120, and executed on one or more processors, such as processors110. In some examples, method500may be carried out by the game SDK210and/or the algorithm SDK216ofFIG.2. At502, CF and PF frame pixels and view projection matrices (e.g., rendering matrices) are acquired. In some examples, the CF and PF frame pixels and view projection matrices are received from the game SDK210or the game engine202. The depth textures of CF and PF are calculated at504. The depth textures may include a change in depth of a respective object for each pixel from the CF to the PF (for the MV1 texture) and a change in depth of a respective object for each pixel from the PF to the CF (for the MV0 texture). At506, a double-confirmation process is applied to double-confirm each 3D pixel MV1 and MV0 of the textures. Double confirmation selects the correct motion vectors from one or more possible choices, depending on factors such as occlusion of pixels and may help to render more accurate motion vectors. To double-confirm an MV1 or MV0 for a selected pixel, the method may confirm that the coordinates of the selected pixel in the current frame match coordinates of that pixel mapped back to the previous frame using the MV1 or MV0 for the selected pixel, where the coordinates of the selected pixel include a depth coordinate. For example, in the 2D domain, one pixel in PF could be projected by several pixels in CF. In the 3D domain, there is a one-to-one correspondence for each pixel. The depth buffer may then be used to decide which MV is correct: Depth_buffer[i]=cf_mv_z_i+depth_i; if Depth_PF=Depth_buffer[j],then buffered MV_jis correct MV0. Additionally or alternatively, a MV of a pixel in PF (with coordinates X0,Y0,depth0) is (delta_x0, delta_y0, delta_z0). The corresponding pixel in CF may have coordinates (X1,Y1,depth1) If X1=X0+delta_x; Y1=Y0+delta_y; and depth1=depth0+delta_z, then this pixel is called double confirmed. Put another way, if the MV of a pixel in CF (with coordinates X1, Y1, depth1) is (delta_x1, delta_y1, delta_z1), if delta_x0=−delta_x1, delta_y0=−delta_y1, and delta_z0=−delta_z1, the pixel Q_CF and Q_PF are double-confirmed. Anything that is not double-confirmed is outside a double-confirmed region. If a pixel/MV is determined to be in an unconfirmed region, any non-confirmed MV1s and/or MV0s may be adjusted based on a nearest double-confirmed MV1 and/or MV0 (or nearest double-confirmed MV1 or MV0 of the same object) to transform the one or more non-confirmed MV1s and/or MV0s into one or more double-confirmed MV1s and/or MV0s. For example, for a given pixel P0, the nearest pixel with a double-confirmed MV may be PF. To adjust the MV for P0, the MV0 for P0 may be the sum of the MV1 for PF and a delta MV for PF. The delta MV for PF may be calculated as MV0 for PF plus MV1 for P0. 
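The double-confirmation test described above may be illustrated with the following Python sketch, which checks that a pixel in the PF, when advanced by its MV0, lands on the corresponding pixel in the CF (including in depth) and that the MV0 and MV1 of the pair are equal and opposite. The tolerance value is an assumption of the sketch.

TOL = 1e-3  # assumed matching tolerance

def double_confirmed(pf_pixel, mv0, cf_pixel, mv1):
    # pf_pixel = (x0, y0, depth0) with MV0 = (dx0, dy0, dz0);
    # cf_pixel = (x1, y1, depth1) with MV1 = (dx1, dy1, dz1).
    x0, y0, d0 = pf_pixel
    x1, y1, d1 = cf_pixel
    maps_to_cf = (abs(x0 + mv0[0] - x1) < TOL and
                  abs(y0 + mv0[1] - y1) < TOL and
                  abs(d0 + mv0[2] - d1) < TOL)
    equal_and_opposite = all(abs(a + b) < TOL for a, b in zip(mv0, mv1))
    return maps_to_cf and equal_and_opposite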
The double confirmed 3D pixel MV0s may be output at508and the double-confirmed 3D pixel MV1s may be output at510(e.g., to the algorithm SDK). Using the double-confirmed 3D pixel MV0s, a 2.5D block MV0 is generated at512and the 2.5D block MV0 is output at514. Similarly, the double-confirmed 3D pixel MV1s are converted to a block MV1 at513and the block MV1 is output at516. To convert the 3D pixel MV1s to 2.5D block MV1s, for each pixel block of a plurality of pixel blocks, a predominant pixel type in that pixel block is identified. If the predominant pixel type is background pixels, an average MV1 is calculated over all pixels in that pixel set. Otherwise, the average MV1 is calculated over only foreground pixels in that pixel set. The average MV1 is then set as the MV1 for that pixel block, wherein the set of MV1 blocks comprises the average MV1 for each pixel block. A similar process may be performed for the conversion of the 3D pixel MV0s to 2.5D block MV0s. Further, rather than include the change in depth in the MVs, the change in depth is replaced with the depth value for that pixel. The 3D MVs represent motion of objects between frames in three dimensions. 2.5D MVs represent motion of objects within the 2D screen, with a depth value added to represent occlusion of objects. That is, the depth value represents which objects are in front of others, which may help in the generation of interpolated frames. At518, objects with MVs and objects without MVs are separated into two channels. In some embodiments, the channels are physically separate. When physically separate channels are infeasible, logically separate channels (either in space or in time) may be used. Separating the objects with MVs from objects without MVs may remove the need to exclude regions from the screen when performing framerate conversion. Exclusion algorithms may produce visual artifacts at the boundaries of regions with particle effects. Separating these objects allows them to be processed differently, improving the result. Various information is sent to the image processing module at526. The information that is sent to the image processing module includes block MV0 and MV1, as indicated at520, as well as image data, as indicated at522, and frame-level metadata, as indicated at524. The image data522may include the objects discussed above and thus the image data522may be sent over more than one physical or logical channel. FIG.6Ashows an example diagram of a rendering pipeline600executed on a game engine, such as game engine202ofFIG.2. As referred to herein, a rendering pipeline is a set of internal spaces within the game engine which provide 3D rendering data in different forms. The rendering pipeline600illustrates a process for rendering a 3D scene to a 2D screen. The 3D scene may be based on a 3D model created as part of a video game or other 3D animation, such that the rendering pipeline600depicts the process of turning the 3D model into 2D frames for display on a display device. At least some of the information from the rendering pipeline (e.g., the rendering matrices and depth textures) may be output to the game and/or algorithm SDKs for use in calculating motion vectors, as explained in more detail below. The rendering pipeline600starts with data represented in a local space602, which specifies the locations of vertices in a 3D model relative to an origin of the 3D model. The local space602therefore captures the geometry of the 3D model and may also be referred to as model space. 
The 3D model may include one or more objects (e.g., characters, buildings, etc.) represented as meshes each having a local origin. Using a model matrix604, the 3D model may be embedded within a world space608, which may represent the locations of the objects in the 3D model relative to an origin within a 3D world. The 3D world may be a virtual world, e.g., the 3D world of the game. The term “matrix,” as used herein, refers to any collection of numbers indexed by two natural numbers. The matrices discussed herein are all nonsingular, so inverses may be computed for each matrix. Objects in the world space608may be transformed to an eye space614via a view matrix610. Eye space positions represent the apparent locations of 3D objects, relative to a virtual camera positioned at an origin of the eye space614. In this way, via the eye space614, the objects in the world space may be viewed from a particular camera location. A projection matrix616maps the points in the eye space614to a homogeneous clip space620. The homogeneous clip space620may represent locations within a square-pyramid shaped view frustum, and thus the projection matrix may scale the xyz locations of the eye space into the frustum shape. FIG.6Bshows a view frustum650, acting as the visible region of the 3D scene. The coordinate system in the view frustum650space has a virtual camera660at the origin, which represents the perspective of the viewer. One or more objects652may be represented in the view frustum650. In order to produce a 2D view of the scene, objects are projected to the near clip654. For example, if object652is a sphere, its projection662may be a circle. The screen may comprise an array of pixels, with each pixel corresponding to one or more objects within the scene. For example, the 3D object652is represented by its projection662on the near clip654. Pixel664represents a location on the near clip654corresponding to the projection662. Computing the eye space position of an object involves drawing a ray658from the origin at the virtual camera660to each point on the projected object, such as pixel664. The ray, which may be a vector quantity with one or more values described in further detail below, is multiplied by the linear depth information from the game engine. The resulting ray may have a magnitude and direction reflective of the world space position of the object. FIG.6Cshows an enlargement of the near clip654, with the virtual camera660at the origin of the space. Corner670, corner672, corner674, and corner676represent the corners (starting from the upper-left and going clockwise) of the near clip654. Rays may be drawn from the origin (e.g., virtual camera660) to each of the corners. The corner ray of corner670is ray671, the corner ray of corner672is ray673, the corner ray of corner674is ray675, and the corner ray of corner676is ray677. The ray658representing pixel664may be chosen to be the corner ray associated with the nearest corner of the near clip654(e.g., the corner nearest pixel664). For example, if pixel664is located in the lower left quadrant of the near clip654, it would be closest to corner676, so the ray677would represent the corner ray of pixel664. Determination of the corner ray, e.g. ray677, may be performed in a vertex shader. The ray677is then interpolated using a rasterization algorithm, yielding the ray658, which points from the virtual camera660to the pixel664. The ray658may be interpolated based on the relative positions of the virtual camera660and the pixel664. 
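As a rough illustration of the eye space reconstruction described with respect toFIGS.6B and6C, the following Python sketch interpolates the four near-clip corner rays at a pixel's normalized screen position and scales the resulting ray by the linear depth to obtain the eye space position. The bilinear interpolation stands in for the rasterizer interpolation mentioned above, and the corner ray values supplied by the caller are assumptions of the sketch.

def lerp(a, b, t):
    return tuple(a_i + (b_i - a_i) * t for a_i, b_i in zip(a, b))

def interpolated_ray(corner_rays, u, v):
    # corner_rays: dict of rays at the four near clip corners; (u, v) is the
    # pixel position normalized to [0, 1] across the screen.
    top = lerp(corner_rays["top_left"], corner_rays["top_right"], u)
    bottom = lerp(corner_rays["bottom_left"], corner_rays["bottom_right"], u)
    return lerp(top, bottom, v)

def eye_space_position(linear_depth, ray):
    # vPos = depth * ray, with the ray pointing from the virtual camera through the pixel.
    return tuple(linear_depth * r for r in ray)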
Turning back toFIG.6A, composing (e.g. multiplying) the view matrix610with the projection matrix616yields a view projection matrix622, which transforms positions in the world space608to the homogeneous clip space620directly (e.g., without forming an intermediate eye space614). To transform from the homogeneous clip space620to a normalized device space628, homogeneous division626may be performed. In computer graphics, positions may be represented with four-dimensional vectors (collections of numbers indexed by a single natural number). In many examples, the first three coordinates may constitute the x, y, and z (depth) values of the position. The fourth coordinate may then represent the scale factor (or w) value, as used in projective geometry, which is applied to scale the eye space locations to the frustum shape of the homogenous clip space. Perspective division is the process of normalizing a four-dimensional vector by dividing all entries by the w-value (therefore creating a "correct" position vector with w=1). The final step in the rendering pipeline600is the transformation to a viewport space630. This involves projecting onto the specific display device (such as display130) being used to show the 3D scene. The final projection to the viewport space630creates a 2D image to be displayed on the screen (e.g., display130), with the x, y locations of the normalized device space being preserved and the z locations being applied to sort (e.g., resize) the objects to give the appearance of depth on the 2D image. Inverse processes of aspects of the rendering pipeline600may be applied in order to perform certain actions, such as mapping user input device pointer (e.g., mouse pointer) location to a particular object, and may also be used to generate MV0s, as will be explained below. For example, an inverse view projection matrix624may be obtained by multiplying an inverse projection matrix618(applied to transform the homogeneous clip space620back to the eye space614) by an inverse view matrix612(applied to transform the eye space614back to the world space608). Further, an inverse model matrix606may be applied to transform objects in the world space608to the local space602. FIGS.7A and7Bshow a method700and a method750for generating pixelwise MV1 and MV0 textures, respectively. Methods700and750create MVs which are sums of object motion vectors and camera motion vectors. Method700and method750may be performed as part of method500, for example in order to generate the pixelwise MV0 and MV1 textures at step505of method500. The textures created by method700and method750may be either 2.5D or fully 3D, with method700being adjustable to generate either one. The generation of the pixelwise MV0s and MV1s may be performed for each pixel or group of pixels on the screen/2D image that will be displayed, and method700and method750are described herein as generating an MV1 and an MV0, respectively, for a selected pixel or group of pixels. It will be appreciated that the processes of method700and method750may be applied to each vertex within a frame, then the pixelwise MV0 and the pixelwise MV1 textures may be generated by finding pixel locations corresponding to each of the vertices. Lowering the sampling resolution of the MVs may result in vertices being assigned to groups of pixels rather than individual pixels. For example, MVs may be assigned within 2×2 collections of pixels. Referring first to method700, it includes, at702, determining a camera MV1 for each vertex of the current frame. 
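Before walking through the camera MV1 calculation, the basic clip space operations introduced above (composition of the view and projection matrices, perspective division, and their inverses) can be summarized in a short, generic sketch; the function names and the column-vector convention are assumptions, not the engine's actual API.

import numpy as np

def view_projection(view: np.ndarray, projection: np.ndarray) -> np.ndarray:
    # World -> homogeneous clip space in one step: clip = P @ V @ world.
    return projection @ view

def inverse_view_projection(view: np.ndarray, projection: np.ndarray) -> np.ndarray:
    # Clip -> world: the inverse of the composition is inv(V) @ inv(P).
    return np.linalg.inv(view) @ np.linalg.inv(projection)

def perspective_divide(clip_pos: np.ndarray) -> np.ndarray:
    # Homogeneous (perspective) division: normalize by w so that w == 1,
    # yielding normalized device coordinates.
    return clip_pos / clip_pos[3]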
The camera MV1 may represent the motion of a vertex of an object within the current frame (CF) to the previous/last frame (PF) due to a change in the position of the virtual camera. To determine the camera MV1, the world space position of a selected vertex is calculated and the motion vector is computed from the world space position. Thus, determining the camera MV1 for the selected vertex includes obtaining a CF eye space position (vPos) based on a depth and a ray for the selected vertex, as indicated at704. In order to obtain the world space position of the selected vertex, the eye space position of that vertex is first determined. The eye space position is calculated from the depth value of the selected vertex as sampled from a depth buffer (e.g., stored depth texture) generated by the game engine, where the depth buffer stores the non-linear depth information for each vertex. The xy screen position of the selected vertex is used to sample the depth buffer. The ray is a vector from the origin (e.g., the virtual camera position) to that vertex on the near clip of the view frustum, as explained above. The vPos may be a position in xyz of the eye space determined by multiplying the depth by the ray. At706, a world space position (wPos) of the vertex is calculated based on the vPos. Using the vPos of the vertex, the position in world space of that vertex may be determined. To calculate the wPos, the vPos determined at706is multiplied by an inversed view matrix (such as the inverse view matrix612ofFIG.6A), which is the inverse of the view matrix used to transform the world space positions to the eye space positions, as explained above with respect toFIG.6A. The inversed view matrix may be obtained from the game engine. At707, a position (Pos) of the vertex in the homogenous clip space of the current frame is determined and a previous position (Prepos) of the vertex in the homogenous clip space of the previous frame are determined based on the wPos. To determine the Pos, the wPos is multiplied by the view matrix (generated/output by the game engine). To determine the Prepos, the wPos is multiplied by the view matrix of the previous frame. The view matrix of the previous frame may be generated/output by the game engine during the generation/rendering of the previous frame and may be cached for use in calculating the camera MV1. The view matrix of the current frame may be cached for use in calculating the camera MV1 of the next frame. At708, a homogenous division is performed to scale the Pos and the Prepos, such that xyz positions in the current and previous homogenous clip space are normalized by the scaling factor (w). In some examples, scaling results in values of Pos and Prepos being in the [−1,1] range. At710, the camera MV1 is returned based on the scaled Pos and Prepos. The MV1 may include a velocity that is calculated as the Pos minus the Prepos, in the x, y, and z axes. In some examples, the velocity may be multiplied by 0.5 to get the motion vector in the [0,1] range. Further, the depth value in the normalized device space (e.g., after scaling) of the current frame is also included as part of the camera MV1. The camera MV1 may thus include the change in position of that pixel (in x, y, and z) from the current frame to the previous frame and a depth value of that vertex in the current frame. The vertices within the frame may be assigned pixel values based on where they appear in the xy screen space. 
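A compact sketch of the camera MV1 steps just described (704 through 710) is shown below. The function signature, the matrix names, and the use of a single combined world-to-clip transform per frame are assumptions made for illustration, and the 0.5 scaling of the velocity is optional, as noted above.

import numpy as np

def camera_mv1(linear_depth, ray, inv_view_cf, clip_from_world_cf, clip_from_world_pf):
    # Camera-only MV1 for one vertex/pixel: reconstruct the eye space position from
    # depth * ray, lift it to world space with the inversed view matrix, map the same
    # world position through the current-frame and cached previous-frame transforms,
    # and difference the normalized results. Returns (dx, dy, dz, depth_cf).
    v_pos = np.append(ray * linear_depth, 1.0)          # eye space position (vPos)
    w_pos = inv_view_cf @ v_pos                          # world space position (wPos)
    pos = clip_from_world_cf @ w_pos                     # current-frame clip position (Pos)
    pre_pos = clip_from_world_pf @ w_pos                 # previous-frame clip position (Prepos)
    pos, pre_pos = pos / pos[3], pre_pos / pre_pos[3]    # homogeneous division
    velocity = 0.5 * (pos[:3] - pre_pos[:3])             # optional scaling into [0, 1]
    return np.array([velocity[0], velocity[1], velocity[2], pos[2]])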
Note that if the sampling resolution is lowered, the pixel assigned to each vertex for the purposes of MV generation may not be exactly the closest pixel. Note that the camera MV1 calculation assumes that the selected vertex of the object did not move from the previous frame to the current frame, since the same wPos is used. For moving objects, the wPos will not be the same in the previous frame and the current frame. Accordingly, an object motion vector (object MV1) is also determined for each vertex, as indicated at712. To determine the object MV1, the eye space position of each vertex in the current frame (CF vPos) and the eye space position of each vertex in the previous frame (PF vPos) are determined at714. The eye space position for each vertex of the current frame is determined by multiplying each vertex's current frame position in the model/local space by the model matrix, and then multiplying that product by the view projection matrix. The eye space position for each vertex of the previous frame is determined by multiplying each vertex's previous frame position in the model/local space (which is obtained by caching the vertex's current frame position in a prior MV1 calculation, e.g., going from the frame before PF to PF) by the previous frame model matrix, and then multiplying that product by the previous frame view projection matrix, which is also cached (e.g., from the prior MV1 calculation). At716, the CF vPos and the PF vPos are interpolated into the homogenous clip space to generate an input Pos and an input Prepos. Thus, the eye space positions are interpolated into the homogenous clip space. For each fragment, a homogenous division is performed, as indicated at718, to generate a scaled input Pos and a scaled input Prepos (e.g., normalized based on the scale factor w) in the normalized device space. The scaled input Pos and scaled input Prepos may each include xyz values. At720, the object MV1 for the selected pixel is returned based on the scaled input Pos and scaled input Prepos for the selected vertex. The object MV1 may include a velocity that is calculated as the input Pos minus the input Prepos, in the x, y, and z axes. In some examples, the velocity may be multiplied by 0.5 to get the motion vector in the [0,1] range. Further, the depth value in the normalized device space (e.g., after scaling) of the current frame is also included as part of the object MV1. The object MV1 may thus include the change in position of that vertex (in x, y, and z) from the current frame to the previous frame and a depth value of that vertex in the current frame. At722, the MV1 for that pixel is output based on the camera MV1 and the object MV1. Step722includes finding the pixel associated with the vertex, for the purposes of generating the MV1 texture. The pixel selected for each vertex may also depend on the sampling resolution of the motion vectors. The camera MV1 and the object MV1 may be summed to produce the MV1. Method700returns. Referring next to method750, it includes at752, determining a camera MV0 for each vertex within the current frame. The camera MV0 may represent the motion of a vertex within that object from the previous/last frame (PF) to the current frame (CF) due to a change in the position of the virtual camera. To determine the camera MV0, the world space position of a selected vertex in the previous frame is calculated and the motion vector is computed from the world space position. 
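Before detailing the MV0 versions of these calculations, the object MV1 path just described (steps 714 through 720) and its combination with the camera MV1 at722can be sketched as follows; the matrix and variable names are illustrative assumptions, and the per-fragment interpolation performed by the rasterizer is omitted.

import numpy as np

def object_mv1(local_pos_cf, local_pos_pf, model_cf, model_pf, view_proj_cf, view_proj_pf):
    # Object-only MV1 for one vertex. local_pos_* are homogeneous model-space positions;
    # the previous-frame position and matrices are the cached values described above.
    pos = view_proj_cf @ (model_cf @ local_pos_cf)        # current-frame input Pos
    pre_pos = view_proj_pf @ (model_pf @ local_pos_pf)    # previous-frame input Prepos
    pos, pre_pos = pos / pos[3], pre_pos / pre_pos[3]     # homogeneous division
    velocity = 0.5 * (pos[:3] - pre_pos[:3])
    return np.array([velocity[0], velocity[1], velocity[2], pos[2]])

def combined_mv1(camera_mv, object_mv):
    # Step 722: sum the camera and object contributions to form the per-pixel MV1.
    # Exact handling of the depth channel when summing may differ in practice.
    return camera_mv + object_mv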
Thus, determining the camera MV0 for the selected vertex includes obtaining a PF eye space position (vPos) based on a previous depth and a ray for the selected vertex, as indicated at754. In order to obtain the previous frame world space position of the selected vertex, the eye space position of that vertex in the previous frame is first determined. The PF eye space position is calculated from the previous depth value of the selected vertex as sampled from a previous frame depth buffer generated by the game engine, where the previous frame depth buffer stores the non-linear depth information for each vertex. The xy screen position of the selected pixel is used to sample the previous depth buffer. The ray is a vector from the origin (e.g., the virtual camera position) to that vertex on the near clip of the view frustum, as explained above. The vPos may be a position in xyz of the eye space determined by multiplying the previous depth by the ray. At756, a world space position (wPos) of the vertex is calculated based on the vPos. Using the vPos of the vertex, the position in world space of that vertex may be determined. To calculate the wPos, the vPos determined at756is multiplied by an inversed view matrix of the previous frame. The inversed view matrix may be initially obtained from the game engine and then cached (e.g., the previous frame's camera to world matrix is cached and then obtained to determine the wPos). At757, a position (Pos) of the vertex in the homogenous clip space of the current frame is determined and a previous position (Prepos) of the vertex in the homogenous clip space of the previous frame are determined based on the wPos determined at756. To determine the Pos, the wPos is multiplied by the view matrix (generated/output by the game engine). To determine the Prepos, the wPos is multiplied by the view matrix of the previous frame. The view matrix of the previous frame may be generated/output by the game engine during the generation/rendering of the previous frame and may be cached for use in calculating the camera MV0. The view matrix of the current frame may be cached for use in calculating the camera MV0 of the next frame. At758, a homogenous division is performed to scale the Pos and the Prepos, such that xyz positions in the current and previous homogenous clip space are normalized by the scaling factor (w). At760, the camera MV0 is returned based on the scaled Pos and Prepos. The MV0 may include a velocity that is calculated as the Pos(xyz) minus the Prepos(xyz), in the x, y, and z axes. In some examples, the velocity may be multiplied by 0.5 to get the motion vector in the [0,1] range. Further, the depth value in the normalized device space (e.g., after scaling, such that the depth value is the z value of the Prepos(xyz)) of the previous frame is also included as part of the camera MV0. The camera MV0 may thus include the change in position of that pixel (in x, y, and z) from the previous frame to the current frame and a depth value of that pixel in the previous frame. Note that the camera MV0 calculation assumes that the selected vertex of the object did not move from the previous frame to the current frame, since the same wPos is used. For moving objects, the wPos will not be the same in the previous frame and the current frame. Accordingly, an object motion vector (object MV0) is also determined for each vertex, as indicated at762. 
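The camera MV0 path mirrors the camera MV1 path but draws on data cached from the previous frame. A minimal, hypothetical container for that cached state is sketched below; the disclosure specifies only that the previous frame's depth buffer and matrices are cached, not how, so the structure and names here are assumptions.

from dataclasses import dataclass
import numpy as np

@dataclass
class PreviousFrameCache:
    depth_buffer: np.ndarray     # depth texture output by the game engine last frame
    view_matrix: np.ndarray      # world -> eye transform of the previous frame
    inv_view_matrix: np.ndarray  # eye -> world ("camera to world") of the previous frame

def update_cache(cache: PreviousFrameCache, depth_buffer: np.ndarray,
                 view_matrix: np.ndarray) -> PreviousFrameCache:
    # At the end of each frame, the current frame's buffers and matrices become the
    # "previous frame" inputs for the next frame's MV0 calculation.
    cache.depth_buffer = depth_buffer.copy()
    cache.view_matrix = view_matrix.copy()
    cache.inv_view_matrix = np.linalg.inv(view_matrix)
    return cache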
To determine the object MV0, the eye space position of each vertex in the current frame (CF vPos) and the eye space position of each vertex in the previous frame (PF vPos) are determined at764, similar to the eye space position determination performed when calculating the object MV1. The eye space position for each vertex of the current frame is determined by multiplying each vertex's current frame position in the model/local space by the model matrix, and then multiplying that product by the view projection matrix. The eye space position for each vertex of the previous frame is determined by multiplying each vertex's previous frame position in the model/local space (which is obtained by caching the vertex's current frame position in a prior MV0 calculation, e.g., going from the frame before PF to PF) by the previous frame model matrix, and then multiplying that product by the previous frame view projection matrix, which is also cached (e.g., from the prior MV0 calculation). At766, the CF vPos and the PF vPos are interpolated into the homogenous clip space to generate an input Pos and an input Prepos. Thus, the eye space positions are interpolated into the homogenous clip space. For each fragment, a homogenous division is performed, as indicated at768, to generate a scaled input Pos and a scaled input Prepos (e.g., normalized based on the scale factor w) in the normalized device space. The scaled input Pos and scaled input Prepos may each include xyz values. At770, the object MV0 for the selected vertex is returned based on the scaled input Pos and scaled input Prepos for the selected vertex. The object MV0 may include a velocity that is calculated as the input Pos(xyz) minus the input Prepos(xyz), in the x, y, and z axes. In some examples, the velocity may be multiplied by 0.5 to get the motion vector in the [0,1] range. Further, the depth value in the interpolated homogenous clip space (e.g., after scaling) of the previous frame is also included as part of the object MV0 (e.g., the z value of the input Prepos(xyz)). The object MV0 may thus include the change in position of that vertex (in x, y, and z) from the previous frame to the current frame and a depth value of that vertex in the previous frame. The vertex may then be ascribed a pixel (or group of pixels) based on the vertex's position on the screen and the sampling rate of the motion vectors. The object MV0 may thus include the change in position of that pixel (in x, y, and z) from the previous frame to the current frame and a depth value of that pixel in the previous frame. At772, the MV0 for that pixel is output based on the camera MV0 and the object MV0. Step772includes finding the pixel associated with the vertex, for the purposes of generating the MV0 texture. The pixel selected for each vertex may also depend on the sampling resolution of the motion vectors. The camera MV0 and the object MV0 may be summed to produce the MV0. Method750returns. FIG.8shows a method800for choosing a correct MV0 from a plurality of possible MV0 inputs. Choosing the correct MV0 may be performed alone, or in addition to, a double-confirmation process. When a current frame pixel Q_CF and a previous frame pixel Q_PF are double confirmed, the values of the MVs of Q_CF and Q_PF have the same magnitudes but opposite signs. This property may improve the interpolated pixel generated between the CF and PF. Depth can represent the occlusion relationship between objects. 
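The double-confirmation property just introduced can be expressed as a small predicate. This sketch anticipates the coordinate criterion spelled out below (the previous-frame position plus its motion vector should land on the corresponding current-frame position) and adds a numeric tolerance, which is an assumption; the disclosure compares the coordinates directly.

import numpy as np

def is_double_confirmed(pos_pf, mv_pf, pos_cf, tol=1e-6):
    # True if adding the previous-frame pixel's motion vector to its (x, y, z)
    # position lands on the corresponding current-frame position.
    # Example: a PF pixel at (10, 4, 0.3) with MV (2, -1, 0.0) double-confirms
    # a CF pixel at (12, 3, 0.3).
    delta = (np.asarray(pos_pf, dtype=float) + np.asarray(mv_pf, dtype=float)
             - np.asarray(pos_cf, dtype=float))
    return bool(np.all(np.abs(delta) <= tol))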
When projecting from CF to PF, several pixels may be projected to the same location. A depth buffer can help to select which MVs are double confirmed. Method800may resolve the correct motion vector in the 2D domain, as projecting from PF to the CF may have several possible resulting pixels. The method800takes as input a depth value and begins at804where a pixel in PF is projected to a plurality of pixels in the CF. For each pixel in the CF, the potential MV0 is calculated and stored at808. Next, the depth buffer is calculated at810by, for example, DepthBuffer = MV1z + Depth, where MV1z is the z-component of the 3D MV1 and Depth refers to the depth texture value in the current frame. The depth buffer value may be compared to the PF depth at812. If the depth buffer of this pixel matches the PF depth of the pixel, the MV0 corresponding to the PF pixel is correct, as indicated at814, and is returned. If the MV0 is not correct, the loop continues with the other possible projected pixels. FIGS.9A and9Bshow an optional method for generating MV0 from MV1 by way of projecting pixels from the CF to the PF, and from the PF to the CF. FIG.9Ashows the process of projecting from the CF902to PF920by way of MV1. Since there exists a one-to-one correspondence from the CF902to PF920, pixels may be uniquely projected. In the example shown, CF pixel904, CF pixel906, CF pixel908, and CF pixel910are shown to project to PF pixel922, PF pixel924, PF pixel926, and PF pixel928, respectively. FIG.9Bshows the process of projecting from PF to the CF, the opposite direction of projection. A pixel932in PF930may be mapped in the 2D domain to several pixels in the CF, such as, for example,934,936,938, and more. Multiple projections may be the result of projection from 3D to 2D when projecting to the near clip. Method800may be implemented to choose the correct projection in the CF. FIG.9Cshows the interpolation of a PF pixel940in the CF950. Pixel954, pixel956, pixel958, and pixel960represent the four possible corresponding pixels within a possible projection, such as934. Since the projected pixel952may be between several pixels, such as pixel954, pixel956, pixel958, and pixel960, the projection may be initially assigned a value of any such corner, then interpolated in a rasterization process, as detailed with respect toFIGS.6A-6C. In addition to evaluating the correct MV0, a double-confirmation process may be performed. One example is the method1000ofFIG.10, which may be carried out according to instructions stored in computer memory, such as volatile and/or nonvolatile memory120, and executed on one or more processors, such as processors110. In some examples, method1000may be carried out by the game SDK210and/or the algorithm SDK216ofFIG.2. The double-confirmation method1000begins with computing the MV0 values of a selected pixel in PF at1002. At1004, the corresponding pixel is found in the CF, for example by using method800. The corresponding pixel in the CF is compared to the selected pixel in PF at1006, where one or more criteria are compared. If the criteria are met, the pixel is called double-confirmed at1008. Otherwise, the pixel is non double-confirmed at1010. The criteria that may be applied to double-confirm the selected pixel may include confirming that the motion vector of the pixel in PF, if added to the coordinates of the pixel in the previous frame, would result in the coordinates of that pixel in the current frame. For example, an MV of a pixel in PF having a position (X0, Y0, Z0) is (delta_x0, delta_y0, delta_z0). 
The corresponding pixel in the CF is determined to have a position (X1, Y1, Z1). If X1=X0+delta_x0, Y1=Y0+delta_y0, and Z1=Z0+delta_z0, then this pixel is called double confirmed. FIG.11shows a method1100for adjusting MVs of pixels in non-double-confirmed regions. At1102, a selected pixel is evaluated to determine if that pixel is in a double-confirmed region. If the pixel is in the double-confirmed region (e.g., the pixel has been double-confirmed by the process described above with respect toFIG.10), a delta MV for that pixel is computed at1104. The delta MV for that pixel may be computed by: ΔMV = MV1_PF + MV0_PF. The value of delta MV may then be returned. If the pixel is not in a double-confirmed region, a search is performed to identify the nearest double-confirmed pixel on the same object at1106. Additional details about identifying a nearest double-confirmed pixel are presented below. At1108, the MV0 of the selected pixel is set to the MV1 of the selected pixel added to the delta MV of the nearest pixel, where the delta MV is calculated as explained above at1104(e.g., because the nearest pixel is double-confirmed), such that the delta MV of the nearest pixel is the MV0 of the nearest pixel plus the MV1 of the nearest pixel. In some examples, the delta MV of a double-confirmed pixel may only be calculated once that pixel has been identified as a nearest pixel to a non-double-confirmed pixel, and may not necessarily be calculated for all double-confirmed pixels. At1110, the delta MV of the selected pixel may be set to the MV0 of the nearest pixel plus the MV1 of the nearest pixel. FIG.12Ashows an example set of pixels1200of a frame (e.g., a current frame) including pixels in a double-confirmed region1202and pixels in a non-double-confirmed region1204. A sample input pixel1206(e.g., a selected pixel) is shown with two nearby pixels, a first pixel1208and a second pixel1210. Both the first pixel1208and the second pixel1210are in the double-confirmed region1202. In this example, the second pixel1210is closer to the input pixel1206than the first pixel1208. However, the second pixel1210is part of a different object than the input pixel1206while the first pixel1208is part of the same object as the input pixel1206, so the first pixel1208may be used as the nearest pixel in step1106of method1100. FIG.12Bshows an example set of pixels1221with which a window search method may be executed for locating the nearest pixel to a non-double-confirmed input pixel1224. The non-double-confirmed input pixel1224is in a non-double-confirmed region1222. The other pixels shown inFIG.12B, such as first pixel1226, second pixel1228, third pixel1230, and fourth pixel1232, are in the double-confirmed region1220. The search for the closest pixel starts with the input pixel1224at the center, then moves left (e.g., identifying the fourth pixel1232), right (e.g., identifying the second pixel1228), up (e.g., identifying the third pixel1230), and down (e.g., identifying the first pixel1226) to locate nearest double-confirmed pixels. In the example shown, the third pixel1230and the fourth pixel1232are part of the same object as the input pixel1224, while the first pixel1226and the second pixel1228are part of different object(s). 
The total number of vertical and horizontal moves to reach the third pixel1230from the input pixel1224and to reach the fourth pixel1232from the input pixel1224may then be computed to determine the distance from the input pixel1224to each identified nearest pixel, and the pixel with the shortest distance may be selected as the nearest pixel. The method of counting the total number of vertical and horizontal moves between the input pixel and a nearest double-confirmed pixel may be referred to as the "chess distance." In some examples, a nearest double-confirmed pixel belonging to the same object as the input pixel/non-double-confirmed pixel may not be identified. For example, all pixels of a given object may be non-double-confirmed and thus all double-confirmed pixels surrounding the non-double-confirmed region may belong to a different object. In such examples, the non-double-confirmed region is exhibiting linear motion (e.g., a ball moving in the virtual world) and the MV0 for the pixels in the non-double-confirmed region may be set to the inverse of the MV1 for those pixels. FIG.13shows a method1350to compute a block MV from a pixel-by-pixel MV. The method1350occurs during the pixel MV to block MV conversion performed at512and513of method500, which may be computed using the processors110and instructions in volatile and/or nonvolatile memory120. At1351, the pixels on the screen are divided into a plurality of blocks, e.g. finitely sized, rectangular collections of pixels. In one example (seeFIG.14), the blocks may be four-by-four squares of pixels. In general, the blocks do not need to be equally-sized or square. At1352, two depth thresholds (depth threshold 1 and depth threshold 2) are calculated. The depth thresholds may be given, for example, by DepthThreshold1 = DepthMax - (DepthMax - DepthMin)/4 and DepthThreshold2 = DepthMin + (DepthMax - DepthMin)/4, where DepthMax is the maximum depth value for the block and DepthMin is the minimum depth value in the block. In this example, a greater depth corresponds to an object further away from the camera or viewer. At1355, foreground (FG), background (BG), and mid-range (MID) bins are created and each given an initial value of 0. The sizes of each bin may be stored as NFG, NBG, and NMID. As indicated at1356, for each pixel in the block, the depth value of the pixel (and therefore the depth value of the 2.5D motion vector) is compared to the two thresholds at1370. If the depth is greater than depth threshold 1, the BG bin is incremented at1372. If the depth is less than depth threshold 2, the FG bin is incremented at1374. Otherwise, the MID bin is incremented at1376. Note that for each pixel within the block, only one bin should be incremented. Once each pixel within the block has been compared, the values of the FG bin, the BG bin, and the MID bin are compared at1380to identify a distribution of pixel types. Pixel type distribution identification may be performed to determine whether or not the depth components of the MVs exhibit a bimodal distribution; a bimodal distribution may indicate the presence of two objects within the block: a foreground object and a background object. If a bimodal distribution is not detected, disregarding the MVs with extreme depth components may result in a more stable distribution. In such a case, the mid-range pixels should be averaged. However, since the disclosed methods create only three bins to classify pixels, and the number of pixels in each block may be small (e.g., 16), a bimodal distribution may appear to be skewed towards either the foreground or background bin. Either case may indicate the presence of a foreground object. The size of the predominant bin, herein labelled as N, may be given, for example, by a process specified by the following pseudo-code:
IF NMID < NFG THEN
    IF NFG > K1 * NBG THEN
        SET N = NFG
    ELSE
        SET N = NBG
    END IF
ELSE IF NMID < NBG THEN
    IF NFG > K2 * NBG THEN
        SET N = NFG
    ELSE
        SET N = NBG
    END IF
ELSE
    SET N = NMID
END IF
Note that the two constants, K1 and K2, may be chosen such that 0 < K1 < 2 and K1 < K2. Both constants may be determined empirically to achieve stable distributions in the depth components of the block MVs. In some embodiments, K1=K2=0. In this way, when a bimodal distribution is detected (e.g., where at least one foreground pixel and at least one background pixel are included in a block) such that a foreground object and a background object are detected in the block, only the MVs for the foreground pixels are averaged and set as the block MV for the block (even if more background pixels are present in the block than foreground pixels), which may allow for the preservation of the foreground object in the interpolated frame(s) that may otherwise be missed. When a bimodal distribution is not detected, only one object is detected in the block (whether foreground or background) and only the MVs for the mid-range pixels are averaged and set as the block MV for the block. At1382, method1350includes a step to average the MVs based on the distribution of pixel types. For example, the block MV may be given by a formula such as (MVx, MVy, depth)block = (1/N) * Σi=1..N (MVx, MVy, depth)pixel i, where N represents which bin is being averaged over, as determined by step1380above. The sum is performed over all pixelwise MVs (either MV0 or MV1) within the bin corresponding to N, e.g. if N=NBG, the sum is performed over all background pixels, etc. Addition here is performed according to standard vector addition, e.g. (x1, y1, z1) + (x2, y2, z2) = (x1+x2, y1+y2, z1+z2). Method1350then returns, using the averaged MV as its return value. FIG.14shows an example of processing the pixel MVs into the block MVs. In this case, a block of pixel MVs1440includes a collection of foreground range pixel MVs1454(shown inFIG.14with diagonal lines), a collection of background range pixel MVs1450(shown inFIG.14as a dotted pattern), and a collection of mid-range pixel MVs1452(shown inFIG.14as being cross-hatched). Depth is used to decide which range each pixel belongs to, e.g., pixel MVs within a first depth range are assigned as being background pixels, pixels within a second depth range are assigned as being mid-range pixels, and pixels of a third depth range are assigned as being foreground pixels. Since the majority (10 of 16) of pixel MVs within the block of pixel MVs1440fall within the foreground depth range, the depth values of the foreground range pixel MVs1454are averaged to produce a single depth value that is applied to all pixel MVs of the block MV, thereby generating a block MV1456. In addition, the x- and y-components (not shown) of the foreground MVs are also averaged and output in the block MV. The block MV is therefore composed of three semi-independent textures: the x-component, the y-component, and the depth component, each independently averaged within the pixels fitting within the depth range. This technique applies to both MV1 and MV0. 
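The binning and averaging just described can be illustrated with a schematic numpy sketch. It assumes 4x4 blocks of (MVx, MVy, depth) values, K1 = K2 = 0 as in some embodiments, and depth increasing away from the camera; the thresholds, bin selection, and averaging follow the formulas and pseudo-code above, but the actual implementation may differ.

import numpy as np

def block_mv(pixel_mvs: np.ndarray, k1: float = 0.0, k2: float = 0.0) -> np.ndarray:
    # pixel_mvs: array of shape (N, 3) holding (MVx, MVy, depth) for every pixel in
    # one block. Returns the single (MVx, MVy, depth) block MV for that block.
    depth = pixel_mvs[:, 2]
    d_min, d_max = depth.min(), depth.max()
    t1 = d_max - (d_max - d_min) / 4.0              # DepthThreshold1
    t2 = d_min + (d_max - d_min) / 4.0              # DepthThreshold2

    bg = pixel_mvs[depth > t1]                      # background bin: far pixels
    fg = pixel_mvs[depth < t2]                      # foreground bin: near pixels
    mid = pixel_mvs[(depth >= t2) & (depth <= t1)]  # mid-range bin

    # Choose the bin to average, mirroring the pseudo-code above.
    if len(mid) < len(fg):
        chosen = fg if len(fg) > k1 * len(bg) else bg
    elif len(mid) < len(bg):
        chosen = fg if len(fg) > k2 * len(bg) else bg
    else:
        chosen = mid

    return chosen.mean(axis=0)  # MVx, MVy, and depth averaged independently

With K1 = K2 = 0, the foreground MVs are selected whenever the mid-range bin is smaller than either of the other bins and at least one foreground pixel is present, which is consistent with the FIG.14 example in which the foreground pixels dominate and their averaged values determine the output block MV1456.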
The technical effect of generating an MV1 texture and an MV0 texture of a current frame, where the MV1 texture and the MV0 texture each include depth values, is that the MV1 texture and MV0 texture may be converted to MV1 and MV0 blocks, respectively, using the depth values so that the resulting motion vector may more closely match the motion occurring within the frame. Another technical effect is that the block MV1 and block MV0 with depth values may be used to perform frame interpolation, thereby increasing a framerate. The disclosure also provides support for a method, comprising: generating, for each pixel of one or more objects to be rendered in a current frame, a 1-phase motion vector (MV1) and a 0-phase motion vector (MV0), each MV1 and MV0 having an associated depth value, to thereby form an MV1 texture and an MV0 texture, each MV0 determined based on a camera MV0 and an object MV0, converting the MV1 texture to a set of MV1 pixel blocks and converting the MV0 texture to a set of MV0 pixel blocks, and outputting the set of MV1 pixel blocks and the set of MV0 pixel blocks for image processing. In a first example of the method, each MV1 represents a change in a position of a vertex at that pixel from the current frame to a previous frame, wherein each MV0 represents a change in a position of the vertex at that pixel from the previous frame to the current frame, wherein the camera MV0 represents a change in a position of the vertex at that pixel from the previous frame to the current frame due to a change in position or orientation of a virtual camera, and wherein the object MV0 represents a change in a position of the vertex at that pixel from the previous frame to the current frame due to a change in position of the object in a world space. In a second example of the method, optionally including the first example, the method further comprises: for a selected pixel, determining the camera MV0 for the selected pixel based on a world space position of the selected pixel's corresponding vertex in the previous frame. In a third example of the method, optionally including one or both of the first and second examples, determining the camera MV0 for the selected pixel based on the world space position of the selected pixel's corresponding vertex in the previous frame comprises: determining an eye space position of the vertex based on a depth of the selected vertex in the previous frame and a ray pointing from the virtual camera to the selected vertex on a near clip of a homogenous clip space and applying an inversed view matrix from the previous frame to the eye space position to determine the world space position, applying a first view matrix of the current frame to the world space position to generate a position of the selected vertex in the homogenous clip space, applying a second view matrix of the previous frame to the world space position to generate a preposition of the selected vertex in the homogenous clip space, and calculating the camera MV0 as a difference between the position and the preposition, in each of a vertical axis, a horizontal axis, and a depth axis. In a fourth example of the method, optionally including one or more or each of the first through third examples, the method further comprises: for a selected pixel, determining the object MV0 for the selected pixel based on an eye space position of a selected vertex corresponding to the selected pixel in the previous frame. 
In a fifth example of the method, optionally including one or more or each of the first through fourth examples, determining the object MV0 for the selected pixel based on the eye space position of the selected vertex in the previous frame comprises: calculating the eye space position of the selected vertex in the previous frame and calculating an eye space position of the selected vertex in the current frame, each based on a corresponding model matrix and view projection matrix, interpolating the eye space position in the previous frame and the eye space position in the current frame into a homogenous clip space to generate an input position and an input preposition, respectively, calculating the object MV0 as a difference between the input position and the input preposition, in each of a vertical axis, a horizontal axis, and a depth axis, and assigning the object MV0 to the selected pixel based on the on-screen location of the selected vertex. In a sixth example of the method, optionally including one or more or each of the first through fifth examples, converting the MV1 texture to the set of MV1 pixel blocks comprises identifying, for each pixel block of a plurality of pixel blocks of the MV1 texture, a distribution of pixel types in that pixel block and converting the MV1 texture to the set of MV1 pixel blocks based on the distribution of pixel types for each pixel block. In a seventh example of the method, optionally including one or more or each of the first through sixth examples, converting the MV1 texture to the set of MV1 pixel blocks based on the distribution of pixel types for each pixel block comprises: if the distribution of a selected pixel block is bimodal such that at least one background pixel and at least one foreground pixel are present in the selected pixel block, calculating an average MV1 over only foreground pixels in the selected pixel block, otherwise calculating the average MV1 over only mid-range pixels in the selected pixel block. In an eighth example of the method, optionally including one or more or each of the first through seventh examples, converting the MV0 texture to the set of MV0 pixel blocks comprises identifying, for each pixel block of a plurality of pixel blocks of the MV0 texture, a distribution of pixel types in that pixel block and converting the MV0 texture to the set of MV0 pixel blocks based on the distribution of pixel types for each pixel block. In a ninth example of the method, optionally including one or more or each of the first through eighth examples, converting the MV0 texture to the set of MV0 pixel blocks based on the distribution of pixel types for each pixel block comprises: if the distribution of a selected pixel block is bimodal such that at least one background pixel and at least one foreground pixel are present in the selected pixel block, calculating an average MV0 over only foreground pixels in the selected pixel block, otherwise calculating the average MV0 over only mid-range pixels in the selected pixel block. In a tenth example of the method, optionally including one or more or each of the first through ninth examples, the method further comprises: applying a double-confirm process to each MV0 and each MV1, wherein the double-confirm process includes confirming each associated depth value. 
In an eleventh example of the method, optionally including one or more or each of the first through tenth examples, applying the double-confirm process includes, for a selected pixel in the current frame, double confirming the MV1 or MV0 for the selected pixel responsive to coordinates of the selected pixel in the current frame matching coordinates of that pixel mapped back to the previous frame using the MV1 or MV0 for the selected pixel, the coordinates of the selected pixel including a depth coordinate. The disclosure also provides support for a system for rendering 3D graphics, comprising: one or more processors and non-transitory memory allocated to form: a motion vector module configured to receive 3D model information from a game engine and configured to output a plurality of 1-phase motion vectors (MV1s) and a plurality of 0-phase motion vectors (MV0s) based on the 3D model information, each MV1 comprising a change in a vertical position, a change in a horizontal position, and a change in a depth position of an object at a respective vertex from a current frame to a previous frame, each MV0 comprising a change in a vertical position, a change in a horizontal position, and a change in a depth position at a respective vertex from the previous frame to the current frame, wherein the motion vector module caches a subset of the 3D model information in order to calculate each MV0, including caching a depth buffer and an inversed view matrix from the previous frame, and a block motion vector module configured to generate a block MV1 texture and a block MV0 texture from the plurality of MV1s and the plurality of MV0s, respectively, and output the block MV1 texture and the block MV0 texture for image processing in order to form an image to be displayed on a display, wherein the block MV1 texture comprises a plurality of MV1 blocks each formed from a respective subset of the plurality of MV1s, and wherein the block MV0 texture comprises a plurality of MV0 blocks each formed from a respective subset of the plurality of MV0s. In a first example of the system, the block MV1 texture and the block MV0 texture are usable to interpolate a frame between the current frame and the previous frame. In a second example of the system, optionally including the first example, each MV0 is determined based on a respective camera MV0 and a respective object MV0, where a camera MV0 for a selected vertex is determined based on a world space position of the selected vertex in the previous frame determined based on the cached depth buffer and inversed view matrix, where an object MV0 for the selected vertex is determined based on an eye space position of the selected vertex in the previous frame and an eye space position of the selected vertex in the current frame. In a third example of the system, optionally including one or both of the first and second examples, each MV1 is assigned a respective pixel based on a location of a corresponding vertex. In a fourth example of the system, optionally including one or more or each of the first through third examples, each MV0 is assigned a respective pixel based on a location of a corresponding vertex. 
In a fifth example of the system, optionally including one or more or each of the first through fourth examples, each MV1 block comprises an average change in horizontal position for that respective subset of the plurality of MV1s, an average change in vertical position for that respective subset of the plurality of MV1s, and an average depth value for that respective subset of the plurality of MV1s, and wherein each MV0 block comprises an average change in horizontal position for that respective subset of the plurality of MV0s, an average change in vertical position for that respective subset of the plurality of MV0s, and an average depth value for that respective subset of the plurality of MV0s. The disclosure also provides support for a method, comprising: receiving, from a game engine, 3D model information usable to render a first current frame on a 2D screen, caching a subset of the 3D model information, including caching a depth texture and an inversed view projection matrix, receiving, from the game engine, updated 3D model information usable to render a second current frame on the 2D screen, including an updated depth texture and an updated inversed view projection matrix, calculating a plurality of 0-phase motion vectors from the updated 3D model information and the cached depth texture and inversed view projection matrix, and outputting the plurality of 0-phase motion vectors as block 0-phase motion vectors for image processing. In a first example of the method, the 3D model information includes the first current frame and the updated 3D model information includes the second current frame, and further comprising calculating a plurality of 1-phase motion vectors from the updated 3D model information, the updated depth texture, and the inversed view projection matrix, and outputting the plurality of 1-phase motion vectors as block 1-phase motion vectors for image processing. As used herein, an element or step recited in the singular and preceded with the word "a" or "an" should be understood as not excluding plural of said elements or steps, unless such exclusion is explicitly stated. Furthermore, references to "one embodiment" of the present invention are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments "comprising," "including," or "having" an element or a plurality of elements having a particular property may include additional such elements not having that property. The terms "including" and "in which" are used as the plain-language equivalents of the respective terms "comprising" and "wherein." Moreover, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements or a particular positional order on their objects. This written description uses examples to disclose the invention, including the best mode, and also to enable a person of ordinary skill in the relevant art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those of ordinary skill in the art. 
Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims.
72,429
11861784
DETAILED DESCRIPTION In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. In the drawings, specific arrangements or orderings of schematic elements, such as those representing devices, modules, instruction blocks and data elements, are shown for ease of description. However, it should be understood by those skilled in the art that the specific ordering or arrangement of the schematic elements in the drawings is not meant to imply that a particular order or sequence of processing, or separation of processes, is required. Further, the inclusion of a schematic element in a drawing is not meant to imply that such element is required in all embodiments or that the features represented by such element may not be included in or combined with other elements in some embodiments. Further, in the drawings, where connecting elements, such as solid or dashed lines or arrows, are used to illustrate a connection, relationship, or association between or among two or more other schematic elements, the absence of any such connecting elements is not meant to imply that no connection, relationship, or association can exist. In other words, some connections, relationships, or associations between elements are not shown in the drawings so as not to obscure the disclosure. In addition, for ease of illustration, a single connecting element is used to represent multiple connections, relationships or associations between elements. For example, where a connecting element represents a communication of signals, data, or instructions, it should be understood by those skilled in the art that such element represents one or multiple signal paths (e.g., a bus), as may be needed, to affect the communication. Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments. Several features are described hereafter that can each be used independently of one another or with any combination of other features. However, any individual feature may not address any of the problems discussed above or might only address one of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Although headings are provided, information related to a particular heading, but not found in the section having that heading, may also be found elsewhere in this description. Embodiments are described herein according to the following outline: 1. General Overview 2. System Overview 3. Autonomous Vehicle Architecture 4. Autonomous Vehicle Inputs 5. Autonomous Vehicle Planning 6. Autonomous Vehicle Control 7. Environment for Determining Optimal Sensors 8. 
Architecture for Determining Optimal Sensors 9. Examples of Determining Optimal Sensors 10. Processes for Determining Optimal Sensors General Overview An autonomous vehicle (AV) uses sensors to detect objects and determine distances from objects during navigation within an environment. The sensors include visual sensors such as cameras and LIDAR. A LIDAR is a remote sensing device that uses a grid of pulsed laser beams to measure a distance from an object to the device, a direction in which the object lies, as well as a reflectance of a surface of the object. In embodiments herein, different types of sensors and LIDAR devices are simulated using a model of a virtual AV in a controlled environment to improve the accuracy of LIDAR operation, increase the fidelity of the simulation scenarios, and derive more meaningful results from simulation miles driven by the AV. The method generates a simulated point cloud of a spinning LIDAR using image rasterization. One or more processors define a size of a rectangular output viewport in pixels. The viewport has a height corresponding to a number of rays emitted by the LIDAR. The viewport has a width corresponding to a density of the LIDAR simulation. A viewing range of the LIDAR is subdivided into frustums. A number of the frustums is defined corresponding to an accuracy of the simulation. The viewport is also subdivided by the number of frustums. The method maps a near plane of each frustum onto a corresponding section on the viewport. World coordinate positions of the environmental geometry are interpolated and rendered onto the viewport using rasterization. The resulting render is a 360° cylindrical view wrapped onto a 2D surface. The resulting render contains the simulated LIDAR point cloud values, which are used for determination of an optimal spatiotemporal sensor configuration for navigation of the AV. System Overview FIG.1illustrates an example of an autonomous vehicle100having autonomous capability. As used herein, the term “autonomous capability” refers to a function, feature, or facility that enables a vehicle to be partially or fully operated without real-time human intervention, including without limitation fully autonomous vehicles, highly autonomous vehicles, and conditionally autonomous vehicles. As used herein, an autonomous vehicle (AV) is a vehicle that possesses autonomous capability. As used herein, “vehicle” includes means of transportation of goods or people. For example, cars, buses, trains, airplanes, drones, trucks, boats, ships, submersibles, dirigibles, etc. A driverless car is an example of a vehicle. As used herein, “trajectory” refers to a path or route to navigate an AV from a first spatiotemporal location to second spatiotemporal location. In an embodiment, the first spatiotemporal location is referred to as the initial or starting location and the second spatiotemporal location is referred to as the destination, final location, goal, goal position, or goal location. In some examples, a trajectory is made up of one or more segments (e.g., sections of road) and each segment is made up of one or more blocks (e.g., portions of a lane or intersection). In an embodiment, the spatiotemporal locations correspond to real world locations. For example, the spatiotemporal locations are pick up or drop-off locations to pick up or drop-off persons or goods. As used herein, “sensor(s)” includes one or more hardware components that detect information about the environment surrounding the sensor. 
Some of the hardware components can include sensing components (e.g., image sensors, biometric sensors), transmitting and/or receiving components (e.g., laser or radio frequency wave transmitters and receivers), electronic components such as analog-to-digital converters, a data storage device (such as a RAM and/or a nonvolatile storage), software or firmware components and data processing components such as an ASIC (application-specific integrated circuit), a microprocessor and/or a microcontroller. As used herein, a “scene description” is a data structure (e.g., list) or data stream that includes one or more classified or labeled objects detected by one or more sensors on the AV vehicle or provided by a source external to the AV. As used herein, a “road” is a physical area that can be traversed by a vehicle, and may correspond to a named thoroughfare (e.g., city street, interstate freeway, etc.) or may correspond to an unnamed thoroughfare (e.g., a driveway in a house or office building, a section of a parking lot, a section of a vacant lot, a dirt path in a rural area, etc.). Because some vehicles (e.g., 4-wheel-drive pickup trucks, sport utility vehicles, etc.) are capable of traversing a variety of physical areas not specifically adapted for vehicle travel, a “road” may be a physical area not formally defined as a thoroughfare by any municipality or other governmental or administrative body. As used herein, a “lane” is a portion of a road that can be traversed by a vehicle and may correspond to most or all of the space between lane markings, or may correspond to only some (e.g., less than 50%) of the space between lane markings. For example, a road having lane markings spaced far apart might accommodate two or more vehicles between the markings, such that one vehicle can pass the other without traversing the lane markings, and thus could be interpreted as having a lane narrower than the space between the lane markings or having two lanes between the lane markings. A lane could also be interpreted in the absence of lane markings. For example, a lane may be defined based on physical features of an environment, e.g., rocks and trees along a thoroughfare in a rural area. “One or more” includes a function being performed by one element, a function being performed by more than one element, e.g., in a distributed fashion, several functions being performed by one element, several functions being performed by several elements, or any combination of the above. It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact. The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. 
It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “includes,” and/or “including,” when used in this description, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context. As used herein, an AV system refers to the AV along with the array of hardware, software, stored data, and data generated in real-time that supports the operation of the AV. In an embodiment, the AV system is incorporated within the AV. In an embodiment, the AV system is spread across several locations. For example, some of the software of the AV system is implemented on a cloud computing environment similar to cloud computing environment300described below with respect toFIG.3. In general, this document describes technologies applicable to any vehicles that have one or more autonomous capabilities including fully autonomous vehicles, highly autonomous vehicles, and conditionally autonomous vehicles, such as so-called Level 5, Level 4 and Level 3 vehicles, respectively (see SAE International's standard J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, which is incorporated by reference in its entirety, for more details on the classification of levels of autonomy in vehicles). The technologies described in this document are also applicable to partially autonomous vehicles and driver assisted vehicles, such as so-called Level 2 and Level 1 vehicles (see SAE International's standard J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems). In an embodiment, one or more of the Level 1, 2, 3, 4 and 5 vehicle systems may automate certain vehicle operations (e.g., steering, braking, and using maps) under certain operating conditions based on processing of sensor inputs. The technologies described in this document can benefit vehicles in any levels, ranging from fully autonomous vehicles to human-operated vehicles. Referring toFIG.1, an AV system120operates the AV100along a trajectory198through an environment190to a destination199(sometimes referred to as a final location) while avoiding objects (e.g., natural obstructions191, vehicles193, pedestrians192, cyclists, and other obstacles) and obeying rules of the road (e.g., rules of operation or driving preferences). In an embodiment, the AV system120includes devices101that are instrumented to receive and act on operational commands from the computer processors146. In an embodiment, computing processors146are similar to the processor304described below in reference toFIG.3. 
Examples of devices101include a steering control102, brakes103, gears, accelerator pedal or other acceleration control mechanisms, windshield wipers, side-door locks, window controls, and turn-indicators. In an embodiment, the AV system120includes sensors121for measuring or inferring properties of state or condition of the AV100, such as the AV's position, linear velocity and acceleration, angular velocity and acceleration, and heading (e.g., an orientation of the leading end of AV100). Examples of sensors121are GNSS, inertial measurement units (IMU) that measure both vehicle linear accelerations and angular rates, wheel speed sensors for measuring or estimating wheel slip ratios, wheel brake pressure or braking torque sensors, engine torque or wheel torque sensors, and steering angle and angular rate sensors. In an embodiment, the sensors121also include sensors for sensing or measuring properties of the AV's environment. For example, monocular or stereo video cameras122in the visible light, infrared or thermal (or both) spectra, LIDAR123, RADAR, ultrasonic sensors, time-of-flight (TOF) depth sensors, speed sensors, temperature sensors, humidity sensors, and precipitation sensors. In an embodiment, the AV system120includes a data storage unit142and memory144for storing machine instructions associated with computer processors146or data collected by sensors121. In an embodiment, the data storage unit142is similar to the ROM308or storage device310described below in relation toFIG.3. In an embodiment, memory144is similar to the main memory306described below. In an embodiment, the data storage unit142and memory144store historical, real-time, and/or predictive information about the environment190. In an embodiment, the stored information includes maps, driving performance, traffic congestion updates or weather conditions. In an embodiment, data relating to the environment190is transmitted to the AV100via a communications channel from a remotely located database134. In an embodiment, the AV system120includes communications devices140for communicating measured or inferred properties of other vehicles' states and conditions, such as positions, linear and angular velocities, linear and angular accelerations, and linear and angular headings to the AV100. These devices include Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication devices and devices for wireless communications over point-to-point or ad hoc networks or both. In an embodiment, the communications devices140communicate across the electromagnetic spectrum (including radio and optical communications) or other media (e.g., air and acoustic media). A combination of Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication (and, in some embodiments, one or more other types of communication) is sometimes referred to as Vehicle-to-Everything (V2X) communication. V2X communication typically conforms to one or more communications standards for communication with, between, and among autonomous vehicles. In an embodiment, the communication devices140include communication interfaces. For example, wired, wireless, WiMAX, Wi-Fi, Bluetooth, satellite, cellular, optical, near field, infrared, or radio interfaces. The communication interfaces transmit data from a remotely located database134to AV system120. In an embodiment, the remotely located database134is embedded in a cloud computing environment200as described inFIG.2.
The communication interfaces140transmit data collected from sensors121or other data related to the operation of AV100to the remotely located database134. In an embodiment, communication interfaces140transmit information that relates to teleoperations to the AV100. In some embodiments, the AV100communicates with other remote (e.g., “cloud”) servers136. In an embodiment, the remotely located database134also stores and transmits digital data (e.g., storing data such as road and street locations). Such data is stored on the memory144on the AV100, or transmitted to the AV100via a communications channel from the remotely located database134. In an embodiment, the remotely located database134stores and transmits historical information about driving properties (e.g., speed and acceleration profiles) of vehicles that have previously traveled along trajectory198at similar times of day. In one implementation, such data may be stored on the memory144on the AV100, or transmitted to the AV100via a communications channel from the remotely located database134. Computing devices146located on the AV100algorithmically generate control actions based on both real-time sensor data and prior information, allowing the AV system120to execute its autonomous driving capabilities. In an embodiment, the AV system120includes computer peripherals132coupled to computing devices146for providing information and alerts to, and receiving input from, a user (e.g., an occupant or a remote user) of the AV100. In an embodiment, peripherals132are similar to the display312, input device314, and cursor controller316discussed below in reference toFIG.3. The coupling is wireless or wired. Any two or more of the interface devices may be integrated into a single device. Example Cloud Computing Environment FIG.2illustrates an example “cloud” computing environment. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services). In typical cloud computing systems, one or more large cloud data centers house the machines used to deliver the services provided by the cloud. Referring now toFIG.2, the cloud computing environment200includes cloud data centers204a,204b, and204cthat are interconnected through the cloud202. Data centers204a,204b, and204cprovide cloud computing services to computer systems206a,206b,206c,206d,206e, and206fconnected to cloud202. The cloud computing environment200includes one or more cloud data centers. In general, a cloud data center, for example the cloud data center204ashown inFIG.2, refers to the physical arrangement of servers that make up a cloud, for example the cloud202shown inFIG.2, or a particular portion of a cloud. For example, servers are physically arranged in the cloud datacenter into rooms, groups, rows, and racks. A cloud datacenter has one or more zones, which include one or more rooms of servers. Each room has one or more rows of servers, and each row includes one or more racks. Each rack includes one or more individual server nodes. In some implementation, servers in zones, rooms, racks, and/or rows are arranged into groups based on physical infrastructure requirements of the datacenter facility, which include power, energy, thermal, heat, and/or other requirements. In an embodiment, the server nodes are similar to the computer system described inFIG.3. 
The data center204ahas many computing systems distributed through many racks. The cloud202includes cloud data centers204a,204b, and204calong with the network and networking resources (for example, networking equipment, nodes, routers, switches, and networking cables) that interconnect the cloud data centers204a,204b, and204cand help facilitate the computing systems'206a-faccess to cloud computing services. In an embodiment, the network represents any combination of one or more local networks, wide area networks, or internetworks coupled using wired or wireless links deployed using terrestrial or satellite connections. Data exchanged over the network, is transferred using any number of network layer protocols, such as Internet Protocol (IP), Multiprotocol Label Switching (MPLS), Asynchronous Transfer Mode (ATM), Frame Relay, etc. Furthermore, in embodiments where the network represents a combination of multiple sub-networks, different network layer protocols are used at each of the underlying sub-networks. In some embodiments, the network represents one or more interconnected internetworks, such as the public Internet. The computing systems206a-for cloud computing services consumers are connected to the cloud202through network links and network adapters. In an embodiment, the computing systems206a-fare implemented as various computing devices, for example servers, desktops, laptops, tablet, smartphones, Internet of Things (IoT) devices, autonomous vehicles (including, cars, drones, shuttles, trains, buses, etc.) and consumer electronics. In an embodiment, the computing systems206a-fare implemented in or as a part of other systems. Computer System FIG.3illustrates a computer system300. In an implementation, the computer system300is a special purpose computing device. The special-purpose computing device is hard-wired to perform the techniques or includes digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. In various embodiments, the special-purpose computing devices are desktop computer systems, portable computer systems, handheld devices, network devices or any other device that incorporates hard-wired and/or program logic to implement the techniques. In an embodiment, the computer system300includes a bus302or other communication mechanism for communicating information, and a hardware processor304coupled with a bus302for processing information. The hardware processor304is, for example, a general-purpose microprocessor. The computer system300also includes a main memory306, such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus302for storing information and instructions to be executed by processor304. In one implementation, the main memory306is used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor304. 
Such instructions, when stored in non-transitory storage media accessible to the processor304, render the computer system300into a special-purpose machine that is customized to perform the operations specified in the instructions. In an embodiment, the computer system300further includes a read only memory (ROM)308or other static storage device coupled to the bus302for storing static information and instructions for the processor304. A storage device310, such as a magnetic disk, optical disk, solid-state drive, or three-dimensional cross point memory is provided and coupled to the bus302for storing information and instructions. In an embodiment, the computer system300is coupled via the bus302to a display312, such as a cathode ray tube (CRT), a liquid crystal display (LCD), plasma display, light emitting diode (LED) display, or an organic light emitting diode (OLED) display for displaying information to a computer user. An input device314, including alphanumeric and other keys, is coupled to bus302for communicating information and command selections to the processor304. Another type of user input device is a cursor controller316, such as a mouse, a trackball, a touch-enabled display, or cursor direction keys for communicating direction information and command selections to the processor304and for controlling cursor movement on the display312. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x-axis) and a second axis (e.g., y-axis), that allows the device to specify positions in a plane. According to one embodiment, the techniques herein are performed by the computer system300in response to the processor304executing one or more sequences of one or more instructions contained in the main memory306. Such instructions are read into the main memory306from another storage medium, such as the storage device310. Execution of the sequences of instructions contained in the main memory306causes the processor304to perform the process steps described herein. In alternative embodiments, hard-wired circuitry is used in place of or in combination with software instructions. The term “storage media” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media includes non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, solid-state drives, or three-dimensional cross point memory, such as the storage device310. Volatile media includes dynamic memory, such as the main memory306. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NV-RAM, or any other memory chip or cartridge. Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that include the bus302. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infrared data communications. In an embodiment, various forms of media are involved in carrying one or more sequences of one or more instructions to the processor304for execution. 
For example, the instructions are initially carried on a magnetic disk or solid-state drive of a remote computer. The remote computer loads the instructions into its dynamic memory and sends the instructions over a telephone line using a modem. A modem local to the computer system300receives the data on the telephone line and uses an infrared transmitter to convert the data to an infrared signal. An infrared detector receives the data carried in the infrared signal and appropriate circuitry places the data on the bus302. The bus302carries the data to the main memory306, from which processor304retrieves and executes the instructions. The instructions received by the main memory306may optionally be stored on the storage device310either before or after execution by processor304. The computer system300also includes a communication interface318coupled to the bus302. The communication interface318provides a two-way data communication coupling to a network link320that is connected to a local network322. For example, the communication interface318is an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface318is a local area network (LAN) card to provide a data communication connection to a compatible LAN. In some implementations, wireless links are also implemented. In any such implementation, the communication interface318sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. The network link320typically provides data communication through one or more networks to other data devices. For example, the network link320provides a connection through the local network322to a host computer324or to a cloud data center or equipment operated by an Internet Service Provider (ISP)326. The ISP326in turn provides data communication services through the world-wide packet data communication network now commonly referred to as the “Internet”328. The local network322and Internet328both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link320and through the communication interface318, which carry the digital data to and from the computer system300, are example forms of transmission media. In an embodiment, the network320contains the cloud202or a part of the cloud202described above. The computer system300sends messages and receives data, including program code, through the network(s), the network link320, and the communication interface318. In an embodiment, the computer system300receives code for processing. The received code is executed by the processor304as it is received, and/or stored in storage device310, or other non-volatile storage for later execution. Autonomous Vehicle Architecture FIG.4illustrates an example architecture400for an autonomous vehicle (e.g., the AV100shown inFIG.1). The architecture400includes a perception module402(sometimes referred to as a perception circuit), a planning module404(sometimes referred to as a planning circuit), a control module406(sometimes referred to as a control circuit), a localization module408(sometimes referred to as a localization circuit), and a database module410(sometimes referred to as a database circuit). Each module plays a role in the operation of the AV100.
Together, the modules402,404,406,408, and410may be part of the AV system120shown inFIG.1. In some embodiments, any of the modules402,404,406,408, and410is a combination of computer software (e.g., executable code stored on a computer-readable medium) and computer hardware (e.g., one or more microprocessors, microcontrollers, application-specific integrated circuits [ASICs]), hardware memory devices, other types of integrated circuits, other types of computer hardware, or a combination of any or all of these things). In use, the planning module404receives data representing a destination412and determines data representing a trajectory414(sometimes referred to as a route) that can be traveled by the AV100to reach (e.g., arrive at) the destination412. In order for the planning module404to determine the data representing the trajectory414, the planning module404receives data from the perception module402, the localization module408, and the database module410. The perception module402identifies nearby physical objects using one or more sensors121, e.g., as also shown inFIG.1. The objects are classified (e.g., grouped into types such as pedestrian, bicycle, automobile, traffic sign, etc.) and a scene description including the classified objects416is provided to the planning module404. The planning module404also receives data representing the AV position418from the localization module408. The localization module408determines the AV position by using data from the sensors121and data from the database module410(e.g., a geographic data) to calculate a position. For example, the localization module408uses data from a GNSS (Global Navigation Satellite System) sensor and geographic data to calculate a longitude and latitude of the AV. In an embodiment, data used by the localization module408includes high-precision maps of the roadway geometric properties, maps describing road network connectivity properties, maps describing roadway physical properties (such as traffic speed, traffic volume, the number of vehicular and cyclist traffic lanes, lane width, lane traffic directions, or lane marker types and locations, or combinations of them), and maps describing the spatial locations of road features such as crosswalks, traffic signs or other travel signals of various types. The control module406receives the data representing the trajectory414and the data representing the AV position418and operates the control functions420a-c(e.g., steering, throttling, braking, ignition) of the AV in a manner that will cause the AV100to travel the trajectory414to the destination412. For example, if the trajectory414includes a left turn, the control module406will operate the control functions420a-cin a manner such that the steering angle of the steering function will cause the AV100to turn left and the throttling and braking will cause the AV100to pause and wait for passing pedestrians or vehicles before the turn is made. Autonomous Vehicle Inputs FIG.5illustrates an example of inputs502a-d(e.g., sensors121shown inFIG.1) and outputs504a-d(e.g., sensor data) that is used by the perception module402(FIG.4). One input502ais a LIDAR (Light Detection and Ranging) system (e.g., LIDAR123shown inFIG.1). LIDAR is a technology that uses light (e.g., bursts of light such as infrared light) to obtain data about physical objects in its line of sight. A LIDAR system produces LIDAR data as output504a. 
For example, LIDAR data is a collection of 3D or 2D points (also known as a point cloud) that is used to construct a representation of the environment190. Another input502bis a RADAR system. RADAR is a technology that uses radio waves to obtain data about nearby physical objects. RADARs can obtain data about objects not within the line of sight of a LIDAR system. A RADAR system502bproduces RADAR data as output504b. For example, RADAR data are one or more radio frequency electromagnetic signals that are used to construct a representation of the environment190. Another input502cis a camera system. A camera system uses one or more cameras (e.g., digital cameras using a light sensor such as a charge-coupled device [CCD]) to obtain information about nearby physical objects. A camera system produces camera data as output504c. Camera data often takes the form of image data (e.g., data in an image data format such as RAW, JPEG, PNG, etc.). In some examples, the camera system has multiple independent cameras, e.g., for the purpose of stereopsis (stereo vision), which enables the camera system to perceive depth. Although the objects perceived by the camera system are described here as "nearby," this is relative to the AV. In use, the camera system may be configured to "see" objects that are far away, e.g., up to a kilometer or more ahead of the AV. Accordingly, the camera system may have features such as sensors and lenses that are optimized for perceiving objects that are far away. Another input502dis a traffic light detection (TLD) system. A TLD system uses one or more cameras to obtain information about traffic lights, street signs, and other physical objects that provide visual navigation information. A TLD system produces TLD data as output504d. TLD data often takes the form of image data (e.g., data in an image data format such as RAW, JPEG, PNG, etc.). A TLD system differs from a system incorporating a camera in that a TLD system uses a camera with a wide field of view (e.g., using a wide-angle lens or a fish-eye lens) in order to obtain information about as many physical objects providing visual navigation information as possible, so that the AV100has access to all relevant navigation information provided by these objects. For example, the viewing angle of the TLD system may be about 120 degrees or more. In some embodiments, outputs504a-dare combined using a sensor fusion technique. Thus, either the individual outputs504a-dare provided to other systems of the AV100(e.g., provided to a planning module404as shown inFIG.4), or the combined output can be provided to the other systems, either in the form of a single combined output or multiple combined outputs of the same type (e.g., using the same combination technique or combining the same outputs or both) or different types (e.g., using different respective combination techniques or combining different respective outputs or both). In some embodiments, an early fusion technique is used. An early fusion technique is characterized by combining outputs before one or more data processing steps are applied to the combined output. In some embodiments, a late fusion technique is used. A late fusion technique is characterized by combining outputs after one or more data processing steps are applied to the individual outputs. FIG.6illustrates an example of a LIDAR system602(e.g., the input502ashown inFIG.5). The LIDAR system602emits light604a-cfrom a light emitter606(e.g., a laser transmitter).
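As an illustrative aside, the distinction between the early and late fusion techniques described above can be sketched in a few lines of Python. The arrays and the process function below are hypothetical stand-ins; actual outputs504a-dwould be point clouds and images rather than short vectors, and the processing steps would be far more involved.

```python
import numpy as np

def process(data: np.ndarray) -> np.ndarray:
    """Stand-in for a data processing step (e.g., filtering or feature extraction)."""
    return data - data.mean()

def early_fusion(outputs):
    """Combine the raw sensor outputs first, then process the combined result."""
    combined = np.concatenate(outputs)
    return process(combined)

def late_fusion(outputs):
    """Process each sensor output individually, then combine the processed results."""
    return np.concatenate([process(o) for o in outputs])

# Example with two fake sensor outputs.
lidar_like = np.array([1.0, 2.0, 3.0])
radar_like = np.array([10.0, 20.0])
print(early_fusion([lidar_like, radar_like]))
print(late_fusion([lidar_like, radar_like]))
```

The only difference between the two sketches is whether the processing step runs before or after the outputs are combined, which mirrors the definitions above.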
Light emitted by a LIDAR system is typically not in the visible spectrum; for example, infrared light is often used. Some of the light604bemitted encounters a physical object608(e.g., a vehicle) and reflects back to the LIDAR system602. (Light emitted from a LIDAR system typically does not penetrate physical objects, e.g., physical objects in solid form.) The LIDAR system602also has one or more light detectors610, which detect the reflected light. In an embodiment, one or more data processing systems associated with the LIDAR system generates an image612representing the field of view614of the LIDAR system. The image612includes information that represents the boundaries616of a physical object608. In this way, the image612is used to determine the boundaries616of one or more physical objects near an AV. FIG.7illustrates the LIDAR system602in operation. In the scenario shown in this figure, the AV100receives both camera system output504cin the form of an image702and LIDAR system output504ain the form of LIDAR data points704. In use, the data processing systems of the AV100compares the image702to the data points704. In particular, a physical object706identified in the image702is also identified among the data points704. In this way, the AV100perceives the boundaries of the physical object based on the contour and density of the data points704. FIG.8illustrates the operation of the LIDAR system602in additional detail. As described above, the AV100detects the boundary of a physical object based on characteristics of the data points detected by the LIDAR system602. As shown inFIG.8, a flat object, such as the ground802, will reflect light804a-demitted from a LIDAR system602in a consistent manner. Put another way, because the LIDAR system602emits light using consistent spacing, the ground802will reflect light back to the LIDAR system602with the same consistent spacing. As the AV100travels over the ground802, the LIDAR system602will continue to detect light reflected by the next valid ground point806if nothing is obstructing the road. However, if an object808obstructs the road, light804e-femitted by the LIDAR system602will be reflected from points810a-bin a manner inconsistent with the expected consistent manner. From this information, the AV100can determine that the object808is present. Path Planning FIG.9illustrates a block diagram900of the relationships between inputs and outputs of a planning module404(e.g., as shown inFIG.4). In general, the output of a planning module404is a route902from a start point904(e.g., source location or initial location), and an end point906(e.g., destination or final location). The route902is typically defined by one or more segments. For example, a segment is a distance to be traveled over at least a portion of a street, road, highway, driveway, or other physical area appropriate for automobile travel. In some examples, e.g., if the AV100is an off-road capable vehicle such as a four-wheel-drive (4WD) or all-wheel-drive (AWD) car, SUV, pick-up truck, or the like, the route902includes “off-road” segments such as unpaved paths or open fields. In addition to the route902, a planning module also outputs lane-level route planning data908. The lane-level route planning data908is used to traverse segments of the route902based on conditions of the segment at a particular time. 
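The ground-versus-obstacle reasoning described above forFIG.8can be reduced to a check on the spacing of consecutive returns. The one-dimensional ranges and the tolerance value below are illustrative assumptions, not parameters of the LIDAR system602.

```python
def detect_obstruction(ranges, expected_spacing, tolerance=0.2):
    """Flag an obstruction when the spacing between consecutive ground returns
    deviates from the expected, consistent spacing produced by flat ground."""
    for i in range(1, len(ranges)):
        spacing = ranges[i] - ranges[i - 1]
        if abs(spacing - expected_spacing) > tolerance * expected_spacing:
            return i  # index of the first inconsistent return
    return None

flat_ground = [2.0, 4.0, 6.0, 8.0, 10.0]   # consistent spacing: no obstruction
with_object = [2.0, 4.0, 6.0, 6.4, 6.5]    # returns bunch up at an object
print(detect_obstruction(flat_ground, expected_spacing=2.0))   # None
print(detect_obstruction(with_object, expected_spacing=2.0))   # 3
```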
For example, if the route902includes a multi-lane highway, the lane-level route planning data908includes trajectory planning data910that the AV100can use to choose a lane among the multiple lanes, e.g., based on whether an exit is approaching, whether one or more of the lanes have other vehicles, or other factors that vary over the course of a few minutes or less. Similarly, in some implementations, the lane-level route planning data908includes speed constraints912specific to a segment of the route902. For example, if the segment includes pedestrians or unexpected traffic, the speed constraints912may limit the AV100to a travel speed slower than an expected speed, e.g., a speed based on speed limit data for the segment. In an embodiment, the inputs to the planning module404include database data914(e.g., from the database module410shown inFIG.4), current location data916(e.g., the AV position418shown inFIG.4), destination data918(e.g., for the destination412shown inFIG.4), and object data920(e.g., the classified objects416as perceived by the perception module402as shown inFIG.4). In some embodiments, the database data914includes rules used in planning. Rules are specified using a formal language, e.g., using Boolean logic. In any given situation encountered by the AV100, at least some of the rules will apply to the situation. A rule applies to a given situation if the rule has conditions that are met based on information available to the AV100, e.g., information about the surrounding environment. Rules can have priority. For example, a rule that says, "if the road is a freeway, move to the leftmost lane" can have a lower priority than "if the exit is approaching within a mile, move to the rightmost lane." FIG.10illustrates a directed graph1000used in path planning, e.g., by the planning module404(FIG.4). In general, a directed graph1000like the one shown inFIG.10is used to determine a path between any start point1002and end point1004. In real-world terms, the distance separating the start point1002and end point1004may be relatively large (e.g., in two different metropolitan areas) or may be relatively small (e.g., two intersections abutting a city block or two lanes of a multi-lane road). In an embodiment, the directed graph1000has nodes1006a-drepresenting different locations between the start point1002and the end point1004that could be occupied by an AV100. In some examples, e.g., when the start point1002and end point1004represent different metropolitan areas, the nodes1006a-drepresent segments of roads. In some examples, e.g., when the start point1002and the end point1004represent different locations on the same road, the nodes1006a-drepresent different positions on that road. In this way, the directed graph1000includes information at varying levels of granularity. In an embodiment, a directed graph having high granularity is also a subgraph of another directed graph having a larger scale. For example, a directed graph in which the start point1002and the end point1004are far away (e.g., many miles apart) has most of its information at a low granularity and is based on stored data, but also includes some high granularity information for the portion of the graph that represents physical locations in the field of view of the AV100. The nodes1006a-dare distinct from objects1008a-b, which cannot overlap with a node. In an embodiment, when granularity is low, the objects1008a-brepresent regions that cannot be traversed by automobile, e.g., areas that have no streets or roads.
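The prioritized, Boolean-logic rules described above can be sketched as predicates paired with numeric priorities, where the highest-priority applicable rule determines the action. The rule encoding and field names below are hypothetical; they are one possible way to express the freeway and exit rules quoted above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Rule:
    condition: Callable[[Dict], bool]  # Boolean predicate over the current situation
    action: str
    priority: int                      # higher value wins

rules = [
    Rule(lambda s: s["road_type"] == "freeway",
         "move to the leftmost lane", priority=1),
    Rule(lambda s: s["exit_distance_miles"] is not None
                   and s["exit_distance_miles"] < 1.0,
         "move to the rightmost lane", priority=2),
]

def select_action(situation: Dict) -> Optional[str]:
    """Return the action of the highest-priority rule whose condition is met."""
    applicable = [r for r in rules if r.condition(situation)]
    if not applicable:
        return None
    return max(applicable, key=lambda r: r.priority).action

# Both rules apply, but the exit rule has higher priority.
print(select_action({"road_type": "freeway", "exit_distance_miles": 0.5}))
# -> "move to the rightmost lane"
```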
When granularity is high, the objects1008a-brepresent physical objects in the field of view of the AV100, e.g., other automobiles, pedestrians, or other entities with which the AV100cannot share physical space. In an embodiment, some or all of the objects1008a-bare static objects (e.g., an object that does not change position such as a street lamp or utility pole) or dynamic objects (e.g., an object that is capable of changing position such as a pedestrian or other car). The nodes1006a-dare connected by edges1010a-c. If two nodes1006a-bare connected by an edge1010a, it is possible for an AV100to travel between one node1006aand the other node1006b, e.g., without having to travel to an intermediate node before arriving at the other node1006b. (When we refer to an AV100traveling between nodes, we mean that the AV100travels between the two physical positions represented by the respective nodes.) The edges1010a-care often bidirectional, in the sense that an AV100travels from a first node to a second node, or from the second node to the first node. In an embodiment, edges1010a-care unidirectional, in the sense that an AV100can travel from a first node to a second node, but the AV100cannot travel from the second node to the first node. Edges1010a-care unidirectional when they represent, for example, one-way streets, individual lanes of a street, road, or highway, or other features that can only be traversed in one direction due to legal or physical constraints. In an embodiment, the planning module404uses the directed graph1000to identify a path1012made up of nodes and edges between the start point1002and end point1004. An edge1010a-chas an associated cost1014a-b. The cost1014a-bis a value that represents the resources that will be expended if the AV100chooses that edge. A typical resource is time. For example, if one edge1010arepresents a physical distance that is twice that of another edge1010b, then the associated cost1014aof the first edge1010amay be twice the associated cost1014bof the second edge1010b. Other factors that affect time include expected traffic, number of intersections, speed limit, etc. Another typical resource is fuel economy. Two edges1010a-bmay represent the same physical distance, but one edge1010amay require more fuel than another edge1010b, e.g., because of road conditions, expected weather, etc. When the planning module404identifies a path1012between the start point1002and end point1004, the planning module404typically chooses a path optimized for cost, e.g., the path that has the least total cost when the individual costs of the edges are added together. Autonomous Vehicle Control FIG.11illustrates a block diagram1100of the inputs and outputs of a control module406(e.g., as shown inFIG.4). A control module operates in accordance with a controller1102which includes, for example, one or more processors (e.g., one or more computer processors such as microprocessors or microcontrollers or both) similar to processor304, short-term and/or long-term data storage (e.g., random-access memory or flash memory or both) similar to main memory306, ROM308, and storage device310, and instructions stored in memory that carry out operations of the controller1102when the instructions are executed (e.g., by the one or more processors). In an embodiment, the controller1102receives data representing a desired output1104. The desired output1104typically includes a velocity, e.g., a speed and a heading.
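Choosing the path with the least total cost over the directed graph1000, as described above, can be illustrated with a standard shortest-path search. The string node names and numeric edge costs below are assumptions made for the sketch; the planning module404may use any equivalent graph search.

```python
import heapq

def least_cost_path(graph, start, goal):
    """Dijkstra's algorithm: graph maps node -> list of (neighbor, edge_cost)."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Edge costs might encode travel time or fuel use, as described above.
graph = {
    "start": [("a", 2.0), ("b", 5.0)],
    "a": [("goal", 3.0)],
    "b": [("goal", 1.0)],
}
print(least_cost_path(graph, "start", "goal"))  # (5.0, ['start', 'a', 'goal'])
```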
The desired output1104can be based on, for example, data received from a planning module404(e.g., as shown inFIG.4). In accordance with the desired output1104, the controller1102produces data usable as a throttle input1106and a steering input1108. The throttle input1106represents the magnitude by which to engage the throttle (e.g., acceleration control) of an AV100, e.g., by engaging the accelerator pedal, or engaging another throttle control, to achieve the desired output1104. In some examples, the throttle input1106also includes data usable to engage the brake (e.g., deceleration control) of the AV100. The steering input1108represents a steering angle, e.g., the angle at which the steering control (e.g., steering wheel, steering angle actuator, or other functionality for controlling steering angle) of the AV should be positioned to achieve the desired output1104. In an embodiment, the controller1102receives feedback that is used in adjusting the inputs provided to the throttle and steering. For example, if the AV100encounters a disturbance1110, such as a hill, the measured speed1112of the AV100is lowered below the desired output speed. In an embodiment, any measured output1114is provided to the controller1102so that the necessary adjustments are performed, e.g., based on the differential1113between the measured speed and desired output. The measured output1114includes measured position1116, measured velocity1118(including speed and heading), measured acceleration1120, and other outputs measurable by sensors of the AV100. In an embodiment, information about the disturbance1110is detected in advance, e.g., by a sensor such as a camera or LIDAR sensor, and provided to a predictive feedback module1122. The predictive feedback module1122then provides information to the controller1102that the controller1102can use to adjust accordingly. For example, if the sensors of the AV100detect ("see") a hill, this information can be used by the controller1102to prepare to engage the throttle at the appropriate time to avoid significant deceleration. FIG.12illustrates a block diagram1200of the inputs, outputs, and components of the controller1102. The controller1102has a speed profiler1202which affects the operation of a throttle/brake controller1204. For example, the speed profiler1202instructs the throttle/brake controller1204to engage acceleration or engage deceleration using the throttle/brake1206depending on, e.g., feedback received by the controller1102and processed by the speed profiler1202. The controller1102also has a lateral tracking controller1208which affects the operation of a steering controller1210. For example, the lateral tracking controller1208instructs the steering controller1210to adjust the position of the steering angle actuator1212depending on, e.g., feedback received by the controller1102and processed by the lateral tracking controller1208. The controller1102receives several inputs used to determine how to control the throttle/brake1206and steering angle actuator1212. A planning module404provides information used by the controller1102, for example, to choose a heading when the AV100begins operation and to determine which road segment to traverse when the AV100reaches an intersection.
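The feedback loop described above, in which the differential1113between measured and desired speed drives a throttle adjustment, can be sketched as a simple proportional controller. The gain and the numeric values are illustrative assumptions and not parameters of the controller1102, which in practice may use more sophisticated control laws.

```python
def proportional_throttle(desired_speed, measured_speed, gain=0.5):
    """Return a throttle adjustment proportional to the speed error.
    A positive error (too slow, e.g., climbing a hill) increases throttle;
    a negative error decreases it (or engages braking)."""
    error = desired_speed - measured_speed
    return gain * error

# The AV encounters a hill (disturbance) and slows from 25 m/s to 22 m/s.
print(proportional_throttle(desired_speed=25.0, measured_speed=22.0))  # 1.5 -> more throttle
print(proportional_throttle(desired_speed=25.0, measured_speed=27.0))  # -1.0 -> less throttle / brake
```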
A localization module408provides information to the controller1102describing the current location of the AV100, for example, so that the controller1102can determine if the AV100is at a location expected based on the manner in which the throttle/brake1206and steering angle actuator1212are being controlled. In an embodiment, the controller1102receives information from other inputs1214, e.g., information received from databases, computer networks, etc. Architecture for Determination of an Optimal Spatiotemporal Sensor Configuration FIG.13illustrates a block diagram of an architecture1300for determination of an optimal spatiotemporal sensor configuration for navigation of an AV1308using simulation of virtual sensors, in accordance with one or more embodiments. The architecture1300includes an environment1304within which the AV1308and objects1316,1320are located. The architecture1300also includes a remote server1324communicably coupled to the AV1308. In other embodiments, the architecture1300includes additional or fewer components than those described herein. Similarly, the functions can be distributed among the components and/or different entities in a different manner than is described here. The server1324performs computations used by the AV1308and other vehicles located within the environment1304and also stores data accessed by the AV1308and the other vehicles. The server1324may be an example of the server136shown inFIG.1. In one embodiment, the server1324may be a "cloud" server as described in more detail above with respect to server136inFIGS.1and2. Portions of the server1324may be implemented in software or hardware. For example, the server1324or a portion of the server1324may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. In one embodiment, illustrated inFIG.13, the server1324contains an AV sensor configurator1328. In other embodiments, the AV sensor configurator1328is located within the AV1308or within a component of the AV1308, such as the perception module1336, the control module1340, or the planning module404illustrated and described above with reference toFIG.4. Portions of the AV sensor configurator1328may be implemented in software or hardware. For example, the AV sensor configurator1328or a portion of the AV sensor configurator1328may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. The AV sensor configurator1328builds a model of a virtual AV for simulation in a controlled environment to determine an optimal spatiotemporal sensor configuration for navigation of the AV1308. The AV sensor configurator1328determines the optimal spatiotemporal sensor configuration by simulating virtual models of the visual sensors1344and odometry sensors1348of the AV1308. In one embodiment, the AV sensor configurator1328generates and segregates a virtual viewing range of a virtual sensor of the virtual AV into several frustums. An example virtual viewing range1530of a virtual sensor is illustrated and described below with reference toFIG.15B. The virtual viewing range of the virtual sensor corresponds to a viewing range of a real sensor, such as a visual sensor1344or an odometry sensor1348, of the AV1308. The AV sensor configurator1328generates a geometric viewport to simulate the virtual sensor.
An example geometric viewport1500of a virtual sensor is illustrated and described below with reference toFIG.15A. A height of the geometric viewport, expressed in pixels, corresponds to a number of rays emitted from the virtual sensor. The AV sensor configurator1328segregates the geometric viewport into a number of sections, wherein each section corresponds to one of the frustums. The example geometric viewport1500is illustrated segregated into sections below with reference toFIG.15C. The AV sensor configurator1328renders a virtual point cloud of the virtual sensor. The rendering of the virtual point cloud is described in detail below with reference toFIG.14. The virtual point cloud includes coordinate positions representing a portion of the environment1304that is located within the virtual viewing range of the virtual sensor. The AV sensor configurator1328determines, based on the virtual point cloud of the virtual sensor, an optimal spatiotemporal configuration of the visual sensors1344and odometry sensors1348of the AV1308. The structure and operation of the AV sensor configurator1328is described in detail below with reference toFIG.14. The sensor data store1332stores virtual sensor data generated by the AV sensor configurator1328as well as visual sensor data1352and odometry data1356generated by the visual sensors1344and the odometry sensors1348of the AV1308. The sensor data store1332is communicatively coupled to the AV sensor configurator1328. The data stored by the sensor data store1332is used by the AV sensor configurator1328for computation as well as by modules on the AV1308, such as the planning module404(inFIG.4) and the control module1340for navigation of the AV1308. The sensor data store1332may be organized as a database or table of images stored on one or more of removable or non-removable memory cards, tape cassettes, and computer hard drives. In one embodiment, the sensor data store1332may include multiple data fields, each describing one or more attributes of sensor data. For example, the sensor data store1332stores virtual point cloud data of a virtual sensor generated by the AV sensor configurator1328, sensor data1352generated by the visual sensors1344representing coordinate positions of the objects1316,1320, pixels representing the objects1316,1320, raster images representing the coordinate positions of the objects1316,1320, pixels of a geometric viewport (illustrated below inFIG.15), two-dimensional cylindrical representations of a surface of the objects1316,1320, reflectance values of a surface of the objects1316,1320, LIDAR point cloud data generated by the LIDAR123or LIDAR system602(illustrated and described above inFIGS.1and6), or odometry data1356representing the AV1308's position, velocity, acceleration, and orientation generated by the odometry sensors1348. The environment1304may be an example of the environment190illustrated and described above with reference toFIG.1. The environment1304represents a geographical area, such as a state, a town, a neighborhood, or a road network or segment. The environment1304includes the AV1308, and one or more objects1316,1320. The objects are physical entities external to the AV1308. In other embodiments, the environment1304includes additional or fewer components than those described herein. Similarly, the functions can be distributed among the components and/or different entities in a different manner than is described here. 
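The segregation of the geometric viewport into sections described above, where the viewport height in pixels corresponds to the number of rays emitted by the virtual sensor, can be sketched as a partition of pixel rows. The even row-to-frustum split below is an assumption for illustration; the AV sensor configurator1328is not limited to equal-sized sections.

```python
def segregate_viewport(height_pixels, num_frustums):
    """Partition viewport rows (one row per emitted ray) into contiguous sections,
    each section corresponding to one frustum of the virtual viewing range."""
    base, extra = divmod(height_pixels, num_frustums)
    sections = []
    start = 0
    for i in range(num_frustums):
        size = base + (1 if i < extra else 0)
        sections.append(range(start, start + size))
        start += size
    return sections

# A 64-ray virtual sensor whose viewing range is split into 4 frustums.
for i, rows in enumerate(segregate_viewport(64, 4)):
    print(f"frustum {i}: rows {rows.start}-{rows.stop - 1}")
```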
The objects1316,1320are located within the environment1304external to the AV1308and are examples of the objects416shown inFIGS.4and5. In one embodiment, the object1316is a static portion or aspect of the environment1304, such as a road segment, a traffic signal a building, a parking space located on a road segment, a highway exit or entrance ramp, a plurality of lanes of a drivable area of the environment1304orientated in the same direction, an elevation of the drivable area, a curb located adjacent to the drivable area, or a median separating two lanes of the drivable area. Static objects have more permanent characteristics of the environment1304that do not change every day. In driving mode, once sensor data representing static characteristics is mapped, the AV1308can focus on navigating and mapping other sensor data representing more dynamic characteristics, such as another vehicle. In one embodiment, the object1320is a more-dynamic object, such as another vehicle, a pedestrian, or a cyclist. The sensor data representing the dynamic characteristics of the object1320instructs the AV1308to perform collision prediction and reduce driving aggressiveness if needed. The objects1316,1320are described above in more detail with reference to the physical object608, boundaries616of a physical object608, the physical object706, the ground802, and the object808inFIGS.6,7, and8. In one embodiment, the objects1316,1320are other vehicles such as other AVs, semi-autonomous vehicles, or non-autonomous vehicles navigating or parked outside or within the environment1304. For example, a vehicle1316can enter and exit the environment1304during navigation as well as navigate within other environments. The vehicle1316may be part of the traffic experienced on roadways of the environment1304by the AV1308. In some embodiments, the vehicles1316,1320belong to one or more AV fleets. The AV1308is a partly-autonomous or fully autonomous vehicle that uses its visual sensors1344, odometry sensors1348, and control module1340to navigate around objects1316,1320while following a trajectory, for example, the trajectory198shown inFIG.1, within the environment1304. The AV1308includes a communication device1360, the perception module1336, the control module1340, a display device1364, the visual sensors1344, and the odometry sensors1348. The AV1308is communicatively coupled to the AV sensor configurator1328. The AV1308may be an example of the AV100inFIG.1. In other embodiments, the AV1308includes additional or fewer components than those described herein. Similarly, the functions can be distributed among the components and/or different entities in a different manner than is described here. The communication device1360communicates data such as sensor data1352generated by the visual sensors1344representing coordinate positions of the objects1316,1320, pixels representing the objects1316,1320, or raster images representing the coordinate positions of the objects1316,1320. In one embodiment, the communication device1360communicates data such as reflectance values of a surface of the objects1316,1320, LIDAR point cloud data generated by the LIDAR123or LIDAR system602(illustrated and described above inFIGS.1and6), or odometry data1356representing the AV1308's position, velocity, acceleration, and orientation generated by the odometry sensors1348. 
In one embodiment, the communication device1360communicates data such as measured or inferred properties of the AV1308's states and conditions with the server1324, a passenger within the AV1308, or other vehicles. The communication device1360may be an example of the communication device140shown inFIG.1. The communication device1360is communicatively coupled to the AV sensor configurator1328across a network. In an embodiment, the communication device1360communicates across the Internet, electromagnetic spectrum (including radio and optical communications) or other media (e.g., air and acoustic media). Portions of the communication device1360may be implemented in software or hardware. For example, the communication device1360or a portion of the communication device1360may be part of a PC, a tablet PC, an STB, a smartphone, an IoT appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. The communication device1360is described in more detail above with respect to communication device140inFIG.1. The perception module1336receives the visual sensor data1352from the visual sensors1344and the odometry data1356from the odometry sensors1348and performs object recognition and classification functions for the objects1316,1320. The perception module1336may be an example of the perception module402illustrated and described above with reference toFIG.4. The perception module1336is coupled to the AV sensor configurator1328to transmit the visual sensor data1352and the odometry data1356to the AV sensor configurator1328. Portions of the perception module1336may be implemented in software or hardware. For example, the perception module1336or a portion of the perception module1336may be part of a PC, a tablet PC, an STB, a smartphone, an IoT appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. In one embodiment, the perception module1336determines a reflectance of a surface of the object1316using the visual sensor data1352. The reflectance of the surface is the effectiveness of the surface in reflecting light. The reflectance of surfaces of the objects1316,1320is used by the planning module404, perception module1336, or control module1340to navigate the AV1308around the objects1316,1320. The control module1340uses inputs from the planning module404and the perception module1336to operate the brakes420c, steering420a, and throttle420b(illustrated and described above with reference toFIG.4) to navigate the AV1308within the environment1304. The control module1340may be an example of the control module406illustrated and described above with reference toFIG.4. The control module1340is coupled to the perception module1336. Portions of the control module1340may be implemented in software or hardware. For example, the control module1340or a portion of the control module1340may be part of a PC, a tablet PC, an STB, a smartphone, an IoT appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. To simulate a virtual sensor arranged in a given spatiotemporal configuration, the AV sensor configurator1328renders a virtual LIDAR point cloud of the virtual sensor. In one embodiment, the AV sensor configurator1328uses a real LIDAR sensor as a model to simulate the virtual sensor. 
The AV sensor configurator1328simulates the virtual sensor using parameters of the real LIDAR sensor, including a number of virtual lasers, a position and angle of each virtual laser, or a rotational speed of each virtual laser. The virtual LIDAR point cloud of the virtual sensor includes a plurality of coordinate positions representing a portion of the environment1304that is located within a virtual viewing range of the virtual sensor. The virtual LIDAR point cloud is a dataset of points that represent one or more 3D shapes or features of the portion of the environment1304located within the virtual viewing range of the virtual sensor. Each point in the virtual LIDAR point cloud has its own set of X, Y and Z coordinates and other attributes described below. The AV sensor configurator1328renders, using the virtual LIDAR point cloud of the virtual sensor, a raster image representing coordinate positions of an object, for example, object1316. To render the raster image, the AV sensor configurator1328uses attributes of the dataset of points. The attributes represent time, flight line, intensity (the amount of light returning from a coordinate position), or color of the object1316, etc. In one embodiment, the AV sensor configurator1328determines, using the coordinate positions of the object1316in the raster image, a distance from the AV1308to the object1316. In other embodiments, the perception module1336or the planning module404determines, using the coordinate positions of the object1316in the raster image, a distance from the AV1308to the object1316. For example, the distance from the AV1308to the object1316may be determined as follows. If the coordinate position of the AV1308in the environment1304is (p1, q1) and the coordinate position of the object1316is (p2, q2), the distance is the square root of (p2−p1)²+(q2−q1)². In one embodiment, the perception module1336determines the distance from the AV1308to the object1316based on a distance of each cell in the raster image from a set of environmental features. The perception module1336may also determine a shortest path across a surface between the AV1308and the object1316. In another embodiment, the perception module1336measures the distance in terms of a cost, for example, energy expenditure of traveling to the object1316. The control module1340of the AV1308navigates the AV1308to avoid collisions with the object1316based on the determined distance. In one embodiment, the control module1340navigates a discretized drivable area while performing collision checking or randomized planning, such as probabilistically exploring the drivable area around the object1316. In another embodiment, the control module1340follows a collision-free trajectory determined by the planning module404to avoid the object1316. In another embodiment, if the object1316is a moving object such as another vehicle, the control module1340infers the object1316's intention from its motion, such as giving way or acting aggressively. The control module1340triggers the steering control102, brakes103, gears, accelerator pedal or other acceleration control mechanisms if a predicted time to collision with the object1316falls below a threshold. In one embodiment, the AV sensor configurator1328or the perception module1336determines, using the coordinate positions of the object1316in the raster image, a reflectance of a surface of the object1316. The reflectance of a surface of the object1316is the effectiveness of the surface in reflecting light.
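The distance calculation above is an ordinary Euclidean distance and can be written out directly. The time-to-collision check that follows it is a hedged sketch of the threshold test described above; the constant closing speed and the threshold value are assumed for illustration.

```python
import math

def distance(av_position, object_position):
    """Euclidean distance between (p1, q1) and (p2, q2)."""
    (p1, q1), (p2, q2) = av_position, object_position
    return math.sqrt((p2 - p1) ** 2 + (q2 - q1) ** 2)

def time_to_collision(av_position, object_position, closing_speed):
    """Predicted time to collision assuming a constant closing speed (m/s)."""
    if closing_speed <= 0:
        return math.inf
    return distance(av_position, object_position) / closing_speed

d = distance((0.0, 0.0), (3.0, 4.0))                                # 5.0
ttc = time_to_collision((0.0, 0.0), (3.0, 4.0), closing_speed=2.5)  # 2.0 seconds
print(d, ttc)
if ttc < 3.0:  # threshold is an assumed value
    print("trigger braking / steering intervention")
```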
The reflectance is determined by the AV sensor configurator1328or the perception module1336as a fraction of incident light that is reflected by the surface. In one embodiment, the AV sensor configurator1328or the perception module1336determines a reflectance spectrum or spectral reflectance curve, which is a plot of the reflectance as a function of wavelength. In embodiments, the AV sensor configurator1328or the perception module1336may determine the hemispherical reflectance or directional reflectance of the surface of the object1316for use in navigation by the control module1340. The control module1340navigates the AV1308to avoid a collision of the AV1308with the object1316based on the determined reflectance. In one embodiment, the environment1304is modeled as a probabilistic grid in which each grid cell is represented by a Gaussian distribution over reflectance values. The control module1340or planning module404uses Bayesian inference to preferentially weight grid cells most likely to be stationary within the environment1304to avoid collisions while driving the AV1308. In another embodiment, the control module1340or planning module404uses a reflectance-based inference grid based on the variations in reflectance introduced by laser source, angle of incidence, range, etc. The control module1340navigates the AV1308based on the appearance of the surface of the object1316from the reflectance-based inference grid. For example, a wet surface tends to reflect less infrared laser light than do dry surfaces. The display device1364provides data to a passenger riding in the AV1308. The data may represent the trajectory198of the AV1308, passenger comfort settings, or operational metrics such as speed or acceleration, etc. The display device1364may be an example of the display312illustrated and described above with reference toFIG.3. The display device1364is coupled to the communication device1360and one or more other modules of the AV1308to receive the data to be displayed to the passenger. In one embodiment, the communication device1360transmits a raster image1368generated by the AV sensor configurator1328to the display device1364for display. The raster image represents coordinate positions of an object such as1316located within the environment1304, as described below with reference toFIG.14. The display device1364displays individual pixels as squares and constructs colors by adding the values for red, green and blue. In one embodiment, the raster image includes a dot matrix data structure that represents a generally rectangular grid of pixels (points of color), viewable via the display device1364. The raster images are stored in image files in the sensor data store1332. The one or more visual sensors1344sense a state of the environment1304, such as the presence and structure of the objects1316,1320, and transmit the sensor data1352and semantic data representing the state to the perception module1336. The visual sensors1344may be an example of the sensors122-123illustrated and described above with reference toFIG.1. The visual sensors1344are communicatively coupled to the perception module1336to transmit the sensor data1352and semantic data. The visual sensors1344include one or more monocular or stereo video cameras in the visible light, infrared or thermal (or both) spectra, LIDAR, RADAR, ultrasonic sensors, time-of-flight (TOF) depth sensors, and may include temperature sensors, humidity sensors, or precipitation sensors. 
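The probabilistic grid described above, in which each grid cell is represented by a Gaussian distribution over reflectance values, can be sketched with a standard conjugate (Kalman-style) update of a Gaussian mean under an assumed observation variance. The cell structure and variance values below are illustrative assumptions rather than a definitive implementation.

```python
from dataclasses import dataclass

@dataclass
class ReflectanceCell:
    mean: float       # current estimate of the cell's reflectance (0..1)
    variance: float   # uncertainty of the estimate

def update_cell(cell, observed_reflectance, observation_variance=0.02):
    """Bayesian update of a Gaussian reflectance estimate with a Gaussian observation.
    Cells whose variance stays small are more likely to be stationary surfaces and
    can be weighted preferentially during navigation."""
    k = cell.variance / (cell.variance + observation_variance)  # Kalman-style gain
    new_mean = cell.mean + k * (observed_reflectance - cell.mean)
    new_variance = (1.0 - k) * cell.variance
    return ReflectanceCell(new_mean, new_variance)

cell = ReflectanceCell(mean=0.5, variance=0.25)   # uninformative prior
for measurement in (0.32, 0.30, 0.35, 0.31):      # e.g., repeated returns from one surface
    cell = update_cell(cell, measurement)
print(round(cell.mean, 3), round(cell.variance, 4))
```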
The visual sensors1344are arranged in a spatiotemporal configuration on the AV1308. In one embodiment, the visual sensors1344include LIDARs. The AV1308may be equipped with a single 360 degree LIDAR installed on the roof of the AV1308. In another embodiment, the AV1308includes a number of LIDARs. One or more LIDARs may be arranged on each side of the roof. The spatiotemporal configuration includes the pitch, roll, and heading of the LIDARs. The pitch of a LIDAR refers to the angular motion or angular orientation of the LIDAR about a transverse axis. The roll of a LIDAR refers to the rotational displacement or orientation of the LIDAR about a longitudinal axis. The heading of a LIDAR refers to the directional orientation of the LIDAR. An optimal spatiotemporal configuration is based on optimizing the information generated about the environment1304by the one or more LIDARs and the cost incurred in generating the information. In one embodiment, the visual sensors1344include 3D cameras. A 3D camera of the AV1308is used to acquire a larger field of view of the environment1304through a camera configuration. In this configuration, the environment1304is segmented into cubes and an optimal 3D camera configuration is determined by a number of cubes in the observation range of the cameras. In one embodiment, the sensor data1352includes LIDAR point cloud data. For example, the LIDAR sensors1344of the AV1308are used to illuminate a target object1316with pulsed laser light and measure the reflected pulses. Differences in laser return times and wavelengths can then be used to generate the sensor data1352and create a digital 3-D representation (feature) of the target object1316. In one embodiment, the LIDAR point cloud data is stored as a multidimensional occupancy grid. The LIDAR point cloud data is pre-processed at the signal level and then processed at a higher level to extract features of the objects1316,1320. In some embodiments, a combination two- and three-dimensional grid structure is used and the space in these structures is tessellated into several discrete cells. The structure of the LIDAR point cloud data allows a large amount of raw measurement data to be handled by the perception module1336. The sensor data1352represents coordinate positions of the object1316. In one embodiment, the measurement points in the sensor data1352are stored as a three-dimensional grid. Each grid cell of the three-dimensional grid has an associated probability. The probability refers to a likelihood that the grid cell is occupied by a portion of an object, for example, object1316. The grid cells that are occupied by a portion of the object1316have a probability greater than 0.5. The cells that are not occupied possess a probability less than 0.5 (white space). The grid coordinate system uses the spatiotemporal configuration of the visual sensors1344and the vehicle position (for example, determined using egomotion estimation) to represent the coordinate positions of the object1316. The perception module1336determines the spatial characteristics of the object1316using the sensor data1352. In one embodiment, the visual sensors1344include spatially distributed smart camera or LIDAR devices capable of processing and fusing the sensor data1352of the environment1304from a variety of viewpoints into a more useful form of data than individual images. For example, the sensor data1352includes LIDAR point cloud data reflected from a target object1316. 
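A simplified, non-limiting sketch of the occupancy-grid representation described above follows: each grid cell carries an occupancy probability, and cells holding LIDAR returns end up above 0.5. The cell size and increment are assumed values, and a fuller implementation would also lower the probability of cells traversed by rays without a return.

```python
from collections import defaultdict

def build_occupancy_grid(points, cell_size=0.5, hit_increment=0.2):
    """Accumulate LIDAR returns into a coarse 3D occupancy grid.

    Every cell starts at 0.5 (unknown); each return falling inside a cell
    pushes its occupancy probability toward 1.0.
    """
    grid = defaultdict(lambda: 0.5)
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[key] = min(1.0, grid[key] + hit_increment)
    return grid

points = [(1.2, 0.4, 0.1), (1.3, 0.45, 0.12), (4.0, -2.0, 0.3)]
grid = build_occupancy_grid(points)
occupied = {cell for cell, p in grid.items() if p > 0.5}   # cells holding an object
print(occupied)
```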
In another example, the sensor data1352includes an image of the environment1304. The sensor data1352is transmitted to the perception module1336for image processing, communication, and storage functions. The visual sensors1344are described above in more detail with reference to inputs502a-d, LIDAR system602, light604a-c, light emitter606, light detectors610, field of view614, and light804a-dinFIGS.6,7, and8. The sensor data1352is described above in more detail with reference to outputs504a-d, image612, and LIDAR data points704inFIGS.6,7, and8. The one or more odometry sensors1348sense a state of the AV1308with respect to the environment1304and transmit odometry data1356representing the state of the AV1308to the perception module1336. The odometry sensors1348may be an example of the sensors121illustrated and described above with reference toFIG.1. The odometry sensors1348are communicatively coupled to the perception module1336to transmit the odometry data1356. The odometry sensors1348include one or more GNSS sensors, IMUs that measure both vehicle linear accelerations and angular rates, wheel speed sensors for measuring or estimating wheel slip ratios, wheel brake pressure or braking torque sensors, engine torque or wheel torque sensors, or steering angle and angular rate sensors. An IMU is an electronic device that measures and reports the AV's specific force, angular rate, or the magnetic field surrounding the AV. The IMU uses a combination of accelerometers, gyroscopes, or magnetometers. The IMU is used to maneuver the AV. The IMU allows a GNSS receiver on the AV to work when GNSS-signals are unavailable, such as in tunnels, or when electronic interference is present. The odometry measurements include a speed, an acceleration, or a steering angle. The AV uses the odometry data to provide a uniquely identifying signature for distinguishing between different spatiotemporal locations within the environment. In one embodiment, the odometry sensors1348measure and report the AV1308's spatiotemporal location, specific force, angular rate, or a magnetic field surrounding the AV1308, using a combination of accelerometers, gyroscopes, or magnetometers. In another embodiment, the odometry sensors1348generate odometry data1356including a speed, a steering angle, a longitudinal acceleration, or a lateral acceleration. The odometry sensors1348utilize the raw IMU measurements to determine attitude, angular rates, linear velocity, and position relative to a global reference frame. In one embodiment, the odometry data1356reported by the IMU is used to determine attitude, velocity, and position by integrating angular rate from a gyroscope to calculate angular position. The perception module1336or the AV sensor configurator1328integrates and correlates the odometry data1356with the sensor data1352to derive the coordinates of the AV1308and the objects1316,1320. The AV sensor configurator1328uses the odometry data1356to determine an optimal spatiotemporal configuration for the visual sensors1344based on the variation in odometry measurements with variation in spatiotemporal configuration for the visual sensors1344. The control module1340uses the odometry data1356to navigate the AV1308to avoid collisions with the objects1316,1320. Among the benefits and advantages of the embodiments disclosed herein are that many different and complex self-driving scenarios can be simulated in a safe and cost-effective manner. 
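As a rough, non-limiting illustration of the odometry integration described above — integrating gyroscope angular rate and combining it with speed to track heading and position — consider the following sketch; a real pipeline would fuse accelerometer, magnetometer, and GNSS measurements rather than dead-reckoning alone.

```python
import math

def integrate_gyro(yaw_rate_samples, dt):
    """Integrate angular rate (rad/s) over fixed timesteps to obtain heading (rad)."""
    heading = 0.0
    for rate in yaw_rate_samples:
        heading += rate * dt
    return heading

def dead_reckon(x, y, heading, speed, yaw_rate, dt):
    """One planar dead-reckoning step from speed and yaw-rate odometry."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

state = (0.0, 0.0, 0.0)
for _ in range(10):                       # 10 steps at 100 ms each
    state = dead_reckon(*state, speed=5.0, yaw_rate=0.1, dt=0.1)
print(state)   # position and heading after 1 s of motion
```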
The disclosed embodiments obviate driving millions of miles in a physical AV to analyze and verify different sensor configurations. In embodiments, sensors such as LIDARs, RADARs, and cameras can be simulated in dangerous and costly scenarios such as collisions of vehicles, which would be expensive using traditional methods on physical roads. Moreover, certain traditional sensors that rely on lasers, such as LIDARs are sometimes suboptimal when encountering reflective surfaces such as puddles of water or glass-fronted buildings. The disclosed embodiments can analyze and verify such scenarios and determine sensor accuracy. Furthermore, the disclosed embodiments can also be used to test the range and effectiveness of different sensors. AVs operating in different environmental conditions may require certain sensors. For example, dense urban environments may require spinning LIDARs whereas a highway or freeway environment may require solid state LIDARs. Other embodiments disclose performing blind spot analysis for various sensors to determine optimal sensor configuration involving the number, type, and spatial arrangement of the sensors. Further benefits and advantages are that the virtual sensor simulations increase the usefulness of physically driving the AV1308through the environment by comparing the virtual point cloud of the simulation scenarios with the sensor data1352. The disclosed embodiments enable realistic sensor simulation at reduced cost and increased accuracy. Rendering the lines in the raster images does not affect the simulation performance and is useful in verifying the virtual point cloud data. For example, the raster images can be used to determine whether the virtual lasers and virtual point cloud are matched visually. Solid-state LIDARs can have blind spots because the LIDARs emit rays in only a single direction. Moreover, the LIDARs are typically unable to sense heat. Therefore, traditional LIDARs can sometimes miss children or pets on a roadway. The embodiments disclosed herein provide an improved spatiotemporal configuration of LIDAR sensors that reduces blind spots and improves the detection of children and pets. Navigation of an AV using the optimal spatiotemporal configuration for the visual sensors1344obtained from the simulation of virtual sensors disclosed herein is more accurate and computationally less expensive than traditional methods. The AV is also able to efficiently determine localization for real-time navigation. Navigating the AV using the optimal spatiotemporal configuration for the visual sensors1344results in increased passenger and pedestrian safety, lower wear and tear on the AV, reduced travel time, a reduced travel distance, etc. Increased safety for other vehicles on the road network is also achieved. Block Diagram of an AV Sensor Configurator FIG.14illustrates a block diagram of an AV sensor configurator1328for determination of an optimal spatiotemporal sensor configuration for navigation of the AV1308using simulation of virtual sensors, in accordance with one or more embodiments. The AV sensor configurator1328builds a model of a virtual AV for simulation in a controlled environment to determine the optimal spatiotemporal sensor configuration by simulating virtual models of the visual sensors1344and odometry sensors1348of the AV1308. 
The AV sensor configurator1328includes a virtual AV model generator1400, a virtual viewing range segregator1404, a geometric viewport generator1408, a geometric viewport segregator1412, a virtual point cloud generator1416, a raster image generator1420, a sensor configuration generation module1424, and a communication interface1428. In other embodiments, the AV sensor configurator1328includes additional or fewer components than those described herein. Similarly, the functions can be distributed among the components and/or different entities in a different manner than is described here. The AV sensor configurator1328may be located on the server1324, as illustrated above inFIG.13, or on the AV1308. In an embodiment, the AV sensor configurator1328is part of the planning module404. The virtual AV model generator1400generates a model of a virtual vehicle based on the AV1308to perform the sensor simulation. The virtual AV model generator1400is communicatively coupled to the virtual viewing range segregator1404, the virtual point cloud generator1416, and the sensor configurator generation module1424to generate the optimal spatiotemporal configuration of the visual sensors1344. Portions of the virtual AV model generator1400may be implemented in software or hardware. For example, the virtual AV model generator1400or a portion of the virtual AV model generator1400may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. The virtual AV model generator1400generates a model of a virtual AV operating in the environment1304. In one embodiment, the model of the virtual AV includes a predictive model as well as a functional model of the AV1308's components and sensors. The model is designed to be flexible to define different driving scenarios for the AV1308. In one embodiment, the virtual AV model generator1400uses photorealistic simulation to model the AV1308's visual sensors1344, including cameras, LIDAR, and RADAR. The model of the virtual AV is used to simulate driving conditions, such as rainstorms, snowstorms, and glare on road surfaces. The model of the virtual AV includes a virtual sensor, for example a virtual radar, virtual LIDAR, or virtual camera having a virtual viewing range. In one embodiment, the virtual sensor includes a number of virtual lasers separated into a number of groups. Each group of virtual lasers is angled and spaced from each other group as well as from the virtual viewing range. The individual virtual lasers are angled based on the virtual viewing range and the number of virtual lasers. Thus, different sensors and different sets of virtual lasers and angles are modeled. To simulate the virtual sensor, the virtual lasers are rotated in horizontal angular steps within a specific time frame and a hit position of each virtual laser is recorded. The virtual sensor within the model of the virtual AV is configured using parameters, such as a number of the virtual lasers, a virtual laser range, a rotation speed of the virtual sensor, a rotation angle between scans, a vertical offset between groups of virtual lasers, or a vertical viewing range. The virtual viewing range segregator1404segregates a virtual viewing range of a virtual sensor of the model of the virtual AV. The virtual viewing range segregator1404is communicatively coupled to the virtual AV model generator1400to receive the model. 
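One non-limiting way to capture the virtual-sensor parameters listed above (number of virtual lasers, laser range, rotation speed, rotation angle between scans, vertical viewing range) and to sweep the virtual lasers through a full revolution is sketched below; all parameter values are illustrative defaults rather than values from the embodiments.

```python
from dataclasses import dataclass

@dataclass
class VirtualLidarConfig:
    """Illustrative parameters used to configure the virtual sensor."""
    num_lasers: int = 32
    laser_range_m: float = 100.0
    rotation_hz: float = 10.0            # rotation speed of the virtual sensor
    horizontal_step_deg: float = 0.2     # rotation angle between scans
    vertical_fov_deg: tuple = (-25.0, 15.0)

def laser_elevations(cfg: VirtualLidarConfig):
    """Spread the virtual lasers evenly across the vertical viewing range."""
    lo, hi = cfg.vertical_fov_deg
    step = (hi - lo) / (cfg.num_lasers - 1)
    return [lo + i * step for i in range(cfg.num_lasers)]

def scan_directions(cfg: VirtualLidarConfig):
    """Yield (azimuth, elevation) pairs for one full revolution of the sensor."""
    steps = round(360.0 / cfg.horizontal_step_deg)
    for a in range(steps):
        az = a * cfg.horizontal_step_deg
        for el in laser_elevations(cfg):
            yield az, el

cfg = VirtualLidarConfig()
print(sum(1 for _ in scan_directions(cfg)), "rays per revolution")
```

Recording the hit position of each generated ray at every horizontal step would then yield the simulated scan of the virtual sensor.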
Portions of the virtual viewing range segregator1404may be implemented in software or hardware. For example, the virtual viewing range segregator1404or a portion of the virtual viewing range segregator1404may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. The virtual viewing range of the virtual sensor corresponds to a viewing range of a visual sensor1344of the AV1308operating in the environment1304. The virtual sensor sweeps or scans in a direction of the beam or rays, thus generating a collection of distance measurements within the virtual viewing range. The virtual viewing range is a range of horizontal and vertical angles through which the virtual sensor captures virtual sensor data. For example, a two-axis scanning virtual LIDAR captures shape information in the horizontal and vertical directions from a stationary location. In one embodiment, the virtual viewing range of the virtual sensor ranges from a single window to full spherical coverage of 360 by 180 degrees. In another embodiment, the virtual viewing range of the virtual sensor is 360 degrees in the horizontal and 30 to 120 degrees in the vertical. The virtual viewing range segregator1404segregates the virtual viewing range of the virtual sensor into a plurality of frustums. Each frustum is a portion of a solid shape, such as a cone or a pyramid that lies between one or two parallel planes cutting the solid shape. For example, a right frustum is a parallel truncation of a right pyramid or a right cone. When all the edges of the frustum are identical, the frustum becomes a uniform prism. An example virtual viewing range1530of a virtual sensor segregated into frustums, for example the frustums1534,1538, is illustrated below with reference toFIG.15B. Each plane section of a frustum is a base of the frustum. The axis of a frustum is the same as the axis of the original cone or pyramid. A frustum may be circular if it has circular bases. The height of a frustum is the perpendicular distance between the planes of the two bases. The geometric viewport generator1408generates a geometric viewport to simulate the virtual sensor. An example of a geometric viewport1500is illustrated below inFIG.15A. The geometric viewport generator1408is communicatively coupled to the virtual AV model generator1400and geometric viewport segregator1412to generate and transmit a geometric viewport. Portions of the geometric viewport generator1408may be implemented in software or hardware. For example, the geometric viewport generator1408or a portion of the geometric viewport generator1408may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. The geometric viewport generator1408generates a geometric viewport including a plurality of pixels. The geometric viewport is a viewing region having a polygonal shape used for rendering a representation of the objects1316,1320as an image. In one embodiment, the geometric viewport includes an area of interest for the virtual sensor expressed in coordinates such as meters or GNSS coordinates. In one embodiment, the geometric viewport includes an area that is expressed in rendering-device-specific coordinates. For example, a plurality of pixels is used to express the screen coordinates in which the objects1316,1320are rendered. 
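A minimal, non-limiting sketch of segregating a 360-degree virtual viewing range into equal frustums, and of sizing a geometric viewport whose height tracks the ray count and whose width grows with the number of frustums, is given below; the pixel budget per frustum is an assumed value.

```python
def segregate_viewing_range(horizontal_fov_deg=360.0, num_frustums=6):
    """Split the horizontal viewing range into equal angular frustums.

    Returns (start_deg, end_deg) for each frustum.
    """
    width = horizontal_fov_deg / num_frustums
    return [(i * width, (i + 1) * width) for i in range(num_frustums)]

def viewport_size(num_rays, num_frustums, pixels_per_frustum=256):
    """Viewport height tracks the ray count; width grows with the frustum count."""
    return pixels_per_frustum * num_frustums, num_rays   # (width, height)

frustums = segregate_viewing_range(num_frustums=6)
width, height = viewport_size(num_rays=32, num_frustums=len(frustums))
print(frustums[0], (width, height))
```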
In one embodiment, the geometric viewport includes a 2D rectangle that is used to render a 3D environment as viewed by a spatiotemporal configuration of the virtual sensor. In one embodiment, the geometric viewport has a rectangular shape, as illustrated with reference to the rectangular viewport1500below inFIG.15A. The geometric viewport has a height corresponding to a number of rays emitted from the virtual sensor. As illustrated and described above with reference toFIG.6, a visual sensor such as the LIDAR system602emits light rays604a-cfrom a light emitter606(for example, a laser transmitter). The height of the geometric viewport thus corresponds to the number of rays emitted by the virtual sensor that is modeling a visual sensor1344of the AV1308. An example of a height1504of the rectangular viewport1500is illustrated below inFIG.15A. In one embodiment, the geometric viewport has a width that increases as a number of the frustums increases. The width corresponds to the density of the virtual sensor simulation or the density of the LIDAR returns. An example of a width1508of the rectangular viewport1500is illustrated below inFIG.15A. The number of rays emitted by the virtual sensor and the density of sensor returns correspond to the resolution of the LIDAR and allow the LIDAR to provide a three-dimensional view of the environment1304by scanning laser rays back and forth across the virtual viewing range. The geometric viewport segregator1412divides the generated geometric viewport into sections to simulate the virtual sensor. An example of a segregated geometric viewport1500is illustrated below inFIG.15C. The geometric viewport segregator1412is communicatively coupled to the geometric viewport generator1408to receive the geometric viewport. Portions of the geometric viewport segregator1412may be implemented in software or hardware. For example, the geometric viewport segregator1412or a portion of the geometric viewport segregator1412may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. The geometric viewport segregator1412segregates the geometric viewport into a plurality of sections. Each section of the geometric viewport is a virtual area used by the raster image generator1420to scale and size a raster image when rendering the raster image to the geometric viewport. An example of sections1560and1564of the geometric viewport1500is illustrated below inFIG.15C. Each section of the plurality of sections that the geometric viewport is segregated into corresponds to a frustum of the virtual viewing range. In one embodiment, each section of the geometric viewport corresponds to a region of the environment1304that is rendered on the geometric viewport. The geometric viewport segregator1412obtains a section by truncating, using parallel planes, a pyramid of vision of the virtual sensor. The section is thus an adaptation of a cone of vision that a visual sensor1344of the AV1308has to the geometric viewport. In one embodiment, the segregating of the geometric viewport into the plurality of sections includes mapping a near plane of each frustum onto a corresponding section of the plurality of sections. The planes that intersect a frustum perpendicular to the viewing direction of the virtual sensor are called the near plane and the far plane. For example, a section may correspond to a frustum of a rectangular pyramid. 
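Assuming each frustum's near plane is mapped onto one rectangular section of the viewport, the mapping described above might be sketched as follows; the field-of-view values, near distance, and section geometry are illustrative assumptions.

```python
import math

def near_plane_half_extents(h_fov_deg, v_fov_deg, near_m):
    """Half-width and half-height of a frustum's near plane at distance near_m."""
    half_w = near_m * math.tan(math.radians(h_fov_deg) / 2.0)
    half_h = near_m * math.tan(math.radians(v_fov_deg) / 2.0)
    return half_w, half_h

def near_plane_to_section(px, py, half_w, half_h, section_x0, section_w, viewport_h):
    """Map a point (px, py) on the near plane into the pixel rectangle of the
    viewport section assigned to this frustum."""
    u = (px + half_w) / (2.0 * half_w)      # 0..1 across the near plane
    v = (py + half_h) / (2.0 * half_h)
    return section_x0 + int(u * (section_w - 1)), int(v * (viewport_h - 1))

half_w, half_h = near_plane_half_extents(h_fov_deg=60.0, v_fov_deg=40.0, near_m=1.0)
print(near_plane_to_section(0.0, 0.0, half_w, half_h,
                            section_x0=256, section_w=256, viewport_h=32))
```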
An example of a near plane1542of a frustum1534is illustrated below with reference toFIG.15B. The virtual point cloud generator1416renders a virtual point cloud of the virtual sensor. The virtual point cloud generator1416is communicatively coupled to the virtual AV model generator1400and the raster image generator1420to generate images representing the virtual point cloud data. Portions of the virtual point cloud generator1416may be implemented in software or hardware. For example, the virtual point cloud generator1416or a portion of the virtual point cloud generator1416may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. The virtual point cloud generator1416renders a virtual point cloud of the virtual sensor. In an embodiment, the virtual point cloud is generated or modeled by projecting simple geometric shapes such as triangles that intersect the virtual laser beams generated by the virtual sensor. The virtual point cloud is then subsequently generated based on virtual sensor location and the relative position of the geometric shapes with respect to the virtual sensor. The virtual point cloud generator1416relies on the physical properties of the sensor to be simulated. For example, for simulating spinning LIDARs, the virtual point cloud generator1416accounts for the rotational movement of the spinning LIDAR motor and the movement of the entire LIDAR package, including the sensor housing, mounted on a moving vehicle). In an embodiment, the virtual point cloud generator1416simulates a virtual sensor by iteratively progressing the dynamic state of the simulated environment1304according to a fixed timestamp and then capturing the resulting viewing range of each laser at the iterated timestamp. In one embodiment, the virtual point cloud generator1416uses parameters of the virtual sensor to tune the virtual sensor model and render the virtual point cloud. For example, the virtual point cloud generator1416may vary the scan angle, pulse rate frequency, sidelap, or mean point density of the virtual sensor to render the virtual point cloud. In one embodiment, the virtual point cloud generator1416uses a data-driven model of the virtual sensor, which is tuned based on real LIDAR data obtained from the visual sensors1344. The virtual point cloud generator1416uses a dataset, which is a set of pose-observation pairs of a LIDAR. Each virtual LIDAR pose-observation pair is converted by the virtual point cloud generator1416into the virtual point cloud. The pose of the LIDAR refers to the degrees of rotation and translation of the LIDAR's orientation. The pose-observation pair data is therefore used to reconstruct a 3D scene sensed by the LIDAR. The virtual point cloud includes a plurality of coordinate positions representing a portion of the environment1304that is located within the virtual viewing range of the virtual sensor. The data points within the virtual point cloud include measurement coordinates of external surfaces of the objects1316,1320located within the virtual viewing range of the virtual sensor. In one embodiment, the virtual point cloud is converted to the plurality of coordinate positions using surface reconstruction. 
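The triangle-projection approach described above, in which each virtual laser is intersected with simple geometric shapes to obtain hit positions, can be illustrated in a non-limiting way with a standard Möller–Trumbore ray/triangle test; the scene here is a single hypothetical triangle.

```python
def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle test; returns the hit distance t or None."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None

# One virtual laser fired along +x hits a triangle 10 m ahead.
hit = ray_triangle_intersection((0, 0, 0), (1, 0, 0),
                                (10, -1, -1), (10, 1, -1), (10, 0, 2))
print(hit)   # ~10.0; the hit position becomes one virtual point-cloud sample
```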
In other embodiments, the plurality of coordinate positions of the virtual point cloud is used to render a digital elevation model or a volumetric model of the portion of the environment1304that is located within the virtual viewing range of the virtual sensor. In one embodiment, the virtual point cloud generator1416renders a plurality of virtual point clouds of a plurality of virtual sensors of the virtual vehicle. Feature curves of the objects1316,1320may be extracted from the plurality of virtual point clouds. The virtual point cloud generator1416extracts feature curves from the intersections of the plurality of virtual point clouds that represent regions of the environment1304. For example, the virtual point cloud generator1416uses linear approximation of the plurality of virtual point clouds through a variational-shape approximation approach. Variational-shape approximation is a process of repeatedly partitioning the plurality of virtual point clouds into a set of geometric shapes, for example, ellipses, that provide a concise representation of a surface of an object of the environment1304. In one embodiment, the virtual point cloud generator1416aggregates the plurality of virtual point clouds into an aggregate virtual point cloud. The aggregate virtual point cloud represents a portion of the environment1304located within a virtual viewing range of the plurality of virtual sensors. For example, the virtual point cloud generator1416computes an axis-aligned bounding box for an overlapped region between the plurality of virtual point clouds. The virtual point cloud generator1416divides the bounding box into grid boxes and merges points within each grid box by averaging their locations and colors. The raster image generator1420generates a raster image based on the virtual point cloud data. The raster image generator1420is communicatively coupled to the virtual point cloud generator1416to receive the virtual point cloud data. Portions of the raster image generator1420may be implemented in software or hardware. For example, the raster image generator1420or a portion of the raster image generator1420may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. In one embodiment, the raster image generator1420generates a plurality of raster images from the virtual point cloud data. The raster images provide a visual representation of how the virtual sensor is operating by rendering lines along the virtual lasers without having to display the virtual point cloud. Among the benefits and advantages of the disclosed approach are that rendering the lines does not affect the simulation performance and is useful in verifying the virtual point cloud data. For example, the raster images can be used to determine whether the virtual lasers and the virtual point cloud are matched visually. Each raster image includes the plurality of pixels of the geometric viewport and represents coordinate positions of an object located within the environment. In one embodiment, a raster image includes a dot matrix data structure that represents a rectangular grid of pixels viewable via the geometric viewport. Each raster image rendered on the geometric viewport corresponds to a bitmap. The bitmap may be stored in the same format used for storage in the sensor data store1332or as a device-independent bitmap. 
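A simplified, non-limiting sketch of the aggregation step described above follows: compute the axis-aligned bounding box of the overlapped region, divide it into grid boxes, and merge the points in each box by averaging their locations (a scalar intensity stands in for color here). The grid size and sample points are illustrative.

```python
from collections import defaultdict

def aabb(points):
    """Axis-aligned bounding box of a point cloud of (x, y, z, intensity) tuples."""
    xs, ys, zs = zip(*[(p[0], p[1], p[2]) for p in points])
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def overlap_box(box_a, box_b):
    """Bounding box of the overlapped region, or None if the boxes are disjoint."""
    lo = tuple(max(a, b) for a, b in zip(box_a[0], box_b[0]))
    hi = tuple(min(a, b) for a, b in zip(box_a[1], box_b[1]))
    return (lo, hi) if all(l <= h for l, h in zip(lo, hi)) else None

def merge_clouds(cloud_a, cloud_b, grid=0.5):
    """Average points per grid box inside the overlap; keep the rest as-is."""
    box = overlap_box(aabb(cloud_a), aabb(cloud_b))
    cells, passthrough = defaultdict(list), []
    for p in cloud_a + cloud_b:
        inside = box and all(lo <= c <= hi
                             for c, lo, hi in zip(p[:3], box[0], box[1]))
        if inside:
            cells[tuple(int(c // grid) for c in p[:3])].append(p)
        else:
            passthrough.append(p)
    merged = [tuple(sum(q[i] for q in pts) / len(pts) for i in range(4))
              for pts in cells.values()]
    return passthrough + merged

a = [(1.4, 1.4, 0.0, 0.8), (0.0, 0.0, 0.0, 0.4), (3.0, 3.0, 0.0, 0.1)]
b = [(1.3, 1.25, 0.0, 0.6), (2.0, 2.0, 0.0, 0.2)]
print(merge_clouds(a, b))   # two untouched points plus two averaged grid boxes
```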
Each raster image may be characterized by a width and a height of the raster image in pixels and by a number of bits per pixel or color depth. In one embodiment, a plurality of virtual sensors of the model of the virtual AV is arranged in a spatiotemporal configuration of a plurality of potential spatiotemporal configurations. For each such spatiotemporal configuration of the plurality of spatiotemporal configurations, the raster image generator1420renders a raster image representing the plurality of coordinate positions of the environment1304within the virtual viewing range of the plurality of virtual sensors. In one embodiment, the rendering of the raster image includes receiving, using the visual sensors1344of the AV1308, the sensor data1352. The sensor data1352represents coordinate positions of an object located within the environment1304, for example, object1316. The process of receiving the sensor data1352representing coordinate positions of the object is illustrated and described in detail above with reference to the LIDAR system602inFIG.6. The perception module1336or the AV sensor configurator1328may transmit the sensor data1352to the sensor data store1332via the communication device1360and the communication interface1428. The raster image generator1420generates pixels representing the object1316. The generated pixels are combined with the sensor data1352to generate the raster image. In this manner spectral information may be combined with the pixels to increase object classification accuracy. In one embodiment, the rendering of the raster image is based on a geometric position and a directional orientation of a visual sensor1344relative to the coordinate positions of an object, for example object1316. The rendering of the raster image is based on a position and a six degrees of freedom directional orientation of each visual sensor1344. For example, the six degrees of freedom directional orientation of a visual sensor may be determined by utilizing known reference geometries. In one embodiment, the rendering of the raster image is performed by varying parameters of the visual sensor1344, such as a horizontal angle, a horizontal resolution, a vertical angle, a vertical resolution, a range, a shape of a beam spot (circular, rectangular, or elliptical), a divergence, or a signal cutoff. In one embodiment, the raster image includes a two-dimensional representation of a virtual three-dimensional cylindrical surface of the object, for example object1316. The virtual three-dimensional scene is projected on to a virtual three-dimensional cylindrical surface, which is then unwrapped to form a two-dimensional rectangle that contains a representation of the virtual three-dimensional cylindrical surface. The raster image generator1420renders, onto the geometric viewport, a distinct raster image representing an object, for example1316. The AV sensor configurator1328determines a representational quality of the distinct raster image associated with the reflectance of a surface of the object1316. In one embodiment, the representational quality is used to classify the surface of the object1316as one that provides specular reflection or one that provides diffuse reflection. For specular surfaces, such as glass or polished metal, the reflectance is low at all angles except at the appropriate reflected angle. On the other hand, for diffuse surfaces, such as white paint, reflectance is more uniform. 
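Given reflectance samples at several incidence angles, the specular-versus-diffuse distinction described above might be approximated with the following non-limiting sketch; the peak-to-median ratio and the sample values are illustrative assumptions.

```python
def classify_surface(reflectance_by_angle, peak_ratio=3.0):
    """Classify a surface as specular or diffuse from reflectance samples
    taken at several incidence angles (degrees).

    A strong peak relative to the median response suggests specular
    (mirror-like) reflection; a flat response suggests diffuse reflection.
    """
    values = sorted(reflectance_by_angle.values())
    median = values[len(values) // 2]
    peak = max(values)
    if median == 0.0:
        return "specular"
    return "specular" if peak / median >= peak_ratio else "diffuse"

glass = {0: 0.02, 15: 0.03, 30: 0.02, 45: 0.85, 60: 0.04}   # peak at one angle
paint = {0: 0.50, 15: 0.48, 30: 0.52, 45: 0.49, 60: 0.47}   # roughly uniform
print(classify_surface(glass), classify_surface(paint))      # specular diffuse
```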
Thus, the reflectance of the surface of the object1316may be used to aid in object recognition and navigation within the environment1304. In another embodiment, the representational quality of the distinct raster image associated with the reflectance of the surface of the object1316is used to identify water bodies or water puddles on a roadway, such as to prevent hydroplaning by the AV1308. A water puddle on a roadway may have high reflectance only at certain wavelengths, while ice and snow generally have high reflectance across all visible wavelengths. The sensor configuration generation module1424uses the simulation of the virtual sensors to generate an optimal spatiotemporal configuration for the visual sensors1344. The sensor configuration generation module1424is communicatively coupled to the virtual point cloud generator1416and the communication interface1428to generate the optimal spatiotemporal configuration. Portions of the sensor configuration generation module1424may be implemented in software or hardware. For example, the sensor configuration generation module1424or a portion of the sensor configuration generation module1424may be part of a PC, a tablet PC, an STB, a smartphone, an internet of things (IoT) appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. The sensor configuration generation module1424determines, based on the virtual point cloud of the virtual sensor, an optimal spatiotemporal configuration of the visual sensors1344of the AV1308. In one embodiment, the sensor configuration generation module1424uses the virtual point cloud to determine parameters of a visual sensor1344, such as a LIDAR. For example, features extracted from the virtual point cloud are matched to features extracted from the sensor data1352that represents the object1316. The spatiotemporal configuration of the visual sensors1344of the AV1308may thus be fine-tuned via regression analysis and simulation. In one embodiment, the resulting optimal spatiotemporal configuration of the visual sensors1344of the AV1308specifies a wide-angle-emitting visual sensor1344and wide-angle optics to focus the backscattered light and obtain the time-of-flight data for modeling the environment1304. In one embodiment, the sensor configuration generation module1424determines, based on the aggregate virtual point cloud, an optimal spatiotemporal configuration of a plurality of visual sensors1344of the AV1308. Each visual sensor1344corresponds to a virtual sensor of the plurality of virtual sensors modeled by the virtual AV model generator1400. For example, the spatiotemporal configuration of the plurality of visual sensors1344may specify whether each visual sensor1344is laser-diode-based or uses an uncooled fiber laser. The spatiotemporal configuration may specify whether each visual sensor1344has the ability to split and route its high-power beams to multiple locations, etc. The spatiotemporal configuration may specify an optimal layout of a plurality of visual sensors1344. In one embodiment, the sensor configuration generation module1424determines, based on the virtual point cloud of the virtual sensor, a blind spot of a visual sensor1344of the AV1308. The blind spot is a spatiotemporal location of the environment1304around the AV1308that cannot be directly observed by the visual sensors1344while the AV1308is navigating. 
For example, the sensor configuration generation module1424may use the data points in the virtual point cloud to identify areas of low visibility such as where lighting conditions blur the contrast between an object and its surroundings or areas blocked by other objects such as cargo. To identify the blind spot, the sensor configuration generation module1424determines when an object, such as object1316, is located at the blind spot. The plurality of coordinate positions that represent the portion of the environment1304that is located within the virtual viewing range of the virtual sensor is free of the object1316. Hence, the virtual point cloud does not contain the coordinate positions corresponding to the object1316. In one embodiment, the sensor configuration generation module1424extends the viewing range of a visual sensor1344of the AV1308based on analyzing the virtual point cloud data of the virtual sensor. By extending the viewing range of the visual sensor1344, the sensor configuration generation module1424enables the visual sensors1344to generate a more-complete point cloud of the environment by reducing the time needed to sample the environment1344. For example, the spatiotemporal configuration of a visual sensor1344may angle each of the emitters and receivers above or below the horizontal to blanket more of the environment1344in the field of view of virtual lasers within the virtual sensor. In an embodiment, the optimality or effectiveness of a spatiotemporal configuration of sensors can be determined by using numerical (or eyeballing) methods. For example, such methods are based on the visibility and range of simulated LIDAR point cloud. In one embodiment, the sensor configuration generation module1424determines an optimal spatiotemporal configuration of the plurality of spatiotemporal configurations based on the plurality of raster images. For example, based on the plurality of raster images, the sensor configuration generation module1424may determine an optimal number of visual sensors1344for the AV1308. LIDAR sensors are expensive, difficult to manufacture at scale, and may lack the robustness necessary to account for potholes and extreme temperatures. The disclosed embodiments therefore determine an optimal number of LIDAR sensors to navigate the environment1304and identify objects such as1316, while reducing the cost of deployment. The communication interface1428communicates data such as a spatiotemporal configuration of the visual sensors1344, determined blind spot locations for the AV1308, coordinate positions of the objects1316,1320, raster images representing the coordinate positions of the objects1316,1320, or a geometric position and directional orientation of a visual sensor1344relative to the coordinate positions of an object1316. In one embodiment, the communication interface1428communicates instructions including an optimal spatiotemporal configuration of the visual sensors1344to the AV1308. The AV1308uses the instructions to configure and position its visual sensors1344according to the optimal spatiotemporal configuration to increase driving efficiency and safety. The communication interface1428may be an example of the communication device140shown inFIG.1. The communication interface1428is communicatively coupled to the AV1308across a network. In an embodiment, the communication interface1428communicates across the Internet, electromagnetic spectrum (including radio and optical communications) or other media (e.g., air and acoustic media). 
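A minimal, non-limiting sketch of the blind-spot test described above follows: an object placed in the simulated scene sits in a blind spot when the virtual point cloud contains no returns near its position. The hit radius and coordinates are illustrative.

```python
def in_blind_spot(object_position, virtual_point_cloud, hit_radius=0.5):
    """An object placed in the simulated scene is in a blind spot if no point
    of the virtual point cloud lands within hit_radius of it."""
    ox, oy, oz = object_position
    for x, y, z in virtual_point_cloud:
        if (x - ox) ** 2 + (y - oy) ** 2 + (z - oz) ** 2 <= hit_radius ** 2:
            return False          # the virtual sensor sees the object
    return True                   # no returns near the object: candidate blind spot

cloud = [(5.0, 0.0, 0.2), (5.1, 0.1, 0.2), (12.0, 3.0, 0.0)]
print(in_blind_spot((5.0, 0.05, 0.2), cloud))   # False: the object is observed
print(in_blind_spot((2.0, -1.0, 0.1), cloud))   # True: candidate blind spot
```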
Portions of the communication interface1428may be implemented in software or hardware. For example, the communication interface1428or a portion of the communication interface1428may be part of a PC, a tablet PC, an STB, a smartphone, an IoT appliance, or any machine capable of executing instructions that specify actions to be taken by that machine. Segregation of a Virtual Viewing Range and a Geometric Viewport FIG.15Aillustrates a geometric viewport1500for determination of an optimal spatiotemporal sensor configuration for navigation of the AV1308using simulation of virtual sensors, in accordance with one or more embodiments. The geometric viewport1500has a rectangular shape. The rectangular viewport1500is used for rendering a representation of the objects1316,1320as an image. In one embodiment, the rectangular viewport1500includes an area of interest for the virtual sensor expressed in coordinates, such as in meters or GNSS coordinates. The rectangular viewport1500has a height1504corresponding to a number of rays emitted from the virtual sensor. The rectangular viewport1500has a width1508that increases as a number of the frustums increases. The number of rays emitted by the virtual sensor and the density of sensor returns correspond to the resolution of the LIDAR and allow the LIDAR to provide a three-dimensional view of the environment1304. FIG.15Billustrates a virtual viewing range1530for determination of an optimal spatiotemporal sensor configuration for navigation of the AV1308using simulation of virtual sensors, in accordance with one or more embodiments. The virtual viewing range1530of the virtual sensor is segregated into a plurality of frustums, for example,1534,1538, and1542. The virtual viewing range1530of the virtual sensor corresponds to a viewing range of a visual sensor1344of the AV1308operating in the environment1304. Segregating the geometric viewport1500into a plurality of sections includes mapping a near plane of each frustum, such as1546,1550, and1554of the plurality of frustums onto a corresponding section of the plurality of sections. FIG.15Cillustrates segregation of the rectangular viewport1500for determination of an optimal spatiotemporal sensor configuration for navigation of the AV1308using simulation of virtual sensors, in accordance with one or more embodiments. The rectangular viewport1500is segregated into a plurality of sections, for example, sections1560,1564, and1568. Each section, for example1560, corresponds to a frustum, for example1534. Example Environment for Determination of an Optimal Spatiotemporal Sensor Configuration FIG.16illustrates an example environment1600for determination of an optimal spatiotemporal sensor configuration for navigation of an AV using simulation of virtual sensors, in accordance with one or more embodiments. The virtual AV model generator1400generates a model of a virtual vehicle1604operating in the environment1600. The model of the virtual vehicle includes virtual sensors1608,1612. In one embodiment, the virtual sensor1608includes a topographic LIDAR, a bathymetric LIDAR, or a terrestrial LIDAR. In a topographic LIDAR, a pulsed laser is optically coupled to a beam director, which scans the laser pulses over a swath of terrain, usually centered on, and co-linear with, a trajectory of the vehicle on which the LIDAR is mounted. Unlike a topographic LIDAR, which uses an infrared wavelength of light, a bathymetric LIDAR typically uses a green wavelength of light to scan water bodies. 
A terrestrial LIDAR is a land-based laser scanner which, combined with a differential GNSS, enables the production of three-dimensional computer models. Each of the virtual sensors1608,1612has a virtual viewing range. The virtual viewing range segregator segregates the virtual viewing range of the virtual sensors1608,1612into a plurality of frustums. The virtual viewing range of the virtual sensor1608corresponds to a viewing range of a sensor1344of the AV operating in the environment1600. The geometric viewport generator1408generates a geometric viewport, for example the viewport1500illustrated and described above with reference toFIG.15A. The geometric viewport has a height, for example height1504, corresponding to a number of rays1616emitted from the virtual sensor. The raster image generator1420generates a raster image rendered onto the geometric viewport1500. The raster image includes a plurality of pixels of the geometric viewport1500and represents coordinate positions of objects located within the environment1600, for example, pedestrians1620,1624, and1628. In one embodiment, the model of the virtual vehicle1604, including the virtual sensors1608,1612, is simulated to determine the quality of the sensor data1352obtained and to determine which sensors are best suited to different operating conditions for the AV1604under different environmental conditions. For example, one type of sensor may perform better than another type of sensor in urban environments that contain many buildings, construction zones, or pedestrians. Similarly, one type of sensor may perform better than another type of sensor in rainy or snowy weather when there are puddles of water on the ground surface. The AV sensor configurator1328receives data describing the environment1600in which the AV1604is operating. The data describing the environment1600may include a pattern of weather, such as the temperature, whether it is a rainy or snowy day, and the visibility. The data describing the environment1600may also include parameters describing a density of the environment1600, such as a number of buildings per square mile, the amount of the environment1600that is covered by road surface, a number of pedestrians, for example pedestrian1620,1624,1628, an amount of vegetation per square mile, etc. For each sensor of the visual sensors1344, the virtual AV model generator1400generates a model of the virtual AV1604operating in the environment1600. The model of the virtual AV1604includes at least one virtual sensor1608. Using the received data describing the environment1600, a virtual point cloud of the virtual sensor1608is rendered. In one embodiment, the model of the virtual AV1604includes a position of the virtual sensor1608. For example, the position of the virtual sensor1608may be denoted by rectangular coordinates within the environment1600, a spatiotemporal configuration relative to the AV1604, or by a distance from the pedestrian1620. The AV sensor configurator1328renders the virtual point cloud of the virtual sensor1608by projecting a geometric shape that intersects a virtual laser generated by the virtual sensor1608. For example, the AV sensor configurator1328may project one or more triangles that intersect a virtual laser beam generated by the virtual sensor1608. The AV sensor configurator1328determines a position of the geometric shape, for example, the one or more triangles, relative to the position of the virtual sensor1608. The position of the geometric shape is then used to form the virtual point cloud data. 
Further details on the virtual point cloud generation are described above with reference toFIG.14. Referring back toFIG.16, in one embodiment, the virtual sensor1608is a virtual spinning LIDAR. A spinning LIDAR has a 360° field of view because a single spinning LIDAR can be mounted on the roof of the AV1604to obtain a complete view of the surroundings of the AV1604. The AV sensor configurator1328renders the virtual point cloud of the virtual sensor1608(spinning LIDAR) by simulating rotational movement of a motor of the virtual spinning LIDAR1608. For example, the rotational movement of the virtual spinning LIDAR1608may include a +15° to −25° vertical field of view, a range of 300 m, an angular resolution of 0.10°, and a mapping rate of 8 million points per second. In one embodiment, the AV sensor configurator1328renders the virtual point cloud of the virtual sensor1608by segregating a virtual viewing range of the virtual sensor1608into a plurality of frustums (e.g., frustums1534,1538), as described and illustrated above with reference toFIG.15B. The plurality of frustums are used in the generation of the virtual point cloud of the virtual sensor1608. In one embodiment, the AV sensor configurator1328renders the virtual point cloud of the virtual sensor1608by generating a geometric viewport, such as the viewport1500illustrated and described above with reference toFIG.15A. The geometric viewport includes a plurality of pixels. The geometric viewport has a height corresponding to a number of rays emitted from the virtual sensor1608. The geometric viewport is used to generate the virtual point cloud of the virtual sensor1608. The raster image generator1420renders a raster image representing a plurality of coordinate positions of the environment1600, as described above with reference toFIG.14. The raster image includes the plurality of pixels of the geometric viewport and represents coordinate positions of an object, for example pedestrian1620located within the environment1600. In one embodiment, the AV sensor configurator1328renders the virtual point cloud of the virtual sensor1608by segregating a geometric viewport into a plurality of sections, as illustrated and described above with reference toFIGS.14and15C. Each section of the plurality of sections corresponds to a frustum of a plurality of frustums of a virtual viewing range of the virtual sensor1608. The AV sensor configurator1328generates, using the plurality of sections of the geometric viewport, the virtual point cloud of the virtual sensor1608. In one embodiment, the geometric viewport has a width that increases as a number of the plurality of frustums increases. In one embodiment, the segregating of the geometric viewport into the plurality of sections includes mapping a near plane of each frustum of the plurality of frustums onto a corresponding section of the plurality of sections, as described and illustrated above with reference toFIGS.14and15B. Referring now toFIG.16, a quality metric of the virtual sensor1608is determined using the virtual point cloud. The quality metric reflects the range and visibility of the virtual sensor1608and is used to compare different types of visual sensors under different operating conditions of the AV1604. The quality metric may be expressed as a vector of different components, for example, viewing range, or a weighted aggregate of the components. 
Certain components that are more important on rainy days, for example determining a reflectance of a surface of an object may be weighted higher than other components of the quality metric. In one embodiment, the quality metric includes a range of the virtual sensor1608or a visibility of the virtual sensor1608. The range and visibility of the virtual sensor1608may depend on a speed at which an object, for example pedestrian1620is scanned. Certain virtual sensors may include oscillating plane mirrors, a polygonal mirror, or a dual-axis scanner that increase the range in certain weather conditions. In one embodiment, the quality metric includes a point density of the virtual point cloud. The point density refers to an average number of points per unit area, which may be expressed as points per square meter. The point density may also be determined as an average distance between points (nominal point spacing). In one embodiment, the quality metric includes a vertical accuracy of the virtual sensor1608. The vertical accuracy is expressed as the root mean square error (RMSE) and is a measure of the absolute deviation of the point cloud data from a known vertical datum, such as a surveyed location. In one embodiment, the quality metric includes a precision of the virtual sensor1608, which refers to the repeatability of a sensor measurement. The quality metric may be affected by the types of light that the virtual sensor1608uses to image an object, for example pedestrian1620. For example, a typical LIDAR uses ultraviolet, visible, or near infrared light to image objects. The quality metric may include the range of materials, including metals, non-metallic objects, or rocks that the virtual sensor can detect. In one embodiment, the AV sensor configurator1328determines the quality metric of the virtual sensor1608by determining, using the virtual point cloud of the virtual sensor1608, a size of a blind spot of the sensor of the AV1604. Determination of a blind spot of a virtual sensor is described in detail above with reference toFIG.14. Referring now toFIG.16, the pedestrian1628may be located in a blind spot of the virtual sensor1608. The blind spot includes a plurality of coordinate positions of the environment1600. If the pedestrian1628is located in a blind spot, the plurality of coordinate positions sensed by the virtual sensor1608will be free of the pedestrian1628. The size of a blind spot of a sensor affects sensor quality and the viewing range. For example, a blind spot located where a pedestrian or another vehicle suddenly crosses a street may lead to a collision with the AV. Therefore, the quality metric is based on the size of the blind spot of the virtual sensor1608. In one embodiment, the AV sensor configurator1328determines the quality metric of the virtual sensor1608by rendering, using a timestamp, a state of the environment1600. For example, two-dimensional LIDAR data may be rendered in a floating point binary format and the timestamp of the two-dimensional LiDAR data1352is stored. A virtual viewing range of the virtual sensor1608at the timestamp is determined. The different virtual sensor types may be compared using the virtual viewing ranges to determine which sensor type is best suited to the environment1600. In one embodiment, the AV sensor configurator1328determines the quality metric of the virtual sensor1608by rendering, using the virtual point cloud of the virtual sensor1608, a raster image describing the environment1600. 
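The point density, vertical RMSE, and weighted aggregation of quality components described above might be computed roughly as in the following non-limiting sketch; the component names, scores, and rainy-day weights are illustrative assumptions.

```python
import math

def point_density(points, area_m2):
    """Average number of returns per square meter over the scanned area."""
    return len(points) / area_m2

def vertical_rmse(measured_z, reference_z):
    """Root mean square error of measured elevations against a known datum."""
    n = len(measured_z)
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured_z, reference_z)) / n)

def quality_metric(components, weights):
    """Weighted aggregate of per-component scores (e.g. range, visibility,
    density, vertical accuracy), with weights chosen per operating condition."""
    return sum(weights[name] * score for name, score in components.items())

print(point_density(points=[(0, 0, 0)] * 500, area_m2=100.0))    # 5 returns per m^2
print(vertical_rmse([10.1, 9.9, 10.05], [10.0, 10.0, 10.0]))     # ~0.087 m
components = {"range": 0.9, "visibility": 0.7, "density": 0.8, "accuracy": 0.95}
rainy_weights = {"range": 0.2, "visibility": 0.3, "density": 0.1, "accuracy": 0.4}
print(quality_metric(components, rainy_weights))                 # 0.85
```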
Using the raster image, a reflectance of a surface of an object, for example pedestrian1620in the environment1600is determined. The performance of the virtual sensor1608in the presence of objects having a higher reflectance can affect the range and visibility on rainy days. Generation of the raster image and determination of reflectance is described in detail above with reference toFIG.14. Using the plurality of quality metrics across the different virtual sensors, a range of different sensor types are evaluated. An optimal sensor for operating the vehicle within the environment1600is selected. For example, a spinning LIDAR may be better suited to certain environmental conditions than a solid-state LIDAR or a smart camera. Process for Determination of an Optimal Spatiotemporal Sensor Configuration for an AV FIG.17illustrates a process1700for determination of an optimal spatiotemporal sensor configuration for navigation of the AV1308using simulation of virtual sensors, in accordance with one or more embodiments. In one embodiment, the process ofFIG.1700is performed by the AV sensor configurator1328. Other entities, for example, one or more components of the AV1308perform some or all of the steps of the process1700in other embodiments. Likewise, embodiments may include different and/or additional steps, or perform the steps in different orders. The AV sensor configurator1328generates1704a model of a virtual vehicle operating in the environment1304. The model of the virtual vehicle includes a virtual sensor having a virtual viewing range. In one embodiment, the virtual sensor includes a number of virtual lasers separated into a number of groups. Each group of virtual lasers is angled and spaced from each other group as well as from the virtual viewing range. The individual virtual lasers are angled based on the virtual viewing range and the number of virtual lasers. Thus, different sensors and different sets of lasers and angles are modeled. The AV sensor configurator1328segregates1708the virtual viewing range, for example1530, of the virtual sensor into a plurality of frustums. The virtual viewing range1530of the virtual sensor corresponds to a viewing range of a visual sensor1344of the AV1308operating in the environment1304. The virtual sensor sweeps or scans in a direction of the beam or rays, thus generating a collection of distance measurements within the virtual viewing range1530. The virtual viewing range1530is a range of horizontal and vertical angles through which the virtual sensor captures virtual sensor data. Each frustum is a portion of a solid shape, such as a cone or a pyramid that lies between one or two parallel planes cutting the solid shape. The AV sensor configurator1328generates1712a geometric viewport, for example1500, including a plurality of pixels. The geometric viewport1500has a height, for example1504, corresponding to a number of rays emitted from the virtual sensor. The geometric viewport1500is a viewing region having a polygonal shape used for rendering a representation of the objects1316,1320as an image. For example, a plurality of pixels is used to express the screen coordinates in which the objects1316,1320are rendered. The AV sensor configurator1328segregates1716the geometric viewport1500into a plurality of sections, for example, sections1560,1564. Each section of the plurality of sections corresponds to a frustum of the plurality of frustums. 
Each section of the geometric viewport1500is a virtual area used by the raster image generator1420to scale and size a raster image when rendering the raster image to the geometric viewport1500. In one embodiment, each section of the geometric viewport corresponds to a region of the environment1304that is rendered on the geometric viewport. The AV sensor configurator1328renders1720a virtual point cloud of the virtual sensor, wherein the virtual point cloud includes a plurality of coordinate positions representing a portion of the environment1304located within the virtual viewing range of the virtual sensor. The AV sensor configurator1328uses parameters of the virtual sensor to tune the virtual sensor model and render the virtual point cloud. For example, the AV sensor configurator1328may vary the scan angle, pulse rate frequency, sidelap, or mean point density of the virtual sensor to render the virtual point cloud. The AV sensor configurator1328determines1724, based on the virtual point cloud of the virtual sensor, an optimal spatiotemporal configuration of the visual sensor1344of the AV1308. In one embodiment, the AV sensor configurator1328uses the virtual point cloud to determine parameters of the visual sensor1344, such as a LIDAR. For example, features extracted from the virtual point cloud may be matched to features extracted from the sensor data1352that represent the objects1316. The spatiotemporal configuration of the visual sensors1344of the AV1308are thus fine-tuned via regression analysis and simulation. Process for Determining an Optimal Sensor FIG.18illustrates a process1800for determining an optimal sensor for navigation of an AV1308using simulation of virtual sensors, in accordance with one or more embodiments. In one embodiment, the process ofFIG.1800is performed by the AV sensor configurator1328. Other entities, for example, one or more components of the AV1308perform some or all of the steps of the process1800in other embodiments. Likewise, embodiments may include different and/or additional steps, or perform the steps in different orders. The AV sensor configurator1328receives1804, using one or more processors, data describing an environment1304in which the AV1308is operating. The data describing the environment1304may include a pattern of weather, such as the temperature, whether it is a rainy or snowy day, and the visibility. The data describing the environment1304may also include parameters describing a density of the environment1304, such as a number of buildings per square mile, the amount of the environment1304that is covered by road surface, a number of pedestrians, an amount of vegetation per square mile, etc. For each sensor of a plurality of sensors1344of the AV1308, the AV sensor configurator1328generates1808, using the one or more processors, a model of a virtual AV operating in the environment1304. The model of the virtual AV includes a virtual sensor corresponding to the sensor. The AV sensor configurator1328renders1812, using the received data describing the environment1304, a virtual point cloud of the virtual sensor. In one embodiment, the model of the virtual AV includes a position of the virtual sensor. For example, the position of the virtual sensor may be denoted by rectangular coordinates within the environment1304, a spatiotemporal configuration relative to the AV1308, or by a distance from an object1316. 
The AV sensor configurator1328renders the virtual point cloud of the virtual sensor by projecting a geometric shape that intersects a virtual laser generated by the virtual sensor. The AV sensor configurator1328determines a position of the geometric shape relative to the position of the virtual sensor. The position of the geometric shape is then used to form the virtual point cloud data. The AV sensor configurator1328determines1816, using the virtual point cloud, a quality metric of the virtual sensor. The quality metric reflects the range and visibility of the virtual sensor and is used to compare different types of visual sensors under different operating conditions of the AV1308. The quality metric may be expressed as a vector of different components, for example, viewing range, or a weighted aggregate of the components. Certain components that are more important on rainy days, for example determining a reflectance of a surface of an object, may be weighted higher than other components of the quality metric. The AV sensor configurator1328selects1820, using the plurality of quality metrics, an optimal sensor of the plurality of sensors1344for operating the AV1308within the environment1304. Additional Embodiments In some embodiments, one or more processors of a vehicle are used to receive data describing an environment in which the vehicle is operating. For each of multiple sensors of the vehicle, the one or more processors generate a model of a virtual vehicle operating in the environment. The model of the virtual vehicle includes a virtual sensor corresponding to the sensor. The received data describing the environment is used to render a virtual point cloud of the virtual sensor. The virtual point cloud is used to generate a quality metric of the virtual sensor. The quality metric includes a range of the virtual sensor or a visibility of the virtual sensor. The quality metrics are used to select an optimal sensor of the multiple sensors for operating the vehicle within the environment. In some embodiments, the quality metric further includes a point density of the virtual point cloud. In some embodiments, the quality metric further includes a vertical accuracy of the virtual sensor. In some embodiments, the quality metric further includes a precision of the virtual sensor. In some embodiments, the quality metric further includes a virtual viewing range of the virtual sensor. In some embodiments, the determining of the quality metric of the virtual sensor includes rendering, using a timestamp, a state of the environment. The one or more processors determine the virtual viewing range of the virtual sensor at the timestamp. In some embodiments, the determining of the quality metric of the virtual sensor includes determining, using the virtual point cloud of the virtual sensor, a size of a blind spot of the sensor of the vehicle. In some embodiments, the blind spot includes multiple coordinate positions of the environment. An object is located at the blind spot. The multiple coordinate positions are free of the object. In some embodiments, the virtual sensor includes a topographic LIDAR, a bathymetric LIDAR, or a terrestrial LIDAR. In some embodiments, the model of the virtual vehicle further includes a position of the virtual sensor. The rendering of the virtual point cloud of the virtual sensor includes projecting, using the one or more processors, a geometric shape that intersects a virtual laser generated by the virtual sensor.
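Returning to the determination1816of the quality metric and the selection1820of the optimal sensor, the weighted aggregation of quality-metric components described above can be sketched as follows; the component names, example scores, and rainy-day weights are illustrative assumptions.

```python
# Minimal sketch of scoring virtual sensors with a weighted aggregate of
# quality-metric components and picking the best one. The components, scores,
# and weights (e.g., reflectance weighted more heavily on rainy days) are
# illustrative assumptions, not values used by the AV sensor configurator.
def aggregate_quality(metric, weights):
    """Weighted sum of quality-metric components such as range, visibility,
    point density, and reflectance sensitivity."""
    return sum(weights[k] * metric[k] for k in weights)

def select_optimal_sensor(metrics_by_sensor, weights):
    """Return the sensor whose aggregated quality metric is highest."""
    return max(metrics_by_sensor,
               key=lambda s: aggregate_quality(metrics_by_sensor[s], weights))

metrics_by_sensor = {
    "spinning_lidar":    {"range": 0.9, "visibility": 0.7, "point_density": 0.8, "reflectance": 0.6},
    "solid_state_lidar": {"range": 0.7, "visibility": 0.8, "point_density": 0.9, "reflectance": 0.5},
    "smart_camera":      {"range": 0.5, "visibility": 0.9, "point_density": 0.4, "reflectance": 0.3},
}
# On a rainy day, reflectance-related components might be weighted more heavily.
rainy_weights = {"range": 0.3, "visibility": 0.2, "point_density": 0.1, "reflectance": 0.4}
print(select_optimal_sensor(metrics_by_sensor, rainy_weights))
```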
The one or more processors determine a position of the geometric shape relative to the position of the virtual sensor. In some embodiments, the virtual sensor is a virtual spinning LIDAR. The rendering of the virtual point cloud of the virtual sensor includes simulating, using the one or more processors, rotational movement of a motor of the virtual spinning LIDAR. In some embodiments, the determining of the quality metric of the virtual sensor includes rendering, using the virtual point cloud of the virtual sensor, a raster image describing the environment. The raster image is used to determine a reflectance of a surface of an object in the environment. In some embodiments, the rendering of the virtual point cloud of the virtual sensor includes segregating, using the one or more processors, a virtual viewing range of the virtual sensor into multiple frustums. The virtual viewing range of the virtual sensor corresponds to a viewing range of the sensor of the vehicle. The frustums are used to generate the virtual point cloud of the virtual sensor. In some embodiments, the rendering of the virtual point cloud of the virtual sensor includes generating, using the one or more processors, a geometric viewport including multiple pixels. The geometric viewport has a height corresponding to a number of rays emitted from the virtual sensor. The geometric viewport is used to generate the virtual point cloud of the virtual sensor. In some embodiments, a raster image is rendered representing multiple coordinate positions of the environment. The raster image includes the multiple pixels of the geometric viewport and represents coordinate positions of an object located within the environment. In some embodiments, the rendering of the virtual point cloud of the virtual sensor includes segregating, using the one or more processors, a geometric viewport into multiple sections. Each section corresponds to a frustum of multiple frustums of a virtual viewing range of the virtual sensor. The multiple sections of the geometric viewport are used to generate the virtual point cloud of the virtual sensor. In some embodiments, the geometric viewport has a width that increases as a number of the frustums increases. In some embodiments, the segregating of the geometric viewport into the multiple sections includes mapping a near plane of each frustum of the multiple frustums onto a corresponding section of the multiple sections. In some embodiments, the raster image is used to determine a reflectance of a surface of the object. A control module of the vehicle is used to operate the vehicle to avoid a collision of the vehicle with the object based on the reflectance. In some embodiments, a distinct raster image representing the object is rendered onto the geometric viewport. A representational quality of the distinct raster image associated with the reflectance of a surface of the object is determined. In the foregoing description, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. The description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the invention, and what is intended by the applicants to be the scope of the invention, is the literal and equivalent scope of the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. 
Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. In addition, when we use the term “further including,” in the foregoing description or following claims, what follows this phrase can be an additional step or entity, or a sub-step/sub-entity of a previously-recited step or entity.
138,905
11861785
DETAILED DESCRIPTION Certain aspects of this disclosure are provided below. Some of these aspects may be applied independently and some of them may be applied in combination as would be apparent to those of skill in the art. In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of aspects of the application. However, it will be apparent that various aspects may be practiced without these specific details. The figures and description are not intended to be restrictive. The ensuing description provides example aspects only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the example aspects will provide those skilled in the art with an enabling description for implementing an example aspect. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the application as set forth in the appended claims. Ray tracing is a graphics processing and rendering technique that can be used to produce photorealistic images by modeling light transport to simulate optical effects. Ray tracing can realistically simulate the lighting of a three-dimensional (3D) scene and its objects by rendering physically correct reflections, refractions, shadows, and indirect lighting in the two-dimensional (2D) view of the scene. Ray tracing can be a computationally intensive technique. For example, the computational resources (e.g., compute time) used to ray trace a single frame can increase with the number of rays that are traced per frame and/or can increase with the computational resources (e.g., compute time) expended to trace each individual ray. Due to this computational complexity, ray tracing may often be limited to non-real time uses. Real-time ray tracing has long been sought after for uses such as rendering video games, virtual reality (VR) and augmented reality (AR) experiences, etc. Real-time ray tracing has recently become possible, using, for example, hardware acceleration units and/or graphics processing units (GPUs) that can provide parallelization of the underlying calculations for each individual ray that is projected into a scene. The number of rays that can be projected into a scene for each frame is often relatively small, as the rendering time per frame cannot exceed some maximum amount without losing real-time performance. The image quality when using real-time ray tracing can be improved by increasing the number of rays projected into the scene per frame. This can be achieved by increased parallelization (e.g., providing additional computational resources that allow more rays to be traced simultaneously). However, hardware upgrades can carry high upfront costs and may be difficult or impossible to retrofit onto existing systems and platforms. A scalable and efficient solution that can improve the real-time performance of existing ray tracing hardware is desirable. For example, the number of rays projected into the scene per frame can also be increased by tracing each ray more efficiently (e.g., reducing the compute time per ray trace operation allows more ray trace operations to be performed in the same fixed rendering time per frame). 
As described in more detail below, systems and techniques are described herein for providing accelerated ray tracing operations, such as by producing tight world-space bounding regions (e.g., bounding boxes) at a controlled computational cost. FIG.1is a diagram illustrating an example of a ray tracing technique100. As illustrated, a ray tracing system can perform ray tracing by casting a plurality of rays (e.g., ray152a, ray154a, and ray156a) from a virtual or imaginary view camera110(e.g., which determines the view into the 3D scene), through the pixels140of a 2D viewing plane, out into the 3D scene. The ray tracing system can then trace the path of each ray to determine if the ray reaches back to a light source120in the 3D scene. In this technique, each ray is projected through a particular pixel of the plurality of pixels140that are located on the 2D viewing plane. In the event a particular ray reaches a light source (e.g., light source120) in the 3D scene, then information from that ray can be used to contribute to the final color and/or illumination level of the pixel (from the pixels140) through which the particular ray was projected. For example, when rays projected into the scene intersect with one or more objects (e.g., such as object130), color and lighting information from the point(s) of intersection on the object(s) surfaces can contribute to the final colors and illumination levels of the pixels associated with the rays. Similarly, different objects can have different surface properties that reflect, refract, and/or absorb light in different ways, which can also contribute to the final pixel colors and/or illumination level. Rays can also reflect off of objects and hit other objects in the scene, or travel through the surfaces of transparent objects, etc., before reaching a light source (e.g., light source120). For example, as illustrated inFIG.1, ray152ais projected into the scene and intersects object130, resulting in generation of a first reflection ray152band a second reflection ray152c. The first reflection ray152breaches light source120and consequently, can contribute color or illumination information for rendering the particular one of the pixels140through which ray152was projected. The second reflection ray152cdoes not reach light source120, and consequently, may not directly contribute color or illumination information back to the pixels140. A same or similar scenario is illustrated for ray154aand its first reflection ray154b(which reaches light source120) and second reflection ray154c(which does not reach light source120), as well as for ray156aand its first reflection ray156b(which reaches light source120) and second reflection ray156c(which does not reach light source120). As mentioned previously, each interaction between a ray and an object or surface within the 3D scene can contribute color and/or illumination information back to the particular pixel through which the ray was projected. In some cases, tracing a greater number of interactions per ray can provide increased visual fidelity (e.g., quality) of the rendered scene at the expense of increased computational cost (e.g., time). For example, a ray tracing approach that prioritizes speed over quality might calculate or otherwise determine only the first reflection for each ray, while a ray tracing approach that prioritizes quality over speed might determine three or more reflections per ray. 
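A minimal sketch of the ray tracing loop just described, in which one ray is cast through each pixel of the 2D viewing plane and a pixel is lit when its ray (via a single secondary ray toward the light) reaches the light source, is shown below. The one-sphere scene, the point light, and the function names are illustrative assumptions and omit the refraction and multi-bounce behavior discussed above.

```python
# Minimal sketch of per-pixel ray casting: project a ray through each pixel,
# find the nearest hit on a single sphere, and brighten the pixel when a
# secondary ray toward the light is not blocked (a stand-in for ray 152b).
import math

def sphere_hit(origin, d, center, radius):
    """Smallest positive t where the ray origin + t*d meets the sphere, or None."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2 * sum(oc[i] * d[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def render(width=16, height=16, cam=(0, 0, -3), sphere=((0, 0, 0), 1.0), light=(5, 5, -5)):
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Project a ray from the camera through pixel (x, y) on the viewing plane.
            px = (x + 0.5) / width - 0.5
            py = 0.5 - (y + 0.5) / height
            norm = math.sqrt(px * px + py * py + 1)
            d = (px / norm, py / norm, 1 / norm)
            t = sphere_hit(cam, d, *sphere)
            if t is None:
                row.append(0.0)           # ray misses the scene object
                continue
            hit = [cam[i] + t * d[i] for i in range(3)]
            to_light = [light[i] - hit[i] for i in range(3)]
            n = [hit[i] - sphere[0][i] for i in range(3)]
            facing = sum(n[i] * to_light[i] for i in range(3))
            row.append(1.0 if facing > 0 else 0.2)
        image.append(row)
    return image

img = render()
print(len(img), len(img[0]), max(max(r) for r in img))
```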
In some cases, after either a maximum number of reflections is observed or a ray travels a certain distance without intersection, the ray can cease to travel and the pixel's value can be updated. In some cases, the ray can cease to travel and the pixel's value can be updated based on a ray traveling a certain distance without reflection (e.g., reflection being one possible outcome of an intersection). In some cases, the number of rays that are projected through each pixel of the 2D viewing plane can be adjusted based on a similar tradeoff between computational cost and visual fidelity. Ray tracing can therefore become very costly in terms of the time and/or computational power that is required to render realistic-looking scenes, based, for example, on the number of rays projected into the scene and the number of additional rays that are traced for secondary reflections and refractions. Due to this computational complexity, ray tracing is typically limited to non-real time uses (e.g., scenes or visual effects that could be rendered in advance for film and television). As noted above, real-time ray tracing has long been sought after for use cases such as rendering video games and virtual reality (VR) and augmented reality (AR) experiences, and has recently become possible using hardware acceleration units and/or graphics processing units (GPUs) that parallelize the underlying calculations for each individual ray. Because the rendering time per frame cannot exceed some maximum amount without losing real-time performance, the number of rays that can be projected into the scene for each frame remains relatively small; beyond adding parallel hardware, with its upfront costs and retrofit difficulties, the practical way to increase that number is to trace each ray more efficiently, so that more ray trace operations can be performed in the same fixed rendering time per frame. One example of a ray tracing acceleration technique utilizes tree-based acceleration structures to improve the efficiency of ray intersection tests. For example, scenes can be converted into bounding volume hierarchies (BVHs), which are hierarchical tree structures composed of ever-tighter bounding volumes (also referred to as “bounding regions” such as bounding boxes or “axis-aligned bounding boxes” (AABBs)). For example,FIG.2Aillustrates an example structure200ain which a scene containing a plurality of triangle primitives252a-252eis arranged into a series of ever-tighter bounding boxes256a-256e. Scenes may contain hundreds, thousands, or more primitives, but for purposes of clarity, only the five triangle primitives252a-252eare depicted. The bounding boxes256a-256ecan be AABBs, which are bounding boxes having a minimized area or volume within which all points of the enclosed primitives (e.g., triangle primitives252a-252e) may lie.
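Computing such a minimized, axis-aligned bounding box over a set of triangle primitives can be sketched as follows; the triangle coordinates are illustrative.

```python
# Minimal sketch of computing an axis-aligned bounding box (AABB) with a
# minimized extent around a set of triangle primitives, as with the
# ever-tighter bounding boxes 256a-256e described above.
def aabb_over_triangles(triangles):
    """Return (min_corner, max_corner) enclosing every vertex of every triangle."""
    xs, ys, zs = [], [], []
    for tri in triangles:
        for (x, y, z) in tri:
            xs.append(x); ys.append(y); zs.append(z)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

triangles = [
    [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    [(2, 2, 1), (3, 2, 1), (2, 3, 2)],
]
print(aabb_over_triangles(triangles))
```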
The bounding boxes may be axis-aligned such that the edges of each bounding box256a-256eare parallel to a coordinate axis (e.g., the x, y, and z axes).FIG.2Billustrates an example hierarchical data structure200bhaving nodes that are associated with the bounding boxes256a-256eand triangle primitives252a-252eshown inFIG.2A. The hierarchical data structure200bcan be a BVH. For example, a BVH root node262acan correspond to the bounding box256ashown inFIG.2A; similarly, an intermediate BVH node262bcan correspond to the bounding box256bofFIG.2A; intermediate BVH node262ccan correspond to the bounding box256cofFIG.2A, and so on. A BVH root node (e.g., BVH root node262aofFIG.2B) contains an AABB (e.g., bounding box256aofFIG.2A) enclosing all the individual scene or object geometry contained in the BVH leaf nodes. Each primitive in the BVH root node is assigned to either the left or right child node. The child nodes contain the AABBs containing their assigned geometry, and this geometry is likewise assigned to left or right child nodes, recursively until the BVH leaf nodes contain a small number of primitives, e.g., four or fewer. Depending on the extent of any scene changes and/or object deformations, the next and any subsequent frames may require one or more new BVH build operations or BVH refitting/update operations based on the scene changes. Testing each ray for intersection against every primitive in the scene can be inefficient and computationally expensive. BVHs can be used to accelerate ray intersection testing techniques. For example, each ray can be tested for intersection against BVH bounding boxes using a depth-first tree traversal process instead of against every primitive in the scene. As mentioned previously, bounding boxes encompass or surround different amounts of scene geometry or primitives and become increasingly tighter with the depth of the BVH tree structure. Bounding boxes (e.g., AABBs or other bounding boxes) or other bounding regions can be defined with respect to world-space or object-space. World-space can be considered a constant (e.g., the coordinate space of the overall 3D scene). Objects can exist in their own coordinate space, which is referred to as object-space (e.g., the coordinate space in which the object was modeled or created). For example,FIGS.3A and3Bare diagrams depicting object-space and world-space AABBs (axis-aligned bounding boxes) for the same geometry. Here,FIG.3Aillustrates an object-space AABB320of a geometric scene object310. Scene objects can include the 3D or graphical objects that are present in a 3D scene for which ray tracing is performed. In some cases, geometric scene objects can be scene objects that include geometric primitives such as triangles. In some examples, scene objects can include AABBs or other object representations. Object-space AABB320and scene object310are both shown in the object-space300aof the scene object310.FIG.3Billustrates the same geometric scene object310but transformed into the world-space300bof the scene (e.g., the scene to which scene object310belongs or is located). A world-space AABB330(or other world-space bounding box) encloses both the object-space AABB320and the scene object310. Ray tracing can utilize a two-level acceleration structure system, such as a top-level acceleration structure (TLAS) and a bottom-level acceleration structure (BLAS), as depicted inFIG.4. For example,FIG.4illustrates a TLAS410and a BLAS430, which are described in greater depth below. The TLAS410is built in world-space. 
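The recursive assignment of primitives to left and right child nodes described above forFIG.2Bcan be sketched as follows, before the two-level TLAS/BLAS structure is discussed in more detail. A simple longest-axis median split is used here for brevity; this is an illustrative stand-in and not the SAH-based construction mentioned later.

```python
# Minimal sketch of a recursive BVH build: primitives are assigned to left or
# right children until leaves hold four or fewer primitives. A longest-axis
# median split is used purely for illustration (not the SAH-based build).
def centroid(tri):
    return [sum(v[i] for v in tri) / 3.0 for i in range(3)]

def aabb(tris):
    pts = [v for tri in tris for v in tri]
    return ([min(p[i] for p in pts) for i in range(3)],
            [max(p[i] for p in pts) for i in range(3)])

def build_bvh(tris, leaf_size=4):
    box = aabb(tris)
    if len(tris) <= leaf_size:
        return {"aabb": box, "leaf": True, "triangles": tris}
    # Split along the longest axis of the node's AABB at the median centroid.
    extents = [box[1][i] - box[0][i] for i in range(3)]
    axis = extents.index(max(extents))
    tris = sorted(tris, key=lambda t: centroid(t)[axis])
    mid = len(tris) // 2
    return {"aabb": box, "leaf": False,
            "left": build_bvh(tris[:mid], leaf_size),
            "right": build_bvh(tris[mid:], leaf_size)}

tris = [[(i, 0, 0), (i + 1, 0, 0), (i, 1, 0)] for i in range(10)]
root = build_bvh(tris)
print(root["aabb"], root["leaf"])
```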
TLAS primitives are instances of BLASs, which are defined in object-space. A TLAS can be constructed as a BVH with leaf nodes (including leaf nodes412,414,416,422,424,426, and428) containing a BLAS. For example, the TLAS leaf nodes422,424,426, and428each contain or are otherwise associated with one of the two BLASs440and460. A translation matrix can be encoded in the TLAS leaf node to perform conversion from world-space to object-space and/or vice versa, as described in greater depth below. A BLAS can be constructed for each object in a scene, referred to as a scene object. For example,FIG.4illustrates a BLAS440that may be constructed for a first unique scene object and a BLAS460that may be constructed for a second unique scene object. BLAS440includes leaf nodes442,444,446,452,454,456, and458and BLAS460includes leaf nodes462,464,466,472,474,476, and478. BLAS primitives can be the triangles or the AABBs of procedural primitives used to build the scene object. A bottom level BVH is built over the set of these triangles or AABBs of the scene object, with each BLAS leaf node containing a small number (e.g., up to four, five, or some other number) of triangles or AABBs. For example, in the context ofFIG.4, the BLAS leaf nodes452-458and472-478can each contain some quantity of triangles, AABBs, or other primitives used to build the scene object. In some examples, a BLAS can also be referred to as a “bottom level BVH.” Multiple instances of the same BLAS can be included in a TLAS. For example, if a TLAS includes a car object, then a BLAS of a tire can be included four times. The same BLAS can also be included in or referenced by multiple TLASs, as illustrated inFIG.4. In some examples, a TLAS can be created using an Object-To-World matrix, which transforms an input represented in object-space coordinates to an output representation in world-space coordinates. A World-To-Object matrix can apply the transformation in the opposite direction (e.g., transforming an input represented in world-space coordinates to an output representation in object-space coordinates). In some cases, a TLAS can be built over a set of BLASs by using the Object-To-World matrix to compute the world-space AABB of each BLAS (e.g., the world-space AABB of the BLAS root nodes442and462). A BVH is then built over these world-space AABBs of the BLAS root nodes and can be referred to as a top level BVH or the TLAS410. In some cases, TLAS and BLAS creation can be performed using a similar or identical technique. For example, the same SAH-based (Surface Area Heuristic) algorithm or approach can be utilized for both TLAS and BLAS construction. In some cases, the performance of BVH-accelerated ray tracing can depend on the tightness of the world-space AABBs generated for the BLAS included in or associated with a TLAS leaf node. For example, a tight bounding box will usually outperform a loose bounding box because fewer rays enter the BLAS, and moreover, rays that do enter the BLAS are less likely to pass through empty space.FIG.5Ais a diagram500aillustrating an example of a relatively loose bounding box510athat encloses a scene object530. As shown, the bounding box510ais considered loose as there is a large amount of empty space between the boundary of the bounding box510aand the scene object530.FIG.5Bis a diagram500billustrating an example of a relatively tight bounding box510bthat encloses the same scene object530. 
As illustrated, the bounding box510bis tighter compared to the bounding box510aofFIG.5A, as there is very little empty space between the boundary of the bounding box510band the scene object530. When a ray intersects a BLAS bounding box, the ray is automatically checked for lower-level intersection against each of the constituent primitives within the BLAS. A ray that hits only empty space within the bounding box surrounding a BLAS therefore represents wasted computational work (and increased time/decreased efficiency). In the example ofFIG.5A, the two rays522and524both intersect the relatively loose bounding box510a, but neither ray actually intersects the scene object530. As such, the ray intersections determined for rays522and524with the loose bounding box510awill result in wasted computational work, because the rays522and524in actuality pass through empty space despite their intersection with the loose bounding box510a. In the example ofFIG.5B, the same two rays522and524are shown. Here, because a relatively tight bounding box510bis used to enclose the scene object530, neither of the two rays intersects with the bounding box510b. Therefore, unlike when loose bounding box510awas used to enclose scene object530, neither of the rays522and524will result in an intersection with the bounding box510b, and the wasted computational work is thus avoided. Reducing wasted computational resources by generating tighter world-space bounding boxes is desirable. World-space bounding boxes can include bounding boxes (e.g., AABBs) with coordinates given in world-space, rather than in object-space or other coordinate systems. In some cases, a world-space bounding box can be represented as an object-space bounding box by transforming its world-space coordinates to object-space coordinates (e.g., using a World-to-Object matrix). However, computing tighter world-space bounding boxes is itself associated with a computational overhead that may be incurred each time an updated volume (e.g., BVH) or new volume (e.g., BVH) is generated in response to scene changes between frames. In this case, generating tight world-space bounding boxes for TLAS leaf nodes at a controlled computational cost becomes more desirable. Systems, apparatuses, processes (also referred to as methods), and computer readable media (collectively referred to as “systems and techniques”) are described herein that can provide accelerated ray tracing operations by producing tight world-space bounding regions (e.g., bounding boxes such as AABBs) at a controlled computational cost. Bounding boxes will be used herein as examples of bounding regions. However, any type of bounding region can be used that is not necessarily a “box,” such as a polygon, circle, ellipse, or other shape of bounding region. In some aspects, tight world-space bounding boxes can be determined using one or more ray tracing acceleration data structures. In some examples, the ray tracing acceleration data structure can include a bounding volume hierarchy (BVH) and/or a hierarchical tree. Different approaches to calculating world-space bounding boxes can offer different tradeoffs between computational overhead and ray tracing performance, as will be described in greater depth below. FIG.6is a diagram illustrating an example ray tracing system600, in accordance with some examples of the disclosure. The ray tracing system600can implement the systems and techniques disclosed herein, including aspects associated withFIGS.7A-9E.
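A ray/AABB test of the kind accelerated by such a system can be sketched as a standard slab test, which also illustrates why the loose bounding box510aadmits rays that the tight bounding box510brejects; the box extents and the ray below are illustrative stand-ins rather than the actual geometry ofFIGS.5A and5B.

```python
# Minimal sketch of a ray/AABB slab test. A loose bounding box reports an
# intersection for a ray that only skims the empty space around the object
# (wasted BLAS traversal), while a tighter box rejects the same ray.
def ray_intersects_aabb(origin, direction, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for i in range(3):
        if abs(direction[i]) < 1e-12:
            if origin[i] < box_min[i] or origin[i] > box_max[i]:
                return False
            continue
        t1 = (box_min[i] - origin[i]) / direction[i]
        t2 = (box_max[i] - origin[i]) / direction[i]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0

loose_box = ((-2, -2, -2), (2, 2, 2))        # lots of empty space around the object
tight_box = ((-0.5, -0.5, -0.5), (0.5, 0.5, 0.5))
ray = ((-5.0, 1.5, 0.0), (1.0, 0.0, 0.0))    # passes near, but not through, the object

print(ray_intersects_aabb(*ray, *loose_box))  # True  -> BLAS traversed needlessly
print(ray_intersects_aabb(*ray, *tight_box))  # False -> wasted work avoided
```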
The ray tracing system600can perform various tasks and operations such as, for example, ray tracing tasks and operations (e.g., ray-primitive intersection, ray-bounding volume intersection, ray-AABB intersection, acceleration data structure construction and/or updating, rendering, etc.). In the example shown inFIG.6, the ray tracing system600includes storage602, compute components610, a ray tracing engine620, an acceleration data structure engine622, and a graphics processing engine624. It should be noted that the components602through624shown inFIG.6are non-limiting examples provided for illustration and explanation purposes, and other examples can include more, less, and/or different components than those shown inFIG.6. For example, in some cases the ray tracing system600can include one or more display devices, one more other processing engines, one or more other hardware components, and/or one or more other software and/or hardware components that are not shown inFIG.6. An example architecture and example hardware components that can be implemented by the ray tracing system600are further described below with respect toFIG.11. References to any of the components of the ray tracing system600in the singular or plural form should not be interpreted as limiting the number of such components implemented by the ray tracing system600to one or more than one. For example, references to a processor in the singular form should not be interpreted as limiting the number of processors implemented by the ray tracing system600to one. One of ordinary skill in the art will recognize that, for any of the components shown inFIG.6, the ray tracing system600can include only one of such component(s) or more than one of such component(s). The ray tracing system600can be part of, or implemented by, a single computing device or multiple computing devices. In some examples, the ray tracing system600can be part of an electronic device (or devices) such as a desktop computer, a laptop or notebook computer, a tablet computer, a set-top box, a smart television, a display device, a gaming console, a video streaming device, an IoT (Internet-of-Things) device, a smart wearable device (e.g., a head-mounted display (HMD), smart glasses, an extended reality (XR) device (e.g., a VR headset or head-mounted display (HMD), an AR headset, HMD, or glasses, etc.), or any other suitable electronic device(s). In some implementations, the storage602, compute components610, ray tracing engine620, acceleration data structure engine622, and graphics processing engine624can be part of the same computing device. For example, in some cases, the storage608, compute components610, ray tracing engine620, acceleration data structure engine622, and graphics processing engine624can be integrated into a smartphone, laptop, tablet computer, smart wearable device, gaming system, and/or any other computing device. In other implementations, the storage602, compute components610, ray tracing engine620, acceleration data structure engine622, and graphics processing engine624can be part of two or more separate computing devices. For example, in some cases, some of the components602through624can be part of, or implemented by, one computing device and the remaining components can be part of, or implemented by, one or more other computing devices. The storage602can be any storage device(s) for storing data. Moreover, the storage602can store data from any of the components of the ray tracing system600. 
For example, the storage602can store data from the compute components610, data from the ray tracing engine620, data from the acceleration data structure engine622, and/or data from the graphics processing engine624. In some examples, the storage602can include a buffer for storing data for processing by the compute components610. The compute components610can include a central processing unit (CPU)612, a graphics processing unit (GPU)614, a memory616, and/or one or more hardware accelerator components618. In some implementations, the compute components610can include other processors or compute components, such as one or more digital signal processors (DSPs), one or more neural processing units (NPUs), and/or other processors or compute components. The compute components610can perform various operations such as ray-primitive intersection, ray-bounding volume intersection, ray-AABB intersection, acceleration data structure construction, acceleration data structure updating, scene rendering, rasterization, geometry processing, pixel processing, visibility processing, etc. The operations for the ray tracing engine620, the acceleration data structure engine622, and the graphics processing engine624(and any other processing engines) can be implemented by any of the compute components610. In one illustrative example, the operations of one or more of the ray tracing engine620, the acceleration data structure engine622, and the graphics processing engine624can be executed by the GPU614. In some cases, the operations of one or more of the ray tracing engine620, the acceleration data structure engine622, and the graphics processing engine624can be executed by the CPU612. In some cases, the operations of one or more of the ray tracing engine620, the acceleration data structure engine622, and the graphics processing engine624can be executed by a combination of CPU612and GPU614. In some cases, the compute components110can include other electronic circuits or hardware, computer software, firmware, or any combination thereof, to perform any of the various operations described herein. In some examples, the ray tracing engine620can include one or more ray tracing Application Programming Interfaces (APIs). In one illustrative example, the ray tracing engine620can include one or more ray intersection engines. For example, ray tracing engine620can include one or more ray-primitive intersection engines and/or can include one or more ray-bounding volume intersection engines. In some cases, ray tracing engine620can include one or more ray-triangle intersection engines and/or can include one or more ray-AABB intersection engines. In some examples, the ray tracing engine620can implement one or more ray intersection engines using one or more hardware-accelerated ray tracing units (RTUs) and/or arithmetic logic units (ALUs). In some examples, the acceleration data structure engine622can construct or generate one or more acceleration data structures. The acceleration data structures generated by acceleration data structure engine622can be used by one or more of ray tracing engine620and graphics processing engine624. In one illustrative example, acceleration data structure engine622can construct or generate a Bounding Volume Hierarchy (BVH). In some cases, acceleration data structure engine622can generate two-level acceleration structures (e.g., an acceleration data structure including a TLAS and one or more BLASs). 
The acceleration data structure engine622can be implemented using the CPU612, the GPU614, or a combination of the two. In some examples, the acceleration data structure engine622can additionally, or alternatively, be implemented using one or more of the dedicated hardware accelerator components618. In some examples, the graphics processing engine624can include a graphics processing pipeline. For example, graphics processing engine624can include, but is not limited to, one or more of a geometry processing stage, a visibility stage, a rasterization stage, and a pixel processing pipeline. In some examples, graphics processing engine624can communicate with or access the memory616of the compute components610. Memory616can include one or more of a system memory, a frame buffer, a graphics memory, one or more caches, etc. In some cases, the ray tracing system600(e.g., using the ray tracing engine620, the acceleration data structure engine622, and/or the graphics processing engine624) can obtain an acceleration data structure that includes one or more primitives of a scene object. For example, the ray tracing system600can obtain the acceleration data structure from storage602and/or memory616. In some cases, the acceleration data structure can be generated or constructed using the acceleration data structure engine622. When the ray tracing system600obtains an acceleration data structure, the ray tracing engine620can apply a graph cut to the acceleration data structure. A graph cut is a partition of the vertices of a graph into two disjoint subsets (e.g., a graph cut divides the vertices of the graph into a first subset and a second subset, where no vertices are present in both the first subset and the second subset). In some examples, the acceleration data structure engine622can apply a graph cut to the acceleration data structure. In some cases, the ray tracing engine620and the acceleration data structure engine622can work in combination to apply a graph cut to the acceleration data structure. In some aspects, the ray tracing system600(e.g., using the ray tracing engine620, the acceleration data structure engine622, and/or the graphics processing engine624) can determine a set of nodes of the acceleration data structure based on the graph cut. The set of nodes determined by the ray tracing system600can be located adjacent to the graph cut, as will be described in greater depth below with respect to the example graph cuts illustrated inFIGS.9A and9B. In some examples, a set of nodes adjacent to a graph cut can be located immediately above the graph cut line (e.g., the graph cut line separates the set of nodes and their child nodes). In some cases, a set of nodes adjacent to a graph cut can be located immediately below the graph cut line (e.g., the graph cut line separates the set of nodes and their parent nodes). In some examples, the ray tracing system600can determine the set of nodes based on the graph cut using the ray tracing engine620and/or the acceleration data structure engine622. In some cases, the ray tracing system600(e.g., using the ray tracing engine620, the acceleration data structure engine622, and/or the graphics processing engine624) can generate a world-space bounding box for a scene object. For example, the ray tracing system600can generate a world-space bounding box for the scene object that is associated with or included in the obtained acceleration data structure described previously above. In some cases, the world-space bounding boxes can be AABBs.
In one illustrative example, the ray tracing system600can generate the world-space bounding boxes for a set of nodes determined based on a graph cut, using the acceleration data structure engine622(and/or the ray tracing engine620). The acceleration data structure engine622can obtain one or more representations of a scene object or other scene geometry and generate and/or update a BVH or other acceleration data structure that includes the scene object or scene geometry. In some examples, the acceleration data structure engine622can obtain representations of a scene object or other scene geometry at least in part from one or more of the storage602and the memory616. In some cases, the acceleration data structure engine622can obtain representations of a scene object or other scene geometry from the ray tracing engine620(and/or one or more of the compute components610). The acceleration data structure engine622can operate over representations of scene objects and scene geometry using both object-space representations and world-space representations. In some examples, the acceleration data structure engine622can use one or more Object-To-World matrices and/or World-To-Object matrices to transform scene objects/geometry from object-space representations into world-space representations, and from world-space representations to object-space representations, respectively. The following discussion makes reference to the examples ofFIGS.7A and7B, which both depict a scene object710in its object-space (e.g., prior to scene object710being transformed into a world-space representation according to one or more aspects of the systems and techniques described herein). Scene object710can include a plurality of geometric primitives each having one or more vertices. For instance, scene object710can include a plurality of triangles, polygons, procedural primitives, etc. In some examples, scene object710can be represented by and/or stored in an acceleration data structure, such as a BVH or hierarchical tree. For example, scene object710can be represented by or stored in a BLAS, as previously discussed above. The BLAS containing scene object710can itself be contained in, referenced, or pointed to by one or more TLAS leaf nodes. For instance, as noted above with respect toFIG.4, a given BLAS can include a BVH for a unique scene object and therefore may be included in multiple different TLAS leaf nodes. As illustrated,FIG.7Adepicts scene object710enclosed by an object-space bounding box720. In some examples, the object-space bounding box720is an object-space AABB determined for scene object710. In some cases, object-space bounding box720can be the BLAS root node AABB (e.g., because object-space bounding box720includes all of the geometry and/or primitives that comprise scene object710). FIG.7Billustrates the same scene object710enclosed by a proxy geometry740. In some examples, the proxy geometry740can be a convex hull or a convex hull approximation. In some examples, the proxy geometry740can be a bounding box (e.g., AABB). The proxy geometry740(whether a convex hull, convex hull approximation, or otherwise) can be determined based on object-space vertices associated with scene object710. For example, where scene object710is stored as a BLAS, proxy geometry740can be determined based on the object-space vertices of the BLAS (e.g., proxy geometry740can be the convex hull of the BLAS root node). 
In some cases, proxy geometry740can be determined based on the object-space vertices of the geometry and/or primitives stored within the BLAS. The following discussion also makes reference to the example ofFIG.8A, which depicts the object-space view700aofFIG.7Aas transformed into a world-space view800a. For example, as depicted inFIG.8A, scene object710and its associated object-space AABB720have both been transformed from object-space into world-space (e.g., using an Object-To-World matrix). Transformed scene object710and transformed object-space AABB720are further shown as being enclosed within a world-space bounding box830. In some examples, one or more of the world-space view800a, the scene object710, the object-space AABB720, and/or the calculated world-space bounding box830depicted inFIG.8Acan be the same as or similar to the world-space300b, the scene object310, the object-space AABB320, and/or the calculated world-space bounding box330depicted inFIG.3B, respectively. In some examples, the world-space bounding box830can be a world-space AABB calculated to enclose the world-space transformed vertices of object-space AABB720. Where the world-space bounding box830encloses all of the world-space transformed vertices of object-space AABB720, it is noted that world-space bounding box830will also enclose each individual vertex of the geometry and/or primitives included in scene object710(e.g., because the individual vertices of scene object710are themselves enclosed by object-space AABB720). As mentioned above, in some cases object-space AABB720can be a BLAS root node AABB, in which case world-space AABB830may be generated for one or more TLAS leaf nodes that contain the BLAS/BLAS root node. In one illustrative example, the systems and techniques described herein can transform vertices (e.g., vertices corresponding to the eight corners) of the BLAS root node AABB720(e.g., the AABB of the root node of the BLAS associated with the TLAS leaf node) into world-space and place the world-space AABB830around the vertices (e.g., the eight transformed corners/vertices). The world-space AABB830that is generated from the vertices or corners of the object-space AABB720enclosing scene object710can be used (e.g., by the ray tracing system600) to perform one or more ray tracing operations. In one illustrative example, continuing in the scenario above in which object-space AABB720and scene object710are stored in a BLAS that is itself associated with a TLAS leaf node, the generated world-space AABB830can be used (e.g., by the ray tracing system600) to perform ray tracing operations such as ray intersection tests. For example, if a ray projected into the scene is determined by the ray tracing system600to intersect the world-space AABB830generated for a TLAS leaf node, then the BLAS associated with that TLAS leaf node will be traversed and further ray intersection tests will be performed for the child nodes and/or leaf nodes of the BLAS; if a ray projected into the scene is determined by the ray tracing system600as not intersecting the world-space AABB830generated for a TLAS leaf node, then the BLAS associated with that TLAS leaf node need not be traversed. As such, it can be desirable to generate a world-space AABB (e.g., such as world-space AABB830) that is tight with respect to the actual geometry or primitives contained within the BLAS or object-space AABB associated with a TLAS leaf node, as has been described above. 
However, approaches that generate world-space AABBs for TLAS leaf nodes based on transforming only the eight corners/vertices of object-space AABB720into world-space often result in an overly loose (e.g., non-tight) bounding box. World-space AABB830is an example of a loose or non-tight bounding box, as world-space AABB830can be seen to include significant amounts of empty space beyond the volume occupied by scene object710and beyond the volume occupied by object-space AABB720. As noted above, the ray tracing system600can implement the systems and techniques described herein to provide accelerated ray tracing operations by producing world-space bounding boxes that are tight to an underlying scene object and have a controlled computational cost. In one illustrative example, the ray tracing system600can use object-space representations of scene objects and/or scene primitives to generate world-space bounding boxes with greater tightness relative to an underlying scene object, as will be described in greater depth below. In some examples, the generated world-space bounding boxes can be world-space AABBs. In a first approach, the ray tracing system600can obtain a maximally tight world-space bounding box for a scene object or other set of scene primitives by individually transforming each vertex of the scene object/scene primitives from an object-space representation to a world-space representation. A world-space bounding box subsequently calculated over the resulting set of all the transformed world-space vertices (e.g., the object-space vertices of the scene object that have been transformed into world-space representations) will have a maximal tightness relative to the underlying scene object. FIG.8Billustrates an example of this first approach. In particular,FIG.8Bis a diagram800billustrating an example of a scene object710that has been transformed into world-space and enclosed by a maximally tight world-space bounding box850. In some cases, the maximally tight world-space bounding box850can be an AABB. The scene object710depicted inFIG.8Bcan be the same as the scene object710depicted inFIGS.7A-8Aand described above. It is noted that, in comparison to the relatively loose world-space bounding box830ofFIG.8A, the maximally tight world-space bounding box850ofFIG.8Bis computed for the same scene object710yet includes significantly less empty space beyond the volume occupied by scene object710. In some examples, the ray tracing system600can obtain the maximally tight world-space bounding box/AABB850by transforming each vertex associated with the geometry of scene object710from object-space to world-space, using, for example, an Object-To-World matrix as previously described above. In one illustrative example, the ray tracing system600can calculate or otherwise determine the maximally tight world-space AABB850for a TLAS leaf node. The TLAS leaf node can contain or otherwise be associated with a BLAS that was previously constructed for a given scene object such as scene object710. Because ray tracing performance can depend on bounding box or AABB tightness, this first approach of individually transforming each vertex of the scene primitives from object-space to world-space can offer the highest ray tracing performance as compared to other approaches described herein. In some examples, this first approach may be associated with a higher computational cost as compared to the other approaches described below. 
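The difference between the two constructions can be sketched as follows: transforming only the eight corners of the object-space AABB yields a loose world-space box (as with AABB830), while transforming every object-space vertex (the first approach) yields a maximally tight one (as with AABB850). The rod-shaped object and the 45-degree rotation used as the Object-To-World transform are illustrative assumptions.

```python
# Minimal sketch contrasting a world-space AABB built from only the eight
# corners of an object-space AABB (loose, like AABB 830) with one built from
# every transformed object-space vertex (maximally tight, like AABB 850).
import math

def apply(matrix, p):
    x, y, z = p
    return tuple(matrix[r][0] * x + matrix[r][1] * y + matrix[r][2] * z + matrix[r][3]
                 for r in range(3))

def aabb_of(points):
    return (tuple(min(p[i] for p in points) for i in range(3)),
            tuple(max(p[i] for p in points) for i in range(3)))

def corners(box_min, box_max):
    return [(x, y, z) for x in (box_min[0], box_max[0])
                      for y in (box_min[1], box_max[1])
                      for z in (box_min[2], box_max[2])]

def rotation_z(deg):
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0]]

# A thin rod running along the object-space diagonal fills its AABB poorly.
vertices = [(0.1 * t, 0.1 * t, 0.0) for t in range(100)]
obj_to_world = rotation_z(45.0)

loose = aabb_of([apply(obj_to_world, c) for c in corners(*aabb_of(vertices))])
tight = aabb_of([apply(obj_to_world, v) for v in vertices])
print("loose:", loose)
print("tight:", tight)
```

In this sketch only eight points are transformed for the loose box, while all one hundred vertices are transformed for the maximally tight box, which spans a much smaller extent.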
This higher computational cost of the first approach can arise due to the individual transformation of each object-space vertex into a world-space vertex, especially as the number of vertices per BLAS or TLAS increases. In some cases, when the BVH associated with a TLAS and/or a BLAS is updated or otherwise changed, the first approach may calculate a new AABB by re-computing individual object-to-world vertex transformations. In some examples, the BVH associated with a TLAS or a BLAS may be updated or otherwise changed frequently (e.g., in response to a scene change, object deformation, etc.). As noted above, the first approach of generating world-space bounding boxes (e.g., at TLAS leaf nodes) by individually transforming each vertex included in a scene object from object-space to world-space can offer the greatest ray tracing performance, but with a higher upfront computational cost of BVH construction. For example, a maximally tight world-space bounding box such as AABB850ofFIG.8Bcan be associated with the quickest completion time or lowest amount of required time to perform ray tracing and/or ray intersection tests, as compared to looser or non-maximally tight world-space bounding boxes (e.g., such as the relatively loose world-space bounding box830ofFIG.8A). A maximally tight world-space bounding box such as AABB850may also be associated with the greatest completion time for BVH construction or computation, since each individual vertex is transformed from object-space to world-space before the maximally tight world-space bounding box850can be constructed. Therefore, this first approach of generating world-space bounding boxes by individually transforming each vertex of a scene object from object-space to world-space may be used when a relatively large time budget is available for BVH construction and a relatively small time budget is available for ray tracing operations such as ray intersection tests. Additionally or alternatively, in some cases the first approach may be performed when adequate computational resources are available for performing such techniques. In some aspects, the ray tracing system600can dynamically determine which approach to take based on available time budget and/or available computational resources. In another illustrative example, the ray tracing system600can perform a second approach to determine a world-space bounding box (e.g., a world-space AABB) that is tight to an underlying scene object. The second approach includes determining a proxy geometry for one or more vertices associated with the scene object. After determining the proxy geometry (or an approximation thereof) for the scene object, the ray tracing system600can calculate or otherwise determine a world-space bounding box or AABB over the vertices of the proxy geometry using the first approach described above. For example, the systems and techniques described herein can transform each vertex of the proxy geometry from object-space to world-space and then calculate the world-space AABB over the transformed vertices. An example of this second approach of determining a tight world-space bounding box based on a proxy geometry determined for a scene object is illustrated inFIG.8C. As illustrated,FIG.8Cis a diagram800cdepicting an example of an Object-To-World transformation of a scene object710and its associated proxy geometry740(or an approximation thereof). In some examples, proxy geometry740is computed or otherwise determined for object-space vertices included in scene object710.
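The proxy-geometry computation can be sketched as computing a convex hull over the object-space vertices so that far fewer vertices remain to be transformed. The sketch assumes scipy.spatial.ConvexHull as the hull routine and a randomly generated vertex cloud; neither is taken from the disclosure itself.

```python
# Minimal sketch of the proxy-geometry (second) approach: compute a convex hull
# over the object-space vertices of a scene object so that only the hull
# vertices need the Object-To-World transformation. The random vertex cloud is
# a stand-in for the geometry of a scene object such as scene object 710.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
object_space_vertices = rng.normal(size=(5000, 3))

hull = ConvexHull(object_space_vertices)
proxy_vertices = object_space_vertices[hull.vertices]   # hull vertices only

# Far fewer vertices remain to be transformed into world-space (as in the first
# approach), while a box over them still encloses the entire scene object.
print(len(object_space_vertices), "object-space vertices ->",
      len(proxy_vertices), "proxy-geometry vertices")
```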
Subsequently, vertices of proxy geometry740can be transformed from an object-space representation into a world-space representation (e.g., using an Object-To-World matrix). The transformed world-space vertices of proxy geometry740can then be used to calculate or construct a tight world-space bounding box870(e.g., an AABB) that encloses both the proxy geometry740and all of the geometry of scene object710. It is noted that, in comparison to the relatively loose world-space bounding box830ofFIG.8A, the proxy geometry-based world-space bounding box870is relatively tight to the same underlying scene object710and can therefore offer improved ray tracing performance and/or speed. In some examples, scene object710can be the same as the scene object710described above with respect to one or more ofFIGS.7A-8B. In some cases, proxy geometry740can be the same as or similar to the proxy geometry740previously described with respect toFIG.7B. For example, the proxy geometry can be a convex hull (or a convex hull approximation) determined over the set of all vertices included in the scene object710. In some cases, other hull geometries and/or proxy geometries can be calculated or approximated over the vertices of scene object710without departing from the scope of the present disclosure. In some aspects, the determination of proxy geometry740may introduce an additional computational overhead. However, the additional computational overhead of determining proxy geometry740can, in some examples, be less than the computational overhead of transforming each vertex of the scene object710from object-space to world-space. In some examples, the initial determination of proxy geometry740can reduce the total number of vertices of scene object710that are ultimately transformed from object-space to world-space, and as such, may result in a faster (e.g., shorter) BVH construction time than that associated with constructing the maximally tight world-space AABB850as described above with respect toFIG.8B. In some cases, the world-space bounding box or AABB870that can be subsequently calculated over the resulting proxy geometry740can have a greater tightness relative to the underlying scene object710than the loose world-space bounding box830ofFIG.8Aand a lesser computational cost relative to the maximally tight world-space bounding box850ofFIG.8B. In another illustrative example, the ray tracing system600can perform a third approach to determining a world-space bounding box that is tight to an underlying scene object. The third approach includes applying a graph cut across an acceleration data structure associated with the primitives of the scene object. The ray tracing system600can transform vertices of acceleration data structure nodes (e.g., BVH and/or BLAS nodes) that are adjacent to or at the graph cut line (e.g., immediately above or below the graph cut line) from object-space to world-space. A world-space bounding box generated or constructed for the transformed vertices at the graph cut line can be tight to the underlying scene object stored in the acceleration data structure to which the graph cut was applied. In some examples, the ray tracing system600can use the third approach of applying a graph cut to an acceleration data structure to apply a graph cut across the BLAS associated with a TLAS leaf node, obtain the object-space bounding boxes (e.g., AABBs) for the BLAS nodes immediately above or below the graph cut line, and transform the vertices of the object-space AABBs into a set of world-space vertices. 
The ray tracing system600can then construct a world-space AABB around the resulting set of transformed vertices and use the world-space AABB as a tight world-space bounding box for the TLAS leaf node. An example of this third approach is illustrated inFIGS.9A and9B, which depict two different graph cut lines950aand950b, respectively, applied to the same acceleration data structure. In the example ofFIGS.9A and9B, the illustrated acceleration data structure (e.g., acceleration data structure900aand acceleration data structure900b, respectively) can be a BVH and/or a BLAS. In some cases, when the acceleration data structure is a TLAS and/or a BLAS, a tight world-space bounding box can be obtained based on applying a graph cut950aor950bacross the bottom level BVH (e.g., BLAS) included in a given TLAS leaf node. A graph cut partitions the nodes of the bottom level BVH into two disjoint subsets, such that any path from the root node902of the bottom level BVH to a leaf node (e.g.,932,933,934,935,936,937,928) of the bottom level BVH crosses the graph cut line (e.g.,950a,950b) exactly once. Based on this observation, any graph cut across the bottom level BVH will yield a set of AABBs that contain the entire geometry of the scene object or model that is represented by the bottom level BVH or BLAS. For example, with respect toFIG.9A, the graph cut line950apartitions the nodes of acceleration data structure900a(e.g., BLAS or BVH) into two disjoint subsets, with a first subset located above graph cut line950aand a second subset located below graph cut line950b. The first subset of nodes includes a BLAS root node902and BLAS child nodes912,914,924, and926(e.g., above graph cut line950a). The second subset of nodes includes BLAS child node922and BLAS leaf nodes928,932,933,934,935,936, and937(e.g., below graph cut line950a). The set of AABBs/bounding boxes of the nodes that are immediately adjacent to (e.g., either directly above or directly below) graph cut line950acontain the entire geometry of the underlying scene object that is represented by acceleration data structure900a. With respect toFIG.9B, the graph cut line950bpartitions the nodes of acceleration data structure900b(e.g., BLAS or BVH) into two disjoint subsets, with a first subset located above graph cut line950band a second subset located below graph cut line950b. Because graph cut line950bis different from graph cut line950a, so too are the disjoint subsets associated with each graph cut line also different from each other. For example, the first subset of nodes created by graph cut line950bincludes BLAS root node902and BLAS child nodes912,914,922, and926(e.g., above graph cut line950b). The second subset of nodes created by graph cut line950bincludes BLAS child node924and BLAS leaf nodes928,932,933,934,935,936, and937. In some cases, the ray tracing system600can determine optimal graph cut by applying a cost metric during the traversal or examination of the acceleration data structure(s) associated with the primitives of the scene object. For example, the acceleration data structures can include an acceleration data structure900a,900b(e.g., as described above) and/or a bottom-level BVH. In some aspects, the ray tracing system600can determine an optimal graph cut by applying a Surface Area Heuristic (SAH) to treelet growth for a given computational budget. The SAH provides an estimate of the ray tracing performance of different build decisions for a BVH or other acceleration data structure. 
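Given any such cut, however it is chosen, the nodes adjacent to the cut line can be turned into a tight world-space AABB for the TLAS leaf node as sketched below; the node records, labels, and the translation used as the object-to-world transform are illustrative assumptions rather than the system's actual data layout.

```python
# Minimal sketch: given the BLAS nodes adjacent to a graph cut, transform the
# corners of each node's object-space AABB to world-space and enclose them in
# a single world-space AABB for the TLAS leaf node.
def corners(aabb):
    (x0, y0, z0), (x1, y1, z1) = aabb
    return [(x, y, z) for x in (x0, x1) for y in (y0, y1) for z in (z0, z1)]

def object_to_world(p, offset=(10.0, 0.0, 0.0)):
    # Stand-in for the transform encoded in the TLAS leaf node (a translation here).
    return tuple(p[i] + offset[i] for i in range(3))

def world_aabb_for_cut(cut_nodes):
    pts = [object_to_world(c) for node in cut_nodes for c in corners(node["aabb"])]
    return (tuple(min(p[i] for p in pts) for i in range(3)),
            tuple(max(p[i] for p in pts) for i in range(3)))

# Object-space AABBs of the nodes immediately adjacent to a cut (illustrative
# values); every primitive of the scene object lies inside one of them.
cut_nodes = [
    {"label": "n0", "aabb": ((0.0, 0.0, 0.0), (1.0, 2.0, 1.0))},
    {"label": "n1", "aabb": ((1.0, 0.0, 0.0), (2.0, 1.0, 1.0))},
    {"label": "n2", "aabb": ((0.5, 2.0, 0.0), (1.5, 3.0, 0.5))},
]
print(world_aabb_for_cut(cut_nodes))
```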
In some cases, the ray tracing system600can use the SAH to determine the choice of graph cut across a BLAS through an iterative technique in which a root node (e.g., the root node902of the acceleration data structure900a,900bor the root node of a bottom level BVH otherwise associated with a TLAS leaf node) is placed in a stack and has its child nodes (e.g., nodes912-926) selectively expanded based on their SAH until the number of nodes in the stack reaches a pre-determined computational budget. FIGS.9C-9Edepict an illustrative example of a technique that can be performed by the ray tracing system600for determining a set of nodes980that can be used to determine an optimal graph cut across an acceleration data structure970(e.g., as described above). In some cases, the acceleration data structure970can be a BVH or other hierarchical tree-based structure. As illustrated, a stack960can store one or more nodes of the acceleration data structure970. In one illustrative example, the stack960can be part of the memory616ofFIG.6. As will be explained in greater depth below, traversal of the acceleration data structure970to determine the optimal graph cut can be based at least in part on popping the stack960(e.g., taking the top node or element from stack960). In some examples, when the traversal of acceleration data structure970reaches a leaf node of the acceleration data structure970, the stack960can be popped and traversal can proceed to the top node that was popped from stack960. In some aspects, after the root node of the acceleration data structure970(e.g., root node902ofFIG.9AandFIG.9B) has been traversed or placed in the stack960, the iterative technique for determining an optimal graph cut across acceleration data structure970can be performed as follows: 1) pop the stack (e.g., take the top node or element from the stack) and place the popped node's or element's children on the stack; 2) sort the stack by the SAH of each element; and 3) repeat until the number of nodes on the stack reaches the computational budget. For example, as illustrated inFIG.9C, traversal can begin at the root node (e.g., node 0) of the acceleration data structure970. Node 1 and node 4 are the two children of root node 0, and traversal will proceed from root node 0 to either node 1 or node 4. In some embodiments, the selection between the two available child nodes can be based on the SAH as applied to node 1 and node 4. For example, traversal can proceed from root node 0 to the child node with the lowest SAH value. As depicted inFIG.9C, traversal proceeds from root node 0 to node 4 (e.g., in some examples, node 4 is determined to have a lower SAH value than node 1). Node 1, as the non-selected or non-visited child node, is pushed to the stack960. In some cases, node 1 can be pushed to stack960based at least in part on a determination that the traversal of acceleration data structure970should visit node 1 at some point in the future. Stack960can be used as a queue or indication of nodes that were not selected for traversal but should be visited or traversed in the future. After the traversal proceeds from root node 0 to child node 4 (e.g., after the child node 4 is visited), node 4 can be added to a current set or listing (e.g., to the set of nodes980) that includes nodes of acceleration data structure970that may be used to determine the optimal graph cut. After the traversal has visited or otherwise examined node 4, the traversal can proceed to one of the children of node 4. 
As illustrated inFIG.9C, the children of node 4 are leaf node 5 and leaf node 6. Similar to what was described above, one of the two leaf nodes can be selected for traversal in a next step (e.g., based on the SAH), with the non-selected leaf node being pushed to stack960. In the example ofFIG.9C, leaf node 5 is selected for traversal and leaf node 6 is non-selected (e.g., and leaf node 6 is therefore pushed to stack960for traversal in a future step). Traversal proceeds from child node 4 to leaf node 5, and leaf node 5 is added to the set of nodes980of acceleration data structure970that may be used to determine the optimal graph cut. As illustrated inFIG.9C, the set of nodes980currently contains node 4 and node 5. As illustrated inFIG.9D, node 4 can be removed from the set of nodes980, based on the addition of its child node 5 to the same set of nodes980. For example, because acceleration data structure970is a BVH or other hierarchical tree structure, the set of nodes980can be maintained to avoid the simultaneous presence of a parent node and its child node. After the traversal has visited or otherwise completed an examination of node 5, node 5 can be checked for any child nodes that can be visited in a next traversal step. However, because node 5 is a leaf node of the acceleration data structure970, no child nodes are available to visit in the next traversal step. In response to no nodes being available to visit in the next traversal step, the node stored at the top of stack960can be popped and visited in the next traversal step. As illustrated inFIG.9D, node 6 is the node stored at the top of stack960. Traversal can therefore proceed from leaf node 5 to leaf node 6. Popping leaf node 6 from stack960can cause the leaf node 6 to be removed from stack960(e.g., leaving node 1 as the new topmost node stored at the top of stack960). Leaf node 6 can then be added to the set of nodes980that may be used to determine an optimal graph cut across the acceleration data structure970. At the end of the traversal step that visits or otherwise examines leaf node 6, the set of nodes980contains leaf node 5 and leaf node 6, and the stack960contains the child node 1. After the traversal has visited or otherwise completed an examination of node 6 (e.g., after node 6 has been added to the set of nodes980), node 6 can be checked for any child nodes that can be visited in a next traversal step. Node 6 is a leaf node of the acceleration data structure970, and therefore has no child nodes that can be visited in the next traversal step. As described above, in response to determining that node 6 has no child nodes that can be visited in the next traversal step, the node stored at the top of stack960can be popped and visited in the next traversal step. As illustrated inFIG.9E, node 1 is the node stored at the top of stack960. Traversal can therefore proceed from node 6 to node 1, as is also illustrated inFIG.9E. In response to being popped from the stack960, node 1 can be removed from the stack960. Node 1 can then be added to the set of nodes980that may be used to determine an optimal graph cut across the acceleration data structure970. After the traversal step visits or otherwise examines node 1, the stack960is empty and the set of nodes980contains three nodes (e.g., node 5, node 6, and node 1). Although the most recently visited or traversed node (e.g., node 1) has two child nodes (e.g., node 2 and node 3), the iterative traversal technique described above can terminate without visiting nodes 2 or 3.
In one illustrative example, the iterative traversal technique can terminate based on a pre-determined computational budget being reached. For example, the pre-determined computational budget can include a maximum number of nodes or entries that can be stored in the set of nodes980(e.g., if the pre-determined computational budget indicates that the maximum number of nodes that can be stored in the set of nodes980is three, the iterative traversal technique can terminate after the example ofFIG.9E). At the end of the iterative technique (e.g., once the pre-determined computational budget is reached), the set of nodes980or elements can represent an optimal graph cut across the acceleration data structure970for the given computational budget. The ray tracing system600can then calculate a world-space AABB for the vertices of the BLAS nodes adjacent to the determined optimal graph cut line, as has been described above. For example, the world-space AABB can be calculated by applying an Object-To-World matrix to the vertices of the object-space AABBs of the BLAS nodes adjacent to the graph cut line and then building the world-space AABB over the transformed vertices. In some aspects, the selection of a graph cut line to apply across the BLAS (e.g., bottom level BVH) of a TLAS leaf node can be used to obtain a desired degree of granularity or tightness in the world-space bounding box that is subsequently constructed for the TLAS leaf node. For example, a graph cut applied immediately below the BLAS root node of a TLAS leaf node would result in a world-space AABB with a relatively low degree of tightness (e.g., because the world-space AABB generated for the TLAS leaf node is built around the vertices of the BLAS root node's AABB). In some examples, a graph cut applied immediately above the BLAS leaf nodes would result in a world-space AABB with a relatively high degree of tightness (e.g., because the BLAS leaf nodes contain the individual primitives of the BLAS, the world-space AABB generated for the TLAS leaf node is built around the vertices of each individual primitive). In some cases, applying a graph cut immediately above the BLAS leaf nodes can result in the same world-space AABB as is generated according to the first approach described above (e.g., because both approaches transform each vertex of the individual primitives into world-space vertices that are then used to generate a maximally tight world-space AABB). In some cases, graph cut selection can therefore offer a tunable tradeoff between world-space AABB tightness and compute time. A larger amount of compute time is needed to build a BVH with tight AABBs than with loose AABBs; however, tighter AABBs allow subsequent ray tracing to be performed in less compute time. In an illustrative example, graph cut selection can be performed based at least in part on one or more cost metrics indicating the amount of available compute time for BVH and/or AABB building and the amount of available compute time for ray tracing. Graph cut selection can additionally be based on a prediction or understanding of how often the bottom level BVH and world-space AABBs might be rebuilt for a given scene object, as a fast BVH and AABB build time may be needed for scene objects that deform or otherwise require frequent BVH rebuilds.
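The following sketch illustrates the budgeted, SAH-guided cut selection walked through with respect toFIGS.9C-9E. It is an illustration under stated assumptions, not the system600 implementation: the node attributes (children, aabb_min, aabb_max, num_primitives) are assumed, and a surface-area-times-primitive-count key stands in for whatever SAH formulation the build actually uses.

```python
# Hedged illustration of the stack-and-set bookkeeping described above.
import numpy as np

def sah_key(node) -> float:
    """Assumed stand-in for the SAH value: AABB surface area x primitive count."""
    dx, dy, dz = np.maximum(node.aabb_max - node.aabb_min, 0.0)
    area = 2.0 * (dx * dy + dy * dz + dz * dx)
    return area * max(getattr(node, "num_primitives", 1), 1)

def select_graph_cut(root, budget: int, key=sah_key) -> list:
    """Greedily grow a treelet: descend toward the cheaper child, defer the
    sibling on a stack ("stack 960"), keep visited nodes in a set ("nodes 980"),
    replacing a parent once one of its children joins the set."""
    cut_nodes = []            # the candidate cut frontier (set of nodes 980)
    stack = []                # deferred siblings (stack 960)
    current = root            # the root itself never enters the cut set

    while len(cut_nodes) < budget:
        children = list(getattr(current, "children", []) or [])
        if children:
            children.sort(key=key)                  # cheaper child is visited next
            stack.extend(reversed(children[1:]))    # siblings wait for later
            if current in cut_nodes:
                cut_nodes.remove(current)           # parent replaced by its child
            current = children[0]
            cut_nodes.append(current)
        elif stack:
            current = stack.pop()                   # dead end: resume a deferred node
            cut_nodes.append(current)
        else:
            break                                   # tree exhausted before the budget
    return cut_nodes
```

Run with a budget of three on the tree ofFIG.9C, this yields node 5, node 6, and node 1, matching the walkthrough above; a smaller budget pushes the cut toward the root (a looser but cheaper world-space AABB), while a larger budget pushes it toward the leaves (a tighter but costlier one), which is the tunable tradeoff noted above.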
In some examples, a particular approach can be selected or configured as described above based on a known or determined BVH build time metric, e.g., such that an appropriate BVH and/or the tightest world-space AABBs can be constructed subject to the constraint of maximum build time given by the BVH build time metric. FIG.10is a flowchart illustrating an example of a process1000for graphics processing. Although the example process1000depicts a particular sequence of operations, the sequence may be altered without departing from the scope of the present disclosure. For example, some of the operations depicted may be performed in parallel or in a different sequence that does not materially affect the function of the process1000. In other examples, different components of an example device or system that implements the process1000may perform functions at substantially the same time or in a specific sequence. At block1002, the process1000includes obtaining an acceleration data structure. In some examples, the acceleration data structure includes one or more primitives of a scene object. For example, the acceleration data structure can be obtained by or from the acceleration data structure engine622associated with the ray tracing system600illustrated inFIG.6. In some cases, the acceleration data structure can include a bounding volume hierarchy (BVH). In some examples, the acceleration data structure can include a bottom-level acceleration structure (BLAS). For example, the acceleration data structure can include one or more of the acceleration data structure900aillustrated inFIG.9Aand/or the acceleration data structure900billustrated inFIG.9B. In some cases, the one or more primitives of the scene object can be included in one or more leaf nodes of the acceleration data structure. For example, one or more of the leaf nodes928and932-937of the acceleration data structure900aillustrated inFIG.9Aand/or of the acceleration data structure900billustrated inFIG.9Bcan include the one or more primitives of the scene object. In some examples, the acceleration data structure can include a BLAS that is associated with a top-level acceleration structure (TLAS) leaf node. In examples where the acceleration data structure includes a BLAS, the BLAS can additionally, or alternatively, include one or more intermediate BLAS nodes. For example, the one or more intermediate BLAS nodes can include one or more of the BLAS child nodes912,922,924, and/or926of the acceleration data structure900aillustrated inFIG.9Aand/or of the acceleration data structure900billustrated inFIG.9B. One or more of the intermediate BLAS nodes can include an axis-aligned bounding box (AABB) encompassing a subset of the one or more primitives of the scene object. At block1004, the process1000includes applying a graph cut to the acceleration data structure. In some examples, the graph cut can be applied directly above or directly below a plurality of leaf nodes of the acceleration data structure. In some cases, when the acceleration data structure is a TLAS and/or a BLAS, the graph cut can be applied across the bottom level BVH (e.g., BLAS) included in a given TLAS leaf node. Applying the graph cut can partition the nodes of the acceleration data structure into two disjoint subsets, such that any path from the root node of the acceleration data structure to a leaf node of the acceleration data structure crosses the graph cut line exactly once.
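The "crosses the graph cut line exactly once" property can be checked directly; the short sketch below is illustrative only, assuming nodes expose a children list (empty for leaves) rather than any particular acceleration structure layout.

```python
# Hedged sketch of the cut property stated above: a set of nodes is a valid
# graph cut if every root-to-leaf path crosses it exactly once.
def is_valid_graph_cut(root, cut_nodes) -> bool:
    cut = set(id(n) for n in cut_nodes)

    def crossings_ok(node, crossed):
        crossed += 1 if id(node) in cut else 0
        if crossed > 1:
            return False                      # a path may cross the cut only once
        children = getattr(node, "children", []) or []
        if not children:                      # leaf: the path must have crossed once
            return crossed == 1
        return all(crossings_ok(child, crossed) for child in children)

    return crossings_ok(root, 0)
```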
For example, applying the graph cut to the acceleration data structure can include applying the graph cut line950aillustrated inFIG.9Aor the graph cut line950billustrated inFIG.9B. In some cases, any graph cut across a bottom level BVH or acceleration data structure can be used to determine a set of bounding boxes (e.g., AABBs) that contain the entire geometry (e.g., the primitives included in the acceleration data structure) of the scene object associated with the acceleration data structure. At block1006, the process1000includes determining a set of nodes of the acceleration data structure based on the graph cut. In some examples, the set of nodes is located adjacent to the graph cut. For example, the set of nodes determined based on the graph cut can include the one or more nodes of the acceleration data structure that are located immediately above the graph cut line. In some cases, the set of nodes determined based on the graph cut can include the one or more nodes of the acceleration data structure that are located immediately below the graph cut line. In some examples, the set of nodes determined based on the graph cut can include a plurality of leaf nodes of the acceleration data structure. The plurality of leaf nodes can include each vertex of the scene object associated with the acceleration data structure. For example, the set of nodes determined based on the graph cut can include the nodes922,934,935,936,937, and928illustrated inFIG.9Aas being located immediately below the graph cut line950a. In another example, the set of nodes determined based on the graph cut can include the nodes932,933,924,936,937, and928illustrated inFIG.9Bas being located immediately below the graph cut line950b. In some examples, at block1006, the process1000can further include determining one or more child nodes of a root node of the acceleration data structure and determining a Surface Area Heuristic (SAH) for each child node. The graph cut can be applied to the acceleration data structure based on the determined SAH for each child node. For example, the one or more child nodes can be determined based on the graph cut and/or graph cut line (e.g., as described above). In some cases, an optimal graph cut for a given computational cost budget can be determined using the SAH. For example, the SAH can be applied to treelet growth of the acceleration data structure for the given computational cost budget. In some cases, an iterative technique can be used to determine the optimal graph cut to apply to the acceleration data structure. For example, the iterative technique can include placing the root node of the acceleration data structure in a stack and selectively expanding the root node and its child nodes (e.g., by popping the stack) based on their SAH until the number of nodes in the stack reaches the given computational budget. In some examples, the root node of the acceleration data structure can be placed in the stack960illustrated inFIGS.9C-9E. The stack (e.g., stack960) can be included in the memory616illustrated in the ray tracing system600ofFIG.6. At block1008, the process1000includes generating a world-space bounding box for the scene object (e.g., the scene object associated with the acceleration data structure). In some examples, the world-space bounding box is generated for the set of nodes determined based on the graph cut. For example, the generated world-space bounding box can include one or more of the world-space bounding boxes830,850, and/or870illustrated inFIGS.8A-8C, respectively.
In some cases, the world-space bounding box generated for the scene object can be a world-space axis-aligned bounding box (AABB). In some examples, at block1008, the process1000can include obtaining a respective object-space bounding box for each node of the set of nodes determined based on the graph cut. Each respective object-space bounding box of each node (e.g., each node of the set of nodes determined based on the graph cut) can be transformed into a plurality of world-space vertices. In some examples, the world-space bounding box for the scene object can be generated using the transformed plurality of world-space vertices. In some examples, the processes described herein (e.g., process1000and/or any other process described herein) may be performed by a computing device, apparatus, or system. In one example, the process1000can be performed by a computing device or system having the computing device architecture1100ofFIG.11. The computing device, apparatus, or system can include any suitable device, such as a mobile device (e.g., a mobile phone), a desktop computing device, a tablet computing device, a wearable device (e.g., a VR headset, an AR headset, AR glasses, an extended reality (XR) device (e.g., a VR headset or HMD, an AR headset, HMD, or glasses, etc.), a network-connected watch or smartwatch, or other wearable device), a server computer, a vehicle (e.g., autonomous or non-autonomous vehicle) or computing device of a vehicle, a robotic device, a laptop computer, a smart television, a camera, and/or any other computing device with the resource capabilities to perform the processes described herein, including the process1000and/or any other process described herein. In some cases, the computing device or apparatus may include various components, such as one or more input devices, one or more output devices, one or more processors, one or more microprocessors, one or more microcomputers, one or more cameras, one or more sensors, and/or other component(s) that are configured to carry out the steps of processes described herein. In some examples, the computing device may include a display, a network interface configured to communicate and/or receive the data, any combination thereof, and/or other component(s). The network interface may be configured to communicate and/or receive Internet Protocol (IP) based data or other type of data. The components of the computing device can be implemented in circuitry. For example, the components can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, graphics processing units (GPUs), digital signal processors (DSPs), central processing units (CPUs), and/or other suitable electronic circuits), and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein. The process1000is illustrated as a logical flow diagram, the operation of which represents a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. 
Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. Additionally, the process1000and/or any other process described herein may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code may be stored on a computer-readable or machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable or machine-readable storage medium may be non-transitory. FIG.11illustrates an example computing device architecture1100of an example computing device which can implement the various techniques described herein. In some examples, the computing device can include a mobile device, a wearable device, an extended reality device (e.g., a virtual reality (VR) device, an augmented reality (AR) device, an extended reality (XR) device, or a mixed reality (MR) device), a personal computer, a laptop computer, a video server, a vehicle (or computing device of a vehicle), or other device. The components of computing device architecture1100are shown in electrical communication with each other using connection1105, such as a bus. The example computing device architecture1100includes a processing unit (CPU or processor)1110and computing device connection1105that couples various computing device components including computing device memory1115, such as read only memory (ROM)1120and random-access memory (RAM)1125, to processor1110. Computing device architecture1100can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of processor1110. Computing device architecture1100can copy data from memory1115and/or the storage device1130to cache1112for quick access by processor1110. In this way, the cache can provide a performance boost that avoids processor1110delays while waiting for data. These and other engines can control or be configured to control processor1110to perform various actions. Other computing device memory1115may be available for use as well. Memory1115can include multiple different types of memory with different performance characteristics. Processor1110can include any general-purpose processor and a hardware or software service, such as service11132, service21134, and service31136stored in storage device1130, configured to control processor1110as well as a special-purpose processor where software instructions are incorporated into the processor design. Processor1110may be a self-contained system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing device architecture1100, input device1145can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth.
Output device1135can also be one or more of a number of output mechanisms known to those of skill in the art, such as a display, projector, television, speaker device, etc. In some instances, multimodal computing devices can enable a user to provide multiple types of input to communicate with computing device architecture1100. Communication interface1140can generally govern and manage the user input and computing device output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device1130is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs)1125, read only memory (ROM)1120, and hybrids thereof. Storage device1130can include services1132,1134,1136for controlling processor1110. Other hardware or software modules or engines are contemplated. Storage device1130can be connected to the computing device connection1105. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor1110, connection1105, output device1135, and so forth, to carry out the function. Aspects of the present disclosure are applicable to any suitable electronic device (such as security systems, smartphones, tablets, laptop computers, vehicles, drones, or other devices) including or coupled to one or more active depth sensing systems. While described below with respect to a device having or coupled to one light projector, aspects of the present disclosure are applicable to devices having any number of light projectors and are therefore not limited to specific devices. The term “device” is not limited to one or a specific number of physical objects (such as one smartphone, one controller, one processing system and so on). As used herein, a device may be any electronic device with one or more parts that may implement at least some portions of this disclosure. While the below description and examples use the term “device” to describe various aspects of this disclosure, the term “device” is not limited to a specific configuration, type, or number of objects. Additionally, the term “system” is not limited to multiple components or specific aspects. For example, a system may be implemented on one or more printed circuit boards or other substrates and may have movable or static components. While the below description and examples use the term “system” to describe various aspects of this disclosure, the term “system” is not limited to a specific configuration, type, or number of objects. Specific details are provided in the description above to provide a thorough understanding of the aspects and examples provided herein. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
Additional components may be used other than those shown in the figures and/or described herein. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the aspects in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the aspects. Individual aspects may be described above as a process or method which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed, but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function. Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general-purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, source code, etc. The term “computer-readable medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction(s) and/or data. A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as flash memory, memory or memory devices, magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, compact disk (CD) or digital versatile disk (DVD), any suitable combination thereof, among others. A computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, an engine, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like. In some aspects the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. 
However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se. Devices implementing processes and methods according to these disclosures can include hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof, and can take any of a variety of form factors. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a computer-readable or machine-readable medium. A processor(s) may perform the necessary tasks. Typical examples of form factors include laptops, smart phones, mobile phones, tablet devices or other small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example. The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are example means for providing the functions described in the disclosure. In the foregoing description, aspects of the application are described with reference to specific aspects thereof, but those skilled in the art will recognize that the application is not limited thereto. Thus, while illustrative aspects of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art. Various features and aspects of the above-described application may be used individually or jointly. Further, aspects can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. For the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate aspects, the methods may be performed in a different order than that described. One of ordinary skill will appreciate that the less than (“<”) and greater than (“>”) symbols or terminology used herein can be replaced with less than or equal to (“≤”) and greater than or equal to (“≥”) symbols, respectively, without departing from the scope of this description. Where components are described as being “configured to” perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof. The phrase “coupled to” refers to any component that is physically connected to another component either directly or indirectly, and/or any component that is in communication with another component (e.g., connected to the other component over a wired or wireless connection, and/or other suitable communication interface) either directly or indirectly. 
Claim language or other language reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B. In another example, claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C. The language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set. For example, claim language reciting “at least one of A and B” or “at least one of A or B” can mean A, B, or A and B, and can additionally include items not listed in the set of A and B. The various illustrative logical blocks, modules, engines, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, firmware, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, engines, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general purposes computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium comprising program code including instructions that, when executed, performs one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may comprise memory or data storage media, such as random-access memory (RAM) such as synchronous dynamic random-access memory (SDRAM), read-only memory (ROM), non-volatile random-access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves. 
The program code may be executed by a processor, which may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, an application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general-purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein. Illustrative aspects of the disclosure include: Aspect 1: A method of ray tracing, the method comprising: obtaining an acceleration data structure, the acceleration data structure including one or more primitives of a scene object; applying a graph cut to the acceleration data structure; determining a set of nodes of the acceleration data structure based on the graph cut, wherein the set of nodes is located adjacent to the graph cut; and generating a world-space bounding box for the scene object, wherein the world-space bounding box is generated for the set of nodes determined based on the graph cut. Aspect 2: The method of Aspect 1, further comprising: obtaining a respective object-space bounding box for each node of the set of nodes; and transforming each respective object-space bounding box of each node into a plurality of world-space vertices. Aspect 3: The method of Aspect 2, wherein the world-space bounding box for the scene object is generated based on the plurality of world-space vertices. Aspect 4: The method of any of Aspects 1 to 3, further comprising: determining one or more child nodes of a root node of the acceleration data structure; determining a Surface Area Heuristic (SAH) for each child node of the one or more child nodes; and applying the graph cut to the acceleration data structure based on the determined SAH for each child node. Aspect 5: The method of Aspect 4, further comprising: determining a computational cost budget specifying a maximum number of nodes in the set of nodes based on the graph cut; and determining the SAH for each child node of the one or more child nodes based on the determined computational cost. Aspect 6: The method of any of Aspects 1 to 5, wherein the graph cut is applied directly above or directly below a plurality of leaf nodes of the acceleration data structure. Aspect 7: The method of any of Aspects 1 to 6, wherein: the set of nodes determined based on the graph cut includes a plurality of leaf nodes of the acceleration data structure, wherein the plurality of leaf nodes includes each vertex of the scene object. Aspect 8: The method of Aspect 7, wherein the world-space bounding box is generated based at least in part on transforming each vertex of the scene object from an object-space representation into a world-space representation. 
Aspect 9: The method of any of Aspects 1 to 8, wherein the one or more primitives of the scene object are included in one or more leaf nodes of the acceleration data structure. Aspect 10: The method of any of Aspects 1 to 9, wherein the world-space bounding box generated for the scene object is a world-space axis-aligned bounding box (AABB). Aspect 11: The method of any of Aspects 1 to 10, wherein the acceleration data structure includes a bounding volume hierarchy (BVH). Aspect 12: The method of any of Aspects 1 to 11, wherein the acceleration data structure includes a bottom-level acceleration structure (BLAS). Aspect 13: The method of Aspect 12, wherein the BLAS: is associated with a top-level acceleration structure (TLAS) leaf node; and includes one or more intermediate BLAS nodes, each intermediate BLAS node including an axis-aligned bounding box (AABB) encompassing a subset of the one or more primitives of the scene object. Aspect 14: The method of any of Aspects 1 to 13, wherein the set of nodes located adjacent to the graph cut is located above the graph cut or below the graph cut. Aspect 15: A method of ray tracing, the method comprising: obtaining a bottom-level acceleration structure (BLAS), the BLAS including one or more primitives of a scene object; calculating a proxy geometry for a plurality of vertices of the BLAS, the proxy geometry having a first number of vertices that is smaller than a number of vertices contained in the BLAS; transforming the first number of vertices of the proxy geometry into a plurality of proxy geometry world-space vertices; and generating a world-space axis-aligned bounding box (AABB) for the BLAS, wherein the world-space axis-aligned bounding box encloses the plurality of proxy geometry world-space vertices. Aspect 16: The method of Aspect 15, wherein the proxy geometry is a convex hull or an approximation of a convex hull. Aspect 17: A method of ray tracing, the method comprising: obtaining a bottom-level acceleration structure (BLAS), the BLAS including a plurality of object-space vertices for one or more primitives of a scene object; transforming each vertex of the plurality of object-space vertices into a transformed world-space vertex; and generating a world-space axis-aligned bounding box (AABB) for the BLAS such that the world-space AABB encloses each transformed world-space vertex. Aspect 18: An apparatus for ray tracing, comprising: a memory; and one or more processors coupled to the memory, the one or more processors configured to: obtain an acceleration data structure, the acceleration data structure including one or more primitives of a scene object, apply a graph cut to the acceleration data structure, determine a set of nodes of the acceleration data structure based on the graph cut, wherein the set of nodes is located adjacent to the graph cut, and generate a world-space bounding box for the scene object, wherein the world-space bounding box is generated for the set of nodes determined based on the graph cut. Aspect 19: The apparatus of Aspect 18, wherein the one or more processors are configured to: obtain a respective object-space bounding box for each node of the set of nodes; and transform each respective object-space bounding box of each node into a plurality of world-space vertices. Aspect 20: The apparatus of Aspect 19, wherein the world-space bounding box for the scene object is generated based on the plurality of world-space vertices. 
Aspect 21: The apparatus of any of Aspects 18 to 20, wherein the one or more processors are configured to: determine one or more child nodes of a root node of the acceleration data structure; determine a Surface Area Heuristic (SAH) for each child node of the one or more child nodes; and apply the graph cut to the acceleration data structure based on the determined SAH for each child node. Aspect 22: The apparatus of Aspect 21, wherein the one or more processors are configured to: determine a computational cost budget specifying a maximum number of nodes in the set of nodes based on the graph cut; and determine the SAH for each child node of the one or more child nodes based on the determined computational cost. Aspect 23: The apparatus of any of Aspects 18 to 22, wherein the graph cut is applied directly above or directly below a plurality of leaf nodes of the acceleration data structure. Aspect 24: The apparatus of any of Aspects 18 to 23, wherein the set of nodes determined based on the graph cut includes a plurality of leaf nodes of the acceleration data structure, wherein the plurality of leaf nodes includes each vertex of the scene object. Aspect 25: The apparatus of Aspect 24, wherein the world-space bounding box is generated based at least in part on transforming each vertex of the scene object from an object-space representation into a world-space representation. Aspect 26: The apparatus of any of Aspects 18 to 25, wherein the one or more primitives of the scene object are included in one or more leaf nodes of the acceleration data structure. Aspect 27: The apparatus of any of Aspects 18 to 26, wherein the world-space bounding box generated for the scene object is a world-space axis-aligned bounding box (AABB). Aspect 28: The apparatus of any of Aspects 18 to 27, wherein the acceleration data structure includes a bounding volume hierarchy (BVH). Aspect 29: The apparatus of any of Aspects 18 to 28, wherein the acceleration data structure includes a bottom-level acceleration structure (BLAS). Aspect 30: The apparatus of Aspect 29, wherein the BLAS: is associated with a top-level acceleration structure (TLAS) leaf node; and includes one or more intermediate BLAS nodes, each intermediate BLAS node including an axis-aligned bounding box (AABB) encompass a subset of the one or more primitives of the scene object. Aspect 31: The apparatus of any of Aspects 18 to 30, wherein the set of nodes located adjacent to the graph cut is located above the graph cut or below the graph cut. Aspect 32: An apparatus for ray tracing, comprising: a memory; and one or more processors coupled to the memory, the one or more processors configured to: obtain a bottom-level acceleration structure (BLAS), the BLAS including one or more primitives of a scene object; calculate a proxy geometry for a plurality of vertices of the BLAS, the proxy geometry having a first number of vertices that is smaller than a number of vertices contained in the BLAS; transform the first number of vertices of the proxy geometry into a plurality of proxy geometry world-space vertices; and generate a world-space axis-aligned bounding box (AABB) for the BLAS, wherein the world-space axis-aligned bounding box encloses the plurality of proxy geometry world-space vertices. Aspect 33: The apparatus of Aspect 32, wherein the proxy geometry is a convex hull or an approximation of a convex hull. 
Aspect 34: An apparatus for ray tracing, comprising: a memory; and one or more processors coupled to the memory, the one or more processors configured to: obtain a bottom-level acceleration structure (BLAS), the BLAS including a plurality of object-space vertices for one or more primitives of a scene object; transform each vertex of the plurality of object-space vertices into a transformed world-space vertex; and generate a world-space axis-aligned bounding box (AABB) for the BLAS such that the world-space AABB encloses each transformed world-space vertex. Aspect 35: A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by one or more processors, cause the one or more processors to perform any of the operations of Aspects 1 to 14 and Aspects 18 to 31. Aspect 36: An apparatus comprising means for performing any of the operations of Aspects 1 to 14 and Aspects 18 to 31. Aspect 37: A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by one or more processors, cause the one or more processors to perform any of the operations of Aspects 15 to 16 and Aspects 32 to 33. Aspect 38: An apparatus comprising means for performing any of the operations of Aspects 15 to 16 and Aspects 32 to 33. Aspect 39: A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by one or more processors, cause the one or more processors to perform any of the operations of Aspects 17 and 34. Aspect 40: An apparatus comprising means for performing any of the operations of Aspects 17 and 34.
102,307
11861786
DETAILED DESCRIPTION Ray tracing can produce vivid and detailed images from 3-D scene definitions, and can be used to model complicated light behavior and effects. Ray tracing is used here as a sampling technique for sampling or developing light transport data for parts of a 3-D scene that are relatively close to a point for which shading information is to be obtained. Here, when a sample comes from a point relatively close to a ray origin, it will be less noisy than a sample obtained from a point farther from the ray origin, because a volume of space where the sample is obtained grows as the distance from the origin grows. The ray sampling may be conducted at a relatively low density of sampling (such as a sampling density that would produce a noisy image, if the samples were of portions of the 3-D scene relatively far from the origin of the rays). Keeping the sampling density relatively low allows lower computation cost for ray tracing. In conjunction with this ray tracing approach, a sampling of discretized light transport records (explained below) associated with sub-portions of the 3-D scene farther from the ray origin is conducted (e.g., outside of a maximum distance to which the ray(s) were traced). Results of one or more of shading induced by ray intersection(s) and data from the light transport records can both be used to arrive at a final shading result. In more detail, ray tracing involves identifying an intersection between a ray traveling in the 3-D scene and a surface. Then, that surface can be shaded to produce the point sample that will be used to determine characteristics of a surface from which the ray was emitted. Identifying an intersection for a ray can be a computationally expensive operation. To make intersection testing more computationally efficient, a geometry acceleration structure can be provided that has elements bounding portions of the surfaces (which can be formed of primitives) in the 3-D scene. For example, a geometry acceleration structure may comprise a hierarchical tree of axis-aligned bounding boxes that terminate in leaf nodes, which collectively bound all the primitives forming surfaces in the 3-D scene. The geometry acceleration structure is used to identify a smaller set of surfaces that could be intersected by the ray; so, a ray is first traversed through the acceleration structure, and then is tested for intersection with any surfaces that remain candidates for being intersected by that ray. An approach that provides for pre-computation of light transport information within pre-determined volumes of space in a 3-D scene can be used to characterize light transport information in different portions of such 3-D scene. During rendering, it may be desired to determine characteristics of light energy arriving at a given point in the 3-D scene, and the pre-computed light transport information can be used. A grid of volume elements can provide a way to associate particular light information with particular parts of a 3-D scene, as explained below. An example of a grid of volume elements is a set of “packed” volumes, typically of uniform shape, that fill a 3-D scene. For example, a set of cubes of a given dimensionality can be packed to fill the 3-D scene. In some circumstances, multiple grids of volume elements can be defined to fill the 3-D scene multiple times. For example, a plurality of grids of volume elements, each respectively having a set of cubes of a given dimension, can be used to fill the scene.
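As a minimal illustration of such packed grids, the sketch below maps a world-space position to an integer cell index at a chosen refinement level; the level-0 cell size, the power-of-two refinement between grids, and the function name are assumptions for illustration rather than details of the described system.

```python
# Hedged sketch: map a point into one of several packed cube grids, where each
# successive level halves the cube edge length (level 0 is the coarsest grid).
import numpy as np

def cell_index(position, scene_min, level0_cell_size, level):
    """Return the integer (i, j, k) cell containing `position` at `level`."""
    cell_size = level0_cell_size / (2.0 ** level)
    offset = np.asarray(position, dtype=float) - np.asarray(scene_min, dtype=float)
    return tuple(np.floor(offset / cell_size).astype(int))
```

With this convention, the eight level-(n+1) cells obtained by halving a level-n cell along each axis all map back into that same level-n cell, which is the nesting relationship described above.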
Functionally, this means that a larger cube of one grid will have contained therein multiple cubes of smaller size of a different grid (e.g., if an element is divided in two along each dimension, then 8 constituent elements would result). However, the grids of volume elements are not traversed by following a link or path from a larger volume element to a smaller. Rather, the volume elements are accessed during a march from one point (e.g., an origin of the ray) in a direction through the 3-D scene, and data from volume elements intersected during the march is accessed. Where multiple differently-sized grids populate the 3-D scene, a selection can be made of a particular size of volume element to sample at each location in the 3-D scene at which sampling is to be conducted. A march can be conducted by testing a defined shape (e.g., a cone) for overlap with a sequence of volume elements. The volume elements can overlap, such as where a size of the volume elements tested changes. As an example, a set of volume elements can be produced, ranging from small elements to larger elements that include smaller elements. Each volume element can be a 6-sided regular shape (e.g., a cube). Each face of the shape can parameterize light that is traveling through that face. A volume element that includes other volume elements will be associated with data that represents a blending of the light transport information for each included volume element. Thus, each volume element can use the same amount of data to represent light transport information, resulting in light transport information for a given volume of space being available at various degrees of specificity. Stated otherwise, a multi-sized set of nested volume elements, such as close packed cubic elements located in 3-D scene space (and in contrast to a sparse tree of volume elements positioned and sized to bound scene geometry) can be produced, wherein each volume element includes a characterization of the light emitted from each of the faces of that volume element. A larger volume element represents the light emission from each of the multiple smaller volume elements located in it, but with less precision. After creation of volume elements, they can be used for rendering by intersecting a conic section from a camera or a surface in the 3-D scene, and collecting the light emission encountered from all the volume element faces encountered on the path of the conic section. Closer to the origin of the cone (of which conic sections are taken at each sampling location), smaller volume elements are accessed and the light emission information is used, while farther from the origin, larger volume elements are accessed. One characteristic of sampling such volume element(s) is that each further level in the volume element structure can require eight times more memory (where each dimension is equally subdivided in a grid that is homogeneous for different dimensions). Therefore, not only is the absolute memory size required to store the volume element set increased, but also the memory bandwidth required during rendering would increase, since sampling more small volume elements requires more memory bandwidth than sampling fewer large volume elements (holding constant an amount of data used to represent the relevant lighting information). Thus, having more layers in the hierarchy will yield more accurate results, but incurs a high memory cost.
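A minimal sketch of that size selection follows, assuming the cone's footprint grows linearly with distance according to a spread parameter and that successive grids halve the cube edge length; the names and the mapping to a level index are illustrative assumptions, not taken from the described system.

```python
# Hedged sketch: pick the grid level whose cube edge roughly matches the cone
# footprint at a given distance (nearer the origin -> finer grids, farther -> coarser).
import math

def grid_level_for_distance(distance, spread, level0_cell_size, num_levels):
    """level 0 is the coarsest grid; each finer level halves the cell size."""
    footprint = max(2.0 * distance * spread, 1e-6)   # approximate cone diameter here
    level = int(math.log2(max(level0_cell_size / footprint, 1.0)))
    return min(level, num_levels - 1)                # clamp to the finest grid built
```

Using such a mapping during the march keeps the sampled element size proportional to the cone cross-section, which is what bounds the memory bandwidth consumed far from the origin.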
A cone here refers to a shape enclosing a volume, which has an increasing cross-section area, in a direction perpendicular to a longitudinal axis of the shape, as the shape becomes increasingly elongated along that axis. In some cases, the conic section may be symmetrical around such axis. Here, a conic section (a cross section of the volume) does not imply that such a cross-section has any particular shape. For example, the cross-section can be circular, an oval, rectangular, and so on. In the following disclosures, examples are disclosed of using both point sampling and volume sampling techniques (e.g., ray tracing and volume element sampling) to determine lighting information at a location in a 3-D scene. In summary of the following, point sampling is undertaken for one or more samples that are limited to within a threshold distance of the point. For example, rays can be traced to determine an intersection, if any, within a threshold distance of the point. Outside of that threshold distance, volume sampling can be performed. In an example, volume sampling is undertaken by marching a conic section through a grid of volume elements. Sizes of the volume elements sampled can be determined according to distance from the point. Such sizes also can be selected according to a spreading factor associated with the cone, where the spreading factor indicates how quickly the cone spreads as a function of distance. FIG.1depicts functional elements of a hybrid ray tracing system10. System10includes a source of ray definition data12, which provides an input to a ray intersection testing module14. Ray intersection testing module14also has as input an acceleration structure15and source(s) of 3-D scene data19. Ray intersection testing14communicates intersection testing results to a ray intersection shading module29. Ray intersection shading29outputs shading results to a sample buffer17. A volumetric rendering process27receives light transport information obtained from volumetric elements by a volumetric data access module25. Volumetric data access module25can receive inputs from one or more of a photon structure21and from a volume grid storage23, which contains light transport data, as described in more detail below. A grid creator22is operable to produce the grids of volume elements that are stored in and provided from grids23.FIG.1also depicts that a photon query process28can be provided to query photon maps stored as photon maps20. Photon maps20can be produced in conjunction with production of the grids of volume elements23, as a further product of processing performed with light energy records21. Volumetric rendering process27can serve as a controller for volumetric sampling tasks and control which volume elements are to be sampled and also process results received from such sampling. FIG.2depicts a grid of volume elements40, with one of the volume elements41specifically identified.FIG.3depicts a grid of volume elements43, which have smaller and denser volume elements than grid40.
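Because each grid is regular and packed, locating the element that contains a given scene position requires no traversal; it is a direct index computation, and the relationship between a coarser grid and a finer grid occupying the same space is a simple ratio of edge lengths. The following sketch assumes axis-aligned cubic elements and illustrative variable names.

```cpp
#include <algorithm>
#include <cstdint>

struct CellIndex { int x, y, z; };

// Cube containing position p, for a grid whose cubes have edge 'edge'
// over a scene starting at sceneMin, with 'res' cubes per axis.
CellIndex cellContaining(const float p[3], const float sceneMin[3],
                         float edge, int res) {
    CellIndex c;
    c.x = std::clamp(int((p[0] - sceneMin[0]) / edge), 0, res - 1);
    c.y = std::clamp(int((p[1] - sceneMin[1]) / edge), 0, res - 1);
    c.z = std::clamp(int((p[2] - sceneMin[2]) / edge), 0, res - 1);
    return c;
}

// A coarse element of edge E contains (E / e)^3 elements of a finer grid
// with edge e (e.g. 8 when the finer grid halves the edge).
uint64_t fineCellsPerCoarseCell(float coarseEdge, float fineEdge) {
    uint64_t n = uint64_t(coarseEdge / fineEdge + 0.5f);
    return n * n * n;
}
```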
Since the grid43contains smaller elements than grid40, a number of elements in grid43can exist within one element of grid40.FIGS.2and3do not imply that the volume elements of grids40and43are hierarchical, or that there is a relationship between volume element41and volume elements that occupy portions of volume element41(e.g., there is not an implication that the grids are traversed from a larger to a smaller element, within a volume encompassed by the larger element, as may be the case with a hierarchical acceleration structure). InFIG.3, volume elements50-52are specifically identified. Volume elements are associated with light transport characterization data. Light transport characterization data for a given volume element characterizes transport of light energy from that element; such light energy may be generated in that volume element, or may originate from outside that element. In one implementation, each volume element can be associated with record(s) of energy radiating from surfaces within that volume element. Such radiating energy can be characterized based on forward tracing from emissive sources. As an example, such data can represent light transport through specific faces of the volume elements. For clarity of description,FIG.3shows that element50has a face90, which is associated with a light transport characterization82. Light transport characterization82can include information about light being emitted from inside element50to an exterior of element50. Light transport characterization82also can include information about light traveling into element50through face90, and vice versa. A similar light transport characterization83is identified for face91of element51. Light transport characterizations84and85are shown for other faces of element51. In one example, light transport characterization81is a less granular characterization of light transport. Such light transport characterizations81-85may include information about light directionality, color and intensity of light. The data can include one or more directions and quantification of light energy traveling in each of those directions. A light transport characterization can characterize light traveling in only one direction (e.g., out of the volume), a range of directions, and bidirectional light transport. In other implementations, a statistical distribution or curve can be provided that defines a pattern or distribution of light energy over the surface of each face. In an example, a distribution of light energy may be provided in which various parameters may be completed for each characterization, using the same distribution function. The pattern or distribution can be fitted according to the actual light energy being emitted. In some examples, a single type of pattern is used, which has one or more parameters that can be tuned for each face; those parameters are then selected to match the actual distribution, to the extent possible. As explained, the association of light energy propagation through faces of the volume elements is an example, in that a variety of ways to express light transport within such a volume element are possible. In general, such light transport would be expressed in a manner that allows light transport, along a cone march, to be evaluated.
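A minimal sketch of how per-face light transport characterization data could be represented is given below. The particular fields (a single dominant direction with RGB intensity per face, plus an opacity estimate for the element interior) are assumptions of this sketch; as noted above, an implementation could instead store coefficients of a fitted directional distribution per face.

```cpp
#include <array>

// One possible per-face record: a dominant direction and an RGB intensity.
struct FaceLightCharacterization {
    float direction[3];   // dominant direction of energy traveling through the face
    float intensity[3];   // RGB radiant energy associated with that direction
};

// A cube-shaped volume element with one characterization per face.
struct VolumeElement {
    float minCorner[3];   // placement of the element in its regular grid
    float edgeLength;
    std::array<FaceLightCharacterization, 6> faces;
    float opacity;        // derived from what lies in the interior of the element
};
```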
Volume element41in turn includes volume elements43, of which 4 (of 8) are depicted inFIG.3.FIG.3also depicts that geometry, such as primitives45-46and shape47, is located within the same volume as occupied by volume elements43, even though there may not be a logical linkage or connection that identifies which of such geometry is within a given volume element. Each set of volume elements, in an example, has an even distribution in the 3-D scene, because the volume elements of that set are arranged in a regular, non-sparse structure. Many kinds of data structures used in ray tracing are irregular, and are designed to reduce storage space required to represent the data structure. In one aspect, the volume elements in each set are in pre-determined respective locations, and each is associated with (“contains”) data representing light energy within the bounds of that volume. By contrast, an acceleration structure for abstracting geometry for use in testing rays for intersection in the 3-D scene has volume elements that are located and sized according to the geometry in the scene. Forward tracing light energy from lights into the 3-D scene can be used to determine light energy that will be represented in each volume element. Forward tracing may involve tracing rays from each light source, and for each place where the ray intersects, data representing light energy will be deposited. Such deposition is additive in that the 3-D scene will become brighter as more light energy is deposited. Such forward tracing has some similarities to photon mapping, in that photon mapping also involves forward tracing from lights. However, photon mapping provides a norming operation that maintains a total amount of light energy in the scene constant as photons are deposited. The norming operation results in surfaces having a number of photons that correlates to a relative complexity of how light interacts with that surface. For example, a flat painted wall may have only a few photons deposited, while a facet of a glass surface may have many more. In some approaches herein, the finest grid of volume elements (e.g., the grid with the smallest elements) may have on the order of 2^24 elements, which can be expressed as 8 levels below a root. If a grid of volume elements were to be used without using ray tracing, a finest grid may require on the order of 2^40 elements, or on the order of 32000 times more grid elements in the most granular level of the grid structure. These examples are non-limiting, and qualitative. FIG.4depicts an example geometry acceleration structure101, which can be represented by data stored in acceleration structure storage15. Geometry acceleration structure101includes a root element102that is associated with child elements104-106. Each child element104-106can in turn be related to child elements107-109. This chain of relationships may continue until reaching a set of leaf nodes110-112. In some implementations, each element bounds a portion of 3-D space, in which one or more elements of geometry exist. In some implementations, geometry acceleration structure101is sparse, such that areas of a 3-D scene that do not contain geometry have no geometry acceleration structure elements. Additionally, each acceleration structure element (except the root) is related to one or more parent elements, and one or more child elements. Multiple child elements may relate to the same parent, and multiple parents may also relate to a single child element.
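A minimal sketch of the additive deposition described above is given below (the geometry acceleration structure ofFIG.4is then discussed further). The grid resolution, the single-RGB-sum record layout, and the names are assumptions for illustration; the real records are richer, but the key point shown is that deposits accumulate, in contrast with photon-map norming.

```cpp
#include <algorithm>
#include <vector>

// Regular grid covering [sceneMin, sceneMax) with 'res' elements per axis;
// each element accumulates forward-traced RGB energy.
struct EnergyGrid {
    int res;
    float sceneMin[3], sceneMax[3];
    std::vector<float> rgb;  // res*res*res * 3 accumulated values

    EnergyGrid(int r, const float mn[3], const float mx[3])
        : res(r), rgb(std::size_t(r) * r * r * 3, 0.0f) {
        for (int i = 0; i < 3; ++i) { sceneMin[i] = mn[i]; sceneMax[i] = mx[i]; }
    }

    // Additive deposit: the scene gets brighter as more energy is deposited.
    void deposit(const float p[3], const float energy[3]) {
        std::size_t idx = 0;
        for (int i = 0; i < 3; ++i) {
            float t = (p[i] - sceneMin[i]) / (sceneMax[i] - sceneMin[i]);
            int cell = std::min(res - 1, std::max(0, int(t * res)));
            idx = idx * res + cell;   // flatten the 3-D cell index
        }
        for (int c = 0; c < 3; ++c) rgb[idx * 3 + c] += energy[c];
    }
};
```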
As an example, a parent node bounds a given sub-portion of space, in which certain portions of geometry reside, and child nodes bound selections of the geometry in that parent node. Geometry acceleration structures may have branches with different numbers of nodes between root and each leaf, may not have an explicitly defined root, may have all primitives bounded by nodes that bound only primitives, or which bound other nodes. As such,FIG.4is exemplary and not limiting as to an implementation of a geometry acceleration structure. For example, an acceleration structure for bounding scene geometry can include a tree of axis aligned bounding boxes (a tree here meaning that there is a relationship between elements that can be followed to traverse from a starting point in the tree to another point). For example, a tree of axis aligned bounding boxes can be hierarchical, and have all geometry bounded by leaf nodes of the hierarchy. Other examples of acceleration structures include K-D trees and sphere hierarchies. Functionally, a hierarchical acceleration structure can be traversed by starting at a root node, which can bound all scene geometry (the root node can be implied, as an extent of the entire 3-D scene), and then finding all children of the root, testing them for intersection, and then continuing to traverse the branches of all child nodes that were intersected by a ray, following the same pattern. Thus, in traversing an acceleration structure for geometry, a ray can be tested for intersection in a plurality of different parts of the 3-D scene concurrently. Ray intersection testing module14also accesses 3-D scene data from the source of 3-D scene data19(FIG.1). Such 3-D scene data can include primitives composing objects in the 3-D scene, and in an example, are accessed when a leaf node is intersected, so that the geometry in that leaf node is to be tested for intersection. Geometry acceleration structure101is used by ray intersection testing module14to remove sub-sets of scene geometry from having to be explicitly tested for intersection. For leaf nodes that are found to be intersected by a given ray, geometry residing in those leaf nodes is tested for intersection with that ray, and information for a closest intersection can be sent to ray intersection shading module29. Once an intersected surface is found, a shader can be run to determine what effect that surface will have on a rendering being produced. A shader can, for example, emit a reflection ray, and can also emit rays that are directed to light sources, in order to determine what light is hitting that intersected surface. FIG.9depicts an example process of producing lighting information for a point in a 3-D scene according to the disclosure.FIGS.5-7are used in explaining aspects of the process ofFIG.9(FIGS.5-7depict 2-D illustrations, instead of a 3-D model, for simplicity). InFIG.9, at265, a point (FIG.5,123) is identified as a location for which lighting information is to be obtained. The location can be a point on a surface of an object in a scene, or a sample of a pixel in a rendering, for example. At267, a ray (ray124ofFIG.5) is defined to be emitted from proximate the point, in a direction, and is associated with a spreading factor. InFIG.5, an expression of the spreading factor is depicted (in 2-D) as a cone defined by boundaries125and126that bracket ray124. At269, a transition zone is defined and includes maximum and minimum ray tracing distances (minimum distance131and maximum distance132ofFIG.5). 
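One possible way such a transition zone could be derived from the ray's spreading factor is sketched below, consistent with the idea that a wider spread moves the transition closer to the origin. The inverse relationship and the constants are assumptions made for illustration only.

```cpp
// Minimum and maximum ray tracing distances bracketing the transition zone.
struct TransitionZone { float minDist; float maxDist; };

// With a wider cone, nearby volume elements already cover the cone's
// cross-section adequately, so ray tracing can stop sooner.
TransitionZone transitionZoneFor(float spread /* e.g. tangent of half-angle */) {
    float base = 1.0f / (spread > 1e-4f ? spread : 1e-4f);
    TransitionZone z;
    z.minDist = 0.5f * base;   // where the cone march begins
    z.maxDist = 1.0f * base;   // beyond this, no rays are traced
    return z;
}
```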
In one example, the transition zone is defined based on the spreading factor. In one example, a wide spreading factor results in a transition zone closer to origin123. At271, using a geometry acceleration structure, rays are traced in the 3-D scene to identify an intersection within the maximum distance132of the transition zone, if any. InFIG.5, ray124is traced (FIG.9,271) from origin123to maximum distance132, attempting to identify a closest intersection for the ray and surface geometry, as explained above. At273, if there is an intersection before the transition zone (closer than minimum distance131inFIG.5), then, at275, results of shading that intersection are used to determine lighting information. At277, beginning from minimum distance131, a cone march begins. Ray tracing continues through the transition zone, and at279, if there is no ray intersection within the transition zone, at281, results of the cone march will be used for producing lighting information for the point. At279, if there is a ray intersection in the transition zone, then at283, results of the cone march are blended with a result of shading induced from or caused by the ray intersection (e.g., a shading output). Now,FIG.5is used to explain further aspects of the cone march introduced inFIG.9. The cone march includes that a conic section defined by boundaries125and126is projected from point123into space (in 2-D, the conic section becomes a line that is moved within the 2-D plane).FIG.5depicts that a grid of volume elements, which each have a relatively small volume compared with volume elements of other grids, is selected for sampling comparatively close to point123. Light characterization information for each volume element intersected by the conic section is accumulated (here, the example assumes light characterization information is associated with faces of the volume elements, which is an example implementation). Such accumulation can include tracking an amount of light energy accumulated within various frequency spectra and also accumulating an opacity value associated with each intersected surface. The opacity value can be derived from characteristics of what lies in the interior of that volume element. The opacity value can be used to decide when to terminate the cone march. For example, a black wall would absorb light energy and be opaque, so the cone march can be stopped based on sampling the light characterization data of a volume element that specifies these properties. FIG.5also depicts that, where the grid of volume elements being sampled increases in size, a transition zone can be provided where volume elements of both sizes are sampled. By particular example, when switching from a grid of volume elements having a size according to volume element128to a grid having volume elements of a size exemplified by volume element129, a transition zone is demarcated between134and135. Volume elements outlined in dashed form (e.g.,140) depict that an accumulated opacity value has been found to make further cone marching unnecessary. The decision criteria as to when a march can be stopped can vary with the application. FIG.6depicts a cross-section142of the conic projection discussed with respect toFIG.5. InFIG.6, the volume elements are of size like that of volume element127.FIG.6thus depicts that some volume elements are entirely within the area of cross-section142. Some volume elements are only partially within cross-section142(e.g., area144); handling of such partially-covered elements is described after the following sketch.
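The accumulation along the cone march, including termination once accumulated opacity makes further elements irrelevant, might look like the following front-to-back compositing sketch. The per-sample fields and the termination threshold are assumptions; the samples are assumed to already contain the blended light characterization gathered from the element faces encountered at each step.

```cpp
#include <vector>

// One sample gathered per volume element intersected along the cone march.
struct VoxelSample {
    float rgb[3];     // light characterization from the element's face(s)
    float opacity;    // 0 = transparent, 1 = fully opaque
};

// Front-to-back accumulation with early termination.
void coneMarch(const std::vector<VoxelSample>& samplesAlongCone, float outRgb[3]) {
    float transmittance = 1.0f;
    outRgb[0] = outRgb[1] = outRgb[2] = 0.0f;
    for (const VoxelSample& s : samplesAlongCone) {
        for (int c = 0; c < 3; ++c)
            outRgb[c] += transmittance * s.rgb[c];
        transmittance *= (1.0f - s.opacity);
        if (transmittance < 0.01f)   // e.g. an opaque black wall: stop the march
            break;
    }
}
```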
For those volume elements, a weighted combination of the light characterization information can be combined with that of the other light characterization information.FIG.7depicts similarly that the volume elements increase in size (e.g., now the volume elements are sized like that of volume element128), but the cross-section143of the conic section also has grown.FIG.7also depicts that in practice, some volume elements will drop out of the cone march before other elements; and in particular, element146is not participating in the cone march, but surrounding elements are.FIGS.6and7also serve to illustrate that a number of volume elements will be sampled during the cone march and the light characterization information can be blended to arrive at a result that can be used in shading point123, or for other processing as appropriate. FIG.10depicts a substitution to a portion of the process ofFIG.9. Rather than perform a cone march (277inFIG.9) through one or more pre-defined grids of volume elements, a set of queries can be assembled, to be made of discretized light energy records. These queries can be generated for different regions of space that enclose volumes of space along a path of a conic projection through the scene along a path of the ray. In particular,FIG.10depicts, at314, that a set of queries can be determined. In an example, queries can have a spherical extent where radii of the queries can be determined based on the spreading factor of the ray and a distance from point123(FIG.5). A size of the volume queried would increase as a distance of the query increases from point123. In one approach, different maps or data structures containing discretized light records can be provided for use in such queries. Each map or data structure can have a different level of abstraction of light energy data. For example, a coarse map may contain discretized light energy records that each represents a blending of a plurality of such discretized light energy records. A map or data structure of an appropriate granularity can be selected to satisfy each query. Thus, a query with a large volume does not necessarily return more records, but rather can be used to query a data structure having light energy records that each represent a blending of more granular records (which may in turn be queried using a different data structure). In such an approach, it may be appropriate to provide a single data structure that can be used for each query, but records at an appropriate level of granularity are selected to satisfy a given query. The appropriate level can be determined based on a variety of factors, including the volume or size of the query, which can be correlated to a distance from a point for which light energy information is being gathered. Thus, discretized light energy records can begin as a description of light energy at a point in space, but upon blending with other records, or abstraction to a volumetric element, a resulting light energy record can be generated for a determined volume. Such generation can be done in advance or done on demand. In one approach, where such generation is done on demand, results of such generation can be cached. In one example, common volumes for a plurality of marching processes (such as different cone marches) can be identified, and then light energy characterization data at an appropriate level of granularity (seeFIGS.4-5as examples) can be generated. 
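A sketch of generating such a set of spherical queries along the cone's path follows: each query's radius grows with distance from the point for which lighting information is being gathered. The step rule, the minimum step, and the proportionality to the spreading factor are assumptions for illustration.

```cpp
#include <algorithm>
#include <vector>

struct SphereQuery {
    float center[3];
    float radius;
};

// Queries along the cone path; larger query volumes farther from the origin.
std::vector<SphereQuery> buildQueries(const float origin[3], const float dir[3],
                                      float startDist, float maxDist, float spread) {
    std::vector<SphereQuery> queries;
    float d = startDist;
    while (d < maxDist) {
        float r = d * spread;                 // radius grows with distance
        SphereQuery q;
        for (int i = 0; i < 3; ++i) q.center[i] = origin[i] + d * dir[i];
        q.radius = r;
        queries.push_back(q);
        d += std::max(2.0f * r, 1e-3f);       // advance so neighboring spheres abut
    }
    return queries;
}
```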
Continuing the example above, cones may be marched from different origins, but these all require light energy characterization data from the same portion of the 3-D scene at the same level of granularity (which can be determined by distance from respective origins, and respective spreading factors for example). In another approach, queries can be formed from multiple overlapping volumes, and Boolean logic can be applied to determine a final query result. For example, spherical queries can be made to overlap to an extent, and only photons that exist in the overlapping portion can be returned. These techniques can be used to approximate querying light energy data associated with surfaces of a grid of volume elements (seeFIG.3). As inFIG.9, where there is no ray intersection detected in the transition zone, then at318, photon query results are used to produce shading outputs for the point. If there was an intersection in the transition zone (and not before, seeFIG.9), then, at320, photon query results are blended with results from shading the intersection. Following on the more specific examples disclosed above,FIG.11depicts a more general process that can be practiced in implementations of the disclosure.FIG.11depicts that, at345, a location is identified for which lighting information is to be obtained. This location can be a point in the 3-D scene (e.g., a point on a surface of an object), or a sample being taken for a 2-D image being rendered, for example. Lighting information is used in the disclosure to include any kind of rendering information, and such lighting information would be expected to vary according to the type of rendering being produced. In order to produce such lighting information, at347, one or more point samples are taken of illumination arriving at the location. At349, one or more volume samples are determined for lighting conditions that may affect the location for which lighting information is to be obtained. At351, such volume sampling is restricted to distances outside of a defined radius from the location, while conversely, the point samples can be confined within that radius. At353, the point and volume samples are performed. At355, results from relatively close point samples are weighted more highly than other samples obtained. At357, results of point and volume sampling can be combined to produce the lighting information for the location. Thus, the process depicted inFIG.11is generic with respect to how the point samples and the volume samples may be taken. The point samples can be confined to a relatively close distance from the location, or otherwise weighted according to distance from the location, while the volume samples are accumulated over a volume swept through the 3-D scene, from the location. Attenuation or an extent of the 3-D scene, for example, can govern a maximum distance at which volume sampling is conducted. The above disclosure related primarily to producing rendering outputs from specified data sources (e.g., shading of intersection results and gathering data from elements of one or more grids of volume elements).FIG.8provides an overview of an example process for producing data sources used in producing the rendering outputs. FIG.8depicts an example process205by which light transport data can be produced for use during rendering from a 3-D scene.FIG.8depicts that process205provides that, at206, rays can be forward traced from lights into a 3-D scene.
For example, a respective set of rays can be determined for each light, where a number of rays in each set can be determined according to an intensity or importance of that light. In one approach, the rays can be specified using Monte Carlo or Quasi Monte Carlo principles. Rays also can be specified based on known locations of objects in a 3-D scene. An artist also can directly specify rays or bundles of rays to be forward traced. This forward tracing establishes visibility of objects in the scene to each of the lights. Additionally, once visibility of each light is determined, further generations of rays can be forward traced according to characteristics of the respective surface intersected by each of the objects. At208, discrete light energy records can be deposited at each intersected surface according to characteristics of that surface. For example, a diffuse surface can have a set of light energy records dispersed on the surface, while a shiny surface can have a specular reflection represented by more densely packed light energy records. Also, rays that are traced from a given surface also will be determined according to the nature of the surface. For example, rays can be traced from a specular surface according to Snell's law. A diffuse surface scatters light more and thus can result in shooting more rays going in variant directions, but can be traced for a shorter distance, in an example. At210, an acceleration structure for use in photon map queries can be produced based on the locations of the deposited light energy records, and an appropriate norming process. This acceleration structure can be separate from an acceleration structure for tracing rays in the scene and also different from the grids of volume elements. Portions or an entirety of these structures can be shared. At214, the grids of volume elements can be produced by aggregating the light energy data described by the records into respective volumes of the 3-D scene that are within different of the volume elements. In one approach, face-specific representations of light energy propagation can be produced from the aggregated data. At216, an acceleration structure for ray tracing can be produced; this portion of the depicted process205may proceed according to conventional approaches. In some examples, however, volume grid elements being processed for producing the 3-D grid of volume elements (at214) can be used as an input for producing elements of the acceleration structure. For example, a smallest volume element being processed can be processed for both light energy records and geometry, even though the ultimate constituent elements of the grids of volume elements and of the acceleration structure are different. In some implementations, one or more of these acceleration structures (for photon querying, for abstracting scene geometry, and the 3-D grids) can be shared or partially shared structures. For example, a set of axis aligned bounding boxes can abstract scene geometry, and closer to a root node, also serve as grid elements, while leaf nodes can be sparse. Each of the above-described portions of process205is depicted serially. However, the process portions can proceed in parallel. For example, if working within a given volumetric portion of the 3-D scene, a part of multiple process portions (e.g.,210,212,214and216) can be performed, and then a different volumetric portion of the 3-D scene can be processed next. 
Additionally, a number of independent threads (or processing units) can be allocated for processing the different portions of the process, such that they may proceed concurrently. FIG.12depicts an example system401comprising one or more of programmable elements and fixed function elements, in which aspects disclosed above can be implemented. System401comprises a host interface403, which can provide an interface to a processor primarily devoted to execution of applications that may use system401for selected processing functionality, such as graphics processing. Such processor can be integrated within a system on chip. A bus404provides communication among various components described below. In some approaches, an application processor also can be connected to bus404, and thus, host interface403is not a necessary component. A variety of data masters402can be used to setup computation to be performed on system401. Such data masters402include a vertex data master405, a pixel data master406, a compute data master407and a ray data master408. Vertex data master405can be used to setup geometry processing to be performed on an array of computation clusters410. Pixel data master406can be used to setup pixel shading operations to be performed on the array410. Compute data master407can be used to setup general purpose parallelized computation on array410. Ray data master408can be used to setup ray tracing operations on array410, such as ray intersection testing and ray shading operations. Array410comprises a set of computation elements identified as cores421-424. Each core421-424comprises a respective local memory435-438. In one example, array410also may comprise shared texture pipelines430and431. A scheduler440can arbitrate among jobs to be performed for each data master405-408. A task distributer441communicates with scheduler440in order to distribute computation tasks to be performed on array410. A ray co-processor445can be provided to assist in ray tracing computation. In one example, ray co-processor445comprises a collector function that collects rays to be processed into groups according to one or more grouping criteria. System401also can comprise a variety of other coprocessors451-453that can be special purpose hardware for different activities, such as audio processing or other digital signal processing. A texture loader454can be used to load texture information as an offload to texture pipelines430-431. Array410also can communicate with a cache hierarchy461that may also couple with a system memory interface462. Elements depicted inFIG.1can be implemented in system401by programming array410, by using ray co-processor445, using one or more co-processors, or a combination thereof. Depending on implementation different, fewer, or additional components may be provided in a system according toFIG.12. For example, not all systems may implement geometry processing on the same physical computation resources as pixel shading or ray processing. Array410can be programmed to perform processes or otherwise implement functions shown inFIG.1. Fixed function circuitry can also be provided to perform such functions, or portions thereof. Various portions of system401can perform different portions of the processes and operations described herein. For example, vertex data master405can operate to obtain vertex data used in creation of an acceleration structure that abstracts scene geometry, and also during forward tracing to create discretized light data records. 
Array410can be programmed with shaders that are activated in response to ray intersections. Array410also can be programmed to perform the calculations for marching a conic section through the disclosed grids of volume elements and other tasks such as ray intersection testing, for example. Ray co-processor445can be provided to perform some specific tasks for ray operations. For example, ray co-processor445can operate to make collections of rays that are submitted to array410for processing concurrently, and operate to swap out ray processing tasks that begin to fail to fully use a computational bandwidth of array410or an independently schedulable portion thereof. Portions of array410can be used to execute different tasks concurrently. For example, one portion of array410can be producing a portion of a grid for marching, while another portion is marching a conic section through a previously-produced portion of the grid. FIG.13depicts aspects of an example system that can receive and respond to queries relating to discovering light energy records. Elements of the depicted system are introduced, before more detailed explanation is provided. A general purpose processing cluster475can execute shader code477-479. Each of these portions of shader code can be instantiated or begin execution in response to an identified intersection between a ray and a surface, for example. General purpose processing cluster475can use a main memory471to store data during execution and can include buffering space473that can be used for purposes described below. As an example, shader code477issues a query480relating to discovery of light energy records, which will be served by a query resolver485. This query can be received by an API484that provides one or more calls, with criteria to be specified in each such call. API484provides a uniform format to queries and can provide abstraction for underlying hardware that may have different capabilities in different implementations. In some implementations, API484may support a baseline type of query or queries, and in other implementations, extended query types or formats may be supported. Query resolver485can read from acceleration structure487that can be implemented as a graph of a set of interconnected elements that abstract subsets of light energy records located in a 3-D scene. A subset of light energy records in light energy records489can be identified to be read. A working memory491can store intermediate results of photon queries. Descriptions of abstraction modeling processes493can be stored and used by query resolver485to produce one or more results for each of the queries it receives, such as query480. When shader code requests light record information (e.g., emits a query to discover photons within a defined radius of a specified point), the shader code may have been coded with some preliminary guess or heuristic as to how many photons may be returned in response to a given query. However, in a query that simply returns all records that meet a given specification, there is no a priori limitation on a number of records that are discovered and returned. So, shader code may reserve a buffer space (e.g., buffer473) to receive records returned from a query. However, such buffer space reservation would need to be sized to a “worst-case” scenario, in which a large number of records were returned.
Additionally, in a situation where memory is constrained, or where it is desirable to reduce data traffic (e.g., for power consumption), this approach may be undesirable. The following disclosure provides a variety of example approaches to enabling shader code to have more predictable responses to such queries, to enable serving of a wider variety of queries and to accelerate the computation of useful responses to such queries. These queries also can be used to produce pre-computed light transport data for use in techniques and systems disclosed above. Queries according to the disclosure also can be used to query and return such pre-computed light transport data. FIG.14depicts an example of light energy records located within defined volume elements of a 3-D space. These light energy records can be discovered and processed by query resolver485.FIG.14depicts that light energy records can contain a variety of different information. The information in the light energy records can be used in conjunction with different kinds of query definitions or other processing approaches to produce an ultimate result to a given query. Examples of light energy records include light energy records496and501. Light energy record496includes data defining an emission497that has a directionally-specific distribution of light energy. Emission497can be represented as a parameterized distribution, such as by supplying coefficients for a selectable directionally-specific weighting function. Emission498of light energy record501shows a simpler example of a direction and intensity vector. FIG.15Adepicts an example situation in which a query is made for light energy records within a radius504of a query locus502. InFIG.15A, records505-507are located within radius504. Query resolver485may search within an acceleration structure for one or more elements that bound the volume of space enclosed by the query. There may be multiple such elements that collectively bound the volume. Query resolver485may search for these elements of the acceleration structure and identify the appropriate records in any order (i.e., there is no guarantee that query resolver485identifies records in a known order, such as ordering by increasing distance from origin502). Also, a number of records located in the volume of the query would be unknown initially. However, some kinds of queries may benefit from or require a selected relative ordering or sorting among records. For example, a query may ask for a specified or maximum number of nearest records to a locus (“k-nearest-neighbor” (knn) query), which may also be limited to a maximum radius of search. In such a circumstance, results found by query resolver485would need to be compared or sorted in order to properly identify the responsive records. Query resolver485may not have enough working memory to store these results. Therefore, an approach to implementing a knn query is to emit a series of nearest neighbor queries, but each query tracks, as a minimum distance, the distance of the previously-identified record. This minimum distance also may be accompanied by identifying information about the previously-identified record. This information allows differentiating two records that are located at the same distance (within a precision of the test). FIG.15Bdepicts a more specific example of how a knn (where k=3) query can be implemented. Initially, a query is made that requests the single nearest record to locus502. This query returns record505.
A subsequent query is made, which includes information about the distance from the previous closest record returned (represented as radius510). Query resolver485can thus exclude from search any portions of space closer to locus502than this distance. Query resolver485may find that both record507and record506have the same distance from locus502. Query resolver485would operate to select from record506and507one record to return, according to identifier information for each record. For example, query resolver485may select a record with a sequentially earlier ID. For example, the second query may return record506, which is associated with radius512. A third query is emitted, and is associated with radius512and identifying information derived from record506(e.g., a selected number of low order bits from an ID). If query resolver finds record506first, then it can exclude this record based on the identifier bits, and then ultimately will find record507, and return that record. Such an approach is appropriate where query resolver485may be a fixed function or limited programmability circuit that has only a small amount of storage available when resolving each query (e.g., may have space only for identifying information for a single record). In such case, each time query resolver identifies a record that may be responsive to a query, it may need either to return that record or to replace an existing stored identifier. Such a query resolver can deterministically respond to a nearest-neighbor query, and by extension according to the above-described technique, to a knn, k>1, query. FIG.16is used to describe aspects of techniques to abstract records identified for a query, and present a combined result. These techniques can be used to increase determinism in an amount of data traffic that will be generated by a query, reduce query buffering requirements, and provide hardware acceleration for processing query results, and allowing artist control over aspects of hardware accelerated filtering or abstracting of query results. More particularly,FIG.16depicts an example set of curves530-534that have, as an independent variable, a number of light records and as dependent variable, a contribution ratio. In some implementations, these curves are for a set of light records organized by increasing distance from a particular locus. Thus, in some approaches, each of the curves describes a different overall weighting for a set of light records organized into increasing distance order. For example, curve532describes a linear decrease in contribution ratio for each incremental light record discovered, while curves534and533describe a faster drop off of contribution ratio. An implication of these techniques is that light records at different distances from a locus can be blended according to different strategies, and based on relative location or density of other light records. For example, in the linear curve532, each incremental record can be weighted by a linearly decreasing weight. In some approaches, the total weighting can be a constant value, e.g., such that the blending does not amplify a total energy represented by the records, but rather blends to produce a constant-energy result. These curves can be structured so that they have pre-defined weightings for each incremental record, assuming a pre-determined number of records; they also can be parameterized, such that the weighting of each record is determined based on a total number of records that was discovered. 
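The repeated nearest-neighbor approach ofFIGS.15A-15Bcan be sketched as follows. The linear scan below stands in for the query resolver's acceleration-structure search, purely for illustration; what the sketch shows is the bookkeeping of a minimum distance plus the previously returned record's identifier, so that records at exactly the same distance are neither skipped nor returned twice.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct LightRecord {
    uint32_t id;
    float pos[3];
};

static float dist(const LightRecord& r, const float locus[3]) {
    float dx = r.pos[0] - locus[0], dy = r.pos[1] - locus[1], dz = r.pos[2] - locus[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// One nearest-neighbor query: closest record that is farther than minDist,
// or at minDist but with an id greater than lastId. Returns -1 if none remains.
int nearestAfter(const std::vector<LightRecord>& records, const float locus[3],
                 float minDist, uint32_t lastId) {
    int best = -1;
    float bestDist = 0.0f;
    for (int i = 0; i < (int)records.size(); ++i) {
        float d = dist(records[i], locus);
        if (d < minDist || (d == minDist && records[i].id <= lastId)) continue;
        if (best < 0 || d < bestDist ||
            (d == bestDist && records[i].id < records[(size_t)best].id)) {
            best = i;
            bestDist = d;
        }
    }
    return best;
}

// knn as a series of nearest-neighbor queries, each excluding what was found.
std::vector<int> kNearest(const std::vector<LightRecord>& records,
                          const float locus[3], int k) {
    std::vector<int> result;
    float minDist = -1.0f;   // first query excludes nothing
    uint32_t lastId = 0;
    while ((int)result.size() < k) {
        int idx = nearestAfter(records, locus, minDist, lastId);
        if (idx < 0) break;
        result.push_back(idx);
        minDist = dist(records[(size_t)idx], locus);
        lastId = records[(size_t)idx].id;
    }
    return result;
}
```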
Determining final weights for each record based on a total number of records discovered can be implemented by first determining a total number of records before weighting each record and accumulating that output into a summation. Additionally, two or more of these curves can be blended together in order to arrive at an interpolated curve. For example, if curve532is weighted for 15 records, while curve534is weighted for 8 records, then if 10 records are identified for a given query, the weightings for those records can be determined by blending the weightings described by each of these curves. In some implementations, a set of curves can be pre-defined and stored or encoded in circuitry accessible to query resolver485, and can be part of abstraction modeling processes493. In some implementations, the order and shape of the curves can be specified by different polynomials. A selection of one or more polynomials and parameters for those polynomials can be passed with a query. A query can specify a volume to be searched for records in any of a variety of ways, such as a locus of one or more points and a distance from those points, extrusions, boxes, spheres, and so on. Some queries may not explicate a maximum volume, but instead may specify a maximum number of records. Some queries may specify a directionality to exclude certain records. For example, a query may require that directions of records have positive dot products with a direction specified by the query.FIG.18provides more details concerning examples of query definition options. FIG.17depicts an example where different queries536,538and539have different maximum radii from origin540, and therefore include different sets of records. The records that are discovered with respect to each of the queries can be blended in accordance with a selected curve fromFIG.16. For example, if the records identified for query538were blended according to curve533, and curve533transitioned from high to low between 2 and 4 records, then the 5threcord would have a relatively small contribution to the weighted sum while the first record would have a much higher contribution. If curve534finished transitioning at around 6 records, and curve532finished transitioning at12, then a blending between curve534and curve532may be used to blend the set of 10 records identified by query539. FIG.18depicts that blending can be controlled based on a variety of characteristics, which can include those specified by the query, and also those of the records themselves.FIG.18shows that a query can specify a blending curve for each of a number of different characteristics of light energy records. For example, similarity of direction can be one factor in an ultimate weighting determination, color (or more generally spectral content) of the light energy records can be another, and distance from a specified locus yet another example characteristic. These curves collectively can specify a final weighting function. In some examples, different channels of a record can be weighted differently. These options can be specified by a query. Such a query can reference a convention of prearranged query profiles, other selections of combinations of search criteria. Query resolver circuitry can be provided to accelerate such searches. FIG.19depicts a modified version of the example system401, where a ray and photon co-processor550is provided and comprises a set of resolver units551-553. Each resolver unit551-553contains an intersection test unit, an evaluation unit556, and a local memory557. 
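Before returning to the resolver units ofFIG.19, the curve blending described above (e.g., a curve tuned for 8 records blended with a curve tuned for 15 when 10 records are discovered) might be implemented as follows. The per-record tabulation of each curve and the linear interpolation are assumptions; the curve shapes themselves are whatever the implementation or artist chose.

```cpp
#include <vector>

// Interpolate per-record weights between two curves tuned for different
// record counts. Curves are supplied as non-empty tables of per-record
// weights, ordered by increasing distance from the locus.
std::vector<float> blendCurves(const std::vector<float>& curveLow,  int tunedLow,
                               const std::vector<float>& curveHigh, int tunedHigh,
                               int actualCount) {
    float t = 0.0f;
    if (tunedHigh != tunedLow)
        t = float(actualCount - tunedLow) / float(tunedHigh - tunedLow);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;

    std::vector<float> weights(size_t(actualCount));
    for (int i = 0; i < actualCount; ++i) {
        // Hold the last tabulated weight if a curve is shorter than actualCount.
        float wLow  = curveLow[ i < (int)curveLow.size()  ? size_t(i) : curveLow.size()  - 1];
        float wHigh = curveHigh[i < (int)curveHigh.size() ? size_t(i) : curveHigh.size() - 1];
        weights[size_t(i)] = (1.0f - t) * wLow + t * wHigh;
    }
    return weights;
}
```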
Intersection test unit555can return a closest intersection result for a shape. The shape may be a primitive composing scene geometry, an element of an acceleration structure, displaced or implicit geometry, or another surface that can be defined and tested for intersection with a ray. Evaluation unit556may implement aspects of the query-related disclosures described above. Evaluation unit556may comprise circuitry that can only execute non-branching instruction streams, for example. Evaluation unit556may comprise circuitry that can evaluate a specified set of polynomials, such as a linear and a quadratic polynomial, for a set of coefficients and a value of the independent variable. Evaluation unit556also may include a multiplier unit, such as an integer, fixed point, single or double precision floating point unit. The multiplier may use a block floating point representation of one or more of the multiplicands. Evaluation unit556also may implement an accumulator that can accumulate results of these calculations into a buffer. Evaluation unit556also may be used to accelerate portions of an algorithm that is being predominantly executed on intersection test unit555, a core in array410, or a combination thereof. For example, evaluation unit556may return a stream of function evaluations, where the one or more independent variables is incremented according to a step size, such as a step size set by intersection test unit555. This stream of evaluations may be used to perform volumetric rendering techniques, ray marches, cone marches, and so on. In one example, evaluation unit556may be programmed to continue to evaluate an expression until an output of that expression changes sign, and then report current values for one or more independent variables. Unit556may output multiple values within a specified range of the sign change, such as a previous and current value, bracketing a zero crossing point of the expression. FIG.20depicts an example process that summarizes aspects disclosed herein. At561, a query can be received for processing. At563, records within a volume defined by the query are identified. These records are abstracted according to an abstraction model applied to the records, to produce a query result. Such query result may be of the same or similar format to a query result that would have been returned for a single light energy record, or may express some aspect of a distribution of the records identified. At567, the query result is returned. FIG.21depicts an example process of query abstraction. At569, one or more records that were within the volume can be rejected based on a rejection criteria (e.g., directionality comparison). This is an appropriate approach where a tester may first identify a record within a specified volume, but does not test other query parameters, but instead identifies such records for further analysis. At571, remaining records can be counted, and at573, relative weighting or blending criteria can be defined, or selected. At575, these records are weighted and at577, the weighted values are summed to produce a query result. At578, results of different of the weighting or blending criteria can be interpolated. In an example, each weighting function can be tuned for a pre-determined number of records, and after a total number of records determined to be responsive to the query is determined, a result from any one of the weighting or blending processes can be selected, such as according to a closest pre-determined number to the actual number of records. 
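The reject / count / weight / sum sequence ofFIG.21might be sketched as follows. The rejection test (a positive dot product with a direction specified by the query) and the rank-by-distance weighting are only examples of the criteria a query could specify; names and field layouts are assumptions.

```cpp
#include <algorithm>
#include <vector>

struct FoundRecord {
    float rgb[3];
    float direction[3];
    float distance;      // distance from the query locus
};

// Abstract a set of discovered records into a single blended result.
void abstractRecords(const std::vector<FoundRecord>& found,
                     const float queryDir[3],
                     const std::vector<float>& weightByRank,  // from a selected curve
                     float outRgb[3]) {
    // Reject records whose direction opposes the query direction.
    std::vector<const FoundRecord*> kept;
    for (const FoundRecord& r : found) {
        float dot = r.direction[0] * queryDir[0] + r.direction[1] * queryDir[1] +
                    r.direction[2] * queryDir[2];
        if (dot > 0.0f) kept.push_back(&r);
    }
    // Order remaining records by increasing distance, then weight and sum.
    std::sort(kept.begin(), kept.end(),
              [](const FoundRecord* a, const FoundRecord* b) {
                  return a->distance < b->distance;
              });
    outRgb[0] = outRgb[1] = outRgb[2] = 0.0f;
    for (size_t i = 0; i < kept.size(); ++i) {
        float w = i < weightByRank.size() ? weightByRank[i] : 0.0f;
        for (int c = 0; c < 3; ++c) outRgb[c] += w * kept[i]->rgb[c];
    }
}
```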
In an implementation, two of the results from different weighting or blending processes can be interpolated, if their pre-determined numbers bracket the actual number of records determined to be responsive. FIG.22further depicts aspects of an implementation of these disclosures. A general purpose programmable unit591executes shader code589. Shader code589emits a query480, which may comprise one or more of query bounding info604, material properties605, program references607, and parameters609. An example of material properties605includes a Bidirectional Reflectance Distribution Function (BRDF). In such a case, a BRDF for a material can be supplied by a query, and that BRDF can be used in calculating a result returned in response to query480. As a particular example, a calculation can be made that determines how much energy of a distribution defined by a light energy record is emitted within a boundary defined by the BRDF. A query may be expressed by shader code589in a format supported by Application Programming Interface (API)484. API484can be implemented by computer executable modules that provide an interface that accepts a set of parameters and other information for query593, represented by query specifier module595. Query specifier module595can produce one or more constituent query specifications appropriate for capabilities of a query resolver597, which would provide results of query593. For example, a knn query call may be supported by API484, which converts such a query into a set of query specifications that are each served by underlying hardware, and the results of these separate query specifications collectively define the results for the knn search.FIG.22also depicts that a feedback loop may be provided to query specifier595from query resolver485. Query resolver485may provide results (e.g., results601-603) as they are available to a simple program execution unit611. When query resolver485completes the query, by identifying all responsive records, query resolver485may provide a completion indication604. Such indication may also be provided with a last result returned for a query. Simple execution unit611can be configured with a program from program store613. Such a program can have specific limitations appropriate to characteristics of simple program execution unit611. For example, some execution units may not support branch instructions, may perform only in-order instruction execution, may not support conditionals, or may not support looping. These limitations may be made in order to reduce an amount of silicon required to implement the execution unit, and/or to avoid or reduce branching code. In one example, a program can be implemented as a set of instructions for one increment or step of an algorithm. Such a program can report intermediate results of one or more increments, or only a final result. Then, query resolver485may supply information for a subsequent step or increment. For example, simple program execution unit611may implement one step of a ray march, cone march, volume rendering operation, texture coordinate interpolation, volume interpolation, function evaluation for an incremented independent variable, and so on. A program or programs executed by simple program execution unit611may be identified by program reference(s)607, supplied with query480. Another approach to simple program execution unit611is to provide a set of math function models615that can selectively be chosen to be implemented by execution unit611.
As an example, these models may include polynomial functions. Parameters and a current value or values for the independent variable(s) may be supplied with query480. These parameters and current values also may be supplied or updated from initial values by query specifier595. For example, execution unit611can evaluate a function and return that evaluation result to query resolver485, which may decide to increment a variable or change a parameter and request re-evaluation of that function. Execution unit611also may cooperate with a local accumulation function617that accepts values from execution unit611and accumulates these into a buffer location. In one example, the accumulation may include a simple summation, such as where execution unit611performed a weighting that accounts for values already accumulated in the buffer. In other situations, local accumulation may track more statistics concerning values that were accumulated. Local accumulation617may be implemented as a write instruction to a specific part of a local memory; in some implementations, this memory is not protected from incorrect program execution, such that execution unit611may update this value without arbitrating for access. That locally accumulated value may be returned to a global result buffer618after a final accumulation. The global buffer location may be specified by query480. Execution unit611also may be used to automate or accelerate other rendering tasks. As an example, differentials may be associated with rays. A differential for a ray can be modeled by tracing two or more additional rays that travel generally in the same direction as the original ray, but are not exactly co-parallel. In order to make use of the ray differential, a model of where these additional rays intersect with respect to the original ray can be made. Execution unit611can evaluate a function that approximates where each additional ray would have hit, based on its direction and a model of the surface intersected by the original ray. In one example, a tangent plane at an intersection point can be defined and, based on an angle formed between each differential ray and the original ray, execution unit611can evaluate a function to identify an intersection position on this tangent plane. Thus, for a given intersection between a ray and a surface, execution unit611can identify intersection points for the differential rays. These points can be expressed parametrically on a surface (e.g., a tangent plane). The term “light energy characterization” is used here to include any kind of directed flow of energy or other material, such as for modeling or quantifying intensity and/or directionality of energy propagation. A “light energy record” refers to data associated with a point in an n-dimensional space (e.g., n=3) which characterizes propagation of energy. For example, the record can include data that characterizes radiance, such as radiance of light, or propagation of electromagnetic wave energy. Such records can include data characterizing energy inbound to or outbound from a point on a surface, or existing in a region of a defined locus or defined volume. Different records can cover different volumes of space and can have overlapping volumes. Different records can represent the same or partially-overlapping volume at a different level of abstraction. As a general example, propagating electromagnetic waves, such as x-rays, microwaves or radio frequency waves can be modeled using such energy characterization data, as can infrared radiation.
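Returning to the ray differential approximation described above, intersecting a differential ray with the tangent plane defined at the main ray's hit point can be sketched as a standard ray-plane intersection. This is an illustration of the idea only; as noted, a real implementation would express the result parametrically on the surface rather than in world space.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Approximate where a differential ray "would have hit", using the tangent
// plane (hitPoint, normal) at the original ray's intersection.
std::optional<Vec3> differentialHit(const Vec3& hitPoint, const Vec3& normal,
                                    const Vec3& diffOrigin, const Vec3& diffDir) {
    float denom = dot(normal, diffDir);
    if (std::fabs(denom) < 1e-6f) return std::nullopt;   // ray parallel to the plane
    Vec3 toPlane{hitPoint.x - diffOrigin.x,
                 hitPoint.y - diffOrigin.y,
                 hitPoint.z - diffOrigin.z};
    float t = dot(normal, toPlane) / denom;
    if (t < 0.0f) return std::nullopt;                   // plane behind the ray
    return Vec3{diffOrigin.x + t * diffDir.x,
                diffOrigin.y + t * diffDir.y,
                diffOrigin.z + t * diffDir.z};
}
```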
Thus, using the term “light” implies no limitation as to the kinds of energy or transport thereof capable of being modeled by implementations of the disclosure. In the disclosure, lighting and shading information can be produced and can be accessed. Some lighting and shading information serves as inputs to other processes that ultimately produce a final rendered output. Thus, shading information may not be a final product, but an intermediate thereof. Such intermediate data can take a variety of forms and need not directly express color, luminance, chrominance or the like. An example of a light energy record, in the context of 3-D rendering, is a “photon”, as used in the context of 3-D rendering applications, but light energy records do not need to conform to implicit or explicit limitations of “photons”. As would be apparent from the disclosure, some of the components and functionality disclosed may be implemented in hardware, software, firmware, or any combination thereof. If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium, in one example, the media is non-transitory. Examples include a computer-readable medium encoded with a data structure and a computer-readable medium encoded with a computer program. Machine-readable media includes non-transitory machine readable media. Other kinds of media include transmission media. A non-transitory medium may be any tangible medium that can be accessed by a machine. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a machine. Those of skill will also appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software in a computer-readable medium, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. Modern general purpose processors regularly require in excess of two billion transistors to be implemented, while graphics processing units may have in excess of five billion transistors. Such transistor counts are likely to increase. Designs have used these transistors to implement increasing complex functionality and to increase parallelism. As such, it becomes increasingly necessary to be able to describe or discuss technical subject matter concerning such processors, whether general purpose or application specific, at a level of detail appropriate to the technology being addressed. In general, a hierarchy of concepts is applied to allow those of ordinary skill to focus on details of the matter being addressed. 
Describing portions of a design (e.g., different functional units within an apparatus or system) according to functionality provided by those portions is often an appropriate level of abstraction, rather than exhaustively describing implementations of such portions, since each of these portions may themselves comprise hundreds of thousands or millions of gates and millions, tens of millions or hundreds of millions of transistors. When addressing some particular feature or implementation of a feature within such portion(s), it may be appropriate to identify substituent functions or otherwise characterize some sub-portion of that portion of the design in more detail, while abstracting other sub-portions or other functions. A precise logical arrangement of the gates and interconnect (a netlist) implementing a portion of a design (e.g., a functional unit) can be specified. However, how such logical arrangement is physically realized in a particular chip (how that logic and interconnect is laid out in a particular design) still may differ in different process technologies and for a variety of other reasons. To the extent that circuitry implementing particular functionality may be implemented differently within different contexts, disclosure of a particular circuit may not be particularly helpful. Also, many of the details concerning producing netlists for functional units as well as actual layout are determined using design automation, proceeding from a high level logical description of the logic to be implemented (e.g., a "hardware description language"). As such, it is often unnecessary and/or unhelpful to provide more detail concerning a portion of a circuit design than to describe the functionality to be provided. The term "circuitry" does not imply a single electrically connected set of circuits. Circuitry may be fixed function, configurable, or programmable. In general, circuitry implementing a functional unit is more likely to be configurable, or may be more configurable, than circuitry implementing a specific portion of a functional unit. For example, a "simple execution unit" according to the disclosure is less configurable than an Arithmetic Logic Unit (ALU) of a processor, which may reuse the same portion of circuitry differently when performing different arithmetic or logic operations. As such, that portion of circuitry is effectively circuitry or part of circuitry for each different operation, when configured to perform or otherwise interconnected to perform each different operation. Such configuration may come from or be based on instructions, or microcode, for example. For example, a "query specifier module" may be implemented by machine code configuring a configurable or programmable processing unit, such as a core or a set of programmable cores. Thus, such a programmable processing unit, as configured by the machine code, becomes query specifier circuitry, where a person of ordinary skill would understand that the term "query specifier" describes functionality disclosed in the specification for such query specifier module, such as providing an interface that accepts a set of parameters and other information for a query and produces a query specification that is appropriate for capabilities of a query resolver that will service the query. In all such cases, describing portions of an apparatus or system in terms of its functionality conveys structure to a person of ordinary skill in the art.
In the context of this disclosure, the term "unit" refers, in some implementations, to a class or group of circuitry that implements the function or functions attributed to that unit. Such circuitry may implement additional functions, and so identification of circuitry performing one function does not mean that the same circuitry, or a portion thereof, cannot also perform other functions. In some circumstances, the functional unit may be identified, and then a functional description of circuitry that performs a certain feature differently, or implements a new feature, may be provided. As such, a "unit" may be formed of one or more circuits that implement a function or functions, where one or more of the circuits may be composed of configurable or programmable logic elements. Examples of logic elements include portions of ALUs, and a combination of switches and interconnect that implement logical expressions, such as Boolean logic expressions. In some cases, a structure or structures implementing a given unit or module may have permanent physical differences or adaptations compared with structure(s) implementing other modules or units within an apparatus or system. However, such structure(s) also may be produced by a temporary adaptation or configuration, such as one caused under program control, microcode, or other source of configuration. Different approaches to design of circuitry exist; for example, circuitry may be synchronous or asynchronous with respect to a clock. Circuitry may be designed to be static or dynamic. Different circuit design philosophies may be used to implement different functional units or parts thereof. Absent some context-specific basis, "circuitry" encompasses all such design approaches. Although circuitry or functional units described herein may be most frequently implemented by electrical circuitry, and more particularly, by circuitry that primarily relies on a transistor implemented in a semiconductor as a primary switch element, this term is to be understood in relation to the technology being disclosed. For example, different physical processes may be used in circuitry implementing aspects of the disclosure, such as optical, nanotubes, micro-electrical mechanical elements, quantum switches or memory storage, magnetoresistive logic elements, and so on. Although a choice of technology used to construct circuitry or functional units according to the technology may change over time, this choice is an implementation decision to be made in accordance with the then-current state of technology. This is exemplified by the transitions from using vacuum tubes as switching elements to using circuits with discrete transistors, to using integrated circuits, and advances in memory technologies, in that while there were many inventions in each of these areas, these inventions did not necessarily change how computers fundamentally worked. For example, the use of stored programs having a sequence of instructions selected from an instruction set architecture was an important change from a computer that required physical rewiring to change the program, but subsequently, many advances were made to various functional units within such a stored-program computer. Functional modules may be composed of circuitry, where such circuitry may be fixed function, configurable under program control or under other configuration information, or some combination thereof.
Functional modules themselves thus may be described by the functions that they perform, to helpfully abstract how some of the constituent portions of such functions may be implemented. In some situations, circuitry and functional modules may be described partially in functional terms, and partially in structural terms. In some situations, the structural portion of such a description may be described in terms of a configuration applied to circuitry or to functional modules, or both. The description of the aspects and features is provided to enable any person skilled in the art to make and use the systems and apparatuses and to perform the methods disclosed. Various modifications will be readily apparent to those skilled in the art, and the principles described in this document may be applied to other aspects without departing from the spirit or scope of the disclosure. Thus, the description is not intended to limit the claims. Rather, the claims are to be accorded a scope consistent with the principles and novel features disclosed herein. The drawings include relative arrangements of structure and ordering of process components, solely as an aid in understanding the description. These relative arrangements and this numbering are not an implicit disclosure of any specific limitation on ordering or arrangement of elements and steps in the claims. Process limitations may be interchanged sequentially without departing from the scope of the disclosure, and means-plus-function clauses in the claims are intended to cover the structures described as performing the recited function, including not only structural equivalents but also equivalent structures. Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than, additional to, or less than, those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
75,206
11861787
DETAILED DESCRIPTION Displaced geometry can be implemented using vector displacements. Vector displacement allows an arbitrary vector or vectors to control displacement of an element of scene geometry or a portion thereof. In some implementations, vector displacement allows a completely arbitrary displacement for any element of geometry. For example, an element of geometry could be displaced in any direction by any amount. Vector displacement thus provides a high degree of control over displacement, but presents a comparatively difficult rendering task. In some aspects herein, displacement is implicitly constrained within a bound, which is set based on one or more pre-defined vectors, where a maximum displacement can be set for these vectors. In one example approach, artist-supplied vectors can be associated with vertices that define source geometry. An artist can be any human, machine, or process that generates the vectors. The term is used to distinguish these vectors from other vectors that may be associated with source geometry, such as normals that can be associated with vertices and primitives of source geometry. Displacement can be further constrained, for any point on a 2-D surface, to be along a vector determined by interpolating two or more artist-supplied vectors associated with the 2-D surface. Thus, a completely general displacement can be constrained to an analytical result determined by an interpolated vector and a maximum displacement limit. In order to determine displaced geometry based on a given element of source geometry, the artist-supplied vectors for two or more vertices that define the element of source geometry can be used to control how the source geometry is displaced, or otherwise be used in defining limitations on possible displacements for a particular location on the element of source geometry. The source geometry can be displaced according to some process and according to the determined control vector. The displaced geometry can then be used in ray intersection testing, and for other purposes as appropriate. Some aspects of the disclosure relate to exemplary systems and processes by which source geometry can be displaced and techniques that can be used in testing displaced geometry for intersection. FIG.1depicts a triangular primitive10that is associated with a geometric normal (which can in turn be defined by a winding order of vertexes forming the primitive and locations of those vertexes in space (the locations of the vertices establish a plane having a normal direction, and a convention establishes which way the normal points along that normal direction)). InFIG.1, the vertexes forming primitive10also are associated with artist-supplied vectors13-15. Here, artist-supplied refers to the concept that these vectors are not needed to define the surface or location of the primitive10or its relationship to other primitives, such as a mesh of primitives. Rather, these vectors are used according to the techniques described below. FIG.2depicts a mesh of primitives18(e.g., looking at a cross-section of a mesh of primitives coarsely defining a sphere). Primitive10is identified, along with artist-supplied vectors14and15. An interpolated vector16is shown between these artist-supplied vectors14and15. Collectively, the artist-supplied vectors for the primitives forming the sphere are used to define a shell20. Shell20is depicted as a smooth shape; however, shell20would be faceted to a degree determinable by whether and how much each of the original primitives was sub-divided.
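As a minimal sketch of the constrained displacement described above, and assuming barycentric interpolation of the artist-supplied per-vertex vectors (the disclosure does not mandate a particular interpolation), the following Python function displaces a point on a source triangle along the interpolated vector by a height clamped to a maximum; the names are hypothetical.

# Minimal sketch: displace a point on a source triangle along an interpolated
# artist-supplied vector, with the displacement clamped to a maximum height.
# p0..p2 are vertex positions, v0..v2 are the per-vertex artist vectors, and
# (b0, b1, b2) are barycentric coordinates of the surface point.

def interpolate_displacement(p0, p1, p2, v0, v1, v2, b0, b1, b2,
                             height, max_height):
    # Point on the source primitive at barycentric coordinates (b0, b1, b2).
    base = tuple(b0 * p0[i] + b1 * p1[i] + b2 * p2[i] for i in range(3))
    # Interpolated control vector from the artist-supplied per-vertex vectors.
    direction = tuple(b0 * v0[i] + b1 * v1[i] + b2 * v2[i] for i in range(3))
    # Constrain displacement to lie along that vector, up to max_height.
    h = max(0.0, min(height, max_height))
    return tuple(base[i] + h * direction[i] for i in range(3))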
In one example, the primitives are not sub-divided, and shell20would have a facet for each original primitive. In other examples, each primitive would have a corresponding plurality of facets in shell20. Shells according to the example shell20can be used in furtherance of ray tracing implicitly defined geometry, according to the disclosures that follow. Displaced geometry provides an example of implicit geometry processing according to the disclosure. Here, implicit geometry includes approaches to storing a set of geometry data prior to "run time" that is used in some fashion to produce a final geometry surface. In one example, a function can be associated with primitive10, and that function can be evaluated based on one or more inputs produced during runtime, in order to evaluate presence of final geometry within a particular volume or at a point in 3-D space. An overview of how rays can be traced in a 3-D scene that has explicitly-defined and implicitly-defined geometry is depicted byFIG.3. At203, it is determined to trace ray(s) through a volume in 3-D space. Such a volume can be an entirety of a 3-D scene, or a portion thereof. Such determination may be implemented by emitting a ray during rendering of an image, for example. At205, such ray(s) begin (or continue) to traverse an acceleration structure. The acceleration structure includes a graph of elements that each bound a respective portion of the 3-D scene. Traversal of the graph allows identification of a final subset of geometry against which the ray(s) will be tested for intersection. In some implementations, such traversal occurs out of an order that the ray travels in the scene. As examples, each ray may be tested breadth-first, rays may be grouped and regrouped for testing according to criteria, and traversal for rays emitted together may not begin together. Thus, a given ray may have candidate intersections identified out of distance order, and such intersection(s) should be further processed in order to ultimately identify a closest intersection for the ray, in typical usage. At207, the traversal results in identification of element(s) that the ray enters, and hence need to be further processed to determine whether an intersection for each such ray exists there. At209, it is determined whether that element is a trapping element or not. A trapping element can be the same shape as other acceleration structure elements, but may be associated with a flag that indicates its status as a trapping element.FIG.5depicts an acceleration structure that has both non-trapping elements (e.g., bounding elements303,305,307,315and316), and trapping box elements309-313. Each of these elements is shown to have a connection with at least one other element, and would be located in 3-D space. Some implementations may have a single trapping element type; other implementations may have a plurality of trapping element types. Such implementations may have multiple bits allocated to indicate the trapping element type. In some examples, indicating that the element is a trapping element results in execution of a trapping element procedure211. Where an implementation includes multiple trapping element types, a different trapping element procedure may be executed for each type. Trapping elements also may be stored in a separate acceleration structure, which can be traversed separately from, or in addition to, one bounding explicitly defined geometry.
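As a minimal sketch of the traversal outline above, and not a literal implementation of the flow inFIG.3, the following Python routine checks a per-node trapping flag and dispatches a per-type trapping procedure, tests directly bounded geometry at leaf nodes, and otherwise continues to child elements; the node attributes and helper callables are hypothetical.

# Minimal sketch of acceleration structure traversal with trapping elements.
# `intersects`, `test_primitive`, and the entries of `trapping_procedures`
# are caller-supplied callables; hit results are assumed to carry a .distance.

def nearer(a, b):
    if a is None:
        return b
    if b is None:
        return a
    return a if a.distance <= b.distance else b

def traverse(ray, root, intersects, test_primitive, trapping_procedures):
    stack = [root]
    closest = None
    while stack:
        node = stack.pop()
        if not intersects(ray, node.bounds):
            continue
        if node.is_trapping:
            # Dispatch the procedure registered for this trapping element type.
            hit = trapping_procedures[node.trap_type](ray, node)
            closest = nearer(closest, hit)
        elif node.is_leaf:
            for prim in node.primitives:
                closest = nearer(closest, test_primitive(ray, prim))
        else:
            stack.extend(node.children)
    return closest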
Here, trapping element type refers to what procedure or differentiated computation protocol will be followed when entering that trapping element. For example, trapping elements can be provided to evaluate a Non-Uniform Rational B-Spline (NURBS) surface or a subdivision surface, to determine Level Of Detail (LOD), to perform motion blur calculations, and so on. As will be explained further below, an acceleration structure may contain elements for different purposes. For example, trapping elements309-311may enclose representations of different levels of detail for the same geometry. Some aspects of the disclosure provide tester circuitry that can selectively traverse an acceleration structure (such as without intervention of a generally programmable computation unit or program that emitted the ray being traversed), as explained below. If the element is not a trapping element, then at215, it is determined whether the element directly bounds geometry (e.g., it is a leaf node in a homogeneous acceleration structure). If not, then at205, traversal continues to subsequent elements (e.g., child elements of the previously-identified element). Reference counts are updated (223), as explained below. If there was directly-bounded geometry, then at217, that geometry is tested for intersection with the ray(s), and at219, results of such testing are outputted. Results of intersection testing can include an identifier for a primitive intersected by each ray(s), a distance to the intersection, parametric coordinates determined for an intersected surface, some combination thereof, or other data and combinations thereof. In using trapping elements, fully completing the traversal of a ray may involve creating multiple distinct ray segments, each defined with a respective distinct ray data structure. A reference count can be maintained within each trapping element and also across all of the ray segments used to fully trace a given ray (e.g., rays that may have different origins and/or termini along the path of the ray). For example, where a trapping element has a separate acceleration structure from a principal acceleration structure of a 3-D scene, the ray segment may be located in several elements in that acceleration structure; after resolving the reference count for that trapping element, the ray segment may be completed, but the entire ray, of which the ray segment is a part, may not be. Since other intersections for each ray may have been identified already, at221, an intersection or intersections being tracked for each ray may be updated. For example, if an intersection closer than a previously-identified closest intersection was identified, then the new closest intersection is maintained in favor of the previous one. At223, one or more reference counts for each of the rays are updated according to the traversal. In particular, a count may be maintained for each ray, which tracks how many acceleration structure elements that ray currently exists in (where multiple segments are used for one ray, then distributed reference counts may be maintained and ultimately resolved). For example, the count is decremented when testing of a ray with an element completes, but is incremented if that ray is then indicated for testing against a child element. The count reaching zero indicates that the ray has completed traversal (although, depending on implementation, testing of that ray with all geometry may not yet have completed).
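A minimal sketch of the per-ray reference counting described above follows, with hypothetical names: the count grows when the ray is scheduled against additional elements and shrinks as each test completes, and reaching zero signals that traversal of that ray (or ray segment) has finished.

# Minimal sketch of a per-ray reference count used to detect traversal completion.

class RayTraversalState:
    def __init__(self):
        self.ref_count = 0

    def schedule_against(self, num_elements):
        # Ray has been indicated for testing against num_elements elements.
        self.ref_count += num_elements

    def complete_test(self):
        # One test against an element has completed (or was excluded).
        self.ref_count -= 1
        return self.ref_count == 0  # True when traversal of this ray is done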
Returning to209and211, a trapping element can be used to indicate that there is implicitly defined geometry within the 3-D volume bounded by that trapping element. Processes and system aspects are disclosed with respect to figures followingFIG.3. With respect to the remainder ofFIG.3, an output of a trapping element procedure may include an indication of a nearest intersection of a ray with geometry (implicit or explicit); where a closest intersection is desired, this defines a maximum distance that the ray needs to be traced. The ray also may be associated with minimum distance information, which may be used to exclude acceleration structure elements or geometry, as explained below. These intersections ultimately may feed into process portion213. At218, if intersection testing is completed, then at225, intersection(s) can be identified for shading, and at227, shader code can be executed. In some implementations, more than one intersection value may be provided for a single intersection, and at229, one of these values may be selected for use as an origin for a child ray emission occurring at231. For example, a different intersection value may be selected for a reflection ray than for a refraction ray, as explained below. FIG.4depicts a first example of a process for identifying an intersection between a ray and implicit geometry. In one example, the trapping element found to be intersected bounds a shell (and source geometry for the shell) as disclosed with respect toFIGS.1-2. At245, a point of intersection between each ray(s) and a shell surface is found. Thus, after determining that the ray(s) are to enter the trapping element, the rays can be projected to a surface of the shell at one or more points. FIG.6depicts a trapping element534that bounds a faceted shell323. The faceted shell323can be formed by extruding a set of primitives along directions defined by artist-supplied vectors (FIGS.1-2). For example, primitive332can be extruded to define a segment324of shell323. Thus, in one approach, there is a 1:1 correspondence between facets of a shell and original primitives of the source geometry.FIG.7Adepicts segment324as being constructed of primitive332of source geometry, and a set of bilinear patches that connect primitive332to facet325. For example, as shown inFIG.7B, bi-linear patch350connects vertexes355and356of primitive332respectively to vertexes358and359of facet325. Using a bi-linear patch to define sides of each segment allows the segments to be adaptable to a range of problems, by allowing the sides of these segments to be non-parallel to each other.FIG.8depicts an alternate construction for a segment of a shell. In the example ofFIG.8, a set of bounding shapes365-367(e.g., tetrahedrons) are provided that collectively define the segment. FIG.6also depicts that an entrance point330to one segment of shell323can be tracked, which corresponds with exit point331. Segment324has entrance point339and exit point340. In some situations, the ray may enter the shell but not leave the shell, in that it would first intersect a source geometry primitive. Any of these points, including the first entrance to the shell and the entrance to each segment entered, can be considered an entrance point or intersection with a shell surface. With respect to entrance points to each segment of a shell, tracking when a ray enters a different segment of the shell allows a specific geometry process to be associated with each primitive, and executed to evaluate the implicit geometry in that segment of the shell.
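The following Python helper is a minimal sketch of a bilinear patch of the kind used for a side of a shell segment; the parameterization and names are hypothetical, and the patch simply blends between an edge of the source primitive and the corresponding edge of the extruded facet.

# Minimal sketch of a bilinear patch side of a shell segment. The patch spans
# an edge of the source primitive (a, b) and the corresponding edge of the
# facet (a_top, b_top); (s, t) in [0,1]^2 selects a point on the patch.

def bilinear_patch_point(a, b, a_top, b_top, s, t):
    def lerp(p, q, w):
        return tuple(p[i] + w * (q[i] - p[i]) for i in range(3))
    bottom = lerp(a, b, s)       # along the source-primitive edge
    top = lerp(a_top, b_top, s)  # along the extruded facet edge
    return lerp(bottom, top, t)  # between source edge and facet edge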
Returning toFIG.4, the rays are then to be stepped through a volume enclosed by the shell to establish a current 3-D position for each ray. The stepping is on an interval (epsilon). At239, epsilon can be set. As an example, epsilon can be set according to a variety of inputs, such as a level of detail indicator235or a ray differential237. Epsilon can be fixed or variable;FIG.4includes description of a variable epsilon implementation. For the remainder ofFIG.4, a single ray is described, although multiple rays can be processed concurrently. At247, the ray is stepped. At248, it is determined whether the ray is at a surface of a volume exclusion element within the shell. A volume exclusion element defines a sub-portion of space within the shell in which no implicit geometry will exist. Further description of volume exclusion elements is provided with respect toFIGS.10and11. In brief, volume exclusion elements can be identified by determining a final extent of implicit geometry within the shell, and then defining a set of bounding volumes that enclose regions of space that have no geometry. These bounding volumes can vary in size, in order to fit within different portions of the final extent of the implicit geometry. If the ray has entered a volume exclusion element, then, at249, an exit point from the volume exclusion element is determined, and, at250, the current 3-D position of the ray is incremented to that exit point, and the determination at248is performed again. If the current 3-D position is not in a volume exclusion element, then, at251, that current 3-D position is projected to the surface of the primitive that was projected to define that portion of the shell. An example of such projection is depicted inFIG.9.FIG.9depicts that ray335is stepped a number of times along a direction of travel for the ray (to identify current 3-D positions405), and corresponding 2-D positions406on primitive332are identified for each current 3-D position. Each of these 2-D positions can be expressed, for example, as a parametric coordinate pair, or using barycentric coordinates. These 2-D positions can be used as inputs to a procedural geometry shader410, which executes in order to produce an implicit geometry characteristic (collectively,415inFIG.9) for each of the 2-D positions. As explained with respect toFIG.3, a step size can be set based on a level of detail indicator for the ray. A ray differential also can be used as an input to set a step size. Setting a step size is one way of adjusting an amount of computation used during marching of a ray. In one sense, the amount of computation may be adjusted based on how much detail is desired for a given implicit geometry rendering. However, in other situations, a total amount of computation may be reduced by finding an intersection region using a larger step size, and then refining the intersection. In some implementations, a region of 3-D positions may be snapped to the same 2-D position, based on a level of detail or ray differential. For example, even if a step size is set to one size, several steps of a ray may snap to the same 2-D position and be evaluated based on the same function. In another example, a larger step may be taken, and then one or more intermediate steps can be interpolated from the ends of that step. For example, where a level of detail is low, then a larger step size may be taken, larger regions of the 2-D surface may all evaluate to the same function value, or interpolated values may be taken for intermediate values, or some combination of these options.
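A minimal sketch of the marching loop described above follows, in Python; the helper callables (the exclusion test, the exclusion exit point, the projection to the source primitive, and the implicit-geometry height function) are hypothetical placeholders for the operations at245-251.

# Minimal sketch of ray marching through a shell with volume exclusion skipping.
# All callables are caller-supplied; `ray.entry_t` is the parametric distance at
# which the ray entered the shell. Returns a (t_lo, t_hi) bracket or None.

def march_ray(ray, shell_exit_t, epsilon, in_exclusion, exclusion_exit_t,
              project_to_primitive, geometry_height, ray_height_above_surface):
    t = ray.entry_t
    while t < shell_exit_t:
        if in_exclusion(ray, t):
            # Skip empty space; guarantee forward progress past the exit point.
            t = max(exclusion_exit_t(ray, t), t + epsilon)
            continue
        uv = project_to_primitive(ray, t)   # 2-D (e.g., barycentric) coordinates
        surface_h = geometry_height(uv)     # implicit geometry at that point
        ray_h = ray_height_above_surface(ray, t)
        if ray_h <= surface_h:              # overlap: hit lies in (t - epsilon, t]
            return (t - epsilon, t)
        t += epsilon
    return None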
In one example, the 2-D positions also can be used as inputs to a function that outputs a height of implicit geometry for that 2-D position. Here, a height can mean a distance along a path; such path may be a line segment. The line segment may be defined by interpolating artist-defined vectors (seeFIGS.1-2). In other examples, the path may be defined by a function or procedure associated with the source primitive, or a portion of it. Where an implementation displaces along a line segment, an intersection can be detected by comparing a current height of the ray, in 3-D space, above the surface with a height generated by the implicit geometry function evaluated for that 2-D position. In one example, such comparison can include a subtraction. When the result of the subtraction changes sign, it is concluded that the ray has intersected the implicit geometry somewhere between the previous and current step. These operations are examples of operations depicted inFIG.4, including at253, running a geometry process to determine a geometry characteristic for the projected current ray point, and comparison255. The subtraction implements overlap determination257. If there is no overlap (e.g., the height of the ray is still greater than the height of the implicit geometry at a given point), then the process returns to269to perform another step of the ray. If overlap was detected (e.g., the sign of the subtraction result changed), then at259, a bisection process can be conducted in order to refine the intersection point further. At261, a pair of 3-D positions can be identified that describe the interval of the ray which contains the intersection. At263, these points can be returned as representing the intersection of the ray with implicit geometry. It remains to be determined whether this intersection is a closest intersection, since geometry closer than the intersected implicit geometry may remain to be tested. Instead of comparing heights, a collision detection algorithm can be employed that compares a current 3-D position with the implicit geometry. The current 3-D position can be modeled as a sphere or shape with a certain extent. This extent can be controlled by level of detail information, a ray differential, or some combination thereof. In some applications, the implicit geometry being tested for intersection may originate from a volumetric data set. For example, the volumetric data set can be expressed as data in a uniform or hierarchical voxel structure. For example, data may originate from 3-D scanning technologies, such as medical imaging scanners (e.g., Computed Tomography (CT) scanners) and similar technologies. FIG.10depicts a curve430that represents a final surface of implicitly-defined geometry (shown in 2-D for clarity). Bounding element429encloses this geometry (shell and a trapping element, if provided). With respect to a trapping element, a size and overall dimensionality of the trapping element may be influenced by constraints on the form of the trapping element (e.g., an axis aligned box, a square, a sphere, and so on). Such constraints may affect a tightness of fit capable of being achieved for a given shell.FIG.11depicts exclusion elements (431and432specifically identified) that fill space between final geometry430and bounding element429. These exclusion elements also may be sized and positioned according to constraints placed on the shapes that can be used for the elements. Further constraints may relate to an amount of memory to devote to these elements.
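The sign-change test and bisection refinement at257-261can be sketched in Python as follows; height_difference is a hypothetical callable returning the ray height minus the implicit-geometry height at a given parametric position along the ray.

# Minimal sketch of bisection refinement over a bracket (t_lo, t_hi) where the
# sign of (ray height - geometry height) changes. Returns a tightened bracket.

def refine_by_bisection(t_lo, t_hi, height_difference, iterations=16):
    f_lo = height_difference(t_lo)
    for _ in range(iterations):
        t_mid = 0.5 * (t_lo + t_hi)
        f_mid = height_difference(t_mid)
        if (f_lo < 0.0) == (f_mid < 0.0):
            t_lo, f_lo = t_mid, f_mid   # sign change lies in the upper half
        else:
            t_hi = t_mid                # sign change lies in the lower half
    return (t_lo, t_hi)                 # pair of positions bracketing the hit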
For example, a minimum size of element may be required, or a maximum size of memory required for storing data defining the exclusion elements in a particular trapping element may be set. These decisions can be made based on characteristics of a computing platform that will implement the stepping algorithm, including memory bandwidth and size characteristics, power consumption constraints, requirements on latency, throughput, and so on.FIG.13depicts an example process by which volume exclusion elements can be produced. At451, a portion of implicit geometry is identified (e.g., procedurally defined displacement). Such identification can occur in a pre-execution environment, in which source geometry is submitted, along with a function(s), procedure(s), or other definition of how the implicit geometry will be determined when necessary (e.g., for testing for intersection). At455, these function(s), procedure(s), and so on are evaluated or executed as appropriate in order to obtain a final geometry extent. In some cases, such final geometry extent will depend on information available only during runtime, or more generally, information that is not yet available (e.g., the evaluation depends on a value retrieved during a lookup operation). In such circumstances, the source geometry, function, or procedure can be associated with information on a range of values that can be expected from the lookup. In other implementations, an expression that describes a value to be returned from the lookup can be supplied, and a final extent of implicit geometry can be evaluated based on a joint evaluation of these sources. At457, based on this evaluation, exclusion volumes are defined within a maximum extent of the final geometry and within a shell (seeFIG.6). Examples of implementations of exclusion volumes include voxel structures, which can be hierarchical, such as an oct-tree. In an alternate implementation, the shell may be omitted, and then exclusion volumes would be defined based on an extent of the trapping element that will bound the final geometry. If the shell were omitted, it would generally be expected that a larger quantity of volume exclusion elements would be required, since the trapping element would not bound the final geometry as closely as the shell. At459, definitions of these exclusion volumes are stored for later access. In addition to defining exclusion volumes in a pre-pass, volume portions can be excluded based on properties of a function describing an implicit geometry surface. FIG.12depicts more details concerning how trapping elements can be used for intersection testing of implicit geometry and more generally, for abstracting portions of 3-D space. As an additional example usage, trapping elements can be used to abstract instances of the same geometric object, even if they do not use implicit geometry.FIG.12gives a toy example of a tree being a geometric object, with instances405-407of this geometric object being bounded by respective trapping elements431-433. These trapping elements can in turn be bounded by a bounding volume420(seeFIG.5). Instance431is shown as overlapping with instance432. Such overlap, in a 3-D scene, could be a situation where branches of these tree instances intertwine, such that they occupy overlapping volumes of space. A ray438is being traversed in the scene.FIG.3depicted a process that uses a trapping element procedure211;FIG.15depicts an example of trapping element procedure211.
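As a minimal sketch of the pre-pass at451-459, the following Python routine marks empty axis-aligned voxels as exclusion volumes; may_contain_geometry is a hypothetical conservative test (for example, one that bounds the displacement function over a voxel), and a real implementation might instead build a hierarchical structure such as an oct-tree.

# Minimal sketch: mark voxels that cannot contain final geometry as exclusions.
# `may_contain_geometry(lo, hi)` must be conservative (never falsely empty).

def build_exclusion_voxels(grid_min, voxel_size, resolution, may_contain_geometry):
    exclusions = []
    for ix in range(resolution):
        for iy in range(resolution):
            for iz in range(resolution):
                lo = tuple(grid_min[d] + voxel_size * i
                           for d, i in enumerate((ix, iy, iz)))
                hi = tuple(c + voxel_size for c in lo)
                if not may_contain_geometry(lo, hi):
                    exclusions.append((lo, hi))  # stored for later ray skipping
    return exclusions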
InFIG.15, when a ray encounters a trapping element, in order to test that ray for intersection with geometry in the trapping element, that ray is transformed at461to a coordinate system referenced by the trapping element. At463, one or more process(es) are performed. These processes can vary significantly based on the characteristics of the trapping element. For example, an implicit geometry process may be performed. Or, geometry bounded in the trapping element may be tested. Ultimately, at465, resulting data is produced. As an example, this resulting data is a closest intersection found, based on the geometry tested or the processes performed. A variety of data can be produced as a result of intersection testing in a trapping element, including barycentric coordinates, a distance to the intersection, a point in 3-D space identifying an intersection point, or another expression of a location of the intersection. Where this data contains positional information, it can be expressed in the coordinate system referenced by the trapping element. At467, the positional information and associated information is transformed from the referenced coordinate system to a global coordinate system (or another coordinate system that is common to other operations to be performed). Such transformation could be performed immediately, but in another implementation, a transformation matrix may be provided that will allow the transformation to be effected at a later time. For example, a result data structure may contain the result data in the referenced coordinate system and a transformation matrix. Later, during an intersection disambiguation or sorting process, the transformation matrix can be applied to the result data. This implementation may be appropriate where the functional unit performing the intersection testing may not have a capability to perform a matrix transformation, or may not perform such transformation efficiently. If a trapping element does not reference a coordinate system other than the global coordinate system, then a transformation matrix may not be required. Returning toFIG.12, in this figure the ray originates within both trapping element434and trapping element435. In some systems according to the disclosure, it could be the case that trapping element435(and/or geometry in435) is found to be intersected by ray438before it is determined that ray438also intersects trapping element434(and/or geometry in434) (e.g., if each ray begins testing at a root of a hierarchical acceleration structure, then ray438may visit trapping element435first). This situation could occur because of deferral of some intersection tests or deferred propagation of intermediate results of testing, or simply based on how the testing happened to be scheduled, for example. Thus, instance432may be evaluated for intersection before instance431, even though portions of instance431are closer to the origin of ray438.FIG.14depicts an example approach to intersection testing that accounts for these situations. FIG.14depicts that results for intersection testing with implicit geometry are produced at411, and results of testing the same ray with explicit geometry are produced at413. In the process ofFIG.14, there are multiple intersection results available for a given ray. More commonly, it might be expected that a single closest intersection for a ray is maintained, and each time an intersection result for that ray is identified, it is compared with that closest intersection, and the closer of the two is maintained.
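The transform-test-transform-back flow of steps461-467can be sketched in Python as follows; the matrix objects, their methods, and the element attributes are hypothetical, and the branch illustrates that the result may either be transformed immediately or carry a matrix for deferred transformation, as described above.

# Minimal sketch of a trapping element procedure with a referenced coordinate
# system. `test_in_local_space` is a caller-supplied callable; matrix helpers
# such as transform_ray/transform_hit are assumed to exist on the element.

def run_trapping_procedure(ray, element, test_in_local_space):
    local_ray = element.world_to_local.transform_ray(ray)      # step 461
    local_hit = test_in_local_space(local_ray, element)        # steps 463/465
    if local_hit is None:
        return None
    if element.transform_results_immediately:
        return element.local_to_world.transform_hit(local_hit)  # step 467
    # Otherwise attach the matrix so disambiguation can transform it later.
    local_hit.pending_transform = element.local_to_world
    return local_hit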
Here, however, a simple distance evaluation may be insufficient to immediately disambiguate which intersection is closest, or in other situations, there may be two intersections that have distances indistinguishable from one another, at a resolution at which the results are expressed (e.g., single precision floating point). In these situations, an approach that provides reproducible results may be important, even though there is more than one "valid" solution. In the case of an acceleration structure element (trapping or regular), if any part of the volume of that element overlaps a range defined by a minimum distance and the current closest intersection, then that acceleration element would not be excluded from being entered for testing (acceleration structure elements do not establish a closest intersection for a ray (i.e., a maximum t)). FIG.14depicts that, at415, a determination is made whether any two or more of multiple intersection results are at undifferentiated distances. There may be some geometry intersections evaluated that are clearly not the closest one, under the circumstances present. These can be excluded; if they are excluded before the process ofFIG.14, then determination415may be omitted for those, but still may be needed to identify or maintain overlapping acceleration structure elements for test. If all geometry intersection results are at different intersection distances, then a closest intersection (or group thereof) can be used (here, a group of intersections can be, for example, a pair of points returned as bracketing an intersection of a ray with a surface, such as a result returned from a ray march as discussed with respect toFIG.9). At419, an ID for each object (e.g., acceleration structure element or primitive) having an undifferentiated distance compared with comparison objects is accessed. Based on the IDs of the objects, one or more objects may be excluded from further processing, or selected. At423, intersection information for the ray is updated based on the result of421. At425, reference counts for the ray are updated. A reference count for the ray is increased when it is added for test against an acceleration structure element, and decreased when removed or when an element is excluded from test, if previously indicated for test. Considering421in more detail, an acceleration structure element may be excluded from further processing if its identifier indicates that it already has been entered for testing. This may be determined by comparing at least a portion of the identifier for the acceleration structure element with identifier information stored or associated with the ray. Such information stored with the ray may include an identifier of the acceleration structure element that has a highest value in a sequence of identifiers (e.g., all the identifiers have a relative order, and the ray maintains identification of a highest order element) that was already entered for that ray. A specific example can be considered with respect to ray440. Ray440can be seen to first enter trapping element434. A minimum t would be established for that trapping element upon entering the trapping element. Ray440also intersects trapping element435, but the distance to that intersection is different from the intersection with trapping element434. However, it also is the case that the intersection with trapping element435remains within a volume of trapping element434. Thus, in this circumstance, trapping element434may be reentered and processed.
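A minimal sketch of a reproducible disambiguation rule of the kind described above follows; comparing object identifiers when two candidate distances are indistinguishable is one possible policy, not the only one the disclosure contemplates, and the names are hypothetical.

# Minimal sketch of deterministic intersection disambiguation: when distances
# are indistinguishable at the working precision, break the tie by object ID so
# that repeated runs produce the same result.

def pick_closest(hit_a, hit_b, distance_epsilon=0.0):
    if hit_a is None:
        return hit_b
    if hit_b is None:
        return hit_a
    if abs(hit_a.distance - hit_b.distance) <= distance_epsilon:
        return hit_a if hit_a.object_id < hit_b.object_id else hit_b
    return hit_a if hit_a.distance < hit_b.distance else hit_b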
So in one approach, the minimum t can be used to exclude, from retesting, geometry bounded by bounding elements that do not overlap with another element. Instances of the same geometry can be spread through a 3-D scene, with each instance being enclosed by a different trapping element. Each trapping element includes a world space coordinate location (and/or extent). Each trapping element can be a different size and can be oriented differently. For example, trapping elements can be scaled and rotated. Each instance space can use a referenced coordinate system. Each trapping element also can include information about a transform to be applied to a ray in order to translate between world space and the referenced coordinate system of that instance. Each trapping element also can include a reference to objects or other data within that trapping element, for example, explicit geometry and other data, as explained above. In another example, each element of an acceleration structure can have an identifier, and acceleration structure elements that represent a trapping element encapsulating the same geometry can have a certain number of bits in common. Rays that intersect these different instance elements can be collected, and can begin intersection testing together. Where each trapping element has a reference to instance space, then that reference can be used to collect rays that will need to test that referenced instance space. Where a portion of an identifier is shared among elements that reference the same instance space, that portion of the identifier can be used to collect rays. FIG.16depicts a system501that can implement aspects disclosed herein. System501comprises a compute cluster502that can have a set of cores, each capable of executing instructions from a respective independent instruction stream. Each core can have a private cache and can share a secondary cache with one or more other cores; other cache configurations can be implemented. For example, cores503and504can each have a private L1 cache,505and506respectively. Cores503and504can share L2 cache507. Compute cluster502can read from acceleration structure storage509and from geometry storage508. Compute cluster502can be assisted with performance of various algorithms, such as rendering algorithms, by throughput compute unit515. Compute unit515comprises a task collector521, a plurality of ray/primitive intersection test cells520and a plurality of ray/box test cells516. Each of these cells can be configured to execute one or more defined intersection algorithms. Ray/box test cells516can be implemented so that they produce a distance from a ray origin to an intersection point with a box, when the ray originates from outside of the box. Ray/box test cells516also can be implemented so that they return a distance that the ray travels to a point of exit of a box, when the ray originates in the box (e.g., ray438originates in trapping element435, and ray/box test cells516can be made to return a distance to exit trapping element435). Ray/box test cells are an example of test cells for a particular kind of shape. Test cells can be provided for other kinds of shapes, either in addition to or in substitution of box test cells. In some examples, each test cell comprises fixed-function circuitry that performs at least a portion of a given intersection algorithm. Example primitive tests include tests for intersection with triangular primitives, such as the barycentric coordinates test.
Boxes tested for intersection may be axis-aligned bounding boxes, for example. Other approaches to acceleration structure tests include kd-tree testing. In addition to these intersection testing cells, compute unit515may comprise a set of (one or more) limited programmability circuits512, which can be associated with respective test cells or included in task collector521. Each intersection test cell may use a respective local ray data storage514. As a particular example, ray data518comprises sets of ray definition data. Each set of ray definition data may comprise a minimum distance identification (min t). In an example, the minimum distance can be used to step through a set of elements, without having to test them all for each step in the same process, as explained above. A maximum distance identification (max t), which can identify the closest current intersection for that ray, also can be stored. Data concerning the current closest intersection may be stored, such as interpolated varyings for an intersection point, barycentric coordinates, and a primitive identifier. In general, data stored can be selected based on data that would be needed to execute a shader for the ray (if the intersection to which the data pertains is one to trigger shader execution). Where an intersection involves a bounding box element (e.g., a trapping element) with a referenced coordinate system, a transformation matrix describing a mapping between global and local coordinates can be stored. As explained above, task collector521forms groupings of computation (e.g., groupings of rays that can be tested together). A grouping of rays can identify an acceleration structure element to be tested. In some examples, the acceleration elements can be elements that define a given object (or a portion thereof) at a respective LOD. These elements may bound such different LOD geometry in overlapping space. In one implementation, these elements can be trapping elements. A ray can be associated with an LOD indicator, a ray differential, a spreading factor, or there can be another mechanism for deciding an LOD at which geometry is to be represented. A limited programmability circuit can select one or more collections, each associated with a respective acceleration element, in which to place a ray. For example, even though the acceleration structure element tested may have a number of child acceleration structure elements, only a subset of those may be selected by the limited programmability circuit. For example, an acceleration structure element associated with a particular Level of Detail (LOD) may be selected. In some examples, the ray may be in a transition zone between two levels of detail, and the ray may be added to two collections, so that the ray is traversed in geometry at multiple levels of detail. An attenuation of the ray can be adjusted based on what the limited programmability circuit does, such as reducing importance of each of multiple rays that are derived from a single original ray. As another example, a limited programmability circuit can neglect to add a ray to any collection, even if a parent element was intersected. Thus, a limited programmability circuit can influence or control subsequent testing of a ray. System501also may provide a result return path511. In some cases, a result may require further processing that will be performed by program code distinct from program code that generated the task leading to the result.
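A minimal sketch of a per-ray local record of the kind kept in ray data518and local ray data storage514might look as follows in Python; the field names are hypothetical, and the update rule simply keeps a candidate hit only if it lies between the stored minimum distance and the current closest distance.

# Minimal sketch of per-ray localized data: min t, max t (current closest hit),
# shading data for that hit, and an optional deferred coordinate transform.

class LocalRayRecord:
    def __init__(self, origin, direction, min_t=0.0):
        self.origin = origin
        self.direction = direction
        self.min_t = min_t              # used to skip already-covered elements
        self.max_t = float("inf")       # distance to current closest hit
        self.hit_primitive_id = None
        self.hit_barycentrics = None
        self.pending_transform = None   # set when the hit is in a local space

    def update_hit(self, distance, primitive_id, barycentrics, transform=None):
        if self.min_t <= distance < self.max_t:
            self.max_t = distance
            self.hit_primitive_id = primitive_id
            self.hit_barycentrics = barycentrics
            self.pending_transform = transform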
However, in some cases, the further processing may use some portion of data common to the program code that generated the task. Depending on an architecture of compute cluster502, and as one specific example, on an efficiency of moving data from one core to another (such as across different L2 caches507), the result return path may be configured to return a result to a core that uses an L2 cache507that already stores the data to be used in the further processing. In some implementations, a destination identifier can be associated with a task, when it is generated, and that destination identifier can be used to guide a result back to a source of that task. FIG.17depicts an example of limited programmability circuit(s)550that can be used to implement the limited programmability circuits512depicted inFIG.16. Circuit(s)550may comprise pre-defined mathematical functions552and programmable function implementations554. The pre-defined mathematical functions552may include a set of functions that can be evaluated for different values for one or more independent variables for the functions. Such pre-defined mathematical functions may include a matrix transformation for a 3-D space according to a transformation matrix supplied to the limited programmability circuit. In another example, programmable function implementations can execute or repeat a defined operation or set of operations a number of times. Examples of how circuitry can be of limited programmability include that the circuit is capable of executing only a limited number of instructions, or is otherwise required to complete in a fixed timeframe, that programs avoid looping or branching, or that the circuit does not have an instruction fetch pipeline. In one example, branching is supported by executing multiple paths through a section of code, and then selecting a result or masking an undesired result. Where the limited programmability circuit does not support instruction fetching, a set of instructions can be pre-loaded through a control path. A limited memory may be provided for storing these instructions, and can be designed to support a maximum latency or timeframe of execution, as explained above. Thus, a limited programmability circuit can work in conjunction with a test cell in order to implement marching, iterations, progressive refinements, bisections, successive approximations, displacements, vector graphics, volumetric effects, and so on. FIG.18depicts an overall flow of ray information in an example implementation. Shader code580and shader code582each can emit rays; such rays can be defined by data contained in a ray data structure. The data in the ray data structures can be produced by shader code modules, which can submit the data using an API575. For example, API575may have a ray emit call that accepts a set of data for the ray. A collection tracking function584can receive data from the ray data structures and collect each new ray to begin traversal with one or more other rays. There may be a variety of intermediate steps or functional elements between emitting a ray and tracking that ray in a collection, andFIG.18does not imply a direct linkage. Ray collections produced by collection tracking function584can be emitted or submitted to begin traversal (e.g., collections586and588). These collections can be received for traversal by intersection testing function590(which can be implemented by primitive test cells and acceleration structure element test cells, in an example).
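The both-paths-then-select style of branching mentioned above can be sketched as follows in Python; in hardware the selection would be a bit mask or multiplexer rather than this arithmetic form, so this is only an illustration of the idea, with hypothetical names.

# Minimal sketch of branch avoidance: compute both outcomes unconditionally and
# use a mask to select the one to keep.

def masked_select(condition, value_if_true, value_if_false):
    mask = 1 if condition else 0
    return mask * value_if_true + (1 - mask) * value_if_false

def step_or_hold(position, step, should_step):
    advanced = position + step   # outcome if the condition holds
    held = position              # outcome otherwise
    return masked_select(should_step, advanced, held)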
Intersection testing function590can activate an implicit geometry shader function592, for one or more instances where implicit geometry is to be traversed or tested for intersection. Intersection testing function590and implicit geometry shader function592can each produce ray data structure updates, those from geometry shader function592being numbered594-596and those from intersection testing function590being numbered600-602. Intersection disambiguation function606can receive the data structure updates from these sources (or other sources, if present), and produce an output that updates ray collections in which the ray will be tracked (608) during further traversal (seeking a closest intersection) and initiation of ray shading (609) (for an identified closest intersection), which may cause emission of further rays to be traversed. The production of updates to data structures may be an appropriate implementation where the geometry shader function, or at least certain portions thereof, are implemented by a limited programmability or fixed-function element coupled with intersection testing590. In such an implementation, a general purpose portion of code may not be invoked for that geometry shader function, or such general purpose portion of code may set up the limited programmability unit but not perform all of the calculations.FIG.18depicts aspects of an alternate implementation in which geometry shader function692is implemented by code executed on a general purpose compute element. In such an implementation, geometry shader function592can be considered a "peer" of shader code580and582, in that geometry shader function592can be invoked in response to a ray intersection, as can code580and582, and an output of such geometry shader function592can be effected by using a ray emit call of API575. Thus, geometry shader function592can be invoked using the same semantic as used for invoking shaders after a ray completes intersection testing. However, geometry shader function592operates during an intermediate phase of intersection testing to produce results for testing a ray with implicit geometry. A result of that testing can be carried with a new ray emitted through API575. Thus, over the course of traversing a given ray path, multiple different rays may be emitted, and intersections may be accumulated over the path. Some implementations may use the geometry shader function592to compare intersections associated with a ray that caused invocation of the shader, and ultimately determine whether a newly-identified intersection is closer to the origin of the ray path, and retain only the closer intersection. In other implementations, test cells520can compare an intersection stored in localized ray data514with an intersection identified in an arriving ray data structure, and keep the closer. In such an implementation, test cells520maintain the current candidate for the closest intersection in their localized ray data514, by comparing intersections that come from geometry shader function592and/or from their own testing operations. Intersection disambiguation function606takes a set of intersections for a given ray path, and determines a closest intersection from among that set of intersections. For example, where a given ray path has traversed one or more instances of trapping elements, there may be a local intersection for that trapping element, while there may also be an intersection for the ray with geometry that was not bounded by a trapping element, which was found during concurrent testing of the ray.
These intersections may be stored in different data structures, which are collected for comparison purposes. For example, a plurality of separately instantiated rays may ultimately be used to fully trace a single ray path, and those rays may be traced concurrently in the scene. In other implementations, multiple portions of a single ray path may be traced serially, where one ray completes (i.e., a data structure defining a ray that is along the ray path, but possibly only a limited segment of the path), and another is issued and carries information relating to completed portions of intersection testing. Reference counting across these multiple portions of a ray path may also be maintained as each segment completes. The functions disclosed with respect toFIG.18may be realized in fixed-function hardware, or in configurable hardware, or in hardware programmed by software. In further regard to trapping elements, the above disclosure provided an example relating to displaced geometry. Trapping elements can be provided to handle a variety of situations. For example, motion-blur can be implemented within trapping elements by performing calculations using a time-value associated with the ray to test where the intersection with a moving object occurs at a sequence of moments in time. Then, these results can be blended in order to determine a motion-blur feature. Although a trapping element may reference a coordinate system other than a world-space coordinate system, a trapping element may also use world-space coordinates. FIG.19depicts an example operation of throughput compute unit515. Tasks to be processed705are inputted to compute unit515. As an example, each task can include a collection key710, a data reference711, and an optional prioritization indicator712. In some implementations, key710identifies an input or part of a computation problem that will be shared among a plurality of computation processes. In some implementations, data reference711identifies a portion of data that is to be processed as a data element in a vector of data elements with the input or computation problem identified by key710. As one example, key710can identify an acceleration structure element, and data reference711can identify a ray to be tested for intersection with the acceleration structure element. Key710can identify a program or process to be performed on or with data referenced by data reference711. As another example, key710can identify a coefficient to be multiplied by data identified by data reference711. Other data describing tasks705can be available in the system, or provided within a data structure, but not all of this data may be moved around together within throughput compute unit515. For example, each task may be associated with additional data to be used in further processing based on a result of the task, but that additional data is unnecessary for performance of the task itself. These tasks705(or portions of descriptive information for the tasks, such as key710, data reference711, and prioritization712) may be provided to task collector521(FIG.16), which is shown here as containing a collection forming/updating module715. Module715may be implemented with a cache that stores collections of data references711, in association with a respective key710. As an example, multiple data references may be stored in association with a single key.FIG.19depicts collection storage718comprising keys720-723, each having a bin of data references associated with it, in summary of the above.
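A minimal sketch of collection forming of the kind attributed to module715follows, in Python; binning data references by key and tracking a per-collection priority are the only behaviors shown, and the class and method names are hypothetical.

# Minimal sketch: bin task data references by collection key and track the best
# (lowest-valued) priority seen for each collection.

from collections import defaultdict

class CollectionStore:
    def __init__(self):
        self.bins = defaultdict(list)   # key -> list of data references
        self.priority = {}              # key -> best priority seen for that key

    def add_task(self, key, data_ref, priority=None):
        self.bins[key].append(data_ref)
        if priority is not None:
            current = self.priority.get(key)
            if current is None or priority < current:
                self.priority[key] = priority   # lower value = higher priority

    def ready_collections(self, full_size):
        # Collections with enough work to fill a packet.
        return [k for k, refs in self.bins.items() if len(refs) >= full_size]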
A priority may be produced for each collection based on prioritization indicators712that were associated with each task whose data reference was associated with that collection. As an example, each collection may be given a priority based on the highest priority task in that collection. The same task (e.g., a data reference from that task) may exist in multiple collections. In the context of ray tracing, each collection can be associated with a shape to be tested for intersection with the set of rays collected into a collection associated with that shape. In an implementation, collection storage718can include an interleaved cache, where keys (e.g.,720-723) are hashed or masked in order to identify candidate positions at which a collection for that key may be placed. A collision among collections may be resolved by an eviction of a collection. A scheduler733uses the data in collection storage718to form packets comprising data from different tasks that were associated with a given key in a collection from collection storage718. Scheduler733can communicate with collection forming/updating module715in order to coordinate formation and eviction of collections from collection storage718. Scheduler733may store packets, awaiting emission, to one or more packet queues (two queues734and735depicted). Where multiple queues are used, packets can be sorted based on a priority of the packet. Queues can be implemented in a non-transitory memory as first-in-first-out memories, linked lists, ring buffers, and so on. Packets from queues734and735can be dispatched (e.g., dispatched packet719). Dispatched packet719is depicted to include a packet ID, a packet priority, and a set of keys, and associated data references. In one example, packets may include a single key, which identifies a program for execution, a data element to be used during execution, or both. Prioritization indicator712may be implemented in a variety of ways. Indicator712can simply be a sequence identifier (e.g., an incrementing number) that indicates a relative order or time at which the task was emitted. In one approach, this sequence identifier allows a minimum quality of service for completion of each task. Tasks also can have respective indicators712that are interpretable as a higher or lower priority than a minimum quality of service level. Tasks do not need to have unique indicators712, even if a general case provides an incrementing identifier. For example, a relatively higher priority for a newly emitted task can be achieved by duplicating a sequence identifier that is closer to a current task clearance number (as explained with respect toFIG.20), and implementations according to the disclosure can process the newly emitted task at the same priority as a previously-emitted task that has the same sequence identifier. Other implementations may provide a sequence identifier and a separate prioritization field. Test cells516/520(seeFIG.16) receive inputs at respective input buffers740-742. The inputs can be selected for distribution among test cells516/520based on which of the test cells stores localized data for execution related to those inputs. For example, definition data for a ray identified by a specific data reference711may be stored in a local memory of only one of the test cells516/520, and that data reference would be distributed to that test cell, along with a reference to a shape or shape data to be tested with respect to that ray. A task status feedback749can be implemented by the limited programmability circuit(s)550. 
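The collection forming and packet dispatch just described can be modeled compactly in software. The Python sketch below is a hypothetical, simplified model: the fixed collection capacity, the evict-when-full policy, and the rule that a packet inherits the highest priority of its tasks are assumptions made for illustration, not the behavior of any particular hardware task collector.

from collections import defaultdict

COLLECTION_CAPACITY = 4  # assumed packet width

class TaskCollector:
    def __init__(self):
        self.collections = defaultdict(list)  # key -> list of (data_ref, priority)
        self.packet_queue = []                # packets awaiting distribution to test cells

    def submit(self, key, data_ref, priority=0):
        bin_ = self.collections[key]
        bin_.append((data_ref, priority))
        if len(bin_) >= COLLECTION_CAPACITY:
            self._evict(key)

    def _evict(self, key):
        refs = self.collections.pop(key)
        packet = {
            "key": key,                          # e.g. an acceleration structure element
            "priority": max(p for _, p in refs), # packet inherits its highest task priority
            "data_refs": [r for r, _ in refs],   # e.g. rays to test against that element
        }
        self.packet_queue.append(packet)

collector = TaskCollector()
for ray_id in range(5):
    collector.submit(key="box_42", data_ref=f"ray_{ray_id}", priority=ray_id)
print(collector.packet_queue)       # one packet of four rays collected against box_42
print(dict(collector.collections))  # the fifth ray waits in a partial collection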
In an example of traversing a ray through an acceleration structure, the feedback can include selecting which children, from a plurality of children, a ray should be collected against next. That can be effected by providing a task with a key710for each child element. More generally, circuit(s)550can calculate a reference or address of a program, acceleration structure element, or a data element to be used in subsequent processing or to be processed as a next step for a particular data reference711. In one example, modules of code can execute on compute cluster502in order to set up relevant data in local memories of test cells516/520. However, in some implementations, a task storage maintenance module716can set up data in those local memories, based on information arriving in task definitions. For example, module716can arrange direct memory transfer requests, from a shared coherent memory to local memories of test cells516/520. These transfers can be scheduled with awareness of which packets have been queued by scheduler733. Although the exact timing of when a given task is performed by test cells516/520may not be deterministic, a small cache can be provided to buffer data retrieved from a shared memory until used and then discarded. FIG.20depicts an example of implementing quality-of-service aware throughput computing. As shown inFIG.19, a task collector may produce collections of tasks that are to be executed on a set of computation elements. The task collector can establish groupings of tasks that can be executed concurrently for at least some portion of those tasks. The task collector can defer commencement of execution of particular tasks in favor of increasing throughput of completion of tasks as a whole. However, if tasks are selected for processing purely on throughput considerations, then certain tasks may fail to be completed on a timely basis. In the context of ray tracing, a relatively small number of rays may end up in seldom-visited portions of a 3-D scene. Thus, insufficient rays may be available to make a full collection for those portions, and so the rays may not be scheduled for further traversal, if a scheduling heuristic is made to select full collections in order to maximize computation parallelism. In a general computation scenario, a set of code modules, routines, or segments may have parts that are much more frequently visited than others. The execution of these elements of code may be scheduled by collecting requests for such execution and selecting collections based at least on respective numbers of requests collected for different elements of code. Here also, some requests may languish if scheduling is done purely on a throughput decision. In one example, tasks that are defined (defined tasks625) can be given increasing identifiers, starting from a task emission point631. Tasks can be selected for, and processed for, throughput considerations, but additionally, a task clearance point632can be maintained. Task clearance point632identifies a position in the sequence of identifiers at which all lower task identifiers are to be prioritized for completion. As depicted inFIG.20, some tasks greater than task clearance point632may already have been completed (e.g., task642). As point632moves, scheduler733ofFIG.19may identify (644) collections in collection storage718that contain that task, select those collections for eviction, and dispatch (644) as a corresponding packet (e.g., in a fast packet queue, e.g.735).
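One way to picture the interaction between throughput-driven selection and the task clearance point is the toy selection routine below. It is a sketch under assumed data layouts: collections are represented as lists of task sequence identifiers, the fullest collection serves as the throughput heuristic, and any collection holding a task at or below the clearance point is forced out first. None of these specifics are mandated by the description above.

def pick_collection(collections, clearance_point):
    """collections: dict mapping a key to the sequence identifiers of its collected tasks."""
    # 1) Forced eviction: any collection containing a task that has fallen
    #    at or below the clearance point is dispatched first.
    for key, seq_ids in collections.items():
        if min(seq_ids) <= clearance_point:
            return key, "forced (quality of service)"
    # 2) Otherwise select purely for throughput: the fullest collection.
    key = max(collections, key=lambda k: len(collections[k]))
    return key, "throughput"

collections = {
    "node_A": [120, 121, 125],   # popular part of the scene, nearly full collection
    "node_B": [37],              # lone ray stuck in a seldom-visited node
}
print(pick_collection(collections, clearance_point=40))  # node_B is forced out
print(pick_collection(collections, clearance_point=10))  # node_A wins on throughput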
Task results can be obtained (646), and based on those results, a decision as to whether a task has been completed is made. If the task is not completed, then further collections in which the task is to be put are selected/updated (650). If that task is complete, then processing can continue (651) for other tasks. In a scheduling approach according toFIG.20, scheduling can be performed primarily based on throughput, but can ensure that a given task does not linger more than a pre-determined time (e.g., processor cycles) before it is advanced. Giving a task a higher priority can be accomplished by giving that task a sequence identifier lower than what is being issued to other tasks, which causes the task to reach clearance point632sooner than it would have otherwise. A separate priority indicator can be maintained also, as explained above. In the specific context of animation, sequences of frames may be rendered. Task identifiers can include data relating to a frame number (e.g., absolute, or a relative number for frames in flight), and that frame number can be used for prioritization. Classes of rays also can be prioritized by such techniques, such as rays coming from a certain shader module, a certain type of ray, and so on. Implementations can provide a latency cap for individual tasks or classes of tasks, rays or classes of rays. To generalize to computation tasks, classes of tasks, such as tasks originating from a particular source, or which reference a particular dataset, can be given a particular latency cap. Other ways to relatively prioritize rays or tasks may be provided in implementations that generally prioritize throughput, but also avoid exceeding latency caps for individual elements of computation. A number of tasks between task clearance point632and task emission point631can be selectable and can be modulated according to real-time system conditions. For example, if rendering is implemented on a processing system that also can intermittently perform more time-critical digital signal processing tasks, or where available memory is currently constrained, then task clearance point632can be made to follow more closely behind emission point631. If implemented in firmware and/or software, functions may be represented as one or more instructions or code on a computer-readable medium; in one example, the medium is non-transitory. Examples include a computer-readable medium encoded with a data structure and a computer-readable medium encoded with a computer program. Machine-readable media include non-transitory machine-readable media. Other kinds of media include transmission media. A non-transitory medium may be any tangible medium that can be accessed by a machine. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a machine. Modern general purpose processors regularly require in excess of two billion transistors to be implemented, while graphics processing units may have in excess of five billion transistors. Such transistor counts are likely to increase. Designs have used these transistors to implement increasingly complex functionality and to increase parallelism.
As such, it becomes increasingly necessary to be able to describe or discuss technical subject matter concerning such processors, whether general purpose or application specific, at a level of detail appropriate to the technology being addressed. In general, a hierarchy of concepts is applied to allow those of ordinary skill to focus on details of the matter being addressed. Describing portions of a design (e.g., different functional units within an apparatus or system) according to functionality provided by those portions is often an appropriate level of abstraction, rather than exhaustively describing implementations of such portions, since each of these portions may themselves comprise hundreds of thousands or millions of gates and millions, tens of millions or hundreds of millions of transistors. When addressing some particular feature or implementation of a feature within such portion(s), it may be appropriate to identify substituent functions or otherwise characterize some sub-portion of that portion of the design in more detail, while abstracting other sub-portions or other functions. A precise logical arrangement of the gates and interconnect (a netlist) implementing a portion of a design (e.g., a functional unit) can be specified. However, how such logical arrangement is physically realized in a particular chip (how that logic and interconnect is laid out in a particular design) still may differ in different process technologies and for a variety of other reasons. To the extent that circuitry implementing particular functionality may be implemented differently within different contexts, disclosure of a particular circuit may not be particularly helpful. Also, many of the details concerning producing netlists for functional units as well as actual layout are determined using design automation, proceeding from a high level logical description of the logic to be implemented (e.g., a "hardware description language"). As such, it is often unnecessary and/or unhelpful to provide more detail concerning a portion of a circuit design than to describe the functionality to be provided. The term "circuitry" does not imply a single electrically connected set of circuits. Circuitry may be fixed function, configurable, or programmable. In general, circuitry implementing a functional unit is more likely to be configurable, or may be more configurable, than circuitry implementing a specific portion of a functional unit. For example, a "test cell" or "limited programmability circuits" according to the disclosure can be less configurable than an Arithmetic Logic Unit (ALU) of a processor, in that an ALU typically performs a sequence of simple operations, whereas some implementations of limited programmability circuits would execute a pre-defined sequence of operations, which can be selected from a set of operations. Such operations may accept parameters, or may have some variations. In any case, an ALU can become a portion of circuitry for implementing each operation to implement a function, and thus effectively can be or become circuitry for implementing such function, when configured to perform or otherwise interconnected to perform each different operation. Such configuration may come from or be based on instructions, or microcode, for example. For example, a "task collector" and "test cells" may be implemented by fixed function circuitry, by machine code configuring a configurable or programmable processing unit, such as a core or a set of programmable cores, or a combination thereof.
In some implementations, fixed or limited configurability circuitry is used to implement task collectors and test cells according to the disclosure. Nevertheless, a programmable processing unit, as configured by the machine code, can become a test cell or task collector, where a person of ordinary skill would understand that these terms relate back to functionality disclosed in the specification. In all such cases, describing portions of an apparatus or system in terms of their functionality conveys structure to a person of ordinary skill in the art. In the context of this disclosure, the term "unit" refers, in some implementations, to a class or group of circuitry that implements the function or functions attributed to that unit. Such circuitry may implement additional functions, and so identification of circuitry performing one function does not mean that the same circuitry, or a portion thereof, cannot also perform other functions. In some circumstances, the functional unit may be identified, and then functional description of circuitry that performs a certain feature differently, or implements a new feature, may be described. As such, a "unit" may be formed of one or more circuits that implement a function or functions, where one or more of the circuits may be composed of configurable or programmable logic elements. Examples of logic elements include portions of ALUs, and a combination of switches and interconnect that implement logical expressions, such as Boolean logic expressions. In some cases, a structure or structures implementing a given unit or module may have permanent physical differences or adaptations compared with structure(s) implementing other modules or units within an apparatus or system. However, such structure(s) also may be produced by a temporary adaptation or configuration, such as one caused under program control, microcode, or other source of configuration. Different approaches to design of circuitry exist; for example, circuitry may be synchronous or asynchronous with respect to a clock. Circuitry may be designed to be static or dynamic. Different circuit design philosophies may be used to implement different functional units or parts thereof. Absent some context-specific basis, "circuitry" encompasses all such design approaches. Although circuitry or functional units described herein may be most frequently implemented by electrical circuitry, and more particularly, by circuitry that primarily relies on a transistor implemented in a semiconductor as a primary switch element, this term is to be understood in relation to the technology being disclosed. For example, different physical processes may be used in circuitry implementing aspects of the disclosure, such as optical, nanotubes, micro-electrical mechanical elements, quantum switches or memory storage, magnetoresistive logic elements, and so on. Although a choice of technology used to construct circuitry or functional units according to the technology may change over time, this choice is an implementation decision to be made in accordance with the then-current state of technology. This is exemplified by the transitions from using vacuum tubes as switching elements to using circuits with discrete transistors, to using integrated circuits, and advances in memory technologies, in that while there were many inventions in each of these areas, these inventions did not necessarily change how computers fundamentally worked.
For example, the use of stored programs having a sequence of instructions selected from an instruction set architecture was an important change from a computer that required physical rewiring to change the program, but subsequently, many advances were made to various functional units within such a stored-program computer. Functional modules may be composed of circuitry, where such circuitry may be fixed function, configurable under program control or under other configuration information, or some combination thereof. Functional modules themselves thus may be described by the functions that they perform, to helpfully abstract how some of the constituent portions of such functions may be implemented. In some situations, circuitry and functional modules may be described partially in functional terms, and partially in structural terms. In some situations, the structural portion of such a description may be described in terms of a configuration applied to circuitry or to functional modules, or both. The description of the aspects and features is provided to enable any person skilled in the art to make and use the systems and apparatuses and to perform the methods disclosed. Various modifications will be readily apparent to those skilled in the art, and the principles described in this document may be applied to other aspects without departing from the spirit or scope of the disclosure. Thus, the description is not intended to limit the claims. Rather, the claims are to be accorded a scope consistent with the principles and novel features disclosed herein. The drawings include relative arrangements of structure and ordering of process components, solely as an aid in understanding the description. These relative arrangements and numbering are not an implicit disclosure of any specific limitation on ordering or arrangement of elements and steps in the claims. Process limitations may be interchanged sequentially without departing from the scope of the disclosure, and means-plus-function clauses in the claims are intended to cover the structures described as performing the recited function that include not only structural equivalents, but also equivalent structures. Although a variety of examples and other information were used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further, although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than, additional to, or less than, those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
69,688
11861788
This specification includes references to "one embodiment" or "an embodiment." The appearances of the phrases "in one embodiment" or "in an embodiment" do not necessarily refer to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure. "Comprising." This term is open-ended. As used in the appended claims, this term does not foreclose additional structure or steps. Consider a claim that recites: "An apparatus comprising one or more processor units . . . ." Such a claim does not foreclose the apparatus from including additional components (e.g., a network interface unit, graphics circuitry, etc.). "Configured To." Various units, circuits, or other components may be described or claimed as "configured to" perform a task or tasks. In such contexts, "configured to" is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the "configured to" language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is "configured to" perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f), for that unit/circuit/component. Additionally, "configured to" can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. "Configured to" may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks. "First," "Second," etc. As used herein, these terms are used as labels for nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.). For example, a buffer circuit may be described herein as performing write operations for "first" and "second" values. The terms "first" and "second" do not necessarily imply that the first value must be written before the second value. "Based On." As used herein, this term is used to describe one or more factors that affect a determination. This term does not foreclose additional factors that may affect a determination. That is, a determination may be solely based on those factors or based, at least in part, on those factors. Consider the phrase "determine A based on B." While in this case, B is a factor that affects the determination of A, such a phrase does not foreclose the determination of A from also being based on C. In other instances, A may be determined based solely on B.
DETAILED DESCRIPTION
As data acquisition and display technologies have become more advanced, the ability to capture three-dimensional (3D) volumetric content (also referred to herein as "visual volumetric content"), such as immersive video content, etc. has increased. Also, the development of advanced display technologies, such as virtual reality or augmented reality systems, has increased potential uses for 3D volumetric content, such as immersive video, etc.
However, 3D volumetric content files are often very large and may be costly and time-consuming to store and transmit. For example, communication of 3D volumetric content, such as volumetric point cloud or immersive video content, over private or public networks, such as the Internet, may require considerable amounts of time and/or network resources, such that some uses of 3D volumetric content, such as real-time uses or on-demand uses, may be limited. Also, storage requirements of 3D volumetric content files may consume a significant amount of storage capacity of devices storing such files, which may also limit potential applications for using 3D volumetric content. Additionally, once transmitted, 3D volumetric content may be computationally expensive to render. For example, meshes generated from depth maps included in a bit stream for compressed 3D volumetric content may require a large number of vertices to render. In some embodiments, an encoder may be used to generate a compressed version of three-dimensional volumetric content to reduce costs and time associated with storing and transmitting large 3D volumetric content files. In some embodiments, a system may include an encoder that compresses attribute and/or spatial information of a volumetric point cloud or immersive video content file such that the file may be stored and transmitted more quickly than non-compressed volumetric content and in a manner such that the compressed volumetric content file may occupy less storage space than non-compressed volumetric content. In some embodiments, such compression may enable 3D volumetric content to be communicated over a network in real-time or in near real-time, or on-demand in response to demand from a consumer of the 3D volumetric content. Additionally, an encoder may generate metadata indicating vertices budgets to be applied to different areas of a scene, wherein the compressed 3D volumetric content represents the scene. Thus, a rendering device may use the associated metadata to assign vertices budgets to different areas of the 3D volumetric content scene when rendering the scene. This may simplify the rendering process, as fewer vertices may be assigned to areas of the scene comprising less complex objects, while more vertices may be assigned to areas of the scene comprising more complex objects. Also, the burden of determining object complexity may be off-loaded to the encoder/source device (e.g. a server), wherein the rendering device (e.g. client) applies the already determined vertices budgets for the different areas of the scene as indicated in the metadata communicated with the 3D volumetric content. In some embodiments, a system may include a decoder that receives encoded 3D volumetric content comprising video encoded attribute information and video encoded depth maps, along with metadata indicating mesh vertices budgets for areas of the 3D volumetric content via a network from a remote server or other storage device that stores or generates the volumetric content files. For example, a 3-D display, a holographic display, or a head-mounted display may be manipulated in real-time or near real-time to show different portions of a virtual world represented by 3D volumetric content.
In order to update the 3-D display, the holographic display, or the head-mounted display, a system associated with the decoder may request data from the remote server based on user manipulations (or anticipated user manipulations) of the displays, and the data may be transmitted from the remote server to the decoder in a form of encoded 3D volumetric content (e.g. video encoded attribute patch images and video encoded depth maps). The displays may then be updated with updated data responsive to the user manipulations, such as updated views. For example, updated versions of the 3D volumetric content may be rendered on the displays, wherein the metadata indicating mesh vertices budgets for areas of the 3D volumetric content included in the bit stream are used by the rendering device to allocate vertices in the rendering process. In some embodiments, sensors may capture attribute information for one or more points, such as color attributes, texture attributes, reflectivity attributes, velocity attributes, acceleration attributes, time attributes, modalities, and/or various other attributes. For example, in some embodiments, an immersive video capture system, such as one that may follow MPEG immersive video (MIV) standards, may use a plurality of cameras to capture images of a scene or object from a plurality of viewing angles and/or locations and may further use these captured images to determine spatial information for points or surfaces of the object or scene, wherein the spatial information and attribute information are encoded using video-encoded attribute image patches and video-encoded depth maps accompanied with metadata indicating mesh vertices budgets for different areas of the 3D volumetric content, as described herein.
Generating 3D Volumetric Content
In some embodiments, 3D volumetric content that is to be encoded/compressed, as described herein, may be generated from a plurality of images of an object or scene representing multiple views of the object or scene, wherein additional camera metadata is known about the placement and orientation of the cameras that captured the multiple views. For example,FIG.1Aillustrates an object (person102) for which multiple images are being captured representing multiple views of the object, when viewed from cameras located at different locations and viewing angles relative to the object. InFIG.1Acameras104,106,108,110, and112view person102from different camera locations and/or viewing angles. For example, camera112captures a front center (FC) view of person102, camera108captures a left side (LS) view of person102, camera110captures a right side (RS) view of person102, camera104captures a front left (FL) view of person102, and camera106captures a front right (FR) view of person102. FIG.1Billustrates additional cameras that may be located behind person102. For example, camera118captures a back center (BC) view of person102, camera114captures a back left (BL) view of person102, camera116captures a back right (BR) view of person102, etc. FIG.1Cis a top view illustrating the cameras shown inFIGS.1A and1Bthat are located at different locations and viewing angles relative to person102. Note that the camera positions and camera angles shown inFIGS.1A-1Care given as an example configuration and in some embodiments other camera configurations may be used. For example, in some embodiments, when capturing images for a scene, the cameras may face outward towards the scene as opposed to pointing inward towards an object, as shown inFIG.1C.
Also, in some embodiments, the cameras may not necessarily be arranged in a circular configuration, but may instead be arranged in other configurations, such as a square, rectangle, grid pattern, etc. FIG.1Dillustrates images that may have been captured via cameras104-118as shown inFIGS.1A-1C. For example, image120shows a front center (FC) view, image122shows a back center (BC) view, image124shows a left side (LS) view, image126shows a right side (RS) view, image128shows a front right (FR) view, image130shows a front left (FL) view, image132shows a back right (BR) view, and image134shows a back left (BL) view. In some embodiments, camera metadata is associated with each of the views as shown inFIG.1D, wherein the camera metadata (e.g. source camera parameters) indicate locations and camera angles for the respective cameras104-118that were used to capture images120-134. In some embodiments, this camera metadata may be used to determine geometry information for the object or scene that is being captured by the respective cameras, such as X, Y, and Z coordinates of points of the object or scene (or other types of spatial information). In some embodiments, input data may have already been processed to determine geometry information, such as a depth map for each camera, as well as other attributes for each camera, such as colors, etc. For example, in such a case the input data may be pre-processed for depth estimation using multiple camera views. For example,FIG.2illustrates depth values for a depth patch image being determined using camera location and camera angle information for multiple cameras that capture images for a same portion of the object or scene from the different locations and/or camera angles, according to some embodiments. For example, a component of an encoder, such as an atlas constructor, may use source camera parameters (e.g. camera metadata indicating source camera parameters, such as camera location and orientation) along with the images captured from the cameras to determine distances to surfaces in the captured images from the cameras at the known locations with the known orientations. In turn, spatial information indicating locations in space for the surfaces may be determined using the determined distances from the cameras and the known locations and orientations of the cameras. For example, source camera parameters may indicate locations and orientations for right side camera110and front right camera106that both capture images of a portion of a shoulder of person102. Moreover, an atlas constructor may determine that the cameras106and110are both capturing images comprising a same surface of the object (e.g. the portion of the person's shoulder). For example, pixel value patterns in the images may be matched to determine that images from both cameras106and110are capturing the same portion of the person102's shoulder. Using the source camera parameters and knowing points in the captured images that are located at a same location in 3D space, the atlas constructor may triangulate a location in 3D space of the matching portions of the captured images (e.g. the portion of person102's shoulder). Based on this triangulation from the known locations and orientations of cameras106and110, the atlas constructor may determine geometry/spatial information for the portion of the object, such as X, Y, and Z coordinates for points included in the matching portion of the person102's shoulder as shown inFIG.2.
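Triangulation of the kind attributed to the atlas constructor can be illustrated with a generic midpoint method: each camera contributes a ray from its known location toward the matched image feature, and the 3D position is estimated as the midpoint of the closest approach between the two rays. This is a standard technique shown here only for illustration; the specification does not state that this exact formulation is used.

import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Estimate the 3D point sighted by two cameras with known origins and view directions."""
    oa, ob = np.asarray(origin_a, float), np.asarray(origin_b, float)
    u, v = np.asarray(dir_a, float), np.asarray(dir_b, float)
    w0 = oa - ob
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b          # near zero means the rays are (almost) parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return ((oa + s * u) + (ob + t * v)) / 2.0   # midpoint of the closest approach

# Two cameras on the X axis both sighting a point near (0, 0, 5):
print(triangulate([1.0, 0.0, 0.0], [-1.0, 0.0, 5.0],
                  [-1.0, 0.0, 0.0], [1.0, 0.0, 5.0]))   # approximately [0. 0. 5.]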
Furthermore, the spatial/geometry information may be represented in the form of a depth map (also referred to herein as a depth patch image). For example, as shown inFIG.2, the spatial information for the person's shoulder, e.g. points with coordinates X1, Y1, Z1; X2, Y2, Z2; and X3, Y3, Z3, may be projected onto a flat plane of a depth map, wherein the X and Y spatial information is represented by a location of a given point in the depth map202. For example, X values may be represented by locations of the points along a width of the depth map202(e.g. the "U" direction) and Y values may be represented by locations of the points along the height of the depth map202(e.g. the "V" direction). Moreover, the Z values of the points may be represented by pixel values ("pv") associated with the points at locations (U,V). For example, a first point with coordinates in 3D space of X1, Y1, Z1may be represented in the depth map at pixel (U1, V1) which has pixel value pv1, wherein darker pixel values indicate lower Z values and lighter pixel values indicate greater Z values (or vice versa). In some embodiments, depth maps may only be generated for views that are to be included in an atlas. For example, depth maps may not be generated for redundant views or redundant portions of views that are omitted from the atlas. Though, in some embodiments, image data and source camera parameters of all views may be used to generate the depth maps, the redundant views may not be included in the generated depth maps. For example, whereas cameras106and110capture redundant information for the person102's shoulder, a single depth map may be generated for the two views as opposed to generating two redundant depth maps for the person's shoulder. However, the images captured from cameras106and110that redundantly view the person's shoulder from different locations/camera viewing angles may be used to determine the spatial information to be included in the single depth map representing the person's shoulder. FIG.3illustrates a flowchart for an example process for generating an atlas from the captured views, wherein redundant information already included in a given view already included in the atlas is omitted from other views that are to be included in the atlas, according to some embodiments. At block302, a view optimizer (such as a view optimizer of an encoder) receives source views comprising both attribute and depth information, such as source views comprising views120-134illustrated inFIG.1D. The view optimizer also selects one of the received views as a main view. In some embodiments, the view optimizer may also receive source camera parameters which indicate locations and orientations of the cameras that captured the source views (e.g. camera metadata). The view optimizer may select one or more main views and tag the selected views as main views. In order to determine a ranking (e.g. ordered list of the views), at block304the view optimizer then re-projects the selected one or more main views into remaining ones of the views that were not selected as main views. For example, the front center view (FC)120and the back center view (BC)122may be selected as main views and may be re-projected into the remaining views, such as views124-134. At block306, the view optimizer determines redundant pixels, e.g. pixels in the remaining views that match pixels of the main views that have been re-projected into the remaining views.
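A toy version of writing spatial information into a depth patch image, as described above, is sketched below: X and Y select the pixel location (U, V) within the patch, and Z is quantized into the pixel value. The patch size, coordinate ranges, and 8-bit quantization are assumptions chosen for the example rather than values from the specification.

import numpy as np

def build_depth_patch(points, patch_w, patch_h, x_range, y_range, z_range):
    depth = np.zeros((patch_h, patch_w), dtype=np.uint8)
    (x0, x1), (y0, y1), (z0, z1) = x_range, y_range, z_range
    for x, y, z in points:
        u = int(round((x - x0) / (x1 - x0) * (patch_w - 1)))   # X -> U (width)
        v = int(round((y - y0) / (y1 - y0) * (patch_h - 1)))   # Y -> V (height)
        pv = int(round((z - z0) / (z1 - z0) * 255))            # Z -> pixel value
        depth[v, u] = pv
    return depth

shoulder_points = [(0.10, 1.42, 0.55), (0.12, 1.43, 0.57), (0.14, 1.44, 0.60)]
patch = build_depth_patch(shoulder_points, patch_w=16, patch_h=16,
                          x_range=(0.0, 0.2), y_range=(1.4, 1.5), z_range=(0.5, 0.7))
print(patch.max())   # brighter pixel values correspond to larger Z values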
For example, portions of front right view128are redundant with portions of front center view120, when pixels of front right view128are re-projected into front center view120. In the example, these redundant pixels are already included in the main view (e.g. view120from the front center (FC)) and are omitted from the remaining view (e.g. view128from the front right (FR)). The view optimizer may iteratively repeat this process, selecting a next remaining view as a "main view" for a subsequent iteration, and repeat the process until no redundant pixels remain, or until a threshold number of iterations have been performed, or another threshold has been met, such as less than X redundant pixels, or less than Y total pixels, etc. For example, at block308the re-projection is performed using the selected remaining view as a "main view" to be re-projected into other ones of the remaining views that were not selected as "main views" for this iteration or a previous iteration. Also, at block312redundant pixels identified based on the re-projection performed at block310are discarded. At block314the process (e.g. blocks308-312) is repeated until a threshold is met (e.g. all remaining views comprise only redundant pixels or have less than a threshold number of non-redundant pixels, etc.). The threshold may also be based on all of the remaining views having empty pixels (e.g. they have already been discarded) or all of the remaining views having less than a threshold number of non-empty pixels. The ordered list of views having non-redundant information may be provided from the view optimizer to an atlas constructor of an encoder. Additionally, the source camera parameters (e.g. camera metadata) may be provided from the view optimizer to the atlas constructor. The atlas constructor may prune the empty pixels from the respective views (e.g. the pixels for which redundant pixel values were discarded by the view optimizer). This may be referred to as "pruning" the views. The atlas constructor may further aggregate the pruned views into patches (such as attribute patch images and geometry patch images) and pack the patch images into respective image frames. For example,FIG.4illustrates an atlas comprising packed attribute patch images representing views included in the atlas, wherein redundant information has been omitted and also illustrates a corresponding atlas/depth map comprising depth patch images that correspond with the attribute patch images included in the adjacent attribute patch image atlas, according to some embodiments. Attribute patch images404and406for main views120and122are shown packed in the atlas402. Also, patch images408and410comprising non-redundant pixels for views124and126are shown packed in atlas402. Additionally, attribute patch images412,414,416, and418comprising non-redundant pixels for remaining views128,130,132, and134are shown packed in atlas402. Atlas420/depth map420comprises corresponding depth patch images422-436that correspond to the attribute patch images404-418packed into attribute atlas402. As further described in regard toFIGS.5-9C, the depth patch images422-436may be converted, at a decoder/renderer, into mesh-based representations and further simplified based on mesh budget metadata generated by the encoder and included in the bit stream. This may simplify rendering at a receiving device that is to render a reconstructed version of the object or scene, such as person102. For example, if the depth patch images were encoded as a video image frame as shown inFIG.4(e.g.
if atlas420was encoded as a video image), a rendering device converts the depth pixel values into point values in 3D space or converts the point values into meshes. However, oftentimes a rendering device has limited computational capacity as compared to an encoding device (e.g. a server doing the encoding may have more computational capacity than a VR or AR device doing the rendering). Thus, generating the meshes and strategically simplifying the meshes at the decoding/rendering device using mesh budget metadata determined by the encoder may simplify the rendering process at the decoding/rendering device with limited computational capacity.
Resolution Budgeting by Area of a Scene
Volumetric visual content, such as immersive videos, provides up to six degrees of freedom for viewing. In some implementations, meshes may be used to render immersive video content. However, rendering meshes may be compute intensive, especially for meshes comprising a large number of vertices. For example, rendering meshes with large numbers of vertices may slow down processing of an immersive video such that an acceptable frame rate cannot be maintained. In order to reduce the number of vertices in a scene, the scene may be uniformly down-sampled at a decoding device/rendering device. However, such an approach may result in a coarse mesh that introduces distortion and reduces quality. In some embodiments, in order to simplify meshes used in a three-dimensional scene while maintaining a high-quality image for the three-dimensional scene, an adaptive down-sampling or mesh vertices budgeting process may be employed, wherein different down-sampling factors or different mesh vertices budgets are applied to different areas of a scene based on geometrical complexity of objects included in the respective areas, or based on object types of objects included in the respective areas. Such down-sampling factors or different mesh vertices budgets may be determined by an encoder and included in a bit stream as mesh budget metadata to be used by a decoder/rendering device when rendering the 3D scene. In some embodiments, division of a scene into areas and allocations of down-sampling factors and/or mesh vertices budgets to the determined areas of the scene may be updated at fixed time intervals. For example, multiple frames of the scene may be encoded using the determined scene areas and associated down-sampling factors and/or mesh vertices budgets and after an amount of time has elapsed, the process of determining areas and associated down-sampling factors/mesh vertices budgets may be updated. In some situations, the areas may change or remain the same when the areas and down-sampling factors and/or mesh vertices budgets are updated. In some embodiments, objects in a scene may be identified using a machine learning algorithm or neural network. Furthermore, the machine learning algorithm or neural network may be able to classify the identified objects as being objects falling into particular object type categories, such as "person", "car", "tree", "wall", etc. Furthermore, a mesh analysis module may store complexity values for different types of objects. For example, a higher complexity value may be assigned to a "person" object type than is assigned to a "wall" object type.
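Once an area has been given a mesh vertices budget, applying it in the simple uniform case mentioned above can amount to choosing a decimation stride for that area's grid of depth-map vertices. The Python sketch below is illustrative only: it assumes a regular grid, derives the stride from the budget, and ignores the smarter mesh simplification a real renderer might perform.

import math

def downsample_grid(vertices, grid_w, grid_h, vertices_budget):
    """vertices: row-major list of (x, y, z) tuples, one per depth-map pixel in the area."""
    stride = max(1, math.ceil(math.sqrt((grid_w * grid_h) / vertices_budget)))
    kept = [vertices[v * grid_w + u]
            for v in range(0, grid_h, stride)
            for u in range(0, grid_w, stride)]
    return kept   # roughly at or below the signaled budget for square-ish areas

# A flat 64x64 "wall" area reduced to fit a budget of 256 vertices:
wall = [(u, v, 0.0) for v in range(64) for u in range(64)]
simplified = downsample_grid(wall, 64, 64, vertices_budget=256)
print(len(wall), "->", len(simplified))   # 4096 -> 256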
Additionally, or alternatively, in some embodiments, objects in a scene may be reconstructed at an encoder and an object complexity value may be determined based on factors associated with reconstructing the object, such as a number of vertices used to reconstruct the object, an amount of processing resources used to reconstruct the object, an amount of time required to reconstruct the object, etc. These complexity values for different areas/objects may then be included as mesh budget metadata in an encoded bit stream generated by the encoder. Objects with higher complexity values may be grouped into common areas that are allocated a higher mesh vertices budget and objects with lower complexity values may be grouped into common areas that are allocated a smaller mesh vertices budget. Said another way, objects with similar complexity scores may be grouped into common areas. In some embodiments, similarity in complexity scores of objects may be determined based on the objects having complexity scores that deviate from one another less than a threshold value. Thus, a scene may be reconstructed using a sufficient number of vertices to reconstruct objects with complex geometries, such that distortion is reduced. But, at the same time, reconstruction of the scene may be less resource intensive as compared to previous systems that did not apply a down-sampling factor or mesh vertices budget for areas. For example, instead of all objects being reconstructed using a same vertex or polygon rate (or density per block of pixels), fewer vertices (or pixels) may be used to reconstruct objects with less complex geometries, such as a wall. In some embodiments, a mesh analysis module may perform a rate distortion optimization (RDO) process to optimize across various variables to determine parameters that reduce overhead and result in high quality rendering of the scene. For example, partitioning a scene into more areas may improve rendering quality but may add overhead costs to signal area definitions for each of the areas and to signal associated mesh vertices budgets for each of the areas. In contrast, having fewer areas may reduce overhead signaling costs but may negatively impact quality by including a more complex object in an area that does not have a sufficient mesh vertices budget to render the more complex object without distortion. Or, having fewer areas may reduce rendering efficiency by including less complex objects in a common area with more complex objects and assigning a higher mesh vertices budget than what is necessary to properly render the less complex objects. Thus, a rate distortion optimization process may iterate through different combinations of area divisions and mesh vertices budget allocations to determine an optimum or semi-optimum distribution of areas and mesh vertices budget allocations. In some embodiments, such as patch-based compression schemes, area definitions and associated mesh vertices budgets and/or down-sampling factors may be signaled in a header or other data structure and may apply for a group of pictures defining the scene or a group of frames defining the scene across multiple moments in time. In some embodiments, area definitions may change over time and may be signaled in a header or frame. For example, if an object is moving across the scene, an area definition for an area that comprises the object may be updated to encompass the object as the object moves across the scene.
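The grouping and budgeting described above can be sketched as follows: objects whose complexity scores differ by less than a threshold share an area, and each area receives a mesh vertices budget scaled by its most complex member. The scores, the similarity threshold, and the budget formula are invented for this example; the specification leaves these choices to the implementation (for instance, to a rate distortion optimization).

def group_by_complexity(objects, similarity_threshold=0.2, base_budget=500):
    """objects: dict mapping an object name to a relative complexity score."""
    areas = []
    for name, score in sorted(objects.items(), key=lambda kv: kv[1]):
        for area in areas:
            if abs(score - area["score"]) < similarity_threshold:
                area["objects"].append(name)          # similar complexity -> common area
                area["score"] = max(area["score"], score)
                break
        else:
            areas.append({"objects": [name], "score": score})
    for area in areas:
        area["vertices_budget"] = int(base_budget * area["score"])
    return areas

scene = {"wall": 0.1, "open_space": 0.15, "lamp": 0.5, "dog": 0.55, "person": 1.0}
for area in group_by_complexity(scene):
    print(area)
# wall/open_space share a small budget, lamp/dog a medium one, the person the largest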
In some embodiments, an area that encompasses a moving object may be determined based on the patch of the object, such that the area is defined to be large enough to encompass the object as the object moves across or within the scene. In some embodiments, metadata comprising the determined areas and associated mesh vertices budgets and/or down-sampling factors may be provided to a server that simplifies the meshes prior to providing them to a client. Also, in some embodiments, an attribute atlas and depth map may be provided to a client with metadata indicating area definitions and associated mesh vertices budgets and/or down-sampling factors. The client may then apply the mesh vertices budgets and/or down-sampling factors when reconstructing the scene. Also, in some embodiments, a client may selectively apply the mesh vertices budgets and/or down-sampling factors based on available resources. For example, if sufficient resources are available, the client may forego down-sampling some or all of the textures associated with the attribute atlas, or simplifying the meshes generated from the depth map. However, if resources are constrained, the client may apply the mesh vertices budgets and/or down-sampling factors. In some embodiments, down-sampling may be applied differently for the depth map than for the texture. For example, in some embodiments, a mesh vertices budget allocated for geometry information may require more down-sampling than is applied to a corresponding texture. Note that while a scene may include objects in three-dimensional space, the objects can be grouped into areas of the scene when viewed from a viewing perspective from which the scene is to be rendered. FIG.5illustrates a three-dimensional scene comprising different objects. A mesh analysis module on an encoder is used to identify areas of the scene comprising one or more objects with similar levels of complexity and has assigned vertices budgets to the respective areas for rendering the objects in the respective areas, according to some embodiments. Scene500includes various objects having different geometric complexities. For example, scene500includes an open space in view area1(502), a lamp in view area2(504), a person in view area3(506), and a dog in view area4(508). In some embodiments, a mesh analysis module, such as mesh analysis module802illustrated inFIG.8, may determine areas for scene500, such as areas1through4, based on the geometrical complexity of objects included in the scene. For example, the open space in view area1(502) has a low complexity and may be assigned a low mesh vertices budget. The lamp in view area2(504) may have a medium geometrical complexity and may be assigned a higher mesh vertices budget than the open space in view area1(502). Also, the dog in view area4(508) may have a medium geometrical complexity and may be assigned a similar mesh vertices budget as the lamp in view area2(504). In some embodiments, view area2and view area4may be combined into a single view area with a common area definition and a shared mesh vertices budget. The person in view area3(506) may have a higher geometrical complexity than the open space, lamp, or dog and may be allocated a greater mesh vertices budget. In some embodiments, an area or sub-area with an associated mesh vertices budget/down-sampling factor may correspond to an object in a scene, or may be smaller than an object in a scene.
For example, in some embodiments, an area or sub-area may encompass a block of pixels in the atlas/depth map, such as an 8×8 block, or other suitable block size. FIG.6illustrates the three-dimensional scene after being rendered according to the assigned vertices budgets, wherein some objects with less complex geometries are rendered using fewer vertices than other objects with more complex geometries, according to some embodiments. As can be seen inFIG.6, the objects in scene500may be represented by meshes comprising vertices604that are connected to form polygons602, such as triangles on a surface of the objects. As can be seen inFIG.6, the person is allocated more vertices that result in smaller polygons and a finer-grained surface than is the case for the other objects, such as the wall. Also, the lamp and dog are allocated more vertices than the wall, but fewer than the person. FIG.7illustrates a three-dimensional scene comprising different objects, wherein a mesh analysis module has identified areas and sub-areas of the scene and assigned vertices budgets to the respective areas and sub-areas, according to some embodiments. In some embodiments, sub-areas may be defined for portions of an area and different mesh vertices budgets may be allocated to the sub-areas. For example, inFIG.7the face of the person is included in view sub-area3-1(510) and is allocated a different mesh vertices budget than is allocated for view area3. Also, the head of the dog is included in view sub-area4-1(512) and is allocated a different mesh vertices budget than view area4. FIG.8illustrates components of a mesh analysis module, according to some embodiments. Mesh analysis module802includes object/area identification module804, metadata generation module806, mesh vertices budgeting module812, and mesh reconstruction module818. In some embodiments, object/area identification module804identifies objects in a scene and determines area divisions of the scene that include objects with similar geometric complexities. Mesh vertices budgeting module812includes mesh complexity analysis module814and/or object type identifier module816. Mesh complexity module814may determine a complexity of a mesh based on reconstructing the mesh via mesh reconstruction module818, based on a number of vertices included in geometry patches (e.g. portions of a depth map) for the object, etc. Additionally, or alternatively, object type identifier module816may identify objects in a scene, for example using machine learning or a neural network, and may further assign object type identifiers to the identified objects. Metadata generation module806generates metadata comprising area definitions808and area vertices budgets810, as were determined by object/area identification module804and mesh vertices budgeting module812. Metadata generation module806provides metadata820for use by a renderer to determine mesh vertices budgets for objects falling within the defined areas. Also, in some embodiments, metadata820may be provided to a server that down-samples meshes of objects falling in the defined areas based on the corresponding mesh vertices budgets, before providing a bit stream with the simplified meshes to a client device. FIG.9Aillustrates a process of generating metadata comprising mesh vertices budgets for areas of a scene, according to some embodiments. At block902, a mesh analysis module, such as mesh analysis module802, receives a scene to be encoded.
At block904, the mesh analysis module identifies areas of the scene having similar characteristics based on object geometrical complexity and/or object type. At block906, the mesh analysis module determines down-sampling factors and/or mesh vertices budgets to be applied to objects located in the defined areas. At block908, the mesh analysis module generates metadata indicating the identified areas of the scene and corresponding down-sampling factors and/or mesh vertices budgets for the areas. FIG.9Billustrates a process of simplifying a scene based on mesh vertices budgets and/or generating a bit stream representing the scene, according to some embodiments. At block920, an encoded mesh providing device, such as a server, encoder, etc., receives metadata indicating the identified areas of the scene and down-sampling factors and/or mesh vertices budgets for the areas. At block922, the encoded mesh providing device (e.g. server, encoder, etc.) applies the down-sampling factors and/or enforces the mesh vertices budgets to simplify meshes representing objects in the different areas of the scene. At block924, the encoded mesh providing device (e.g. server, encoder, etc.) encodes the objects for which the down-sampling factor and/or mesh vertices budget has been applied. At block926, the encoded mesh providing device (e.g. server, encoder, etc.) provides an encoded bit stream to an encoded mesh receiving device, such as a client device, renderer, decoder, etc., wherein the encoded bit stream includes data for reconstructing the simplified meshes that have been down-sampled and/or simplified by applying the mesh vertices budgets. FIG.9Cillustrates a process of rendering a scene taking into account mesh vertices budgets for different objects in the scene located in different areas of the scene, according to some embodiments. In some embodiments, instead of applying the down-sampling factors/mesh vertices budgets at the server/encoder, the area definitions and down-sampling factors/mesh vertices budgets may be provided to a receiving device, such as a client device, renderer, decoder, etc., and may be applied during a reconstruction process for reconstructing the meshes representing the objects in the different areas of the scene. At block930, the receiving device (e.g. client device, renderer, decoder, etc.) receives an encoded bit stream for the scene comprising data defining an atlas and depth map that represent objects in the scene. Also, at block932, the receiving device (e.g. client device, renderer, decoder, etc.) receives metadata indicating down-sampling factors and/or mesh vertices budgets for objects of the scene. At block934, the receiving device (e.g. client device, renderer, decoder, etc.) reconstructs the objects of the scene applying the received down-sampling factors and/or mesh vertices budgets. For example, the receiving device (e.g. client device, renderer, decoder, etc.) may reduce a number of vertices to be rendered for the objects as compared to a number of vertices that would be rendered from the depth map without mesh simplification/down-sampling. Various techniques may be used to reduce the vertices for objects to be within the mesh vertices budget. For example, for a wall or open space the mesh vertices may be uniformly down-sampled to include a number of vertices within the mesh vertices budget for the object, e.g. the wall. At block936, the receiving device (e.g. client device, renderer, decoder, etc.) determines texture/attribute values for the reconstructed meshes.
At block938, the receiving device (e.g. client device, renderer, decoder, etc.) renders the reconstructed meshes with the applied textures and/or attribute values. Example Bit Stream Structure FIG.10illustrates a bit stream structure for compressed volumetric content, according to some embodiments. In some embodiments, relationship information for patch images in an image frame may be included in or derived from a bit stream. For example,FIG.10illustrates a bit stream structure for compressed volumetric content, such as scene500described above. In some embodiments, the auxiliary information may include relationship information for patch images (e.g. portions of an atlas and depth map corresponding to a same patch/view). Also, the auxiliary information may indicate which blocks of an image frame correspond to which patches. This information may be used to determine portions of an image frame that correspond to a same patch. In some embodiments, metadata820as described inFIG.8may be signaled in a stream header, wherein area definitions and mesh vertices budgets apply to multiple groups of frames of a bit stream. Also, metadata820may be signaled in a group of frames header as shown inFIG.10. While not shown inFIG.10, metadata820may also be signaled in a group of pictures header. In some embodiments, metadata820may be signaled in a group of frames header and other metadata820that is specific to a portion of an area, such as sub-areas described inFIG.7, may be signaled in the auxiliary information. For example, a mesh vertices budget for the view area3(506) may be signaled in a group of frames header for area3(506) and a separate mesh vertices budget for sub-area3-1(510) may be signaled in the auxiliary information, wherein the separate mesh vertices budget is only applied to some of the frames that include patches corresponding to the sub-area3-1(510). Example Computer System FIG.11illustrates an example computer system1100that may implement an encoder or decoder or others of the components described herein (e.g., any of the components described above with reference toFIGS.1-10), in accordance with some embodiments. The computer system1100may be configured to execute any or all of the embodiments described above. In different embodiments, computer system1100may be any of various types of devices, including, but not limited to, a personal computer system, desktop computer, laptop, notebook, tablet, slate, pad, or netbook computer, mainframe computer system, handheld computer, workstation, network computer, a camera, a set top box, a mobile device, a consumer device, video game console, handheld video game device, application server, storage device, a television, a video recording device, a wearable device such as a wrist watch or a wearable display, a peripheral device such as a switch, modem, router, or in general any type of computing or electronic device. Various embodiments of an encoder or decoder, as described herein, may be executed in one or more computer systems1100, which may interact with various other devices. Note that any component, action, or functionality described above with respect toFIGS.1-10may be implemented on one or more computers configured as computer system1100ofFIG.11, according to various embodiments. In the illustrated embodiment, computer system1100includes one or more processors1110coupled to a system memory1120via an input/output (I/O) interface1130.
Computer system1100further includes a network interface1140coupled to I/O interface1130, and one or more input/output devices1150, such as cursor control device1160, keyboard1170, and display(s)1180. In some cases, it is contemplated that embodiments may be implemented using a single instance of computer system1100, while in other embodiments multiple such systems, or multiple nodes making up computer system1100, may be configured to host different portions or instances of embodiments. For example, in one embodiment some elements may be implemented via one or more nodes of computer system1100that are distinct from those nodes implementing other elements. In various embodiments, computer system1100may be a uniprocessor system including one processor1110, or a multiprocessor system including several processors1110(e.g., two, four, eight, or another suitable number). Processors1110may be any suitable processor capable of executing instructions. For example, in various embodiments one or more of processors1110may be general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. Also, in some embodiments, one or more of processors1110may include additional types of processors, such as graphics processing units (GPUs), application specific integrated circuits (ASICs), etc. In some embodiments, computer system1100may be implemented as a system on a chip (SoC). For example, in some embodiments, processors1110, memory1120, I/O interface1130(e.g. a fabric), etc. may be implemented in a single SoC comprising multiple components integrated into a single chip. For example, an SoC may include multiple CPU cores, a multi-core GPU, a multi-core neural engine, cache, one or more memories, etc. integrated into a single chip. In some embodiments, an SoC embodiment may implement a reduced instruction set computing (RISC) architecture, or any other suitable architecture. In multiprocessor systems, each of processors1110may commonly, but not necessarily, implement the same ISA. System memory1120may be configured to store compression or decompression program instructions1122and/or sensor data accessible by processor1110. In various embodiments, system memory1120may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory. In the illustrated embodiment, program instructions1122may be configured to implement an encoder/decoder incorporating any of the functionality described above. In some embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media or on similar media separate from system memory1120or computer system1100. While computer system1100is described as implementing the functionality of functional blocks of previous Figures, any of the functionality described herein may be implemented via such a computer system. In one embodiment, I/O interface1130may be configured to coordinate I/O traffic between processor1110, system memory1120, and any peripheral devices in the device, including network interface1140or other peripheral interfaces, such as input/output devices1150.
In some embodiments, I/O interface1130may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory1120) into a format suitable for use by another component (e.g., processor1110). In some embodiments, I/O interface1130may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard, the Universal Serial Bus (USB) standard, the IEEE 1394 serial bus standard, etc., for example. In some embodiments, the function of I/O interface1130may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface1130, such as an interface to system memory1120, may be incorporated directly into processor1110. Network interface1140may be configured to allow data to be exchanged between computer system1100and other devices attached to a network1185(e.g., carrier or agent devices) or between nodes of computer system1100. Network1185may in various embodiments include one or more networks including but not limited to Local Area Networks (LANs) (e.g., an Ethernet or corporate network), Wide Area Networks (WANs) (e.g., the Internet), wireless data networks, some other electronic data network, or some combination thereof. In various embodiments, network interface1140may support communication via wired or wireless general data networks, such as any suitable type of Ethernet network, for example; via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks; via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol. Input/output devices1150may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or accessing data by one or more computer systems1100. Multiple input/output devices1150may be present in computer system1100or may be distributed on various nodes of computer system1100. In some embodiments, similar input/output devices may be separate from computer system1100and may interact with one or more nodes of computer system1100through a wired or wireless connection, such as over network interface1140. As shown inFIG.11, memory1120may include program instructions1122, which may be processor-executable to implement any element or action described above. In one embodiment, the program instructions may implement the methods described above. In other embodiments, different elements and data may be included. Note that data may include any data or information described above. Those skilled in the art will appreciate that computer system1100is merely illustrative and is not intended to limit the scope of embodiments. In particular, the computer system and devices may include any combination of hardware or software that can perform the indicated functions, including computers, network devices, Internet appliances, PDAs, wireless phones, tablets, wearable devices (e.g. head-mounted displays, virtual reality displays, augmented reality displays, etc.), and the like. Computer system1100may also be connected to other devices that are not illustrated, or instead may operate as a stand-alone system.
In addition, the functionality provided by the illustrated components may in some embodiments be combined in fewer components or distributed in additional components. Similarly, in some embodiments, the functionality of some of the illustrated components may not be provided and/or other additional functionality may be available. Those skilled in the art will also appreciate that, while various items are illustrated as being stored in memory or on storage while being used, these items or portions of them may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software components may execute in memory on another device and communicate with the illustrated computer system via inter-computer communication. Some or all of the system components or data structures may also be stored (e.g., as instructions or structured data) on a computer-accessible medium or a portable article to be read by an appropriate drive, various examples of which are described above. In some embodiments, instructions stored on a computer-accessible medium separate from computer system1100may be transmitted to computer system1100via transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. Various embodiments may further include receiving, sending or storing instructions and/or data implemented in accordance with the foregoing description upon a computer-accessible medium. Generally speaking, a computer-accessible medium may include a non-transitory, computer-readable storage medium or memory medium such as magnetic or optical media, e.g., disk or DVD/CD-ROM, volatile or non-volatile media such as RAM (e.g. SDRAM, DDR, RDRAM, SRAM, etc.), ROM, etc. In some embodiments, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic, or digital signals, conveyed via a communication medium such as a network and/or a wireless link. The methods described herein may be implemented in software, hardware, or a combination thereof, in different embodiments. In addition, the order of the blocks of the methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. Various modifications and changes may be made as would be obvious to a person skilled in the art having the benefit of this disclosure. The various embodiments described herein are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of claims that follow. Finally, structures and functionality presented as discrete components in the example configurations may be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements may fall within the scope of embodiments as defined in the claims that follow.
11861790
DETAILED DESCRIPTION Techniques described herein are directed to various aspects of procedural world generation. That is, techniques described herein are directed to procedurally generating simulated worlds for use with testing, validating, or training systems and/or components used by vehicles for navigating, planning, and/or decision making. In at least some examples described herein, such generated simulated worlds may be generated to represent real world environments, and at least in some examples, as accurately as possible. Techniques described herein describe how such simulated environments may be generated. In an example, techniques described herein are directed to receiving sensor data from various sensor systems in a real environment. The sensor systems can include, but are not limited to, light detection and ranging (LIDAR) sensors, radio detection and ranging (RADAR) sensors, ultrasonic transducers, sound navigation and ranging (SONAR) sensors, Time of Flight (ToF) sensors, location sensors (e.g., global positioning system (GPS), compass, etc.), inertial sensors (e.g., inertial measurement units, accelerometers, magnetometers, gyroscopes, etc.), cameras (e.g., RGB, IR, intensity, depth, etc.), wheel encoders, microphones, environment sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), etc. Using the sensor data, techniques described herein can generate, receive, and/or otherwise access road network data associated with the real environment and a road mesh associated with the real environment. Techniques described herein can associate the road network data with the road mesh to generate a simulated environment. That is, data representative of a real environment can be used to generate a simulated environment. In some examples, the simulated environment can be supplemented with tertiary data (e.g., data from a third-party), which can be referred to herein as "supplemental data." Techniques described herein are further directed to procedurally rendering object(s) and surface details into the simulated environment. The resulting simulated environment can be used for testing, validating, and/or training systems and/or components used by an autonomous robotic computing device, such as an autonomous vehicle, for navigating, planning, and/or decision making. Simulated environments can be useful for enhancing training, testing, and/or validating systems (e.g., one or more components of an artificial intelligence (AI) stack) onboard an autonomous vehicle. For instance, in at least one example, simulated environments can be useful for training systems that are to be used onboard an autonomous vehicle (e.g., models used by such systems), for instance when real data is not readily available, when testing would be unsafe in a real environment, and in order to generate magnitudes more data than would otherwise be available. In at least one example, simulated environments can be used for generating training data for rare or infrequently occurring scenarios and/or objects. Moreover, simulated environments can be useful for testing performance of an autonomous vehicle (e.g., models and/or systems running thereon), for instance, when real environments are either not available or are not safe, or a ground truth is not otherwise available. Furthermore, in some examples, sensor data associated with simulated environments can be more accurate than sensor data associated with real environments (e.g., due to occlusions, noise, drift, etc.)
and as such, simulated environments can be used for validating observations made in association with real environments. In some examples, simulated environments can be used for calibration (e.g., of one or more sensor systems onboard an autonomous vehicle). Techniques described herein are directed to generating simulated environments and using simulated environments in various scenarios, as described above. Techniques described herein offer various computational efficiencies. For instance, by utilizing procedural rendering techniques described herein, computing devices require fewer computational resources and simulated worlds can be generated faster than what is available via conventional techniques. Conventional techniques are not scalable. For instance, generating a simulated environment for a new geographical location can take days, or even months, using conventional techniques. Generating tens, hundreds, and thousands of new simulated environments—as many as are needed for training, testing, and/or validating systems (e.g., one or more components of an AI stack) onboard an autonomous vehicle (e.g., prior to such autonomous vehicle(s) being deployed in corresponding new real environments)—would take months, or even years, thereby limiting the ability to train, test, and/or validate such systems (e.g., one or more components of an AI stack) onboard an autonomous vehicle prior to entering into new real environments. Techniques described herein are unconventional in that they leverage sensor data collected from real environments and supplement that data with tertiary data to generate a substantially accurate simulated environment (e.g., relative to the corresponding real environment) more efficiently than what is available with conventional techniques. Further, techniques described herein—such as customizing the look of a simulated environment by randomizing and/or parameterizing the addition of object and/or surface details—enable the generation of large, scalable simulated environments in less time and with fewer computational resources than what is available with conventional techniques. Furthermore, techniques described herein are directed to improvements in safety. That is, simulated environments resulting from generation techniques described herein can be used for testing, training, and validating systems onboard an autonomous vehicle to ensure such systems can operate autonomous vehicles safely when deployed in real environments. That is, simulated environments resulting from generation techniques described herein can be used for testing, training, and validating a planner system, which can be used by an autonomous vehicle to navigate the autonomous vehicle along a trajectory in a real environment. Thus, such training, testing, and validating enabled by techniques described herein can provide opportunities to ensure that autonomous vehicles can operate in real world environments safely. As such, techniques described herein improve safety and impact navigation. FIG.1illustrates a schematic diagram100representing procedural world generation as described herein. In an example, one or more computing devices can procedurally render a simulated environment, as described herein. In at least one example, data collection devices102can utilize sensor system(s)104to collect sensor data106associated with a real environment. 
As described above, the sensor system(s)104can include, but are not limited to, LIDAR sensors, RADAR sensors, ultrasonic transducers, SONAR sensors, ToF sensors, location sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., inertial measurement units, accelerometers, magnetometers, gyroscopes, etc.), cameras (e.g., RGB, IR, intensity, depth, etc.), wheel encoders, microphones, environment sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), etc. The sensor system(s)104can output the sensor data106, which can be received by the computing device(s). In some examples, the data collection devices102can be autonomous vehicles that traverse the real environment, as illustrated inFIG.1. However, the data collection devices102can be any computing device that is capable of collecting sensor data106in a real environment. The computing device(s) can receive, generate, and/or otherwise access road network data108and/or a road mesh110, which can be based at least in part on the sensor data106. In at least one example, the road network data108may, for example, be a two-dimensional (2D) or a three-dimensional (3D) representation indicating one or more of a driving lane element, a bike lane element, a parking lane element, a crosswalk element, an intersection element, a lane divider element, a traffic light element, a stop sign element, a stop line element, a yield sign element, a yield line element, a driveway element, a speed bump element, jay walking regions (e.g., a virtual crosswalk), trajectory waypoints (e.g., known trajectories), passenger pickup points, a sign location element, a geofence element, and the like. In some examples, the road network data108can be encoded with information indicating an attribute of a particular portion of the road network data. For instance, a road line in the road network data108can be encoded with information indicating that the road line is associated with a bike lane element, a parking lane element, or a crosswalk element. The road mesh110can comprise 3D tiles (which can be output by a localization system, as described below). Additional details associated with such road network data108and/or the road mesh110are described in U.S. patent application Ser. No. 15/927,806, filed on Mar. 21, 2018 and U.S. patent application Ser. No. 15/913,647, filed on Mar. 6, 2018, the entire contents of both of which are incorporated by reference herein. The computing device(s) can associate the road network data108with the road mesh110to generate a simulated environment112. Such an integration may comprise, in at least some instances, projecting the road network data108(as 2D or 3D data) into the road mesh110. That is, techniques described herein are directed to generating a simulated environment (e.g., the simulated environment112) based on real-world data (e.g., the sensor data106). The resulting simulated environment112can include accurate heights and surface details (e.g., in view of a corresponding real environment). However, in some examples, there may be holes in the simulated environment (e.g., incomplete data), for instance, due to occlusions (e.g., parked cars, tight alleyways, etc.) when constructing the road mesh110. In at least one example, the computing device(s) can access a second, alternate source of data to supplement the existing simulated environment (e.g., and fill in the holes). 
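A toy illustration of that supplementation step: wherever the road mesh has no height sample (a hole left by an occlusion), an implementation could fall back to a coarser but complete elevation grid. The Python sketch below is a minimal stand-in under those assumptions; the function name, data layout, and nearest-cell lookup are invented for illustration and merely approximate whatever interpolation an actual system might use.

```python
def fill_height_holes(mesh_heights, dem_grid, cell_size):
    """Fill missing entries (None) in a dict of (x, y) -> height using a
    complete but coarser elevation grid, e.g. a digital elevation map."""
    filled = {}
    for (x, y), h in mesh_heights.items():
        if h is not None:
            filled[(x, y)] = h                  # keep the accurate mesh height
        else:
            # Fall back to the coarse grid cell that contains this point.
            cell = (int(x // cell_size), int(y // cell_size))
            filled[(x, y)] = dem_grid[cell]
    return filled

# Example: two good mesh samples and one hole left behind a parked car.
mesh = {(0.5, 0.5): 10.2, (1.5, 0.5): None, (2.5, 0.5): 10.6}
dem = {(0, 0): 10.0, (1, 0): 10.3, (2, 0): 10.5}
print(fill_height_holes(mesh, dem, cell_size=1.0))
```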
For instance, in at least one example, the computing device(s) can access data from third-party source(s) and/or system(s)114and can leverage such supplemental data116to supplement the existing simulated environment112. The supplemental data116can go beyond the current data set (e.g., road network data108and road mesh110), such that the supplemental data116provides information associated with the real environment that is not otherwise available to the data collection device(s)102due to occlusion(s) and/or other deficiencies associated with the data collection techniques. In at least one example, the supplemental data116can include U.S. Geological Survey (USGS) Digital Elevation Model (DEM) data, etc. The USGS DEM data can comprise a dataset with raster elevation data (e.g., a digital elevation map). The USGS DEM data may not be as accurate as the associated data (e.g., the road network data and the road mesh); however, the USGS DEM data is often more complete. That is, the USGS DEM data may not have holes and thus, such data can be used to supplement data sets that are missing data (e.g., due to occlusions or other deficiencies in data collection). In an additional or alternative example, the supplemental data116can include tree map data associated with a real environment, color imagery data associated with a real environment, map data associated with an environment, etc. Furthermore, the computing device(s) can leverage data associated with characteristics of objects in the environment to further supplement the simulated environment112. In at least one example, the computing device(s) can access a stored object footprint(s) data storage118, which stores stored object data120representative of footprints of buildings or other stationary objects. In some examples, such footprints can be associated with annotations regarding height, classification (e.g., residential, commercial, etc.), etc. The computing device(s) can utilize the footprints and associated annotations (e.g., stored object data120) as a guide mesh for generating façade pieces and rule sets. For instance, in an example, the computing device(s) can associate rule sets with individual stored object footprints. The rule sets can indicate what surface details and/or textures to associate with various portions of an object. Such rule sets can be associated with the individual stored object footprints randomly or based on one or more parameters (e.g., height, classification, etc.). Such rule sets can indicate how to generate façade pieces for the objects corresponding to the stored object footprints. For instance, a rule set can include references to textures that can be used to generate façade pieces. As a non-limiting example, a rule set may indicate using a particular mesh, texture, etc. for a first floor of a commercial office building (e.g., the building classification), a different mesh, texture, etc. for a second (and subsequent) floor of such a building, etc. As such, execution of a rule set can add surface details (e.g., a façade) to an object in the simulated environment. The addition of such details enables realistic-looking simulated environments to be procedurally generated. For instance, façade details can affect shadow casting and/or reflections of windows, which can add complexity to the simulated environment so that it represents real-world conditions. In at least one example, the building footprints, heights, texturing, and classifications may be randomly defined.
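To make the rule-set idea concrete, the following Python sketch expands an annotated footprint into per-floor façade pieces, choosing one texture for the first floor and another for the floors above it based on the building classification. The rule contents, names, and floor heights here are invented for illustration; they are not rule sets defined by this description.

```python
# Hypothetical rule sets keyed by building classification; each rule names a
# texture for the ground floor and a texture for every floor above it.
RULE_SETS = {
    "commercial": {"floor_height_m": 4.0, "ground": "storefront_glass", "upper": "office_window_grid"},
    "residential": {"floor_height_m": 3.0, "ground": "brick_entry", "upper": "brick_window"},
}

def generate_facade(footprint_edges, height_m, classification):
    """Expand an annotated footprint into per-floor facade pieces by applying
    the rule set associated with its classification (illustrative only)."""
    rules = RULE_SETS[classification]
    floors = max(1, int(height_m // rules["floor_height_m"]))
    pieces = []
    for edge in footprint_edges:               # one facade strip per footprint edge
        for floor in range(floors):
            texture = rules["ground"] if floor == 0 else rules["upper"]
            pieces.append({"edge": edge, "floor": floor, "texture": texture})
    return pieces

# Example: a rectangular commercial footprint, 12 m tall -> 3 floors per edge.
edges = ["north", "east", "south", "west"]
print(len(generate_facade(edges, height_m=12.0, classification="commercial")))  # 12 pieces
```

Because the rule set, not a human modeler, decides which texture lands on which floor, the same footprint data can be re-expanded into many plausible building variations.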
In those examples where data associated with such footprints does not have an indication of positioning within a map (e.g., where footprints are randomly determined, retrieved from a data storage of footprints agnostic to a map, etc.), such footprints may be aligned, or otherwise positioned, such that at least one façade aligns with a street in a road network, is spaced according to one or more rules (e.g., placed a certain distance to a street and, based on the classification, a minimum or maximum distance from other buildings, is oriented in a particular orientation, etc.), and the like. When such rule sets are applied, plausible-looking buildings can be generated automatically (e.g., without human modeling) in the simulated environment112. Utilizing such rules enables the procedural generation of different-looking simulated environments without a significant investment in designer time and/or computational resources. That is, utilizing such rules increases the efficiency with which complex simulated environments can be generated (e.g., via relatively straightforward rules). In some examples, the computing device(s) can utilize texturing data to add surface detail to the simulated environment112, for instance, during real-time rendering. In such examples, the computing device(s) can access a surface detail data storage130, which stores surface detail data132. The surface detail data132, which can also be called "texturing data," can comprise details (e.g., defects, patches, markings, etc.) which can be added to objects in the simulated environment112to make such objects appear unique (without increasing artist workload significantly or otherwise requiring additional compute for customizing algorithmically). In at least one example, the computing device(s) can utilize sparse virtual textures to render the simulated environment112in a single draw, which increases performance and reduces computational resources. In such an example, each surfel (e.g., surface element) may be associated with unique data (such as an identification), such that the individual surfels may be allocated, addressed, and assigned. Furthermore, in some examples, the computing device(s) can add a plurality of brush-stroke-like decals on each surface of an object to be rendered in the simulated environment112. In at least some examples, various decals may be applied for various regions and classifications of structures (e.g., photorealistic dirt and grime, graffiti, garbage, etc. may be applied to, for example, building façades in an alleyway) so as to modify any procedural based texturing (e.g., applying a pattern of textures over a surface given an associated classification). Techniques described herein enable designers to model a few different textures, which can be used throughout the simulated environment112. Adding surface detail to the simulated environment112can increase diversity within the simulated environment112and between the simulated environment112and other simulated environments. A resulting simulated environment134can be output for use by a simulation computing system. In at least one example, the simulated environment134can be useful for enhancing training, testing, and/or validating systems (e.g., one or more components of an AI stack) onboard an autonomous vehicle.
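The brush-stroke-like decals described above can be pictured as a light scattering pass over each rendered surface, with the decal pool chosen by region and structure classification. In the Python sketch below the pools, names, and counts are hypothetical placeholders, not values taken from this description; the fixed seed simply keeps the scattering repeatable across runs.

```python
import random

# Hypothetical decal pools per (region, classification) pair, loosely following
# the alleyway example (dirt, grime, graffiti on facades).
DECAL_POOLS = {
    ("alleyway", "commercial"): ["grime", "graffiti", "garbage_stain"],
    ("main_street", "commercial"): ["patched_concrete", "faded_paint"],
}

def apply_decals(surfaces, region, classification, decals_per_surface=3, seed=0):
    """Scatter a few brush-stroke-like decals on each surface so repeated
    textures still look unique; decal choice depends on region/classification."""
    rng = random.Random(seed)                     # deterministic for repeatable scenes
    pool = DECAL_POOLS.get((region, classification), ["generic_wear"])
    placements = []
    for surface_id in surfaces:
        for _ in range(decals_per_surface):
            placements.append({
                "surface": surface_id,
                "decal": rng.choice(pool),
                "uv": (rng.random(), rng.random()),   # position on the surface
                "rotation_deg": rng.uniform(0, 360),
            })
    return placements

print(len(apply_decals(["facade_12_floor_0", "facade_12_floor_1"], "alleyway", "commercial")))
```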
For instance, in at least one example, the simulated environment134can be useful for training systems that are to be used onboard an autonomous vehicle (e.g., models used by such systems), for instance when real data is not readily available, when testing would be unsafe in a real environment, and in order to generate magnitudes more data than would otherwise be available. In at least one example, the simulated environment134can be used for generating training data for rare or infrequently occurring scenarios and/or objects. Moreover, the simulated environment134can be useful for testing performance of an autonomous vehicle (e.g., models and/or systems running thereon), for instance, when real environments are either not available or are not safe, or a ground truth is not otherwise available. By having a simulated environment, exact ground truth measurements can be determined for such validations without the need for human based annotations of data. Furthermore, in some examples, sensor data associated with the simulated environment134can be more accurate than sensor data associated with real environments (e.g., due to occlusions, noise, drift, etc.) and as such, the simulated environment134can be used for validating observations made in association with real environments. In some examples, the simulated environment134can be used for calibration (e.g., of one or more sensor systems onboard an autonomous vehicle). FIGS.2A-2Fillustrate non-limiting examples of various aspects of procedural rendering of a simulated environment, as described herein. As described above, data collection device(s)102can generate sensor data106associated with a real environment via sensor system(s)104. Computing device(s) can receive the sensor data106and receive, generate, and/or otherwise access road network data108, as illustrated inFIG.2A. In at least one example, the road network data108may, for example, be a 2D or a 3D representation of the real environment (or a portion thereof) indicating one or more of a driving lane element, a bike lane element, a parking lane element, a crosswalk element, an intersection element, a lane divider element, a traffic light element, a stop sign element, a stop line element, a yield sign element, a yield line element, a driveway element, a speed bump element, jay walking regions (e.g., a virtual crosswalk), trajectory waypoints (e.g., known trajectories), passenger pickup points, a sign location element, a geofence element, and the like. Additionally, the computing device(s) can generate a road mesh110. In at least one example, the sensor data106can comprise LIDAR data, which can be used to generate a 3D point cloud representative of the real environment, as illustrated inFIG.2B. In at least one example, the computing device(s) can generate the road mesh110based on the 3D point cloud. The road mesh110can comprise 3D tiles (which can be output by a localization system, as described below). The computing device(s) can associate the road network data108with the road mesh110to generate a simulated environment112. Such an integration may comprise, in at least some instances, projecting the road network data108(as 2D or 3D data) onto the road mesh110. As described above, the resulting simulated environment112can include accurate heights and surface details (e.g., in view of a corresponding real environment). However, in some examples, there may be holes in the simulated environment (e.g., incomplete data), for instance, due to occlusions (e.g., parked cars, tight alleyways, etc.)
when constructing the road mesh110. In at least one example, the computing device(s) can access a second, alternate source of data to supplement the existing simulated environment (e.g., and fill in the holes).FIG.2Cillustrates a non-limiting example of supplemental data116corresponding to the same portion of the real environment as is represented inFIGS.2A and2B. For instance, in at least one example, the computing device(s) can access data from third-party source(s) and/or system(s)114and can leverage such supplemental data116to supplement the existing simulated environment112(e.g., by integrating the road network data108and/or road mesh110with the supplemental data116). As described above, the supplemental data116can include USGS DEM data associated with a real environment, tree map data associated with a real environment, color imagery data associated with a real environment, map data associated with an environment, etc. Furthermore, the computing device(s) can leverage data associated with characteristics of objects in the environment to further supplement the simulated environment112. In at least one example, the computing device(s) can access a stored object footprint(s) data storage118, which stores stored object data120representative of footprints of buildings or other stationary objects. In some examples, such footprints can be associated with annotations regarding height, classification (e.g., residential, commercial, etc.), rule sets, etc. The computing device(s) can utilize the footprints and associated annotations (e.g., stored object data120) as a guide mesh for generating façade pieces, as described above. In at least one example, the building footprints, heights, texturing, rule sets, and/or classification may be randomly defined. In those examples where data associated with such footprints does not have an indication of positioning within a map (e.g., where footprints are randomly determined, retrieved from a data storage of footprints agnostic to a map, etc.), such footprints may be aligned, or otherwise positioned, such that at least one façade aligns with a street in a road network, is spaced according to one or more rules (e.g., placed a certain distance to a street and, based on the classification, a minimum or maximum distance from other buildings, is oriented in a particular orientation, etc.), is oriented based on one or more rules, and the like. When such rule sets are applied, plausible-looking buildings can be generated automatically in the simulated environment112, as illustrated inFIG.2D. In some examples, the computing device(s) can utilize texturing data to add surface detail to the simulated environment112, for instance, during real-time rendering. In such examples, the computing device(s) can access a surface detail data storage130, which stores surface detail data132. The surface detail data132, which can also be called "texturing data," can comprise details (e.g., defects, patches, markings, etc.) which can be added to objects in the simulated environment112to make such objects appear unique (without increasing artist workload significantly). In at least one example, the computing device(s) can utilize sparse virtual textures, as illustrated inFIG.2E, to render the simulated environment112in a single draw, which increases performance and reduces computational resources. In such an example, each surfel may be associated with unique data (such as an identification), such that the individual surfels may be allocated, addressed, and assigned.
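As a hedged illustration of how individual surfels might be allocated, addressed, and assigned within a sparse virtual texture, the toy allocator below hands out unique identifications and maps each one to a texture page and slot. The class name and page-size parameter are assumptions made for this sketch; real sparse-virtual-texture implementations are considerably more involved.

```python
class SurfelAllocator:
    """Toy allocator that hands out unique surfel identifications and records
    which texture page and slot each surfel was assigned to, so individual
    surfels can be allocated, addressed, and assigned (illustrative only)."""

    def __init__(self, page_size=256):
        self._next_id = 0
        self._page_size = page_size
        self._assignments = {}          # surfel id -> (page, slot)

    def allocate(self):
        surfel_id = self._next_id
        self._next_id += 1
        page, slot = divmod(surfel_id, self._page_size)
        self._assignments[surfel_id] = (page, slot)
        return surfel_id

    def address(self, surfel_id):
        return self._assignments[surfel_id]

alloc = SurfelAllocator()
ids = [alloc.allocate() for _ in range(3)]
print(ids, [alloc.address(i) for i in ids])   # [0, 1, 2] on page 0
```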
Though depicted as single alphanumeric characters for illustrative purposes inFIG.2E, such identifications are not meant to be so limiting. Furthermore, in some examples, the computing device(s) can add a plurality of brush-stroke-like decals on each surface of an object to be rendered in the simulated environment112. In at least some examples, various decals may be applied for various regions and classifications of structures (e.g., photorealistic dirt and grime, graffiti, garbage, etc. may be applied to, for example, building façades in an alleyway) so as to modify any procedural based texturing (e.g., applying a pattern of textures over a surface given an associated classification). FIG.2Fillustrates a non-limiting example of the resulting simulated environment134, which can be used for enhancing training, testing, and/or validating systems (e.g., one or more components of an AI stack) onboard an autonomous vehicle, as described herein. That is, in some examples, the resulting simulated environment134can be used for training, testing, and/or validating algorithm(s) used by onboard systems of an autonomous vehicle, which can be used by the autonomous vehicle to control the autonomous vehicle. FIG.3is a block diagram illustrating an example system300for procedurally rendering a simulated environment. In at least one example, a vehicle302can include one or more vehicle computing devices304, one or more sensor systems306, one or more emitters308, one or more communication connections310, at least one direct connection312, and one or more drive systems314. For the purpose of illustration, the vehicle302can be an autonomous vehicle configured to operate according to a Level 5 classification issued by the U.S. National Highway Traffic Safety Administration, which describes a vehicle capable of performing all safety-critical functions for the entire trip, with the driver (or occupant) not being expected to control the vehicle at any time. In such an example, since the vehicle302can be configured to control all functions from start to stop, including all parking functions, it can be unoccupied. This is merely an example, and the systems and methods described herein can be incorporated into any ground-borne, airborne, or waterborne vehicle, including those ranging from vehicles that need to be manually controlled by a driver at all times, to those that are partially or fully autonomously controlled. That is, in the illustrated example, the vehicle302is an autonomous vehicle; however, the vehicle302could be any other type of vehicle. In at least one example, the vehicle302can be a data collection device (e.g., of the data collection device(s)102). In an additional or alternative example, the one or more components of the AI stack described above can be associated with the vehicle302. That is, the simulated environment described herein can be used to train, test, and/or validate one or more of the components described below with reference to vehicle302. The vehicle computing device(s)304can include processor(s)316and memory318communicatively coupled with the processor(s)316. In the illustrated example, the memory318of the vehicle computing device(s)304stores a localization system320, a perception system322, a prediction system324, a planning system326, and one or more system controllers328. Additionally, the memory318can include a storage230, which can store map(s), model(s), etc.
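One pass of the onboard stack just enumerated (localization, then perception, then prediction, then planning) can be sketched as a simple data flow. The callables and field names in this Python snippet are placeholders standing in for systems320-326; they are not the actual interfaces of those systems.

```python
def run_stack_once(sensor_data, localization, perception, prediction, planning):
    """One illustrative pass through the onboard stack: localization produces a
    pose, perception produces detected objects, prediction produces forecasts,
    and planning turns all of it into a trajectory."""
    pose = localization(sensor_data)
    objects = perception(sensor_data)
    forecasts = prediction(sensor_data, objects)
    trajectory = planning(sensor_data, pose, objects, forecasts)
    return trajectory

# Example with trivial stand-in components.
trajectory = run_stack_once(
    sensor_data={"lidar": [], "camera": []},
    localization=lambda s: {"x": 0.0, "y": 0.0, "yaw": 0.0},
    perception=lambda s: [{"class": "pedestrian", "confidence": 0.9}],
    prediction=lambda s, objs: [{"object": o, "path": []} for o in objs],
    planning=lambda s, pose, objs, preds: ["waypoint_0", "waypoint_1"],
)
print(trajectory)
```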
A map can be any number of data structures modeled in two dimensions, three dimensions, or N dimensions that are capable of providing information about an environment, such as, but not limited to, topologies (such as intersections), streets, mountain ranges, roads, terrain, and the environment in general. Maps can be associated with real environments or simulated environments. Model(s) can include machine-trained models, as described below. In at least one example, the localization system320can determine a pose (e.g., a position and an orientation) of the vehicle302in relation to a local and/or global map based at least in part on sensor data received from the sensor system(s)306and/or map data associated with a map (e.g., of the map(s)). In at least one example, the localization system320can include, or be associated with a calibration system that is capable of performing operations for calibrating (determining various intrinsic and extrinsic parameters associated with any one or more of the sensor system(s)306), localizing, and mapping substantially simultaneously. Additional details associated with such a system are described in U.S. patent application Ser. No. 15/675,487, filed on Aug. 11, 2017, which is related to U.S. patent application Ser. No. 15/674,853, filed on Aug. 11, 2017, the entire contents of both of which are incorporated by reference herein. As described above, the localization system320can output road network data and/or a road mesh based on the sensor data received by the sensor system(s)306. In at least one example, the perception system322can perform object detection, segmentation, and/or classification based at least in part on sensor data received from the sensor system(s)306. In at least one example, the perception system322can receive raw sensor data (e.g., from the sensor system(s)306). In other examples, the perception system322can receive processed sensor data (e.g., from the sensor system(s)306). For instance, in at least one example, the perception system322can receive data from a vision system that receives and processes camera data (e.g., images). In at least one example, the vision system can utilize one or more image processing algorithms to perform object detection, segmentation, and/or classification with respect to object(s) identified in an image. In some examples, the vision system can associate a bounding box (or other semantic information, such as an instance segmentation) with an identified object and can associate a confidence score associated with a classification of the identified object. In some examples, objects, when rendered via a display, can be colored based on their perceived class. In at least other examples, similar processes (detection, classification, segmentation, etc.) may be performed by the perception system322for one or more other modalities (e.g., LIDAR, RADAR, ToF sensors, etc.). The prediction system324can access sensor data from the sensor system(s)306, map data associated with a map (e.g., of the map(s) which can be in storage230), and/or perception data output from the perception system322(e.g., processed sensor data), and can output predictions associated with one or more objects within the environment of the vehicle302. In at least one example, the planning system326can determine routes and/or trajectories to use to control the vehicle302based at least in part on sensor data received from the sensor system(s)306and/or any determinations made by the perception system322. 
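A minimal picture of the perception output described above, assuming a detection carries a bounding box, a semantic classification, and a confidence score, and that rendered objects can be colored by perceived class. The record layout and color table below are invented for illustration, not the data structures of perception system322.

```python
from dataclasses import dataclass

# Hypothetical colors for rendering objects by perceived class.
CLASS_COLORS = {"vehicle": (0, 0, 255), "pedestrian": (255, 0, 0), "bicycle": (0, 255, 0)}

@dataclass
class Detection:
    """Minimal detection record: a 2D bounding box, a semantic class, and the
    confidence score associated with that classification."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    label: str
    confidence: float

    def render_color(self):
        return CLASS_COLORS.get(self.label, (128, 128, 128))

det = Detection(100.0, 80.0, 180.0, 220.0, "pedestrian", 0.93)
print(det.render_color(), det.confidence)
```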
Additional details of localizer systems, perception systems, prediction systems, and/or planning systems that are usable can be found in U.S. Pat. No. 9,612,123, issued on Apr. 4, 2017, and U.S. patent application Ser. No. 15/632,208, filed Jun. 23, 2017, the entire contents of both of which are incorporated by reference herein. In some examples (e.g., where the vehicle302is not an autonomous vehicle), one or more of the aforementioned systems and/or components can be omitted from the vehicle302. While the systems described above are illustrated as "onboard" the vehicle302, in other implementations, the systems can be remotely located and/or accessible to the vehicle302. In at least one example, the localization system320, the perception system322, the prediction system324, and/or the planning system326can process sensor data, as described above, and can send their respective outputs over network(s)332, to computing device(s)334. In at least one example, the localization system320, the perception system322, the prediction system324, and/or the planning system326can send their respective outputs to the computing device(s)334at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc. In at least one example, the vehicle computing device(s)304can include one or more system controllers328, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle302. These system controller(s)328can communicate with and/or control corresponding systems of the drive system(s)314and/or other components of the vehicle302. In at least one example, the sensor system(s)306, which can correspond to sensor system(s)104, can include LIDAR sensors, RADAR sensors, ToF sensors, ultrasonic transducers, SONAR sensors, location sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., inertial measurement units, accelerometers, magnetometers, gyroscopes, etc.), cameras (e.g., RGB, IR, intensity, depth, etc.), microphones, wheel encoders, environment sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), etc. The sensor system(s)306can include multiple instances of each of these or other types of sensors. For instance, the LIDAR sensors can include individual LIDAR sensors located at the corners, front, back, sides, and/or top of the vehicle302. As another example, the camera sensors can include multiple cameras disposed at various locations about the exterior and/or interior of the vehicle302. The sensor system(s)306can provide input to the vehicle computing device(s)304. In some examples, the sensor system(s)306can preprocess at least some of the sensor data prior to sending the sensor data to the vehicle computing device(s)304. In at least one example, the sensor system(s)306can send sensor data, via the network(s)332, to the computing device(s)334at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc. The vehicle302can also include one or more emitters308for emitting light and/or sound, as described above. The emitter(s)308in this example include interior audio and visual emitters to communicate with passengers of the vehicle302. By way of example and not limitation, interior emitters can include speakers, lights, signs, display screens, touch screens, haptic emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seatbelt tensioners, seat positioners, headrest positioners, etc.), and the like.
The emitter(s)308in this example also include exterior emitters. By way of example and not limitation, the exterior emitters in this example include light emitters (e.g., indicator lights, signs, light arrays, etc.) to visually communicate with pedestrians, other drivers, other nearby vehicles, etc., one or more audio emitters (e.g., speakers, speaker arrays, horns, etc.) to audibly communicate with pedestrians, other drivers, other nearby vehicles, etc., etc. In at least one example, the emitter(s)308can be disposed at various locations about the exterior and/or interior of the vehicle302. The vehicle302can also include communication connection(s)310that enable communication between the vehicle302and other local or remote computing device(s). For instance, the communication connection(s)310can facilitate communication with other local computing device(s) on the vehicle302and/or the drive system(s)314. Also, the communication connection(s)310can allow the vehicle to communicate with other nearby computing device(s) (e.g., other nearby vehicles, traffic signals, etc.). The communications connection(s)310also enable the vehicle302to communicate with a remote teleoperations computing device or other remote services. The communications connection(s)310can include physical and/or logical interfaces for connecting the vehicle computing device(s)304to another computing device or a network, such as network(s)332. For example, the communications connection(s)310can enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as BLUETOOTH®, or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s). The direct connection312can directly connect the drive system(s)314and other components of the vehicle302. In at least one example, the vehicle302can include drive system(s)314. In some examples, the vehicle302can have a single drive system314. In at least one example, if the vehicle302has multiple drive systems314, individual drive systems314can be positioned on opposite ends of the vehicle302(e.g., the front and the rear, etc.). In at least one example, the drive system(s)314can include sensor system(s) to detect conditions of the drive system(s)314and/or the surroundings of the vehicle302. By way of example and not limitation, the sensor system(s) can include wheel encoder(s) (e.g., rotary encoders) to sense rotation of the wheels of the drive module, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) to measure position and acceleration of the drive module, cameras or other image sensors, ultrasonic sensors to acoustically detect objects in the surroundings of the drive module, LIDAR sensors, RADAR sensors, etc. Some sensors, such as the wheel encoder(s), can be unique to the drive system(s)314. In some cases, the sensor system(s) on the drive system(s)314can overlap or supplement corresponding systems of the vehicle302(e.g., sensor system(s)306). 
The drive system(s)314can include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle302, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which can be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components, a stability control system for distributing brake forces to mitigate loss of traction and maintain control, an HVAC system, lighting (e.g., lighting such as head/tail lights to illuminate an exterior surrounding of the vehicle), and one or more other systems (e.g., cooling system, safety systems, onboard charging system, other electrical components such as a DC/DC converter, a high voltage junction, a high voltage cable, charging system, charge port, etc.). Additionally, the drive system(s)314can include a drive module controller which can receive and preprocess data from the sensor system(s) and control operation of the various vehicle systems. In some examples, the drive module controller can include processor(s) and memory communicatively coupled with the processor(s). The memory can store one or more modules to perform various functionalities of the drive system(s)314. Furthermore, the drive system(s)314also include communication connection(s) that enable communication by the respective drive module with other local or remote computing device(s). In some examples, the vehicle computing device(s)304, sensor system(s)306, emitter(s)308, and the communication connection(s)310can be implemented outside of an actual vehicle, for instance, as a simulated vehicle or as simulated systems, for use in "traversing" a simulated environment. That is, the vehicle computing device(s)304, sensor system(s)306, emitter(s)308, and the communication connection(s)310can be used as a simulated autonomous vehicle for simulation purposes as described above. As described above, the vehicle302can send sensor data to the computing device(s)334, via the network(s)332. That is, in some examples, the vehicle302can be a data collection device102as described above with reference toFIG.1. For the purpose of this discussion, the computing device(s) described above with reference toFIG.1can refer to the vehicle computing device(s)304and/or the computing device(s)334. In some examples, the vehicle302can send raw sensor data to the computing device(s)334. In other examples, the vehicle302can send processed sensor data and/or representations of sensor data to the computing device(s)334(e.g., data output from the localization system320, the perception system322, the prediction system324, and/or the planning system326). In some examples, the vehicle302can send sensor data to the computing device(s)334at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc. The computing device(s)334can receive the sensor data (raw or processed) from the vehicle302and/or one or more data collection devices336(which can include other vehicles like vehicle302), as well as data from one or more third-party sources and/or systems338. In at least one example, the computing device(s)334can include processor(s)340and memory342communicatively coupled with the processor(s)340.
In the illustrated example, the memory342of the computing device(s)334stores a simulation system344, a training system346, an evaluating system348, a map(s) storage350(e.g., storing one or more maps, the road network data, the road mesh, etc.), a training data storage352(e.g., storing training data accessible to the training system346), a model(s) storage354(e.g., models output by the training system346), a stored object footprint(s) data storage356, and a surface detail data storage358. In some examples, one or more of the systems and/or storage repositories can be associated with the vehicle302instead of, or in addition to, being associated with the memory342of the computing device(s)334. The simulation system344can generate simulated environments. In at least one example, the simulation system344can generate simulated environments via procedural generation (e.g., creating data algorithmically), as described above with reference toFIGS.1-2G. Additional details are also described below. In at least one example, the simulation system344can access the stored object footprint(s) data storage356and/or the surface detail data storage358to procedurally render simulated environments. In an example, the stored object footprint(s) data storage356can correspond to the stored object footprint(s) data storage118described above with reference toFIG.1and the surface detail data storage358can correspond to the surface detail data storage130, as described above with reference toFIG.1. In some examples, the stored object footprint(s) data storage356and/or the surface detail data storage358can be stored in the memory342, as illustrated inFIG.3. In additional or alternative examples, the stored object footprint(s) data storage356and/or the surface detail data storage358can be stored remotely and accessible to the computing device(s)334and/or data stored therein can be provided to the computing device(s)334from a third-party source and/or system338. In some examples, stored object data and/or surface texture data can be generated in near-real time. In at least one example, as described below, the simulation system344can procedurally render objects in simulated environments. That is, in some examples, the data described above (e.g., 3D tiles, road network data, supplemental data, etc.) still has deficiencies, when compared to a corresponding real environment. In such examples, the simulation system344can utilize various heuristics to render objects in the simulated environment. Non-limiting examples of such objects include lamp posts and/or poles for connecting the lamp post to a traffic signal, parking signs (e.g., which can be rendered based on parking lanes determined from the road network data), parking meters (e.g., which can be rendered based on parking lanes determined from the road network data), stop signs (e.g., which can be rendered based on stop lines determined from the road network data), etc. Additional details are described below with reference toFIG.6. In at least one example, the evaluating system348can evaluate how realistic a simulated environment, or a portion thereof, is relative to a corresponding real environment using the perception system322(or another system that inputs data into the perception system322(e.g., vision system, LIDAR system, etc.)). In some examples, two environments (e.g., real vs. 
simulated) can look different to a human, but can be perceived as the same to, for example, a robotic system (e.g., an autonomous vehicle) as defined herein (e.g., based on activations of a neural network). In at least one example, the evaluating system348can analyze the data using a machine-trained model to evaluate realism of a simulated environment. For instance, in at least one example, the evaluating system348can analyze a first intermediate output of a neural network associated with a system (e.g., a vision system, LIDAR system, etc.) (e.g., based on a simulated environment) with a second intermediate output of the neural network (e.g., based on a corresponding real environment), and can determine a similarity metric (e.g., a difference) that can be representative of how similar the neural network activations associated with the simulated environment are when compared to neural network activations associated with the corresponding real environment. In at least some examples, such activations can be compared by discretizing a region of an input space into corresponding grids and building a histogram of activations in the associated grids for input data and comparison data. Once determined, the histograms may be analyzed, for example, by a support vector machine (SVM), wherein a distance (e.g., a statistical distance) is used to determine how similar the two data sets are. In some examples, different sensor data types can be associated with different parameters of interest (e.g., different parameters that are tuned to improve the realism of a simulated environment). For instance, with the vision system, the parameters of interest can be brightness, exposure, etc., and with the LIDAR system, the parameters of interest can be angles, distance, intensity, sensor modalities, etc. In at least one example, the training system346can train a data model to learn which parameters matter for the perception system322(e.g., what parameters matter such that the perception system322can perceive the simulated environment as though it is perceiving the real environment). That is, in at least one example, the training system346can train a data model to evaluate realism based on one or more identified parameters. Additional details associated with training and/or using such model(s) are described in U.S. application Ser. No. 16/163,435, filed concurrently herewith on Oct. 17, 2018, the entire contents of which are incorporated by reference herein. As described above, in at least one example, the evaluating system348can analyze data using a machine-trained model (e.g., as described above) to evaluate realism of a simulated environment. That is, the evaluating system348can analyze simulated environments to determine whether such simulated environments activate a neural network similarly to how corresponding real environments activate the neural network. In at least one example, the evaluating system348can utilize the machine-trained model to compare a first intermediate output associated with a real environment and a second intermediate output associated with a corresponding simulated environment to determine a similarity metric (e.g., a difference, a distance, etc.) representative of the similarity between the first intermediate output and the second intermediate output. In some examples, the first intermediate output and the second intermediate output can be derived from images, portions of data (e.g., that correspond to individual objects associated with the data), etc. 
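As a rough illustration of the grid-histogram comparison described above, the sketch below discretizes an intermediate activation map into grid cells, builds a histogram per cell, and reports a simple distance as the similarity metric. The function names, grid size, and use of an L1 distance are illustrative assumptions; the patent also mentions analyzing the histograms with an SVM, which is not shown here.

```python
import numpy as np

def activation_histograms(activations, grid=(4, 4), bins=16, value_range=(0.0, 1.0)):
    """Split an H x W (x C) activation map into grid cells and build one
    normalized histogram of activation values per cell (assumed layout)."""
    h, w = activations.shape[:2]
    gh, gw = grid
    hists = []
    for i in range(gh):
        for j in range(gw):
            cell = activations[i * h // gh:(i + 1) * h // gh,
                               j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(cell, bins=bins, range=value_range)
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)

def similarity_metric(real_acts, sim_acts, **kwargs):
    """Distance between per-cell histograms of real vs. simulated activations;
    smaller values mean the network responds more similarly to the two inputs."""
    h_real = activation_histograms(real_acts, **kwargs)
    h_sim = activation_histograms(sim_acts, **kwargs)
    return float(np.abs(h_real - h_sim).sum())

# Toy usage with random arrays standing in for intermediate network outputs.
rng = np.random.default_rng(0)
real, sim = rng.random((64, 64, 8)), rng.random((64, 64, 8))
print(similarity_metric(real, sim))
```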
For instance, in at least one example, the first intermediate output can be associated with a first perceived object in an image associated with the real environment and the second intermediate output can be associated with a second perceived object in an image associated with the simulated environment. If the similarity metric (e.g., the difference, the distance, etc.) does not meet a threshold (e.g., the first intermediate output and the second intermediate output are similar), the evaluating system348can determine that the simulated environment realistically represents the real environment (e.g., neural network activations are similar). However, if the similarity metric (e.g., the difference, the distance, etc.) meets or exceeds the threshold, the evaluating system348can tune one or more parameters to observe changes to the one or more metrics. For instance, the evaluating system348can tune parameters such as brightness, exposure, etc. for improving photorealism. As described above, simulated environments can be useful for enhancing training, testing, and/or validating systems (e.g., one or more components of an AI stack) onboard an autonomous vehicle, such as vehicle302. In at least one example, simulated environments can be useful for training data models where training data from real environments is insufficient (e.g., as is the case with rare objects, rare scenarios, etc.). In such examples, a resulting data model can be provisioned to, or accessible by, the vehicle302, and the vehicle302can utilize the data model for classifying objects in real-time (e.g., while driving or otherwise operating in the real environment). That is, the perception system322can utilize the data model (trained based on simulated data associated with a simulated environment) onboard in near real-time to classify objects. As a non-limiting example, training data from real environments is insufficient for training the vehicle302to recognize rare events/objects (e.g., traffic lights types that are not frequently seen). In at least one example, by comparing simulated environments with real environments, the data model can learn that particular parameters matter for training a traffic light classifier. For instance, such parameters can include bulb discoloration, shading, lens distortion, dirt on the light, a burnt-out filament, variation in brightness, bulb rotation, bulb intensity, etc. Based on identifying the parameters, the training system346can tune simulated environments associated with traffic lights and can train a traffic light classifier based on the tuned simulated environments. Such a classifier can be provisioned to, or accessible by, the vehicle302, and the vehicle302can utilize the data model for classifying traffic lights in real-time. For instance, the perception system322can utilize the classifier (trained based on simulated data used to generate a simulated environment) onboard in near real-time to classify traffic lights. That is, as described above, in at least one example, a classifier can be trained on simulated data and used for evaluating real data. In some examples, the classifier can be trained on real data and validated using simulated data. In such examples, identified discrepancies can be used to improve the classifier. In at least some instances, such rare examples may be identified by training, for example, a traffic light detector based on simulated image data, running the detector on real data, and determining where detections were missed. 
Similarly, determining that simulated parameters are not correct may comprise training an algorithm (e.g. the same detector as above) on real data, running such a detector on simulated data, and detecting missed objects. Furthermore, simulated environments can be useful for validating and/or updating a localization algorithm used by the localization system320. For instance, in real environments, GPS sensors experience positional drifts and may, as a result, accumulate error. Accordingly, to validate a localization algorithm that is used for localizing the vehicle302, the evaluating system348can use a simulated environment, where the pose of the vehicle302is known at various times (including at all times) and evaluate the sensor data associated with a corresponding real environment to validate the localization algorithm (e.g., by relying on simulated poses as position and/or orientation ground truth). In such an example, the sensor system(s)306can generate sensor data associated with the simulated environment and the sensor data can be analyzed by the perception system322. An output of the perception system322(e.g., associated with a position in a real environment) can be validated in view of the sensor data associated with the corresponding position in the simulated environment. That is, the sensor data associated with a position in a simulated environment can serve as the ground truth for the corresponding position in the real environment. As an example, LIDAR data recorded in association with a simulated environment (e.g., where the pose of the vehicle302is known) can be compared to LIDAR data recorded in association with a corresponding position in a real environment and the localization algorithm can be updated as appropriate. Furthermore, simulated environments can be useful for validating RADAR or other sensors of the sensor system(s)306. In some examples, simulated environments can offer ground truth data for calibrating sensors (e.g., of the sensor system(s)106). Other examples include, but are not limited to validating rolling shutter in simulation, calibration (e.g., of one or more of intrinsics or extrinsics) of various sensors, and the like. As would be appreciated, the techniques described herein may be used in validation, calibration, training, etc. for various other systems, subsystems, etc. The processor(s)316of the vehicle302and the processor(s)340of the computing device(s)334can be any suitable processor capable of executing instructions to process data and perform operations as described herein. By way of example and not limitation, the processor(s)316and340can comprise one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to transform that electronic data into other electronic data that can be stored in registers and/or memory. In some examples, associated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices can also be considered processors in so far as they are configured to implement encoded instructions. Memory318and342are examples of non-transitory computer-readable media. Memory318and342can store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. 
In various implementations, the memory can be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein can include many other logical, programmatic, and physical components, of which those shown in the accompanying figures are merely examples that are related to the discussion herein. It should be noted that whileFIG.3is illustrated as a distributed system, in alternative examples, components of the vehicle302can be associated with the computing device(s)334and/or components of the computing device(s)334can be associated with the vehicle302. That is, the vehicle302can perform one or more of the functions associated with the computing device(s)334, and vice versa. FIGS.4-6are flowcharts showing example methods involving techniques as described herein. The methods illustrated inFIGS.4-6are described with reference to the system300shown inFIG.3for convenience and ease of understanding. However, the methods illustrated inFIGS.4-6are not limited to being performed using the system300. Moreover, the system300described herein is not limited to performing the methods illustrated inFIGS.4-6. The methods400-600are illustrated as collections of blocks in logical flow graphs, which represent sequences of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by processor(s), perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes. In some embodiments, one or more blocks of the process can be omitted entirely. Moreover, the methods400-600can be combined in whole or in part with each other or with other methods. FIG.4illustrates an example process400for generating a simulated environment and/or using the simulated environment for training, testing, validation, etc. Block402illustrates accessing road network data. The vehicle computing device(s)304can receive, generate, and/or otherwise access road network data. In some examples, the road network data can be based at least in part on the sensor data. In at least one example, the road network data may, for example, be a 2D or 3D representation indicating one or more of a driving lane element, a bike lane element, a parking lane element, a crosswalk element, an intersection element, a lane divider element, a traffic light element, a stop sign element, a stop line element, a yield sign element, a yield line element, a driveway element, a speed bump element, jay walking regions (e.g., a virtual crosswalk), trajectory waypoints (e.g., known trajectories), passenger pickup points, a sign location element, a geofence element, and the like. In some examples, the road network data can be encoded with information indicating an attribute of a particular portion of the road network data, as described above. 
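By way of illustration only, encoded road network elements of the kind listed above might be represented with a structure such as the following; the field names and attribute keys are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RoadNetworkElement:
    """One element of the road network (hypothetical schema)."""
    element_type: str                        # e.g., "driving_lane", "crosswalk", "stop_line"
    polygon: List[Tuple[float, float]]       # 2D outline in map coordinates
    attributes: Dict[str, str] = field(default_factory=dict)  # encoded attribute information

def elements_of_type(elements: List[RoadNetworkElement], element_type: str):
    """Query elements of one type, e.g., parking lanes later used to place
    parking signs and meters (see the FIG. 6 discussion below)."""
    return [e for e in elements if e.element_type == element_type]

# A parking lane element carrying an encoded attribute.
parking_lane = RoadNetworkElement(
    element_type="parking_lane",
    polygon=[(0.0, 0.0), (30.0, 0.0), (30.0, 2.5), (0.0, 2.5)],
    attributes={"restriction": "no parking 7-9am"},
)
print(len(elements_of_type([parking_lane], "parking_lane")))
```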
In at least one example, the simulation system344can access the road network data. As described above, in some examples, the road network data may be stored in the map(s) data storage350. Block404illustrates accessing a road mesh. In at least one example, the vehicle computing device(s)304can generate a road mesh110. In at least one example, the vehicle computing device(s)304can receive LIDAR data, which can be used to generate a 3D point cloud representative of the real environment. In at least one example, the vehicle computing device(s)304can generate the road mesh110based on the 3D point cloud. The road mesh110can comprise 3D tiles, as described above. The simulation system344can access 3D tiles, which, in some examples, can be stored in the map(s) storage350. Block406illustrates merging the road network data and the road mesh to generate a simulated environment. The simulation system344can associate the road network data with the road mesh to generate a simulated environment. Such an association may comprise, in at least some instances, projecting the road network data (as 2D or 3D data) into the road mesh. This may be done by determining a middle of a road segment and substantially aligning the segment with corresponding regions from the road mesh upon projecting. Any outliers (e.g., surfaces which do not align with the road mesh after projection (e.g., a sidewalk projected into a tree due to errors in alignment or generation of the maps)) may be determined and smoothed as necessary. The resulting simulated environment can include accurate heights and surface details (e.g., in view of a corresponding real environment). Block408illustrates accessing supplemental data from an alternate data source. In some examples, there may be holes in the simulated environment (e.g., incomplete data), for instance, due to occlusions (e.g., parked cars, tight alleyways, etc.) when constructing the road mesh. In at least one example, the simulation system344can access a second, alternate source of data to supplement the existing simulated environment (e.g., and fill in the holes). For instance, in at least one example, the simulation system344can access data from third-party source(s) and/or system(s)338and can leverage such data to supplement the existing simulated environment by merging the supplemental data into the simulated environment, as illustrated in block410. In at least one example, the supplemental data can include USGS DEM data associated with the real environment, tree map data associated with the real environment, color imagery data associated with the real environment, map data associated with the environment, etc., as described above. In some examples, the supplemental data does not naturally align with the 3D tiles used for generating the simulated environments. Details associated with aligning the road mesh and the supplemental data are described below with reference toFIG.5. Block412illustrates rendering objects in the simulated environment. The simulation system344can leverage data associated with characteristics of objects in the environment to further supplement the simulated environment. In at least one example, the simulation system344can access data representative of footprints of buildings or other stationary objects, which can be stored in the stored object footprint(s) data storage356. In some examples, such footprints can be associated with annotations regarding height, classification (e.g., residential, commercial, etc.), rule set(s) (e.g., for texturing), etc. 
The simulation system344can utilize the footprints and associated annotations as a guide mesh for generating façade pieces. In at least one example, the building footprints, heights, texturing, and classification may be randomly defined. In some examples, a predefined set of textures, heights, or footprints may be defined. A resulting set of random permutations of one or more of texture, height, footprint, etc. may, in turn, be used as the set of building with which to populate the simulated environment. In those examples where data associated with such footprints does not have an indication of positioning within a map (e.g., where footprints are randomly determined, retrieved from a data store of footprints agnostic to a map, etc.), such footprints may be aligned with, or otherwise positioned, such that at least one façade aligns with a street in a road network, is spaced according to one or more rules (e.g., placed a certain distance to a street and, based on the classification, a minimum or maximum distance from other buildings, is orientated in a particular orientation, etc.), and the like. When such rule sets are applied, plausible looking buildings can be generated automatically (e.g., without human modeling) to generate a simulated environment comprising buildings. Block414illustrates rendering surface detail associated with object(s) in the simulated environment. In some examples, the simulation system344can utilize texturing data to add surface detail to the simulated environment, for instance, during real-time rendering. Such texturing adds details (e.g., defects, patches, markings, etc.) to objects in the simulated environment to make such objects appear unique (without increasing artist workload significantly). In at least one example, the simulation system344can utilize sparse virtual textures to render the simulated environment in a single draw which increases performances and reduces computational resources. In such an example, each surfel may be associated with unique data (such as an identification), such that the individual surfels may be allocated, addressed, and assigned. Furthermore, in some examples, the simulation system344can add a plurality of brush-stroke-like decals on each surface of an object to be rendered in the simulated environment. In at least some examples, various decals may be applied for various regions and classifications of structures (e.g., photorealistic dirt and grime, graffiti, garbage, etc. may be applied to, for example, building façades in an alleyway) so as to modify any procedural based texturing (e.g., applying a pattern of textures over a surface given an associated classification). In at least one example, the simulation system344can parameterize surfaces of a simulated environment to create unique texture coordinate spaces. In such an example, each surfel can be associated with a set of pixels that individually reference a texture, which can be stored in the surface detail data storage358. Accordingly, at runtime, the simulation system344can utilize a single drawcall (e.g., to a shading system) that renders the detail (e.g., texture) associated with the portion of the simulated environment that can be seen via a viewport of a simulation computing device. In some examples, a simulated environment can be associated with more than one texture (e.g., associated with different properties of a material) and each texture can be rendered via an individual drawcall at runtime. 
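The texture-space bookkeeping behind this sparse-virtual-texture idea might be sketched roughly as follows. This is only illustrative: the tile and atlas sizes are assumptions, and the actual single-draw rendering would be performed by a shading system rather than by this Python bookkeeping.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

TILE_SIZE = 128      # texels per surfel tile (assumed)
ATLAS_TILES = 256    # the virtual texture spans ATLAS_TILES x ATLAS_TILES tiles (assumed)

@dataclass
class SurfelTile:
    surfel_id: int
    row_col: Tuple[int, int]   # (row, column) of the tile in the virtual texture

class SparseVirtualTexture:
    """Allocates one unique tile per surfel so each surface patch can be
    addressed and updated individually, while a renderer samples everything
    through a single logical texture."""

    def __init__(self) -> None:
        self._next = 0
        self._tiles: Dict[int, SurfelTile] = {}

    def allocate(self, surfel_id: int) -> SurfelTile:
        if surfel_id in self._tiles:
            return self._tiles[surfel_id]
        if self._next >= ATLAS_TILES * ATLAS_TILES:
            raise RuntimeError("virtual texture exhausted")
        tile = SurfelTile(surfel_id, divmod(self._next, ATLAS_TILES))
        self._tiles[surfel_id] = tile
        self._next += 1
        return tile

    def texel_origin(self, surfel_id: int) -> Tuple[int, int]:
        """Top-left texel of the surfel's tile in virtual-texture space."""
        row, col = self._tiles[surfel_id].row_col
        return col * TILE_SIZE, row * TILE_SIZE

svt = SparseVirtualTexture()
svt.allocate(surfel_id=42)
print(svt.texel_origin(42))
```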
Such techniques reduce workloads on the processor(s)340, thereby offering computational efficiencies at runtime. In in least one example, if properties of the texture are changed, such changes can be stored in association with the texture in the surface detail data storage358and, at runtime, the changes can be reflected in the rendered texture. That is, a script can update the texture data associated with the texture in the surface detail data storage358, thereby affecting an update to the texture as rendered in the simulated environment at runtime, without having to revisit all of the art. Block416illustrates outputting the simulated environment. In at least one example, the simulation system344can output the simulated environment. As described above, simulated environments can be useful for enhancing training, testing, and/or validating systems (e.g., one or more components of an AI stack) onboard an autonomous vehicle, such as vehicle302. In at least one example, simulated environments can be useful for training data models where training data from real environments is insufficient (e.g., as is the case with rare objects, rare scenarios, etc.). In such examples, a resulting data model can be provisioned to, or accessible by, the vehicle302, and the vehicle302can utilize the data model for classifying objects in real-time (e.g., while driving or otherwise operating in the real environment). That is, the perception system322can utilize the data model (trained based on simulated data associated with a simulated environment) onboard in near real-time to classify objects. In at least some examples, such a data model trained using data associated with the simulated environment may output objects in a real environment based on real sensor data. A trajectory (and corresponding control) may then be determined for an autonomous vehicle to safely navigate the real environment, based at least in part on output of such data models trained on simulated data. As described above, in at least one example, the evaluating system348can analyze perception data using a machine-trained model (e.g., as described above) to evaluate realism of a simulated environment. In at least one example, the evaluating system348can utilize the machine-trained model to compare a first intermediate output (e.g., associated with a layer of a neural network) associated with a real environment and a second intermediate output (e.g., associated with the same layer of the neural network) associated with a corresponding simulated environment to determine a similarity metric (e.g., a difference, a distance, etc.) representative of the similarity between the first intermediate output and the second intermediate output. In some examples, the first intermediate output and the second intermediate output can correspond to images, portions of data (e.g., that correspond to individual objects associated with the data), etc. If the similarity metric (e.g., the difference, the distance, etc.) does not meet a threshold (e.g., the first intermediate output and the second intermediate output are similar), the evaluating system348can determine that the simulated environment realistically represents the real environment and can output the simulated environment, as described above. However, if the similarity metric (e.g., the difference, the distance, etc.) meets or exceeds the threshold, the evaluating system348can tune one or more parameters to observe changes to the one or more metrics. 
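A minimal sketch of this evaluate-and-tune loop, assuming a similarity metric such as the histogram distance sketched earlier and hypothetical render_sim and compute_activations callables; the particular parameters (brightness, exposure) and the tuning step shown are placeholders, not the disclosed method.

```python
def tune_simulation(render_sim, compute_activations, real_activations,
                    similarity_metric, threshold=0.1, max_iters=20):
    """Adjust rendering parameters until the simulated environment activates
    the network similarly enough to the real environment (illustrative only)."""
    params = {"brightness": 1.0, "exposure": 1.0}   # assumed parameters of interest
    metric = float("inf")
    for _ in range(max_iters):
        sim_activations = compute_activations(render_sim(params))
        metric = similarity_metric(real_activations, sim_activations)
        if metric < threshold:          # similar enough: keep these parameters
            break
        # Placeholder tuning step; a real system might search or learn these values.
        params = {k: v * 0.95 for k, v in params.items()}
    return params, metric

# Toy usage with stand-in callables.
result = tune_simulation(
    render_sim=lambda p: p["brightness"],
    compute_activations=lambda frame: frame,
    real_activations=0.8,
    similarity_metric=lambda a, b: abs(a - b),
)
print(result)
```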
For instance, the evaluating system348can tune parameters such as brightness, exposure, etc. for improving photorealism. When the similarity metric (e.g., the difference, the distance, etc.) is below the threshold (e.g., the first intermediate output and the second intermediate output are similar), the evaluating system348can determine that the simulated environment realistically represents the real environment. FIG.5illustrates an example process500for merging supplemental data with the simulated environment. Block502illustrates aligning supplemental data and a road mesh associated with a simulated environment. As described above, in some examples, the simulation system344can supplement the road mesh (e.g., 3D tiles) with the supplemental data. Initially, the simulation system344can coarsely align the supplemental data and the road mesh. The supplemental data may not naturally align with the 3D tiles of the road mesh used for generating the simulated environments. Block504illustrates determining an error between the supplemental data and the road mesh associated with a simulated environment. In at least one example, the simulation system344can periodically measure error between the supplemental data and the road mesh. In at least one example, such measurements can be taken in areas where the supplemental data is expected to be accurate, such as, for example, large flat spaces (e.g., parking lots, large intersections, and the like). Such a space can correspond to a region of a real environment that meets or exceeds a threshold area (e.g., “large”) and/or is associated with a maximum change in elevation across the region that is below a threshold (e.g., “flat”). In at least one example, such measurements can be taken in an area that is visible in the road mesh and the supplemental data and/or unobstructed and/or otherwise devoid of objects such as trees, buildings, etc. (e.g., for an aerial scan from USGS, etc.). In at least one example, the error can be measured from a designated position within the road mesh. In some examples, the designated position can be a centerline, which can be derived from the road network data and/or the road mesh. For instance, a center of the driving lines, as indicted in the road mesh, can be designated as the centerline from which the error can be measured. In at least some examples, the error can be measured from a center point determined of such a region. That is, the road mesh can be determined to be the authority and the error can be measured from the authority. As a non-limiting example, an error may be associated with differences (e.g., Euclidian distances) between the road mesh and the supplemental data. In some examples, the error can be a height error, measuring a vertical distance between the road mesh and the supplemental data. The error can be a single measurement, an average of multiple measurements, a maximum over an area, a minimum over an area, total error, or another statistically significant measurement representative of the difference and/or distance between the supplemental data and the road mesh. Block506illustrates determining whether the error meets or exceeds a threshold. 
In at least one example, the simulation system344can compare the error to a threshold and, based on determining that the error does not meet or exceed the threshold, the simulation system344can perform blending techniques, such as, but not limited to global blending techniques, local blending techniques, linear blending techniques, smooth curve blending techniques (with a particular radius), etc., to blend the supplemental data with the road mesh, as illustrated in block508. In at least one example, such blending techniques can involve decay such that error matters less and less as the simulation system344moves away from the selected area. That is, at the centerline of the road mesh, the error can be corrected or mitigated as described below, but other portions of the simulated environment that are successively farther away from the centerline can be blended to correct or otherwise mitigate the error. Based on determining that the error meets or exceeds the threshold, the simulation system344can apply a deformation lattice to substantially align the supplemental data with the road mesh, as illustrated in block510, prior to blending. Adjustments to either data of the third-party source or map from the mapping system may be made to drive such an error in the region(s) used for determining error (e.g., large, flat regions), and propagated throughout the remaining regions of the simulated environment. In at least one example, the simulation system344can make adjustments to the supplemental data such to locally align the supplemental data to the road mesh. In an additional or alternate example, the simulation system344can make adjustments to the road mesh such to locally align the road mesh with the supplemental data. For instance, in at least one example, the simulation system344can apply barycentric weighting to reduce the error (or other interpolations in the data). As a result, the simulation system344can output a refined simulated environment that, based on the supplemental data, is more complete than the simulated environment resulting from integrating the road network data and the road mesh. That is, holes in the simulated environment can be filled thereby enhancing the simulated environment with the supplemental data. In some examples, flow may proceed first to block508from block506. At block508, deformations may be applied locally based on errors at large, flat, regions, such as intersections. An interpolation may be applied between errors from a first intersection (or region, etc.), to all neighboring intersections. In at least some examples, such interpolations may be linear, though all other interpolations are contemplated (e.g., polynomial, bicubic, etc.). For those regions of the road mesh that are away from the center points of such selected regions, barycentric coordinates (or some other bicubic, etc. interpolation) may be applied to adjust the road mesh to the supplemental data. In those examples in which flow first proceeds to block508, flow may then proceed to block510. In such examples, despite adjusting the road mesh to the supplemental data, additional errors may be found between the mesh and the supplemental data. In such examples, block510may comprise further deforming (or otherwise adjusting one or more of the mesh or supplemental data). 
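A rough sketch of the error measurement and decay-based blending discussed above, under a few assumptions: both data sources are exposed as simple height-lookup callables, the error is a single vertical difference taken at the center of a large flat region, and the blend weight follows a smoothstep-style S-curve that falls off with distance from that center.

```python
import math

def height_error(road_mesh_height, supplemental_height, center_xy):
    """Vertical difference between the two sources at a chosen large, flat region."""
    x, y = center_xy
    return supplemental_height(x, y) - road_mesh_height(x, y)

def blend_weight(distance, radius):
    """Smoothstep-style S-curve: 1.0 at the region center (the road mesh is the
    authority there), decaying to 0.0 beyond `radius`."""
    t = min(max(1.0 - distance / radius, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def corrected_supplemental_height(supplemental_height, error, center_xy, radius):
    """Return a height lookup with the measured error blended out locally."""
    cx, cy = center_xy
    def corrected(x, y):
        return supplemental_height(x, y) - blend_weight(math.hypot(x - cx, y - cy), radius) * error
    return corrected

# Toy usage: supplemental data sitting 0.4 m above a flat road mesh.
def road(x, y):
    return 0.0

def supp(x, y):
    return 0.4

err = height_error(road, supp, center_xy=(0.0, 0.0))
fixed = corrected_supplemental_height(supp, err, center_xy=(0.0, 0.0), radius=50.0)
print(fixed(0.0, 0.0), fixed(100.0, 0.0))   # ~0.0 at the center, 0.4 far away
```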
Such deformations may be performed in accordance with local deformations (e.g., where road mesh data comprises 100% of the source of height information at a center of the selected region and 0% of the height information past some radius from the center, following an S-curve, or otherwise). By locally deforming the data (either mesh or supplemental), it is possible to have smooth transitions between boundaries of mesh data and supplemental data. FIG.6illustrates an example process600for procedurally rendering objects, such as a traffic lamp post, in a simulated environment, as described herein. In at least one example, the simulation system344can procedurally render objects in simulated environments. That is, in some examples, the data described above (e.g., 3D tiles, road network data, supplemental data, etc.) still has deficiencies, when compared to a corresponding real environment. In such examples, the simulation system344can utilize various heuristics to render objects in the simulated environment. Process600illustrates an example of such a process. Block602illustrates determining a location of a traffic signal light in a simulated environment. In at least some examples, data about traffic signal light positions may be provided, or otherwise determined, for instance in association with the road network data and/or road mesh. In at least one example, the simulation system344can determine a location of a traffic signal light based on such data. However, such data may be devoid of traffic pole information. Block604illustrates generating at least one plane which passes through the traffic signal light. In at least one example, the simulation system344can generate one or more planes which pass through the traffic signal light (e.g., at least a plane which has a surface normal pointing in the direction of illumination of the light source). In some examples, the simulation system344can generate two or more planes (e.g., a first plane that is parallel to the road under the traffic signal light and a second plane that is perpendicular to the road under the traffic light signal). Block606illustrates determining, from road network data, a sidewalk proximate the traffic signal light. In at least one example, the simulation system344can identify a sidewalk proximate the traffic signal light, for instance based on the road network data described above. Block608illustrates determining a closest point to the sidewalk. In at least one example, the simulation system344can determine the closest point to a surrounding sidewalk. Such a system may incorporate various path planning algorithms (e.g., A*, D*, etc., incorporate Manhattan constraints (enforcing 90-degree constraints), and/or minimize a number of bends of the light pole) in order to find the shortest path from the position of the traffic signal light to the closest sidewalk point. Block610illustrates rendering a lamp post and pole for connecting the lamp post to the traffic signal light in the simulated environment. The simulation system344can utilize such determined path to render a lamp post, which can be associated with a pole for connecting the lamp post to the traffic signal light. In at least one example, the simulation system344can access data from the third-party source(s) and/or system(s)338which indicates rules for the position, orientation, style, etc. of the lamp post and pole. 
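A much-simplified sketch of blocks 606-610: the sidewalk is given as a list of points, the closest point is found by brute force in place of a planning algorithm such as A* or D*, and the connecting pole is routed with a single Manhattan-style 90-degree bend. The names and the routing rule here are illustrative assumptions.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def closest_sidewalk_point(signal_xy: Point, sidewalk: List[Point]) -> Point:
    """Nearest sidewalk vertex to the traffic signal (a brute-force stand-in
    for a planning algorithm such as A* or D*)."""
    return min(sidewalk, key=lambda p: math.dist(p, signal_xy))

def pole_path(signal_xy: Point, base_xy: Point) -> List[Point]:
    """Route the pole from the lamp post base to the signal using axis-aligned
    segments only (one Manhattan-style 90-degree bend)."""
    corner = (signal_xy[0], base_xy[1])
    return [base_xy, corner, signal_xy]

# Toy usage: a signal hanging over the road, sidewalk running along y = 6.
signal = (11.0, 0.0)
sidewalk = [(float(x), 6.0) for x in range(0, 21, 2)]
base = closest_sidewalk_point(signal, sidewalk)
print(base, pole_path(signal, base))
```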
For instance, as a non-limiting example, such data can indicate that the lamp post should be positioned approximately 20 feet to the right of the traffic signal light. While process600is directed to rendering a lamp post and pole for connecting the lamp post to a traffic signal, in additional or alternative examples, the process600can be directed to adding other objects into simulated environments. Non-limiting examples include parking signs (e.g., which can be rendered based on parking lanes determined from the road network data), parking meters (e.g., which can be rendered based on parking lanes determined from the road network data), stop signs (e.g., which can be rendered based on stop lines determined from the road network data), etc. In such examples, the simulation system344can access data from the third-party source(s) and/or system(s)338which indicates rules for the position, orientation, style, etc. of the objects. In at least one example, rendering such objects in simulated environment(s) can be parameterized such that the simulation system344can adhere to the rules as indicated by the third-party source(s) and/or system(s)338. Example Clauses A. A computer-implemented method comprising: receiving sensor data from a plurality of data collection devices in a real environment; accessing a road network data associated with the real environment, the road network data associated with the real environment; generating, based at least in part on the sensor data, a road mesh associated with the real environment; integrating the road network data with the road mesh to generate a simulated environment; accessing a data storage of stored object footprints; selecting a stored object footprint from the data storage of the stored object footprints; rendering at least one object corresponding to the stored object footprint into the simulated environment; rendering a surface detail associated with the at least one object; and outputting the simulated environment for at least one of testing, validating, or training an algorithm used by an autonomous robotic computing device for at least one of navigating, planning, or decision making. B. A computer-implemented method as paragraph A recites, wherein the road network data comprises a two-dimensional representation of the real environment and comprises at least one indication of a driving lane element, a bike lane element, a parking lane element, or a crosswalk element. C. A computer-implemented method as any of paragraphs A-B recite, wherein the road mesh comprises a plurality of three-dimensional tiles output from a mapping system. D. A computer-implemented method as any of paragraphs A-C recite, wherein integrating the road network data and the road mesh comprises: aligning a road segment in the road network data with a corresponding region of the road mesh; and projecting at least a portion of the road segment into the road mesh. E. A computer-implemented method as any of paragraphs A-D recite, wherein the stored object footprint is associated with at least one annotation indicating a height, a classification, or a rule set indicating a texture associated with the object; and the computer-implemented further comprises rendering the at least one object corresponding to the stored object footprint into the simulated environment based at least in part on the annotation. F. A computer-implemented method as any of paragraphs A-E recite, wherein the surface detail comprises at least one of a defect texture, a patch texture, or a marking texture. G. 
A computer-implemented method as any of paragraphs A-F recite, wherein the surface detail is added using sparse virtual textures in a single draw. H. A system comprising: at least one processor; and computer-readable instructions that, when executed by the at least one processor, cause the at least one processor to perform acts comprising: accessing road network data associated with a real environment; generating a road mesh associated with the real environment; associating the road network data with the road mesh to generate a simulated environment; procedurally rendering at least one object into the simulated environment based at least in part on at least one rule; and outputting the simulated environment for at least one of testing, validating, or training an algorithm used by an autonomous robotic computing device for controlling the autonomous robotic computing device. I. The system as paragraph H recites, the acts further comprising: accessing a data storage comprising at least one stored object footprint; selecting a stored object footprint from the data storage; and rendering at least one object corresponding to the stored object footprint into the simulated environment based at least in part on the at least one rule. J. The system as paragraph I recites, wherein the rule comprises an indication of a distance between an individual object and a street in the road network data, an indication of a distance between individual objects of a plurality of objects, or an orientation of individual objects of the plurality of objects. K. The system as paragraph I recites, wherein the at least one stored object footprint is associated with an annotation indicating a characteristic of the object, the characteristic comprising a height or a classification. L. The system as paragraph K recites, wherein the at least one stored object footprint is associated with a rule set for texturing the object, the rule set being associated with the at least one stored object footprint based on the characteristic. M. The system as any of paragraphs H-L recite, the acts further comprising rendering surface detail associated with the at least one object using sparse virtual textures. N. The system as any of paragraphs H-M recite, the acts further comprising: determining, based at least in part on the road network data, a location of a traffic signal light; generating at least one plane which passes through the traffic signal light; determining, from the road network data, a sidewalk proximate the traffic signal light; determining, using a planning algorithm, a closest point to the sidewalk; and rendering a lamp post and pole for connecting the lamp post to the traffic signal light in the simulated environment. O. A non-transitory computer-readable medium storing instructions that, when executed, cause one or more processors to perform operations comprising: accessing road network data associated with a real environment; generating a road mesh associated with the real environment; integrating the road network data with the road mesh to generate a simulated environment; procedurally rendering at least one object into the simulated environment based at least in part on at least one rule; and outputting the simulated environment via a computing device. P. 
The non-transitory computer-readable medium as paragraph O recites, wherein outputting the simulated environment comprises outputting the simulated environment via a simulation computing device for at least one of testing, validating, or training an algorithm used by an autonomous vehicle for controlling the autonomous vehicle; and the operations further comprise controlling the autonomous vehicle via the algorithm in the real environment. Q. The non-transitory computer-readable medium as any of paragraphs O-P recites, the operations further comprising: accessing a data storage comprising at least one stored object footprint, the at least one stored object footprint being associated with at least one annotation indicating a height or a classification of a corresponding object; selecting a stored object footprint from the data storage; and rendering at least one object corresponding to the stored object footprint into the simulated environment based at least in part on an indication of a distance between an individual object and a street in the road network data, an indication of a distance between individual objects of a plurality of objects, or an indication of an orientation of individual objects of the plurality of objects. R. The non-transitory computer-readable medium as paragraph Q recites, the operations further comprising: associating a rule set with the stored object footprint based at least in part on the at least one annotation; and rendering a texture in association with the object based at least in part on the rule set. S. The non-transitory computer-readable medium as any of paragraphs O-R recite, the operations further comprising: accessing a data storage comprising a plurality of stored object footprints, wherein individual stored object footprints are associated with respective annotations; and combinatorially rendering one or more objects corresponding to one or more stored objects of the plurality of stored object footprints. T. The non-transitory computer-readable medium as any of paragraphs O-S recite, the operations further comprising: determining, based at least in part on the road network data, a location of a traffic signal light; generating at least one plane which passes through the traffic signal light; determining, from the road network data, a sidewalk proximate the traffic signal light; determining, using a planning algorithm, a closest point to the sidewalk; and rendering a lamp post and pole for connecting the lamp post to the traffic signal light in the simulated environment. U. 
A computer-implemented method comprising: receiving sensor data from a plurality of data collection devices in a real environment; accessing road network data associated with the real environment, the road network data based at least in part on the environment; determining, based at least in part on the sensor data, a road mesh associated with the real environment; associating the road network data with the road mesh to generate a simulated environment, wherein the simulated environment is incomplete with respect to the real environment; accessing supplemental data associated with the real environment, wherein the supplemental data provides information associated with the real environment that is unavailable to the plurality of data collection devices; associating the supplemental data with the simulated environment to supplement the simulated environment as a modified supplemental environment; and outputting the modified simulated environment for at least one of testing, validating, or training an algorithm used by an autonomous robotic computing device for at least one of navigating, planning, or decision making. V. The computer-implemented method as paragraph U recites, wherein the supplemental data comprises a geospatial file format storing a raster-based digital elevation model. W. The computer-implemented method as paragraph V recites, wherein the geospatial file format is associated with a United States Geological Survey (USGS) Data Evaluation Model (DEM) standard. X. The computer-implemented method as any of paragraphs U-W recite, wherein the simulated environment is incomplete due to an occlusion in the sensor data associated with a parked car or an alleyway. Y. The computer-implemented method as any of paragraphs U-X recite, further comprising: determining an error between a first portion of the supplemental data and second portion of the road mesh, the first portion and the second portion being associated with a same region of the real environment; determining that the error meets or exceeds a threshold amount of error; and adjusting at least one of the supplemental data or the road mesh to reduce the error. Z. The computer-implemented method as paragraph Y recites, wherein the error comprises at least one of an average error associated with the same region of the real environment. AA. The computer-implemented method as paragraph Y recites, further comprising applying a deformation lattice to one or more of at least a portion of the supplemental data or a corresponding portion of the road mesh to substantially align the supplemental data and the road mesh to reduce the error. AB. 
A system comprising: at least one processor; and one or more computer-readable instructions that, when executed by the at least one processor, cause the at least one processor to perform acts comprising: receiving sensor data from at least one data collection device in a real environment; accessing at least one of road network data associated with the real environment or a road mesh associated with the real environment, the road mesh being associated with the sensor data; generating a simulated environment based on the at least one of the road network data or the road mesh; associating supplemental data with the simulated environment to generate a modified simulated environment; and outputting the modified simulated environment for at least one of testing, validating, or training an algorithm used by an autonomous robotic computing device for at least one of controlling the autonomous robotic computing device. AC. The system as paragraph AB recites, wherein the simulated environment is incomplete due to at least one occlusion in the real environment. AD. The system as any of paragraphs AB-AC recite, wherein the supplemental data provides information associated with the real environment that is otherwise unavailable to the at least one data collection device due to at least one occlusion. AE. The system as any of paragraphs AB-AD recite, wherein the road network data comprises a two-dimensional representation of the real environment and comprises at least one indication of a driving lane element, a bike lane element, a parking lane element, or a crosswalk element. AF. The system as any of paragraphs AB-AE recite, wherein the road mesh comprises a plurality of three-dimensional tiles output from a mapping system. AG. The system as any of paragraphs AB-AF recite, wherein the operations further comprise accessing the road network data and the road mesh and associating the road network data and the road mesh based at least in part on projecting the road network data into the road mesh. AH. The system as any of paragraphs AB-AG recite, wherein the supplemental data comprises elevation data collected by a third-party source or system. AI. The system as any of paragraphs AB-AH recite, the acts further comprising: measuring a height error between a first portion of the supplemental data and second portion of the road mesh; determining that the height error meets or exceeds a threshold amount of error; and applying a deformation lattice to one or more of the first portion or the second portion to reduce the height error. AJ. A non-transitory computer-readable medium storing instructions that, when executed, cause one or more processors to perform operations comprising: accessing at least one of road network data associated with a real environment or a road mesh associated with the real environment; generating a simulated environment based on the at least one of the road network data or the road mesh; associating supplemental data with the simulated environment to generate a modified simulated environment; and outputting the modified simulated environment for at least one of testing, validating, or training an algorithm used by an autonomous robotic computing device for at least one of controlling the autonomous robotic computing device AK. The non-transitory computer-readable medium as paragraph AJ recites, wherein the supplemental data comprises elevation data collected by a third-party source or system. AL. 
The non-transitory computer-readable medium as any of paragraphs AJ-AK recite, wherein the supplemental data comprises a geospatial file format that is associated with a United States Geological Survey (USGS) Data Evaluation Model (DEM) standard. AM. The non-transitory computer-readable medium as any of paragraphs AJ-AL recite, the operations further comprising: measuring a difference in height between a first portion of the supplemental data and second portion of the road mesh, the first portion and the second portion being associated with a same region of the real environment that is devoid of objects; determining that the difference meets or exceeds a threshold difference; and applying a deformation lattice to one or more of the first portion or the second portion to reduce the error. AN. The non-transitory computer-readable medium as any of paragraphs AJ-AM recite, wherein the supplemental data provides information associated with the real environment that is otherwise unavailable to the at least one data collection device due to at least one occlusion. While the example clauses described above are described with respect to one particular implementation, it should be understood that, in the context of this document, the content of the example clauses can also be implemented via a method, device, system, a computer-readable medium, and/or another implementation. CONCLUSION While one or more examples of the techniques described herein have been described, various alterations, additions, permutations and equivalents thereof are included within the scope of the techniques described herein. In the description of examples, reference is made to the accompanying drawings that form a part hereof, which show by way of illustration specific examples of the claimed subject matter. It is to be understood that other examples can be used and that changes or alterations, such as structural changes, can be made. Such examples, changes or alterations are not necessarily departures from the scope with respect to the intended claimed subject matter. While the steps herein can be presented in a certain order, in some cases the ordering can be changed so that certain inputs are provided at different times or in a different order without changing the function of the systems and methods described. The disclosed procedures could also be executed in different orders. Additionally, various computations that are herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations could be readily implemented. In addition to being reordered, the computations could also be decomposed into sub-computations with the same results.
DETAILED DESCRIPTION
Referring toFIG.1, a schematic diagram of a conventional tetrahedral interpolation module is shown. As shown inFIG.1and according to a principle of tetrahedral interpolation, four vertices A, B, C, and D of a tetrahedron surrounding an interpolation point T need to be found first. Then, volume VABCD of tetrahedron ABCD, volume VTABD of sub-tetrahedron TABD, volume VTBCD of sub-tetrahedron TBCD, volume VTABC of sub-tetrahedron TABC and volume VTACD of sub-tetrahedron TACD are calculated respectively. Furthermore, weights of the four vertices A, B, C, D are obtained. Finally, the value of the interpolation point T is calculated based on the following formula (1),
DT=WA*DA+WB*DB+WC*DC+WD*DD  (1)
Wherein, DT represents the value of the interpolation point T; DA, DB, DC and DD represent mapping values of the four vertices A, B, C and D respectively (in a tetrahedral interpolation calculation, the mapping values of the four vertices are obtained from a three-dimensional look-up table); and WA, WB, WC and WD represent weights of the four vertices A, B, C and D respectively, with WA=VTBCD/VABCD, WB=VTACD/VABCD, WC=VTABD/VABCD and WD=VTABC/VABCD. As shown inFIG.1, the closer the interpolation point T is to the point A, the larger the volume of sub-tetrahedron TBCD and the higher the weight of the point A; otherwise, the weight of the point A is lower. Because the sum of the volumes of the sub-tetrahedrons TABD, TBCD, TABC and TACD is equal to the volume of the tetrahedron ABCD, the weights of the four vertices A, B, C and D satisfy the following formula (2),
WA+WB+WC+WD=1  (2)
Based on formula (2), the following formula (3) can be obtained to represent the value of the interpolation point T.
DT=(VTBCD*DA+VTACD*DB+VTABD*DC+VTABC*DD)/VABCD  (3)
For an irregular tetrahedron, the volume of the tetrahedron is proportional to the absolute value of a matrix determinant formed of the homogeneous coordinates of its four vertices. Therefore, volume VABCD of tetrahedron ABCD, volume VTBCD of sub-tetrahedron TBCD, volume VTACD of sub-tetrahedron TACD, volume VTABD of sub-tetrahedron TABD, and volume VTABC of sub-tetrahedron TABC can be calculated based on the following formulas (4), (5), (6), (7) and (8):
$$\left|V_{ABCD}\right|=\frac{1}{6}\operatorname{abs}\left(\begin{vmatrix}x_A&x_B&x_C&x_D\\ y_A&y_B&y_C&y_D\\ z_A&z_B&z_C&z_D\\ 1&1&1&1\end{vmatrix}\right)\quad(4)$$
$$\left|V_{TBCD}\right|=\frac{1}{6}\operatorname{abs}\left(\begin{vmatrix}x_T&x_B&x_C&x_D\\ y_T&y_B&y_C&y_D\\ z_T&z_B&z_C&z_D\\ 1&1&1&1\end{vmatrix}\right)\quad(5)$$
$$\left|V_{TACD}\right|=\frac{1}{6}\operatorname{abs}\left(\begin{vmatrix}x_A&x_T&x_C&x_D\\ y_A&y_T&y_C&y_D\\ z_A&z_T&z_C&z_D\\ 1&1&1&1\end{vmatrix}\right)\quad(6)$$
$$\left|V_{TABD}\right|=\frac{1}{6}\operatorname{abs}\left(\begin{vmatrix}x_A&x_B&x_T&x_D\\ y_A&y_B&y_T&y_D\\ z_A&z_B&z_T&z_D\\ 1&1&1&1\end{vmatrix}\right)\quad(7)$$
$$\left|V_{TABC}\right|=\frac{1}{6}\operatorname{abs}\left(\begin{vmatrix}x_A&x_B&x_C&x_T\\ y_A&y_B&y_C&y_T\\ z_A&z_B&z_C&z_T\\ 1&1&1&1\end{vmatrix}\right)\quad(8)$$
By substituting formulas (4), (5), (6), (7) and (8) into formula (3), the interpolation value of the interpolation point T can be obtained. In the conventional technology, it is necessary to calculate fourth-order matrix determinants formed of the homogeneous coordinates of the vertices of four sub-tetrahedrons for each pixel in an image. Consequently, an apparatus for color gamut conversion adopting tetrahedral interpolation often has high time consumption and power consumption, which cannot meet user requirements.
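For reference, the conventional determinant-based calculation above can be written directly as follows (an illustrative NumPy sketch; the mapping values DA-DD would come from the three-dimensional look-up table).

```python
import numpy as np

def tetra_volume(p0, p1, p2, p3):
    """|V| = (1/6) * abs(det of homogeneous vertex coordinates), per formulas (4)-(8)."""
    m = np.ones((4, 4))
    m[:3, 0], m[:3, 1], m[:3, 2], m[:3, 3] = p0, p1, p2, p3
    return abs(np.linalg.det(m)) / 6.0

def tetra_interpolate(T, A, B, C, D, DA, DB, DC, DD):
    """Formula (3): weight each vertex mapping value by the volume of the
    sub-tetrahedron opposite that vertex."""
    V = tetra_volume(A, B, C, D)
    return (tetra_volume(T, B, C, D) * DA +
            tetra_volume(A, T, C, D) * DB +
            tetra_volume(A, B, T, D) * DC +
            tetra_volume(A, B, C, T) * DD) / V

# Toy check: interpolating exactly at a vertex returns that vertex's mapping value.
A, B, C, D = (0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 1, 0)
print(tetra_interpolate(A, A, B, C, D, 10.0, 20.0, 30.0, 40.0))  # ~10.0
```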
In some embodiment of the present disclosure, based on coordinates of an interpolation point in a sampling space and a side length of a cube formed of nearby eight sampling points surrounding the interpolation point in the sampling space, a volume of a tetrahedron and volumes of four sub-tetrahedrons formed of the interpolation point and arbitrary three vertices in the tetrahedron can be calculated. In embodiments of the present disclosure, there is no need to obtain volumes of four sub-tetrahedrons for each pixel through a fourth-order matrix determinant formed of homogeneous coordinates of vertices of four tetrahedrons. After obtaining coordinates of an interpolation point in a sampling space and side length of a cube formed of nearby eight sampling points surrounding the interpolation point in the sampling space, volumes of four sub-tetrahedrons formed of the interpolation point and arbitrary three vertices in the tetrahedron can be obtained, which greatly reduces an amount for a tetrahedral interpolation calculation. In order to clarify the object, characteristic and advantages of embodiments of the present disclosure, the embodiments of present disclosure will be described clearly in detail in conjunction with accompanying drawings. In an embodiment of the present disclosure, a method for tetrahedral interpolation calculation is provided, referring toFIG.2, detailed process is described as in S201, S202and S203. In S201, coordinates of an interpolation point in a sampling space, a side length of a cube formed of nearby eight sampling points surrounding the interpolation point in the sampling space, and mapping values of four vertices of a tetrahedron surrounding the interpolation point are obtained. In some embodiment, four vertices are four out of the eight sampling points. Referring toFIG.3, a schematic diagram of segmenting a tetrahedron in six ways according to an embodiment of the present disclosure is shown inFIG.3. In an embodiment of the present disclosure, the sampling space is a three-dimensional space in an RGB standard, and a cube formed of nearby eight sampling points surrounding an interpolation point in the sampling space can be found. It can be seen fromFIG.3that in the embodiment of the present disclosure, an RGB used for a three-dimensional coordinates axes divides a hexahedron into six different tetrahedrons, tetrahedron 1, tetrahedron 2, tetrahedron 3, tetrahedron 4, tetrahedron 5 and tetrahedron 6 in sequence. Since all sides of the cube are equal in length and the sides are perpendicular to each other, the six tetrahedrons have a same volume and a volume of a tetrahedron is equal to one sixth of the volume of the cube. Volumes of four sub-tetrahedrons can be calculated by a volume formula of a triangular pyramid, that is, multiplying a base area by one third of a height. In S202, a volume of the tetrahedron and volumes of four sub-tetrahedrons formed of the interpolation point and arbitrary three vertices in the tetrahedron are calculated, based on coordinates of the interpolation point in the sampling space and side length of the cube formed of nearby eight sampling points surrounding the interpolation point in the sampling space. Referring toFIG.4, a tetrahedron obtained by a fourth division as shown inFIG.3is taken as an example, a schematic diagram of a tetrahedron structure in an embodiment of the present disclosure is shown. It can be seen fromFIG.4, tetrahedron ABCD surrounds an interpolation point T. 
Based on the RGB spatial coordinate axes shown in FIG. 4, the relative coordinates of the points are obtained as follows: the coordinates of point A are A (0, 0, 0), the coordinates of point B are B (0, 0, L), the coordinates of point C are C (0, L, 0), the coordinates of point D are D (L, L, 0), and the coordinates of point T are T (r, g, b), wherein L represents the side length of the cube. In some embodiments, the volume of the tetrahedron can be calculated based on the following formula (9),

V = L^3/6  (9)

wherein V represents the volume of the tetrahedron, and L represents the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space, that is, the side length of the cube shown in FIG. 4. In some embodiments, the volumes of the four sub-tetrahedrons formed of the interpolation point T and arbitrary three vertices in the tetrahedron are calculated based on the following formulas (10), (11), (12) and (13) respectively,

V1 = b*S1/3 = b*L^2/6  (10)
V2 = r*S2/3 = r*L^2/6  (11)
V3 = h3*S3/3 = (g−r)*L^2/6  (12)
V4 = h4*S4/3 = (L−b−g)*L^2/6  (13)

wherein V1 represents the volume of a first sub-tetrahedron, (r, g, b) represents the coordinates of the interpolation point in the sampling space, S1 represents the area of a triangle formed of three vertices on a same surface of the cube, L represents the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space, V2 represents the volume of a second sub-tetrahedron, S2 represents the area of a triangle formed of three vertices on another same surface of the cube, V3 represents the volume of a third sub-tetrahedron, S3 represents the area of a triangle formed of three vertices that are not all on a same surface of the cube, h3 represents the distance from the interpolation point to the triangle formed of the three vertices that are not all on a same surface of the cube, with h3 = √2*(L−r) − {L − [g − (L−r)]}/√2 and S3 = S4 = (√2/2)*L^2, V4 represents the volume of a fourth sub-tetrahedron, S4 represents the area of another triangle formed of three vertices that are not all on a same surface of the cube, and h4 represents the distance from the interpolation point to the another triangle formed of the three vertices that are not all on a same surface of the cube, with h4 = [L − (b−g)]/√2 − √2*g. It can be seen from the types of the six tetrahedrons divided in FIG. 3 that the calculation process of the tetrahedron and the four sub-tetrahedrons mentioned above can be applied to any divided tetrahedron and the four sub-tetrahedrons corresponding to the divided tetrahedron. Thus, sub-tetrahedron TACD constitutes the first sub-tetrahedron, sub-tetrahedron TABC constitutes the second sub-tetrahedron, sub-tetrahedron TABD constitutes the third sub-tetrahedron, and sub-tetrahedron TBCD constitutes the fourth sub-tetrahedron; S1 represents the area of triangle ACD, S2 represents the area of triangle ABC, S3 represents the area of triangle ABD, and S4 represents the area of triangle BCD. In S203, an interpolation value of the interpolation point is obtained, based on a tetrahedral interpolation theorem formula, the volume of the tetrahedron, the volumes of the four sub-tetrahedrons, and the mapping values of the four vertices of the tetrahedron. In some embodiments, formulas (10), (11), (12) and (13) are substituted into formula (3), and the following formula (14) for the interpolation value of the point T is obtained after simplification.
D = [D_A*L + (D_D − D_C)*r + (D_C − D_A)*g + (D_B − D_A)*b]/L  (14)

wherein D represents the interpolation value of the interpolation point T; D_A, D_B, D_C and D_D are the mapping values of the four vertices A, B, C and D of the tetrahedron surrounding the interpolation point T respectively; (r, g, b) represents the coordinates of the interpolation point T in the sampling space; L represents the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space; side CD is parallel to the X-axis, side CA is parallel to the Y-axis, and side BA is parallel to the Z-axis. In some embodiments, for tetrahedrons 1 to 6 as shown in FIG. 3, the values of all vertices are substituted into formula (14) based on the vertex positions of each tetrahedron, such that the interpolation values of the interpolation point T corresponding to the tetrahedrons obtained by the different segmenting manners can be obtained. In light of the above, according to embodiments of the present disclosure, there is no need to obtain the volumes of the four sub-tetrahedrons for each pixel through a fourth-order matrix determinant formed of homogeneous coordinates of vertices of four tetrahedrons. After obtaining the coordinates of an interpolation point in a sampling space and the side length of a cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space, the volumes of the four sub-tetrahedrons formed of the interpolation point and arbitrary three vertices in the tetrahedron can be obtained, which greatly reduces the amount of computation of a tetrahedral interpolation calculation. Referring to FIG. 5, according to an embodiment of the present disclosure, a device for tetrahedral interpolation calculation is provided. The device includes a first subtractor 501, a second subtractor 502, a third subtractor 503, a first multiplier 511, a second multiplier 512, a third multiplier 513, a first adder 521, a second adder 522, and a third adder 523. For the first subtractor 501, a positive input terminal Switched_data_0[1] is applied with a mapping value of a fourth vertex of a tetrahedron, a negative input terminal Switched_data_1[1] is applied with a mapping value of a third vertex of the tetrahedron, and an output terminal is coupled with a first input terminal of the first multiplier 511. For the second subtractor 502, a positive input terminal Switched_data_0[2] is applied with the mapping value of the third vertex of the tetrahedron, a negative input terminal Switched_data_1[2] is applied with a mapping value of a first vertex of the tetrahedron, and an output terminal is coupled with a first input terminal of the second multiplier 512. For the third subtractor 503, a positive input terminal Switched_data_0[3] is applied with a mapping value of a second vertex of the tetrahedron, a negative input terminal Switched_data_1[3] is applied with the mapping value of the first vertex of the tetrahedron, and an output terminal is coupled with a first input terminal of the third multiplier 513. For the first multiplier 511, a second input terminal Weighting[0] is applied with the X-axis coordinate data of an interpolation point in a sampling space, and an output terminal is coupled with a first input terminal of the first adder 521. For the second multiplier 512, a second input terminal Weighting[1] is applied with the Y-axis coordinate data of the interpolation point in the sampling space, and an output terminal is coupled with a first input terminal of the second adder 522.
For the third multiplier 513, a second input terminal Weighting[2] is applied with the Z-axis coordinate data of the interpolation point in the sampling space, and an output terminal is coupled with a second input terminal of the second adder 522. For the first adder 521, a second input terminal Switched_data_0[0] is applied with the mapping value of the first vertex of the tetrahedron, and an output terminal is coupled with a first input terminal of the third adder 523. For the second adder 522, an output terminal is coupled with a second input terminal of the third adder 523. For the third adder 523, an output terminal Data_out outputs an interpolation value of the interpolation point. With reference to FIG. 4 and formula (14), it can be seen that the device for tetrahedral interpolation calculation provided by an embodiment of the present invention can be used for calculating the interpolation value of tetrahedron ABCD obtained by the fourth division. Switched_data_0[1], Switched_data_0[2], and Switched_data_0[3] are applied with D_D, D_C, and D_B respectively; Switched_data_1[1], Switched_data_1[2], and Switched_data_1[3] are applied with D_C, D_A, and D_A respectively; Weighting[0], Weighting[1], and Weighting[2] are applied with the values of r, g, and b in the coordinates (r, g, b) of an interpolation point T in a sampling space respectively; Switched_data_0[0] is applied with D_A, and Data_out outputs D. Since L is a preset fixed value, before the device for tetrahedral interpolation calculation is used to calculate an interpolation, all input data is preprocessed based on formula (14), so that an accurate interpolation value can be obtained. In practical applications, through a size comparison among the R, G and B data, it can be determined which of the six types of tetrahedrons classified in FIG. 3 the tetrahedron surrounding the interpolation point T belongs to. Then the order of the data input to the device for tetrahedral interpolation calculation can be adjusted, to meet the requirement of fast calculation of interpolation values. In light of the above, according to embodiments of the present disclosure, a tetrahedral interpolation value can be calculated based on three subtractors, three multipliers and three adders, without a large number of transistors, so that hardware cost can be reduced. And since the calculation is simple, the speed of a tetrahedral interpolation calculation can be greatly improved. Nowadays, multimedia applications such as web browsing, video playing, picture browsing, gaming and entertainment all require the participation of various screens. Furthermore, many types of color spaces are needed when multimedia content is generated. For example, the sRGB color gamut is used in the computer industry, the DCI-P3 color gamut is used in the film and television industry, and the AdobeRGB color gamut is used in the publishing and printing industry. Currently, liquid crystal display (LCD) screens are mostly used in various devices. However, LCD screens usually have a narrow color gamut and often cannot even cover 100% of the sRGB color gamut, and they further suffer from a problem of partial color deviation. Consequently, when content of a standard color gamut is displayed on an LCD screen, under-saturation and color cast phenomena will occur. In addition, OLED screens, which have been gradually put into use in recent years, have the advantages of high color saturation and high brightness. However, many wide color gamut screens are not calibrated to the original sRGB color gamut.
Therefore, when multimedia content is displayed on a screen, a color gamut mismatch often happens and causes problems of color distortion, including over-saturation and color cast, which seriously affects users' browsing of the multimedia content. In the conventional technology, by adjusting the three RGB color channels independently to reduce or increase the three components of R, G and B, the color gamut range of an RGB color space can be adjusted. In that situation, three gains are used to adjust the gray levels of the three channels. The granularity of this adjustment process is relatively coarse and the adjusted output is often not ideal, which results in a poor user experience. Accordingly, according to an embodiment of the present disclosure, a method for color gamut conversion is provided. With reference to FIG. 6, the detailed process is described in S601, S602 and S603. In S601, a particular image and color gamut information of the particular image are obtained. A color gamut is a complete subset of colors defined by a three-dimensional volume, which is usually described by a bounded volume in a uniform color space. The red, green, and blue (RGB) color space is a standard color space definition which is widely used in the computer industry. Currently, color spaces in the computer industry all follow the definition of the RGB color space, for example, pictures created in the sRGB color gamut and videos in the BT.709 format. In S602, color gamut conversion data corresponding to the color gamut information of the particular image is selected in a preset color gamut conversion database. In some embodiments, a color gamut measurement and calibration can be conducted for the color gamut conversion database based on a wide color gamut screen characteristic and specific color gamut data to obtain at least one set of color gamut conversion data. The color gamut conversion data may include at least one selected from a group consisting of: sRGB color gamut conversion data, DCI-P3 color gamut conversion data and AdobeRGB color gamut conversion data. It can be understood that a user can choose other color gamut conversion data that meets a color gamut conversion requirement according to their own needs, which is not limited to the above three commonly used color gamuts. In some embodiments, the following processes can be used to select the color gamut conversion data corresponding to the color gamut information of a particular image: performing address resolution on the color gamut information of the particular image; and looking up the color gamut conversion data corresponding to the color gamut information of the particular image by a three-dimensional look-up table. In S603, a color gamut conversion is performed on the particular image according to any one of the above methods for tetrahedral interpolation calculation to obtain a corresponding color gamut conversion image, based on the color gamut conversion data corresponding to the color gamut information of the particular image. In some embodiments, multimedia content usually employs the sRGB color gamut, and the color gamut of a wide color gamut RGB screen is usually larger.
Therefore, before displaying a particular image on a wide color gamut RGB screen, the corresponding display data needs to be adjusted to adapt to the color gamut of the wide color gamut RGB screen, and to make the color gamut range of the particular image in the RGB color space fall within the color gamut range adjusted by the wide color gamut RGB screen. In some embodiments, a data selector can be employed to convert the RGB values of all pixels in a particular image into RGB values suitable for a screen. The conversion process can be carried out in the RGB space, and there is no need for conversion again in other color spaces, which increases the speed of converting a color gamut and can meet the processing speed requirements of many types of equipment. In some embodiments of the present disclosure, after determining the color gamut conversion data corresponding to the color gamut information of a particular image, address resolution can be performed on the RGB data of a pixel of the particular image to find the position of the RGB data in the three-dimensional look-up table, so that the addresses of the four vertices of a smallest tetrahedron surrounding the pixel can be found. Then, the four addresses of the four vertices are decoded by an address decoder, and the data of the four vertices and the weights of all vertices are obtained by the three-dimensional look-up table. Thereafter, after the data of the four vertices are adjusted during a data sequence adjustment period, the tetrahedral interpolation calculation is performed to obtain the interpolation data of this pixel. The pixel points of the particular image are calculated sequentially to obtain a color gamut conversion image corresponding to the particular image. According to an embodiment of the present disclosure, based on the color gamut conversion data in the color gamut conversion database, the color gamut can be adjusted flexibly for different color gamuts and different screens based on the color gamut of the particular image, such that the purpose of reducing color distortion and displaying multimedia content accurately can be achieved. Referring to FIG. 7, according to an embodiment of the present disclosure, another device for calculating a tetrahedral interpolation is provided. The device includes: a first acquisition circuitry 701, a first calculation circuitry 702 and a second calculation circuitry 703. The first acquisition circuitry 701 is adapted to obtain the coordinates of an interpolation point in a sampling space, the side length of a cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space, and the mapping values of four vertices of a tetrahedron surrounding the interpolation point. The first calculation circuitry 702 is adapted to calculate the volume of the tetrahedron and the volumes of four sub-tetrahedrons formed of the interpolation point and arbitrary three vertices in the tetrahedron, based on the coordinates of the interpolation point in the sampling space and the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space. The second calculation circuitry 703 is adapted to obtain an interpolation value of the interpolation point, based on a tetrahedral interpolation theorem formula, the volume of the tetrahedron, the volumes of the four sub-tetrahedrons, and the mapping values of the four vertices of the tetrahedron.
In some embodiments, the four vertices are four out of the eight sampling points. In some embodiments, the first calculation circuitry 702 is adapted to calculate the volume of the tetrahedron based on the coordinates of the interpolation point in the sampling space and the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space, and the volume of the tetrahedron is calculated based on the following formula, V = L^3/6, wherein V represents the volume of the tetrahedron, and L represents the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space. In some embodiments, the first calculation circuitry 702 is adapted to calculate the volumes of the four sub-tetrahedrons formed of the interpolation point and arbitrary three vertices in the tetrahedron, based on the coordinates of the interpolation point in the sampling space and the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space, and the volumes of the four sub-tetrahedrons formed of the interpolation point and arbitrary three vertices in the tetrahedron can be calculated based on the following formulas respectively, V1 = b*S1/3 = b*L^2/6, V2 = r*S2/3 = r*L^2/6, V3 = h3*S3/3 = (g−r)*L^2/6, and V4 = h4*S4/3 = (L−b−g)*L^2/6, wherein V1 represents the volume of a first sub-tetrahedron, (r, g, b) represents the coordinates of the interpolation point in the sampling space, S1 represents the area of a triangle formed of three vertices on one side of the cube, L represents the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space, V2 represents the volume of a second sub-tetrahedron, S2 represents the area of a triangle formed of three vertices on another side of the cube, V3 represents the volume of a third sub-tetrahedron, S3 represents the area of a triangle formed of three vertices that are not on a same side of the cube, h3 represents the distance from the interpolation point to the triangle formed of the three vertices that are not on a same side of the cube, with h3 = √2*(L−r) − {L − [g − (L−r)]}/√2 and S3 = S4 = (√2/2)*L^2, V4 represents the volume of a fourth sub-tetrahedron, S4 represents the area of another triangle formed of three vertices that are not on a same side of the cube, and h4 represents the distance from the interpolation point to the another triangle formed of the three vertices that are not on a same side of the cube, with h4 = [L − (b−g)]/√2 − √2*g. In some embodiments, the second calculation circuitry 703 is adapted to obtain an interpolation value of an interpolation point, based on a tetrahedral interpolation theorem formula, the volume of the tetrahedron, the volumes of the four sub-tetrahedrons, and the mapping values of the four vertices of the tetrahedron.
The interpolation value of the interpolation point can be calculated based on the following formula, D = [D_A*L + (D_D − D_C)*r + (D_C − D_A)*g + (D_B − D_A)*b]/L, wherein D represents the interpolation value of the interpolation point; D_A, D_B, D_C and D_D are the mapping values of the four vertices A, B, C and D of the tetrahedron surrounding the interpolation point respectively; (r, g, b) represents the coordinates of the interpolation point in the sampling space; L represents the side length of the cube formed of the nearby eight sampling points surrounding the interpolation point in the sampling space; side CD is parallel to the X-axis, side CA is parallel to the Y-axis, and side BA is parallel to the Z-axis. Referring to FIG. 8, in an embodiment of the present disclosure, a device for converting a color gamut is provided. The device includes: a second acquisition circuitry 801, a selection circuitry 802, and a conversion circuitry 803. The second acquisition circuitry 801 is adapted to obtain a particular image and color gamut information of the particular image. The selection circuitry 802 is adapted to select the color gamut conversion data corresponding to the color gamut information of the particular image in a preset color gamut conversion database. The conversion circuitry 803 is adapted to perform a color gamut conversion on the particular image by the method for calculating tetrahedral interpolation mentioned above to obtain a corresponding color gamut conversion image, based on the color gamut conversion data corresponding to the color gamut information of the particular image. In some embodiments, the color gamut conversion database can be obtained by the following processes: performing a color gamut measurement and calibration based on a wide color gamut screen characteristic and specific color gamut data to obtain at least one set of color gamut conversion data; the color gamut conversion data includes at least one selected from a group consisting of: sRGB color gamut conversion data, DCI-P3 color gamut conversion data and AdobeRGB color gamut conversion data. In some embodiments, the selection circuitry is adapted to perform address resolution on the color gamut information of the particular image, and look up the color gamut conversion data corresponding to the color gamut information of the particular image by a three-dimensional look-up table. In some embodiments, the conversion circuitry 803 may also include an address decoder, which is used to calculate the weights of the four vertices of a tetrahedron based on the position of the point to be interpolated in the tetrahedron. In an embodiment of the present disclosure, a computer-readable storage medium having computer instructions stored therein is provided. The computer-readable storage medium is a non-volatile storage medium or a non-transitory storage medium, wherein once the computer instructions are executed, the method for tetrahedral interpolation calculation as described above can be performed. In an embodiment of the present disclosure, a computer-readable storage medium having computer instructions stored therein is provided. The computer-readable storage medium is a non-volatile storage medium or a non-transitory storage medium, wherein once the computer instructions are executed, the method for color gamut conversion as described above can be performed.
In an embodiment of the present disclosure, a device for tetrahedral interpolation calculation comprising a memory and a processor is provided, wherein the memory has computer instructions stored therein, and the method as described above can be performed, once the processor executes the computer instructions. In an embodiment of the present disclosure, a device for color gamut conversion comprising a memory and a processor is provided, wherein the memory has computer instructions stored therein, and the method as described above can be performed, once the processor executes the computer instructions. Those skilled in the art can understand that all or part of the processes in the various methods of the above-mentioned embodiments can be completed by a program instructing relevant hardware. The program can be stored in any computer-readable storage medium, and the storage medium may include: ROM, RAM, magnetic disk or CD, etc. Although the present disclosure is disclosed as above, the present disclosure is not limited to this. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the scope defined by the claims.
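As a consolidated illustration of the method described above, the following hedged Python sketch covers both the fast interpolation of S201 to S203 and the per-pixel color gamut conversion of S601 to S603. The first two functions follow formulas (9) to (14) for the vertex labelling of FIG. 4; the per-pixel conversion uses the standard ordering-based split of the look-up-table cube into six tetrahedrons, which parallels the six divisions of FIG. 3 but does not reproduce the patent's exact labelling. The 8-bit input range, the LUT shape (N, N, N, 3) and all names are assumptions made for the sketch only.

```python
import numpy as np

def sub_tetra_volumes(r, g, b, L):
    """Formulas (9)-(13) for the tetrahedron of FIG. 4: the tetrahedron volume
    and the volumes of sub-tetrahedrons TACD, TABC, TABD and TBCD, obtained from
    the local coordinates (r, g, b) and the cube side length L alone."""
    v  = L ** 3 / 6.0
    v1 = b * L ** 2 / 6.0            # (10) sub-tetrahedron TACD (weight of B)
    v2 = r * L ** 2 / 6.0            # (11) sub-tetrahedron TABC (weight of D)
    v3 = (g - r) * L ** 2 / 6.0      # (12) sub-tetrahedron TABD (weight of C)
    v4 = (L - b - g) * L ** 2 / 6.0  # (13) sub-tetrahedron TBCD (weight of A)
    return v, v1, v2, v3, v4

def fast_tetra_interp(r, g, b, L, d_a, d_b, d_c, d_d):
    """Formula (14): three subtractions, three multiplications and three
    additions, plus one division by the constant L."""
    return (d_a * L + (d_d - d_c) * r + (d_c - d_a) * g + (d_b - d_a) * b) / L

def tetra_interp_cube(frac, corner):
    """Interpolation inside one LUT cube using the ordering-based split into
    six tetrahedrons.  `frac` is (r, g, b) in [0, 1]; `corner[(i, j, k)]` is
    the LUT value at cube corner (i, j, k)."""
    order = sorted(range(3), key=lambda ax: frac[ax], reverse=True)
    f = [frac[ax] for ax in order] + [0.0]
    idx = [0, 0, 0]
    value = (1.0 - f[0]) * corner[tuple(idx)]
    for step in range(3):
        idx[order[step]] = 1                       # walk towards corner (1, 1, 1)
        value = value + (f[step] - f[step + 1]) * corner[tuple(idx)]
    return value

def convert_pixel(rgb, lut):
    """S603 for one 8-bit pixel: address resolution to find the surrounding
    LUT cube, then tetrahedral interpolation of the mapped value."""
    n = lut.shape[0]
    pos = np.asarray(rgb, dtype=np.float64) * (n - 1) / 255.0
    base = np.minimum(pos.astype(int), n - 2)      # lower corner of the cube
    frac = pos - base                              # local (r, g, b) in [0, 1]
    corner = {(i, j, k): lut[base[0] + i, base[1] + j, base[2] + k]
              for i in (0, 1) for j in (0, 1) for k in (0, 1)}
    return tetra_interp_cube(tuple(frac), corner)
```

For the tetrahedron of FIG. 4, dividing the sub-volumes returned by sub_tetra_volumes by V reproduces the weights implicit in formula (14), so the two formulations agree.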
31,527
11861792
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS FIG.5illustrates the tiling phase of a proposed system for processing programmable tessellation primitives within a tile based rendering system embodying the invention. A vertex shading unit500and a hull shading unit505operate as described above for Dx11. The hull shading unit passes calculated edge tessellation factors to both a Domain Tessellation unit510and a connectivity tessellation unit515. The hull shader also passes processed control point data to a domain shader520. The connectivity tessellation unit515calculates vertex indices for triangles. These indices reference the vertices generated by the action of the domain shader on the generated domain points from the domain tessellation unit. The vertex indices are passed to a cache unit522which caches vertex values previously generated by the domain shading unit. It should be noted that the cache is not required but the interconnected nature of the primitives that make up the tessellated patch mean that the presence of a cache can significantly reduce the number of vertices which are processed through the domain shading unit. Where a vertex is not present within the cache it may be requested from the domain shading unit. The domain shading unit processes only the position part of the vertex data. This is the only part that is required to tile tessellated geometry. The cache unit522passes on the vertices that make up the primitives to a clipping and culling unit525which removes any back facing, off screen or sub pixel primitives (i.e. non visible primitives). Any remaining primitives are passed to a projection unit530that transforms the primitives/vertices into screen space so that they can be tiled by a tiling unit535. The tiling unit determines which primitives reside in each tile and passes a list of tile indices to an index compression unit540. The index compression unit compresses the index stream using a well known method. The compressed indices are written to per tile geometry lists545along with a reference to the hull shading unit505output which is also written. It should also be noted that the primitive indices need not be compressed but this is a preferred feature. This process is repeated for all patches within a scene. It should be noted that a scene does not need to be comprised wholly of patch based primitives and that conventional triangle, line or point based geometry may also be included within the scene, these primitives being processed as in a normal tile based rendering system. It should also be noted that although the above described method stores connectivity information for the generated primitives for the tiled geometry lists it is also possible to store only a list of unique vertices without connectivity information. This information can then be used to regenerate the connectivity information in subsequent passes. FIG.6illustrates the hidden surface removal phase of the rasterisation processes for patch based geometry which is performed afterFIG.4. A tile parameter fetch unit600fetches the hull shader generated data and a compressed list of indices for each tile from a geometry buffer545(FIG.5). The edge tessellation factor component of this hull shader generated data is passed to a domain tessellation unit610and the control point data is passed to a domain shading unit620. It should be noted that only the position part of the hull shader control point data is fetched by the tiled parameter fetch unit. 
The compressed list of indices for the primitives generated by the patch that lie within the current tile is passed to a decompression unit 605 which decompresses the indices before passing them to a cache unit 615. The cache unit 615 contains data for vertices previously generated by the domain shading unit 620 so that vertices that are referenced multiple times need only be generated once. When the cache unit is missing data for a required vertex, it requests it from the domain shader 620, which requests the underlying domain point data from the domain tessellation unit before generating a corresponding vertex position. It should be noted that, as with the tiling phase, the domain shader continues to generate only the position component of the vertices. The cache unit 615 passes the vertices for the primitive to the clipping and culling unit 625 where any clipping is performed again for the primitives. The vertices generated by the clipping and culling unit are then passed to a projection unit 630 which transforms them into screen space. The screen space vertices/primitives are then passed to a hidden surface removal unit 635 that operates in the same way as in a normal tile based rendering system. The hidden surface removal unit then passes a list of visible primitive indices and patch references to the next phase of operation. FIG. 7 illustrates the attribute phase of the rasterisation process for patch based data. A tile based parameter fetch unit 700 receives references to patch data and lists of indices for the visible primitives from the patch within a current tile. The tile based parameter fetch unit fetches the hull shader data referenced by the patch and passes the edge tessellation factors to a domain tessellation unit 705 and the control point data to a domain shading unit 710. It should be noted that in this phase the parameter fetch unit 700 retrieves all data associated with the control points and passes it to the domain shading unit 710. The index data for the primitive vertices is passed to a cache unit 715 which, as in previous phases, contains data for vertices previously generated by the domain shader. Where a vertex is not present, the cache unit requests that the domain shader unit generate it. In this phase the domain shading unit executes the code required to generate all vertex components. The cache unit then passes the primitive vertices to clipping, culling and projection units 720 and 725 as in previous phases before they are passed to a shading unit 730 where they are processed as in the normal tile based rendering process. FIG. 8 illustrates a tile based rendering system that has been modified to support tessellation using the three phase process described above. A primitive/command fetch unit 800 first fetches primitives and state information from the application and provides them to a shading unit 805. The shading unit performs vertex shading and hull shading as described for phase 1, fetching textures from a texturing unit 810. The shading unit then passes the hull shading data to a domain tessellation unit 825 and a connectivity tessellation unit 830, and writes the same data out to a parameter buffer 860. The domain tessellation unit then generates domain points which are fed back into the shading unit, which applies the position part of the domain shader and feeds the generated vertices to a cache unit 850. The connectivity tessellation unit 830 feeds primitive indices to the cache unit.
The referenced vertices are then fed to a clipping and culling unit 855 where non-visible primitives, i.e. sub-pixel, non-sample-crossing and back facing primitives, are culled and any required clipping is applied. The resulting vertices are passed to a projection unit 815 which projects them into screen space. The resulting screen space primitives are then passed to a tiling unit 865, and a per tile primitive list is generated as described for phase 1 above and written to a parameter buffer 860. This process is repeated for all primitives within the scene as in normal tile based rendering. Rasterisation is performed tile by tile as with a normal tile based rendering device. Object lists are fetched by a tiled parameter fetch unit 835 which, for normal primitive types, supplies them tile by tile to the HSR unit as described for a normal tile based rendering device. When the tiled parameter fetch unit encounters a patch based primitive it loads the associated hull shader output data, as emitted in phase 1 of the tessellation process, into the shading unit 805. The tiled parameter fetch unit then feeds the indices for the vertices from the tessellated domain that the domain shader must operate on to the domain tessellation unit 825. The domain tessellation unit generates u,v values within the tessellated domain corresponding to the supplied indices and passes them to a shading unit 805 which applies the position part of the domain shader as described for phase 2 of the tessellation process. The generated position values are then passed to a cache unit 850. Domain shaded vertex positions are read from the cache 850 by the culling and clipping unit, which passes unclipped/un-culled primitives to the projection unit 870, which projects the vertices into screen space. The resulting screen space primitives are then passed back to a hidden surface removal unit 840 where hidden surface removal is performed as per normal tile based rendering. Any primitives that are visible after hidden surface removal are passed back to the tiled parameter fetch unit 835 which issues the indices for the remaining primitives to the domain tessellation unit 825. The resulting domain points are again passed to a shading unit 805 which now applies the full domain shader, producing all required position and texture coordinate values. The produced values are passed down to the cache, clipping/culling and projection blocks before being passed back to the FPU 845. The FPU performs any iteration of attributes, which are then passed on to the shading unit 805 where normal pixel shading is performed as per a normal tile based rendering device.
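The vertex caches that appear in all three phases (units 522, 615, 715 and 850) share one behaviour: a vertex referenced by several tessellated primitives is domain-shaded only once, and a miss triggers a request to the domain shading unit. The toy Python sketch below models only that behaviour; the class name and the domain-shader callback are illustrative assumptions, not elements of the patent.

```python
class DomainVertexCache:
    """Toy model of a tessellation vertex cache: vertices referenced by several
    primitives are generated by the domain shader only once."""

    def __init__(self, domain_shader):
        self.domain_shader = domain_shader   # callback: domain point index -> vertex
        self.store = {}
        self.misses = 0

    def fetch(self, index):
        if index not in self.store:          # miss: ask the domain shading unit
            self.misses += 1
            self.store[index] = self.domain_shader(index)
        return self.store[index]

    def fetch_primitive(self, indices):
        """Resolve the vertex indices of one tessellated primitive."""
        return [self.fetch(i) for i in indices]
```

In the tiling and hidden surface removal phases the callback would generate only the position part of each vertex, while in the attribute phase it would generate all vertex components, mirroring the description above.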
9,552
11861793
DESCRIPTION OF THE EMBODIMENTS In order to enable a person having ordinary skill in the art to better understand the technical solutions of the disclosure, the disclosure will be further described in detail below together with the accompanying drawings. First Embodiment Hereinafter, the disclosure is going to be described in detail by taking FIG. 2 as an example. With reference to FIG. 1, the embodiments of the disclosure provide a method for merging surface skin three-dimensional (3D) data including the following steps. In step S100, actually-measured 3D data of a workpiece to be skinned and 3D data of a design model of the workpiece are constructed. In step S200, a normal vector, a neighborhood radius, and a position of a sphere center of each point in the design model 3D data are calculated. In step S300, the closest points in the design model 3D data to all points in the actually-measured 3D data are found. In step S400, a static closest distance and a dynamic closest distance from each point in the actually-measured 3D data to the corresponding closest point in the design model 3D data are calculated. Each of the static closest distance and the dynamic closest distance includes a Euclidean distance, a normal distance, and a surface adaptive distance calculated from the Euclidean distance and the normal distance. In step S500, an objective function of the surface adaptive distance is constructed. In step S600, the objective function is minimized, and a differential motion screw ξ is calculated from the minimization. In step S700, the actually-measured 3D data is updated based on the differential motion screw ξ, and the merging of the actually-measured 3D data and the design model 3D data is achieved. Step S100 specifically includes the following steps. In step S110, the workpiece to be skinned is scanned from a plurality of viewing angles, and a point cloud X = {x_1, x_2, . . . , x_i, . . . , x_m} of the actually-measured 3D data including m points is obtained. The number of points m is 48,542, the point cloud spacing is 0.03 mm, and each point is a 3×1 vector. In step S120, the design model is uniformly discretized, and a point cloud Y = {y_1, y_2, . . . , y_i, . . . , y_n} of the design model 3D data including n points is obtained. The number of points n is 240,959, and the point cloud spacing is 0.01 mm. The number of points m of the actually-measured 3D data and the number of points n of the design model 3D data in step S100 satisfy n > 3m. Step S200 specifically includes the following steps. In step S210, for each point y_i (i = 1, . . . , n) in the design model 3D data, a neighborhood search algorithm such as KNN is used to search for the k = 13 points closest to the point y_i in the point cloud Y, which form a point set K = {k_1, k_2, . . . , k_13}, where the point set K ⊆ Y. In step S220, spherical fitting is performed on the point set K by the least squares method. The objective function of the spherical fitting is min Σ_{j=1}^{13} (‖k_j − o_i‖ − r_i)^2, where o_i is the 3×1 sphere center and r_i is the radius. After fitting, the position of the sphere center o_i and the radius r_i of the sphere may be obtained. The unit normal vector of the point y_i is n_i = (o_i − x_i)/‖o_i − x_i‖, where x_i represents the point in the point cloud X of the actually-measured 3D data. The point cloud Y corresponds to a normal vector set N = {n_1, n_2, n_3, . . . , n_i, . . . , n_n} and a neighborhood radius set R = {r_1, r_2, r_3, . . . , r_i, . . . , r_n}.
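As an illustration of S210 and S220, the following NumPy sketch fits a sphere to the k-point neighbourhood of one model point and derives its unit normal and neighbourhood radius. Two caveats apply: the disclosure states the geometric objective min Σ(‖k_j − o_i‖ − r_i)^2, for which the closed-form algebraic fit below is only a common stand-in, and the normal here is derived from the fitted centre and the model point y_i itself, which is one possible reading of the step. Function names are illustrative.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to a neighbourhood of k points.
    points : (k, 3) array.  Returns (centre o, radius r)."""
    p = np.asarray(points, dtype=np.float64)
    # ||p - o||^2 = r^2  rewritten as  2 p.o + (r^2 - ||o||^2) = ||p||^2,
    # which is linear in the unknowns [o, c] with c = r^2 - ||o||^2.
    a = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = np.sum(p * p, axis=1)
    sol, *_ = np.linalg.lstsq(a, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

def normal_and_radius(y_i, neighbours):
    """S210-S220 for one model point: sphere fit over its k nearest
    neighbours, then the unit normal pointing from the point to the centre."""
    o_i, r_i = fit_sphere(neighbours)
    n_i = (o_i - y_i) / np.linalg.norm(o_i - y_i)
    return n_i, r_i, o_i
```

With k = 13 neighbours per point, as in the embodiment, each fit is a small 13x4 least-squares problem and the whole set N, R can be precomputed once for the design model.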
The positions of the sphere centers and the radii corresponding to the measuring points marked in FIG. 2 are shown in Table 1.

TABLE 1

Point Number | Position of Sphere Center | Radius
1 | −19.37, −25.35, 280.00 | 9774.52
2 | −13.09, 41.68, 280.00 | 8390.49
3 | 31.59, 71.02, 280.00 | 5523.10
4 | −13.34, −51.09, 254.45 | 12.73
5 | −5.14, 31.07, 232.46 | 33.27
6 | 37.12, 59.71, 227.74 | 105.08
7 | 65.86, 68.14, 229.58 | 17.79
8 | −21.11, −51.18, 211.81 | 9.32
9 | −6.18, 28.11, 187.10 | 37.38
10 | 53.01, 55.4598, 84.09 | 429.61
11 | −36.44, −50.94, 149.77 | 9.90
12 | −21.75, 7.01, 107.20 | 28.32
13 | 41.48, 39.66, 112.10 | 201.60
14 | −47.22, −49.48, 78.55 | 13.27
15 | 6.28, 26.63, 61.12 | 42.37
16 | 75.63, 27.91, 66.34 | 14.67
17 | −54.42, −44.85, 23.73 | 19.11
18 | −35.98, −9.95, 19.42 | 66.08
19 | 23.95, 25.97, 16.49 | 72.83
20 | 76.02, 18.32, 16.56 | 12.74
21 | −31.20, 1.80, 2.78 | 9.24
22 | 40.65, 21.96, 1.60 | 9.96
23 | 15.78, −10.62, −0.00 | 153.70
24 | −9.82, −55.00, 0.00 | 5.23
25 | 48.42, −55.00, 0.00 | 3.78
26 | −30.82, −55.00, −9.19 | 852.51
27 | 47.10, −55.00, −8.66 | 899.17
28 | 9.82, −39.28, −31.33 | 92.84
29 | 8.68, −24.28, −48.25 | 46.40
30 | 50.23, −19.29, −45.00 | 39.59

The number of points in the point set K in step S210 satisfies: 5 < k < 50. Step S300 is specifically the following. A method such as a binary tree method, an octree method, or a K-dimensional tree (KD-tree) search algorithm method is used to find the closest points in the point cloud Y of the design model 3D data for each measuring point x_i in the point cloud X of the actually-measured 3D data. The closest points are denoted as y_i′, y_i′ ∈ Y, the unit normal vector corresponding to y_i′ is n_i′, n_i′ ∈ N, the corresponding neighborhood radius is r_i′, r_i′ ∈ R, and the normal vector n_j and the radius r_j correspond to the point y_j. Let y_i′ = y_j, n_i′ = n_j, and r_i′ = r_j; then the points y_i′ form a set of the closest points Y′ = {y_1′, y_2′, . . . , y_i′, . . . , y_m′}. Each of the points in the point cloud X is in one-to-one correspondence with the points in the closest point set Y′. N′ = {n_1′, n_2′, . . . , n_m′} is the normal vector set of the closest points, and R′ = {r_1′, r_2′, . . . , r_m′} is the neighborhood radius set of the closest points. The elements in the normal vector set and the neighborhood radius set are in one-to-one correspondence with the points in the measurement data. Step S400 specifically includes the following steps. In step S410, for each measuring point x_i in the point cloud X, the static closest distance from the measuring point x_i to the point y_i′ is calculated, and three forms of distances are included, namely the Euclidean distance d_i_e0 = ‖x_i − y_i′‖, the normal distance d_i_n0 = (x_i − y_i′)^T n_i′, and a distance d_i_r0 considering neighborhood feature reconstruction. The distance considering neighborhood feature reconstruction is the surface adaptive distance. When the neighborhood of x_i is a concave surface, d_i_r0 = r_i′ − √(r_i′*r_i′ − 2*r_i′*d_i_n0 + d_i_e0*d_i_e0). When the neighborhood of x_i is a convex surface, d_i_r0 = √(r_i′*r_i′ − 2*r_i′*d_i_n0 + d_i_e0*d_i_e0) − r_i′. In step S420, for each measuring point x_i in the point cloud X, the dynamic closest distance from the measuring point x_i to the point cloud Y is calculated. A 6×1 differential motion screw is defined as ξ; when the differential motion screw ξ is applied at the measuring point x_i, the updated position is x_i+ = x_i*e^[ξ], and the dynamic closest distance from the measuring point x_i+ to the point cloud Y is calculated. Three forms of distances are included, namely the Euclidean distance, the normal distance, and the distance considering the neighborhood feature reconstruction.
The dynamic Euclidean distance may be expressed as d_i_e = ‖x_i − y_i′ + E_i*ξ‖, where E_i = [I_3×3, −x̂_i] is a 3×6 coefficient matrix, I_3×3 represents a 3×3 unit matrix, and x̂_i represents the antisymmetric matrix of the point x_i = [u_i, v_i, w_i]^T, x̂_i = [0, −w_i, v_i; w_i, 0, −u_i; −v_i, u_i, 0]. The dynamic normal distance may be expressed as d_i_n = d_i_n0 + N_i*ξ, where N_i = [n_i^T, (x_i × n_i)^T] is a 1×6 coefficient matrix. The dynamic distance considering the neighborhood feature reconstruction is d_i_r = r_i′ − √(r_i′*r_i′ − 2*r_i′*d_i_n + d_i_e). Step S500 is specifically the following. The objective function is F = Σ_{i=1}^{n} d_i_r^2, and a second-order Taylor expansion is performed on the objective function, which is expressed as F = Σ_{i=1}^{n} (d_i_r0^2 + 2*R_i*ξ + ξ^T*(E_i^T*E_i)*ξ), where R_i = r_i′*N_i + (x_i − y_i′)^T*E_i. Step S700 is specifically the following. According to the point cloud X, the closest point data Y′, the unit normal vector set N′ of the closest points, and the neighborhood radius set R′ of the closest points, F is minimized, the differential motion screw ξ in step S600 is solved, and the merging of the 3D data is achieved. Denote ξ = −(Σ_{i=1}^{m} K_i)^(−1) * (Σ_{i=1}^{m} b_i^T), where b_i = (1 − r_i′/d_i_r0)*R_i, and K_i = E_i^T*E_i − r_i′*E_i^T*E_i/d_i_r0 + r_i′*R_i^T*R_i/d_i_r0^3. The point cloud X of the actually-measured 3D data is updated by means of the differential motion screw ξ and the formula x_i′ = x_i*e^[ξ], where x_i′ represents the i-th point of the updated point cloud X, and i = 1, 2, . . . , m. The merging of the actually-measured 3D data and the design model 3D data is thereby achieved, and the updated point cloud X is the stitched actually-measured 3D data. Second Embodiment The method for merging the surface skin 3D data further includes the following steps. In step S800, the closest points in the model point set Y are calculated in turn for the points x_i (i = 1, 2, . . . , m) in the point cloud X of the updated actually-measured 3D data. The closest point is denoted as y_i′, y_i′ ∈ Y. The unit normal vector corresponding to y_i′ is n_i′ ∈ N, and the corresponding neighborhood radius is r_i′ ∈ R. The points y_i′ form the set Y′ = {y_1′, y_2′, . . . , y_i′, . . . , y_m′} of the closest points, the vectors n_i′ form the unit normal vector set N′ = {n_1′, n_2′, . . . , n_i′, . . . , n_m′} of the closest points, and the radii r_i′ form the neighborhood radius set R′ = {r_1′, r_2′, r_3′, . . . , r_i′, . . . , r_m′} of the closest points. The points in the measured point cloud X are in one-to-one correspondence with the points in the closest point set Y′, the normal vectors in the set N′, and the radii in the set R′. The mean square error among the points is calculated: std = Σ_{i=1}^{m} (d_i_r0 − d̄)^2/m, where d̄ = Σ_{i=1}^{m} d_i_r0/m. In step S900, it is determined whether the mean square error is less than a given error; steps S300 to S900 are repeated if the mean square error is greater than or equal to the given error; the stitched actually-measured 3D data is outputted if the mean square error is less than the given error (e.g., 0.05), and the merging is completed. The results obtained through the calculation provided in this embodiment are shown in FIG. 3. FIG. 3 shows the comparison of the merging rotation errors of different methods under different Gaussian noise conditions. Herein, both the ICP and Go-ICP merging methods are based on the Euclidean distance, the tangent distance minimization (TDM) merging method is based on the normal distance, and the NDT method performs merging based on the normal distribution transformation.
As can be seen from the results, with the change of Gaussian noise, the merging rotation error of the method provided by the disclosure is the smallest. In the disclosure, specific examples are used to illustrate the principles and implementation modes of the disclosure, and the descriptions of the above embodiments are only used to help understand the core idea of the disclosure. It should be pointed out that improvements and modifications can be made to the disclosure by a person having ordinary skill in the art without departing from the principle of the disclosure, and these improvements and modifications also fall within the protection scope of the claims of the disclosure.
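For readers who want to trace S410 through S700 numerically, the following hedged NumPy/SciPy sketch performs one iteration: it builds the linearised normal equations for the surface adaptive distance, solves for the differential motion screw ξ, and applies the update to the measured points. It assumes a concave neighbourhood for every point, nonzero d_i_r0, the screw ordered as [v; ω] (translation first, matching E_i = [I, −x̂_i]), and a homogeneous-coordinate reading of the update x_i′ = x_i*e^[ξ]; these are assumptions layered on the formulas above, not a statement of the disclosure's exact implementation.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """3x3 antisymmetric (hat) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def solve_screw(points_x, closest_y, normals, radii):
    """One linearised step of S500-S700: accumulate K_i and b_i over all
    measured points and solve for the 6x1 screw xi = -(sum K_i)^-1 (sum b_i)."""
    K = np.zeros((6, 6))
    b = np.zeros(6)
    for x_i, y_c, n_c, r_c in zip(points_x, closest_y, normals, radii):
        diff = x_i - y_c
        d_e0 = np.linalg.norm(diff)
        d_n0 = float(diff @ n_c)
        # surface adaptive distance, concave case assumed; must be nonzero
        d_r0 = r_c - np.sqrt(r_c * r_c - 2.0 * r_c * d_n0 + d_e0 * d_e0)
        E_i = np.hstack([np.eye(3), -hat(x_i)])          # 3x6
        N_i = np.hstack([n_c, np.cross(x_i, n_c)])       # 1x6
        R_i = r_c * N_i + diff @ E_i                     # 1x6
        b += (1.0 - r_c / d_r0) * R_i
        K += (E_i.T @ E_i
              - r_c * (E_i.T @ E_i) / d_r0
              + r_c * np.outer(R_i, R_i) / d_r0 ** 3)
    return -np.linalg.solve(K, b)

def apply_screw(points_x, xi):
    """Update the measured cloud by the rigid motion exp([xi]), applied to the
    points in homogeneous coordinates."""
    v, w = xi[:3], xi[3:]
    twist = np.zeros((4, 4))
    twist[:3, :3] = hat(w)
    twist[:3, 3] = v
    T = expm(twist)
    x_h = np.hstack([points_x, np.ones((len(points_x), 1))])
    return (x_h @ T.T)[:, :3]
```

Iterating solve_screw and apply_screw until the mean square error of S800 drops below the given error (e.g., 0.05) mirrors the loop of S300 to S900 described above.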
11,096
11861794
DETAILED DESCRIPTION The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, it will be apparent to those skilled in the art that the subject technology may be practiced without these specific details. Like or similar components are labeled with identical element numbers for ease of understanding. In general, and referring to the Figures, embodiments of the subject technology provide an electronic digital environment for memorization of environments, applicable as memory palaces for memorization techniques. In some embodiments, the environments may be pre-made. The term “digital” may refer to an electronic application which may involve two or three dimensional displays. As will be discussed herein, embodiments will refer to “digital environments”, which may include electronic scenes displayed on electronic displays of computing devices and may include two dimensional graphics, augmented reality, virtual reality, and immersive virtual reality. It should be appreciated that aspects of the subject technology provide more than merely a memorization training program. The subject technology incorporates computing elements to generate a new technology to provide memory training for individuals. The underlying process creates environments that an end user engages with to develop and strengthen one's memory. The environments are digital and in some embodiments, use virtual reality or augmented reality to create the different environments (for example, rooms, scenes, or the like). Some embodiments provide the subject technology as a software application which may generate new rooms on request for the user to continually train memorization. Object generation and placement may include a computing engine to determine object location which may be repetitive for some rooms or randomly rearranged in other rooms. In addition, some embodiments may incorporate animation with some objects as a means to invoke memory association with the object as a user is memorizing an environment. As should be understood, such environment generation, object generation, and recreation of environments as will be described further herein are not reasonably performed manually. The underlying processes rely on technology to provide the digital environments and recreation of the environments. The physical counterpart would require other people to actively engage in being present as the user is memorizing the room, actively removing objects, and actively remembering themselves whether objects are replaced in their correct positions. In the context of generating new rooms on request by a user, such a physical endeavor is impractical and unreasonable given the limited number of rooms physically available to use or the physical limitations in constantly reconstructing new rooms. Referring now toFIG.1, an example architecture100for a virtual reality memory training system is shown. 
The architecture 100 includes a network 106 that allows an end user computing device 102 to communicate with other computing devices, as well as other elements that are connected to the network 106, such as an item object data source 112, a virtual reality (VR) server 116, and the cloud 120. In the context of providing an online memorization environment session, the computing device 102 may provide requests/input data (represented as data messages 103) which are processed by the VR memorization room engine 110. Generally, the VR memorization room engine 110 makes requests to the object data source 112 for objects and their associated data 113 (for example, object type, object animation, location position, etc.) and builds environments using the object data 113. In some embodiments, the VR memorization room engine 110 may generate rooms with anchors indicating locations for object placement. The objects and their associated anchors may be permanent or session dependent as indicated in the object data 113. The network 106 may be, without limitation, a local area network ("LAN"), a virtual private network ("VPN"), a cellular network, the Internet, or a combination thereof. For example, the network 106 may include a mobile network that is communicatively coupled to a private network, sometimes referred to as an intranet, that provides various ancillary services, such as communication with various application stores, libraries, and the Internet. In cloud based embodiments, resources may be gathered from different computing devices connected to the cloud network. In an illustrative embodiment, users may interface with the architecture 100 through a VR platform, represented by computing device 102. While a generic mobile computing device is shown, it will be understood that other computing devices, including interactive head worn modules, may be used. In some embodiments, a software application may provide a user interface (UI) through which the user may perform memorization training using aspects of the subject technology. While the object data source 112 and the VR memorization room engine 110 are illustrated by way of example to be on different platforms, it will be understood that in various embodiments, the object data source 112 and the virtual reality server 116 may be combined. In other embodiments, these computing platforms may be implemented by virtual computing devices in the form of virtual machines or software containers that are hosted in a cloud 120, thereby providing an elastic architecture for processing and storage. Referring now to FIGS. 2A and 2B, a process 200 for providing a memorization training experience through a digital environment is shown. In an exemplary embodiment, the user 201 accesses memorization training through a computing device which is loaded with a software program. In some embodiments, the memorization training uses a mind palace technique which generates a plurality of digital rooms. The process generates an assortment of digital (sometimes three dimensional) objects which are placed in tracked or pre-defined locations of digital rooms. In general, the user practices associating the digital objects and their locations in a room. The user may engage in the subject digital training through the computing device, which may be, for example, a mobile smart phone, a personal computer, a laptop, a tablet computer, or a programmable consumer device (for example, a virtual reality headset or room, smart wearable glasses, watches, or jewelry, a smart television, entertainment hub, or brain-computer interface (BCI)).
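As a rough sketch of the data flow just described, where the room engine requests objects and their associated data 113 from the object data source and tethers them to room anchors, the following minimal Python model may help. Every name, field and the one-object-per-anchor strategy here is an illustrative assumption rather than part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class ObjectData:
    """Stand-in for object data 113: type, optional animation, and the anchor
    position the object is tethered to."""
    object_type: str
    animation: Optional[str]
    position: Tuple[float, float, float]

@dataclass
class Room:
    name: str
    objects: List[ObjectData]

def build_room(name: str, anchors: List[Tuple[float, float, float]],
               fetch_object: Callable[[int], ObjectData]) -> Room:
    """Assemble one memorization room: one object per anchor, with object data
    obtained from a data-source callback (a stand-in for requests to 112)."""
    placed = []
    for i, anchor in enumerate(anchors):
        obj = fetch_object(i)
        placed.append(ObjectData(obj.object_type, obj.animation, anchor))
    return Room(name=name, objects=placed)
```

Whether anchors are permanent or regenerated per session would simply change how the anchor list passed to build_room is produced.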
In the context of virtual reality or augmented reality, the digital rooms and objects may be virtual rooms and/or objects. In an augmented reality setting, a user may see a real life room with virtual objects generated and shown in different physical locations of the room. In an immersive virtual reality setting, the entire room may be generated, and the virtual objects may be positioned at pre-determined markers of the virtual room. From here on, use of the term "virtual" may refer to any digital environment described above, which may include two and three dimensional environments. The process 200 may generally be grouped into two main steps: encoding 205 and recreation 220. Encoding includes blocks 206, 208, and 210. The recreation phase includes the remaining blocks until the process of a memorization session is completed. To assist in illustrating the process 200, FIGS. 3-10 show a sequence of scenarios the user 201 may experience while engaged in a training session. Accordingly, reference to the process steps will be made concurrently with the accompanying scenarios to better illustrate the subject technology. Generally, once the user 201 engages with a computing device, the user 201 appears in a virtual environment that may be divided into several "subenvironments". In some embodiments, the virtual environment may resemble a house (environment) or other building structure with rooms (subenvironments). Other embodiments may generate an outdoors environment with any kind of separation into sub-zones (subenvironments). As an example, the user accesses the virtual environment, an apartment with three subenvironments: two rooms with objects inside (hereinafter "rooms") applicable for use in the memory techniques, and a technical subenvironment (hereinafter "technical location"), which may be, for example, a hallway (lobby) or an empty room for technical purposes (e.g., free space for manipulating objects). The user proceeds to the "encoding" stage 205. At this stage the user gains access to a first room as the VR memorization room engine 110 generates and presents 206 a virtual room to the user 201. See, for example, room scenario "1" in FIG. 3. Other rooms may remain closed or inaccessible until the current room is recreated by the user. This stage includes a set of actions from the user. The actions may include: exploring the room; remembering 208 the objects inside the room; interacting with objects (which may include, for example, activating/stopping an animation of the object for better memorization); and confirming (or indicating) 210 the intent to recreate the room, which launches the "recreation" stage 220. See room scenarios 2 and 3 in FIGS. 4 and 5. In the recreation stage 220, objects disappear from their spots in the previously explored room (for example, the VR memorization room engine 110 removes 222 the objects from display). Objects reappear 224 in random places, in a technical location, or in the user's virtual inventory/vault and are available to be picked up (or selected from the inventory) by the user 201. The user 201 then collects these objects (or selects them from the virtual inventory) and tries to place them back in their initial corresponding spots as arranged in the previously explored room. The VR memorization room engine 110 receives 226 the user selection of an object. As the user 201 manipulates the object for placement, the VR memorization room engine 110 receives 228 the location of object placement.
Objects may be placed in any direction/sequence within a space, including, for example, both clockwise and counterclockwise placements, as well as non-sequential placement of items back in their corresponding spots. The VR memorization room engine 110 may determine 230 whether an object was placed back in its original location as when the user 201 was first presented the environment to memorize. When an incorrect spot is selected for the object (see, for example, room scenario "4" in FIG. 6 where object 4 was incorrectly placed), all previously correctly placed objects may vanish and reappear in the technical location or directly inside the digital inventory, available to be picked up again. In some embodiments, when an incorrect spot is selected for the object (see, for example, room scenario "4" in FIG. 6 where object 4 was incorrectly placed), the previously correctly placed objects may stay in place, and the user may be required to try to place the object again, repeating blocks 226 and 228. The user 201 may repeat the recreation until all objects are placed in the correct corresponding spots, which the VR memorization room engine 110 determines 230 as a successful recreation of the room. See, for example, room scenario "5" in FIG. 7. In some embodiments, once a room is successfully recreated, the VR memorization room engine 110 determines 235 whether a next room becomes available for the encoding and recreation. See room scenario "5" again, which shows a door (or other obstruction) being removed that permits the user access to an adjoining room. The VR memorization room engine 110 may, in some embodiments, register 290 the successful memorization of a room. The VR memorization room engine 110 determines 240 whether all subenvironments have been completed. In some embodiments, the VR memorization room engine 110 determines 245 whether the last room was the final subenvironment available in the overall environment. If not, the VR memorization room engine 110 proceeds 250 to the next available subenvironment and repeats the processes of user selection, object placement, and confirmation of room memorization as needed until the last subenvironment is reached and completed. When the VR memorization room engine 110 determines 240 that all rooms (subenvironments) in the available overall environment are successfully recreated (see, for example, room scenario "6" in FIG. 8), the user may perform 260 a final recreation. See, for example, room scenario "7" in FIG. 9. If the overall environment is too large, it may be divided into clusters of rooms, and the final recreation will be applied not to the whole environment but to the cluster. A "final recreation" may be, for example, the recreation of all rooms in the environment (or cluster) at once. All previously memorized objects from all rooms are removed from their spots in each room and appear in the technical location or in the digital inventory. The user may collect the objects and place them in their initial spots in all rooms (to recreate the whole environment or cluster) at once. The correct recreation of the whole environment indicates the end of the required user actions. The virtual environment has been successfully "transferred" inside the user's mind as a virtual mind palace. See, for example, room scenario "8" in FIG. 10. In embodiments where the environment consists of a single room, a final recreation stage may be unnecessary. Some embodiments may include a user builder mode.
A user builder mode may include a user interface configured to allow users to construct their own rooms or zones with digital objects tethered to locations in the user's construct. The locations may be pre-defined by the underlying software or may be user marked. Operation of a user-built room/zone may operate according to the same process as described above where users may retrieve objects from a technical location or digital inventory and place them in the user-built room/zone to recreate the user-built scene. User built rooms/zones may be made available to other users thus expanding the available memorization training content for all users. As discussed above, functions relating to virtual reality memorization training of the subject disclosure can be performed with the use of computing devices connected for data communication via wireless or wired communication, as shown inFIG.1.FIG.11is a functional block diagram illustration of a particularly configured computer hardware platform that can communicate with various networked components, such as the computing device102, VR server116, or the cloud120, etc. In particular,FIG.11illustrates a network or host computer platform1100, as may be used to implement a server, such as the VR server116ofFIG.1. The computer platform1100may include a central processing unit (CPU)1104, a hard disk drive (HDD)1106, random access memory (RAM) and/or read only memory (ROM)1108, a keyboard1110, a mouse1112, a display1114, and a communication interface1116, which are connected to a system bus1102. In one embodiment, the HDD1106, has capabilities that include storing a program that can execute various processes, such as the VR memorization room engine110, in a manner described herein. The VR memorization room engine110may have various modules configured to perform different functions. For example, the VR memorization room engine110may include an environment generator module1142for creating environments and subenvironments, an object generator module1144for creating virtual objects, an object animation module1146for creating animation effects for each object, an object placement module1148configured to determine locations in environments, generate anchors, and associate objects with locations/anchors, and a memorization judge engine1150configured to determine correct placement of objects in environments based on object/location criteria association. As will be appreciated by one skilled in the art, aspects of the disclosed invention may be embodied as a system, method or process, or computer program product. Accordingly, aspects of the disclosed invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Furthermore, aspects of the disclosed invention may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon. Any combination of one or more computer readable media may be utilized. In the context of this disclosure, a computer readable storage medium may be any tangible or non-transitory medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Aspects of the disclosed invention are described below with reference to block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. The previous description provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but is to be accorded the full scope consistent with the language claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the invention. A phrase such as an “aspect” does not imply that such aspect is essential to the subject technology or that such aspect applies to all configurations of the subject technology. A disclosure relating to an aspect may apply to all configurations, or one or more configurations. An aspect may provide one or more examples. A phrase such as an aspect may refer to one or more aspects and vice versa. A phrase such as an “embodiment” does not imply that such embodiment is essential to the subject technology or that such embodiment applies to all configurations of the subject technology. A disclosure relating to an embodiment may apply to all embodiments, or one or more embodiments. An embodiment may provide one or more examples. A phrase such an embodiment may refer to one or more embodiments and vice versa. A phrase such as a “configuration” does not imply that such configuration is essential to the subject technology or that such configuration applies to all configurations of the subject technology. A disclosure relating to a configuration may apply to all configurations, or one or more configurations. A configuration may provide one or more examples. A phrase such a configuration may refer to one or more configurations and vice versa. The word “exemplary” is used herein to mean “serving as an example or illustration.” Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
20,690
11861795
DETAILED DESCRIPTION Reference will now be made in detail to specific example embodiments for carrying out the inventive subject matter of the present disclosure. In the following description, specific details are set forth in order to provide a thorough understanding of the subject matter. It shall be appreciated that embodiments may be practiced without some or all of these specific details. Disclosed are anamorphosis systems to generate and cause display of anamorphic media (e.g., distorted text or other anamorphically distorted images) within a presentation of a space (e.g., a room or a defined environment). The anamorphosis systems are configured to identify a set of features of a space (e.g., walls and objects), determine relative positions of the set of features, determine a perspective of the mobile device within the space based on the relative positions of the set of features in the space, retrieve anamorphic media based on the location of the mobile device, and apply the anamorphic media to a presentation of the space at the mobile device. For example, in a bare rectangular room, an example anamorphosis system on a mobile device may use an image sensor to capture one or more images of the walls of the room, identify the walls and their relative positions, and then apply anamorphic text as an augmented reality addition to images output on a display of the mobile device. The anamorphic text may, for example, be presented clearly on the display when the mobile device is in one perspective (e.g., phone position) and displayed as distorted to the point of being unreadable from another perspective (e.g., position of the mobile device). Anamorphic media may include media items such as images and videos, configured such that the media items are only visible from one or more specified perspectives. The anamorphic media may also include a stylized text string projected onto surfaces of a space such that the stylized text string is correctly displayed when viewed through a user device from a specified perspective. The anamorphic media may be generated through a process of distorting a media item by stretching, expanding, splitting, projecting, or otherwise altering the media item such that the media item may be revealed to a user from a single vantage point or through a viewing apparatus or surface (e.g., a cylindrical, conical, or flat mirror). Thus, if the anamorphic media is viewed from a perspective other than an intended perspective, the anamorphic media may appear distorted or unclear. The anamorphosis system may include a video and image capture system to perform functionality that includes at least recording and presenting images of a space, and a graphical interface configured to display a presentation of the space. In some example embodiments, to apply the anamorphic media to a presentation of a space, the anamorphosis system generates a surface model of a space. The surface model of the space is a topographical representation of the space that includes three-dimensional depictions of features, surfaces, contours, and shapes that make up the space. For example, the surface model may include a wire-mesh representation of a three-dimensional view of the space. To generate the surface model, the anamorphosis system may apply various computer vision techniques to digital images and videos of the space. For example, the anamorphosis system may acquire, process, and analyze a digital image and/or video of the space to generate the surface model. 
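As a rough illustration of the pipeline summarized above (identify features, determine their relative positions and the device perspective, retrieve media for the location, transform, and display), the following Python sketch strings the steps together. Every helper passed in (identify_features, estimate_perspective, warp_media, and the media_store and renderer objects) is an assumed interface standing in for the computer-vision and storage layers; none of these names comes from the disclosure.

```python
# Minimal per-frame sketch of the anamorphosis display pipeline, under the
# assumption that the computer-vision and storage helpers are supplied.
def render_anamorphic_frame(frame, device_location, media_store, renderer,
                            identify_features, estimate_perspective, warp_media):
    features = identify_features(frame)                  # walls, corners, markers
    positions = {f.id: f.relative_position for f in features}
    perspective = estimate_perspective(positions)        # device pose in the space
    media = media_store.lookup(device_location)          # anamorphic media for this place
    if media is None:
        return frame                                     # nothing assigned here
    warped = warp_media(media, perspective)              # distort for current viewpoint
    return renderer.composite(frame, warped)
```

The same loop would run on each captured frame so that the rendered anamorphic media tracks the device as its perspective changes.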
In further embodiments, the anamorphosis system may access a surface model database that includes surface models of spaces, organized based on geolocation coordinates of corresponding locations depicted by the surface models. The anamorphosis system determines a location of a mobile device (e.g., based on geographic position sensors), and retrieves the corresponding surface model from the surface model database based on the location. The anamorphosis system identifies a set of features of the space to determine a perspective of the mobile device. The set of features may include distinguishing points or features such as contours in the space, markings, or other features that may be used as graphical markers. For example, the distinguishing points or features may include landmarks as well as identifiable objects such as windows and/or doors. Having identified the set of features of the space, the anamorphosis system determines relative positions of each of the distinguishing points or features relative to one another. For example, the relative positions may indicate distances between the distinguishing points or features. The anamorphosis system thereby determines a perspective of the mobile device based on the relative positions of the distinguishing points or features. The perspective indicates a representation of the space relative to the mobile device. In some example embodiments, the anamorphosis system retrieves anamorphic media to be applied to the presentation of the space, based on the location of the mobile device. In some example embodiments, the anamorphic media may only be available to one or more specified users (e.g., based on user identifiers). For example, upon detecting a mobile device at a location the anamorphosis system may retrieve an anamorphic media assigned to the location based on location data coordinates. Upon retrieving the anamorphic media to be displayed in the presentation of the space, the anamorphosis system applies transformations to the anamorphic media based on the perspective of the mobile device, and displays the transformed anamorphic media in the presentation of the space. In this way, the anamorphic media may appear differently based on the perspective of the mobile device. Consider an illustrative example from a user perspective. A user of a mobile device causes display of a space corresponding to a current location of the mobile device (e.g., the user points a camera of the mobile device at a space adjacent to them). Based on location data retrieved from the mobile device, the anamorphosis system determines the location of the mobile device, and in response retrieves corresponding anamorphic media to be displayed in the presentation of the space. The anamorphosis system receives a set of features of the space and relative positions of the set of features within the space from the mobile device, and determines a perspective of the mobile device based on the set of features and the relative positions. The anamorphosis system thereby applies a transformation to the anamorphic media based on the perspective of the mobile device. The anamorphosis system displays the transformed anamorphic media in the presentation of the space, for viewing by the user. As the user moves within the space, the anamorphosis system recalculates the perspective of the user, and updates the display of the anamorphic media within the presentation of the space in real-time. 
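The geolocation-keyed retrieval described above, whether of surface models or of anamorphic media assigned to a location, can be pictured as a nearest-neighbor lookup over stored coordinates. The SurfaceModelDB class, the haversine helper, and the 25-meter radius below are illustrative assumptions, not details from the disclosure.

```python
# Sketch of a geolocation-keyed surface-model lookup, assuming a flat in-memory
# store; a production system would use a spatial index instead.
import math

class SurfaceModelDB:
    def __init__(self):
        self._models = []   # list of (lat, lon, surface_model) tuples

    def add(self, lat: float, lon: float, surface_model) -> None:
        self._models.append((lat, lon, surface_model))

    def lookup(self, lat: float, lon: float, radius_m: float = 25.0):
        """Return the nearest stored surface model within radius_m, if any."""
        best, best_d = None, float("inf")
        for m_lat, m_lon, model in self._models:
            d = _haversine_m(lat, lon, m_lat, m_lon)
            if d < best_d:
                best, best_d = model, d
        return best if best_d <= radius_m else None

def _haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```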
Thus, as the user views the anamorphic media item within the presentation of the space, the user may adjust their perspective until the anamorphic media is correctly displayed. In some example embodiments, users may generate and tag their own anamorphic media to locations. A user at a location may generate anamorphic media to be displayed in a space, and assign the anamorphic media to a position in the space. For example, the user may specify the position of the anamorphic media by selecting landmarks and/or distinguishing features of the space, and specifying a viewing perspective of the anamorphic media. The anamorphosis system may store the anamorphic media in a database to be retrieved at a later time. FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple client devices102, each of which hosts a number of applications including a messaging client application104. Each messaging client application104is communicatively coupled to other instances of the messaging client application104and a messaging server system108via a network106(e.g., the Internet). Accordingly, each messaging client application104is able to communicate and exchange data with another messaging client application104and with the messaging server system108via the network106. The data exchanged between messaging client applications104, and between a messaging client application104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data which may include or be used as anamorphic media). The messaging server system108provides server-side functionality via the network106to a particular messaging client application104. While certain functions of the messaging system100are described herein as being performed by either a messaging client application104or by the messaging server system108, it will be appreciated that the location of certain functionality either within the messaging client application104or the messaging server system108is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108, but to later migrate this technology and functionality to the messaging client application104where a client device102has a sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client application104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client application104. In some embodiments, this data includes, message content, client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, and live event information, as examples. In other embodiments, other data is used. Any such data may be used as part of or to generate anamorphic media in accordance with different embodiments described herein. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client application104. Turning now specifically to the messaging server system108, an Application Program Interface (API) server110is coupled to, and provides a programmatic interface to, an application server112. 
The application server112is communicatively coupled to a database server(s)118, which facilitates access to a database(s)120in which is stored data associated with messages processed by the application server112. Dealing specifically with the Application Program Interface (API) server110, this server receives and transmits message data (e.g., commands and message payloads) between the client device102and the application server112. Specifically, the Application Program Interface (API) server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client application104in order to invoke functionality of the application server112. The Application Program Interface (API) server110exposes various functions supported by the application server112, including account registration, login functionality, the sending of messages, via the application server112, from a particular messaging client application104to another messaging client application104, the sending of media files (e.g., images or video) from a messaging client application104to the messaging server application114, and for possible access by another messaging client application104, the setting of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the adding and deletion of friends to a social graph, the location of friends within a social graph, opening and application event (e.g., relating to the messaging client application104). The application server112hosts a number of applications and subsystems, including a messaging server application114, an image processing system116, a social network system122, and an anamorphosis system124. The messaging server application114implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client application104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available, by the messaging server application114, to the messaging client application104. Other processor and memory intensive processing of data may also be performed server-side by the messaging server application114, in view of the hardware requirements for such processing. The application server112also includes an image processing system116that is dedicated to performing various image processing operations, typically with respect to images or video received within the payload of a message at the messaging server application114. The social network system122supports various social networking functions services, and makes these functions and services available to the messaging server application114. To this end, the social network system122maintains and accesses an entity graph304within the database(s)120. Examples of functions and services supported by the social network system122include the identification of other users of the messaging system100with which a particular user has relationships or is “following,” and also the identification of other entities and interests of a particular user. 
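The functions the API server110exposes, as enumerated above, can be pictured as a thin interface along the following lines. The method names and signatures are assumptions made for illustration; the disclosure lists the capabilities but not concrete endpoints.

```python
# Illustrative stub interface for the capabilities attributed to the API
# server 110 above; names and signatures are assumptions.
class MessagingApi:
    def register_account(self, username: str, password: str) -> str: ...
    def login(self, username: str, password: str) -> str: ...
    def send_message(self, sender_id: str, recipient_id: str, payload: dict) -> str: ...
    def upload_media(self, sender_id: str, media_bytes: bytes, media_type: str) -> str: ...
    def set_story(self, user_id: str, media_ids: list[str]) -> str: ...
    def get_stories(self, user_id: str) -> list[str]: ...
    def get_friends(self, user_id: str) -> list[str]: ...
    def add_friend(self, user_id: str, friend_id: str) -> None: ...
    def remove_friend(self, user_id: str, friend_id: str) -> None: ...
    def get_friend_locations(self, user_id: str) -> dict[str, tuple[float, float]]: ...
```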
The anamorphosis system124provides functionality to generate and cause display of anamorphic media within a presentation of a space. The application server112is communicatively coupled to one or more database server(s)118, which facilitates access to a database(s)120in which is stored data associated with messages processed by the messaging server application114. FIG.2is block diagram illustrating further details regarding the messaging system100, according to example embodiments. Specifically, the messaging system100is shown to comprise the messaging client application104and the application server112, which in turn embody a number of some subsystems, namely an ephemeral timer system202, a collection management system204and an annotation system206. The ephemeral timer system202is responsible for enforcing the temporary access to content permitted by the messaging client application104and the messaging server application114. To this end, the ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a SNAPCHAT story), selectively display and enable access to messages and associated content such as anamorphic media via the messaging client application104. Further details regarding the operation of the ephemeral timer system202arc provided below. The collection management system204is responsible for managing collections of media (e.g., collections of text, image video and audio data). In some examples, a collection of content (e.g., messages, including anamorphic media, images, video, text and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content such as anamorphic media displayed at specific locations relating to a music concert may be made available as a “story” for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client application104. The collection management system204furthermore includes a curation interface208that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface208enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain embodiments, compensation may be paid to a user for inclusion of user generated content into a collection. In such cases, the curation interface208operates to automatically make payments to such users for the use of their content. The annotation system206provides various functions that enable a user to annotate or otherwise modify or edit media content associated with a message. For example, the annotation system206provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The annotation system206operatively supplies a media overlay (e.g., a SNAPCHAT filter) to the messaging client application104based on a geolocation of the client device102. 
In another example, the annotation system206operatively supplies a media overlay to the messaging client application104based on other information, such as, social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include anamorphic media, pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying, or projecting an anamorphic media item over a presentation depicting a space. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. For example, the media overlay including text that can be overlaid on top of a photograph or video stream generated taken by the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the annotation system206uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database(s)120and accessed through the database server(s)118. In one example embodiment, the annotation system206provides a user-based publication platform that enables users to select a geolocation on a map, and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay should be offered to other users. The annotation system206generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In another example embodiment, the annotation system206provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the annotation system206associates the media overlay of a highest bidding merchant with a corresponding geolocation for a predefined amount of time FIG.3is a schematic diagram300illustrating data which may be stored in the database(s)120of the messaging server system108, according to certain example embodiments. While the content of the database(s)120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database(s)120includes message data stored within a message table314. The entity table302stores entity data, including an entity graph304. Entities for which records are maintained within the entity table302may include individuals, corporate entities, organizations, objects, places, events etc. Regardless of type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph304furthermore stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization) interested-based or activity-based, merely for example. The database(s)120also stores annotation data, in the example form of filters, in an annotation table312. 
Filters for which data is stored within the annotation table312are associated with and applied to videos (for which data is stored in a video table310) and/or images (for which data is stored in an image table308). Filters, in one example, are overlays (e.g., anamorphic media items) that are displayed as overlaid on an image or video during presentation to a recipient user. For example, the overlay may include an anamorphic media item displayed within a presentation of a space, such that the anamorphic media item appears to be projected over a set of three-dimensional surfaces of a space, following the contours of the surfaces of the space. Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by the messaging client application104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters), which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client application104, based on geolocation information determined by a GPS unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client application104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include the current temperature at a specific location, the current speed at which a sending user is traveling, the battery life of a client device102, or the current time. Other annotation data that may be stored within the image table308is so-called “lens” data. A “lens” may be a real-time special effect and sound that may be added to an image or a video. As mentioned above, the video table310stores video data which, in one embodiment, is associated with messages for which records are maintained within the message table314. Similarly, the image table308stores image data associated with messages for which message data is stored in the entity table302. The entity table302may associate various annotations from the annotation table312with various images and videos stored in the image table308and the video table310. A story table306stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a SNAPCHAT story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table302). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client application104may include an icon that is user selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. 
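The table relationships described above (entities, the entity graph, annotations/filters, images, videos, and stories) can be sketched with simple record types. The field names beyond the table names are assumptions, and in practice these would be rows in database(s)120rather than in-memory objects.

```python
# Rough sketch of the table relationships described above; field names beyond
# the table names are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class EntityRow:              # entity table 302
    entity_id: str
    entity_type: str

@dataclass
class EntityEdge:             # entity graph 304
    from_entity: str
    to_entity: str
    relationship: str         # social, professional, interest-based, ...

@dataclass
class AnnotationRow:          # annotation table 312 (filters, lenses, anamorphic overlays)
    annotation_id: str
    kind: str                 # "geo-filter", "data-filter", "lens", ...
    payload: bytes

@dataclass
class ImageRow:               # image table 308
    image_id: str
    message_id: str
    annotation_ids: list[str] = field(default_factory=list)

@dataclass
class VideoRow:               # video table 310
    video_id: str
    message_id: str
    annotation_ids: list[str] = field(default_factory=list)

@dataclass
class StoryRow:               # story table 306
    story_id: str
    owner_entity_id: str
    message_ids: list[str] = field(default_factory=list)
```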
Users, whose client devices have location services enabled and are at a common location event at a particular time may, for example, be presented with an option, via a user interface of the messaging client application104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client application104, based on his or her location. The end result is a “live story” told from a community perspective. A further type of content collection is known as a “location story,” which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). FIG.4is a schematic diagram illustrating a structure of a message400, according to some in some embodiments, generated by a messaging client application104for communication to a further messaging client application104or the messaging server application114. The content of a particular message400is used to populate the message table314stored within the database(s)120, accessible by the messaging server application114. Similarly, the content of a message400is stored in memory as “in-transit” or “in-flight” data of the client device102or the application server112. The message400is shown to include the following components:A message identifier402: a unique identifier that identifies the message400.A message text payload404: text, to be generated by a user via a user interface of the client device102and that is included in the message400.A message image payload406: image data, captured by a camera component of a client device102or retrieved from memory of a client device102, and that is included in the message400.A message video payload408: video data, captured by a camera component or retrieved from a memory component of the client device102and that is included in the message400.A message audio payload410: audio data, captured by a microphone or retrieved from the memory component of the client device102, and that is included in the message400.A message annotations412: annotation data (e.g., filters, stickers or other enhancements) that represents annotations to be applied to message image payload406, message video payload408, or message audio payload410of the message400.A message duration parameter414: parameter value indicating, in seconds, the amount of time for which content of the message (e.g., the message image payload406, message video payload408, message audio payload410) is to be presented or made accessible to a user via the messaging client application104.A message geolocation parameter416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. Multiple message geolocation parameter416values may be included in the payload, each of these parameter values being associated with respect to content items included in the content (e.g., a specific image into within the message image payload406, or a specific video in the message video payload408).A message story identifier418: identifier values identifying one or more content collections (e.g., “stories”) with which a particular content item in the message image payload406of the message400is associated. 
For example, multiple images within the message image payload406may each be associated with multiple content collections using identifier values.A message tag420: each message400may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload406depicts an animal (e.g., a lion), a tag value may be included within the message tag420that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.A message sender identifier422: an identifier (e.g., a messaging system identifier, email address or device identifier) indicative of a user of the client device102on which the message400was generated and from which the message400was sentA message receiver identifier424: an identifier (e.g., a messaging system identifier, email address or device identifier) indicative of a user of the client device102to which the message400is addressed. The contents (e.g. values) of the various components of message400may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload406may be a pointer to (or address of) a location within an image table308. Similarly, values within the message video payload408may point to data stored within a video table310, values stored within the message annotations412may point to data stored in an annotation table312, values stored within the message story identifier418may point to data stored in a story table306, and values stored within the message sender identifier422and the message receiver identifier424may point to user records stored within an entity table302. FIG.5is a schematic diagram illustrating an access-limiting process500, in terms of which access to content (e.g., an ephemeral message502, and associated multimedia payload of data including anamorphic media) or a content collection (e.g., an ephemeral message story504) may be time-limited (e.g., made ephemeral). For example, an ephemeral message502may include an anamorphic media item which may be displayed for a period of time specified by the story timer514. An ephemeral message502is shown to be associated with a message duration parameter506, the value of which determines an amount of time that the ephemeral message502will be displayed to a receiving user of the ephemeral message502by the messaging client application104. In one embodiment, where the messaging client application104is a SNAPCHAT application client, an ephemeral message502is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter506. The message duration parameter506and the message receiver identifier424are shown to be inputs to a message timer512, which is responsible for determining the amount of time that the ephemeral message502is shown to a particular receiving user identified by the message receiver identifier424. In particular, the ephemeral message502will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter506. The message timer512is shown to provide output to a more generalized ephemeral timer system202, which is responsible for the overall timing of display of content (e.g., an ephemeral message502) to a receiving user. 
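The components of the message400enumerated above map naturally onto a record structure such as the following sketch. The field types are assumptions; as noted in the text, several values may in practice be pointers into the image, video, annotation, story, and entity tables rather than inline payloads.

```python
# Sketch of the message 400 structure, mirroring the components listed above;
# types are assumptions, and payload fields may be table pointers in practice.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message400:
    message_identifier: str                                 # 402
    message_text_payload: Optional[str] = None              # 404
    message_image_payload: Optional[str] = None             # 406 (pointer into image table 308)
    message_video_payload: Optional[str] = None             # 408 (pointer into video table 310)
    message_audio_payload: Optional[str] = None             # 410
    message_annotations: list[str] = field(default_factory=list)   # 412 (annotation table 312)
    message_duration_parameter: Optional[int] = None        # 414, display time in seconds
    message_geolocations: list[tuple[float, float]] = field(default_factory=list)  # 416
    message_story_identifiers: list[str] = field(default_factory=list)             # 418
    message_tags: list[str] = field(default_factory=list)                          # 420
    message_sender_identifier: str = ""                     # 422
    message_receiver_identifier: str = ""                   # 424
```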
The ephemeral message502is shown inFIG.5to be included within an ephemeral message story504(e.g., a personal SNAPCHAT story, or an event story). The ephemeral message story504has an associated story duration parameter508, a value of which determines a time-duration for which the ephemeral message story504is presented and accessible to users of the messaging system100. The story duration parameter508, for example, may be the duration of a music concert, where the ephemeral message story504is a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the story duration parameter508when performing the setup and creation of the ephemeral message story504. Additionally, each ephemeral message502within the ephemeral message story504has an associated story participation parameter510, a value of which determines the duration of time for which the ephemeral message502will be accessible within the context of the ephemeral message story504. Accordingly, a particular ephemeral message story504may “expire” and become inaccessible within the context of the ephemeral message story504, prior to the ephemeral message story504itself expiring in terms of the story duration parameter508. The story duration parameter508, story participation parameter510, and message receiver identifier424each provide input to a story timer514, which operationally determines, firstly, whether a particular ephemeral message502of the ephemeral message story504will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message story504is also aware of the identity of the particular receiving user as a result of the message receiver identifier424. Accordingly, the story timer514operationally controls the overall lifespan of an associated ephemeral message story504, as well as an individual ephemeral message502included in the ephemeral message story504. In one embodiment, each and every ephemeral message502within the ephemeral message story504remains viewable and accessible for a time-period specified by the story duration parameter508. In a further embodiment, a certain ephemeral message502may expire, within the context of ephemeral message story504, based on a story participation parameter510. Note that a message duration parameter506may still determine the duration of time for which a particular ephemeral message502is displayed to a receiving user, even within the context of the ephemeral message story504. Accordingly, the message duration parameter506determines the duration of time that a particular ephemeral message502is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message502inside or outside the context of an ephemeral message story504. The ephemeral timer system202may furthermore operationally remove a particular ephemeral message502from the ephemeral message story504based on a determination that it has exceeded an associated story participation parameter510. For example, when a sending user has established a story participation parameter510of 24 hours from posting, the ephemeral timer system202will remove the relevant ephemeral message502from the ephemeral message story504after the specified 24 hours. 
The ephemeral timer system202also operates to remove an ephemeral message story504either when the story participation parameter510for each and every ephemeral message502within the ephemeral message story504has expired, or when the ephemeral message story504itself has expired in terms of the story duration parameter508. In certain use cases, a creator of a particular ephemeral message story504may specify an indefinite story duration parameter508. In this case, the expiration of the story participation parameter510for the last remaining ephemeral message502within the ephemeral message story504will determine when the ephemeral message story504itself expires. In this case, a new ephemeral message502, added to the ephemeral message story504, with a new story participation parameter510, effectively extends the life of an ephemeral message story504to equal the value of the story participation parameter510. Responsive to the ephemeral timer system202determining that an ephemeral message story504has expired (e.g., is no longer accessible), the ephemeral timer system202communicates with the messaging system100(and, for example, specifically the messaging client application104to cause an indicium (e.g., an icon) associated with the relevant ephemeral message story504to no longer be displayed within a user interface of the messaging client application104. Similarly, when the ephemeral timer system202determines that the message duration parameter506for a particular ephemeral message502has expired, the ephemeral timer system202causes the messaging client application104to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message502. FIG.6is a block diagram600illustrating components of the anamorphosis system124, that configure the anamorphosis system124to cause display of anamorphic media in a presentation of a space, according to various example embodiments. The anamorphosis system124is shown as including a location module602, a presentation module604, an identification module606, and an anamorphosis module608, all, or some, configured to communicate with each other (e.g., via a bus, shared memory, or a switch). Any one or more of these modules may be implemented using one or more processors610(e.g., by configuring such one or more processors to perform functions described for that module) and hence may include one or more of the processors610. Any one or more of the modules described may be implemented using hardware alone (e.g., one or more of the processors610of a machine) or a combination of hardware and software. For example, any module described of the anamorphosis system124may physically include an arrangement of one or more of the processors610(e.g., a subset of or among the one or more processors of the machine) configured to perform the operations described herein for that module. As another example, any module of the engagement tracking system610may include software, hardware, or both, that configure an arrangement of one or more processors610(e.g., among the one or more processors of the machine) to perform the operations described herein for that module. Accordingly, different modules of the anamorphosis system124may include and configure different arrangements of such processors610or a single arrangement of such processors610at different points in time. 
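The expiry rules described for the ephemeral timer system202can be summarized in a few predicate functions. The function names, the use of UNIX timestamps, and the modeling of an indefinite story duration as None are assumptions made for this sketch.

```python
# Sketch of the expiry logic described above for messages 502 and stories 504.
import time

def message_displayable(view_start_time, message_duration_parameter, now=None):
    """Message 502 is shown to a receiving user only until its message duration
    parameter 506 elapses (measured here from when viewing begins)."""
    now = time.time() if now is None else now
    return (now - view_start_time) < message_duration_parameter

def message_in_story(post_time, story_participation_parameter, now=None):
    """Message 502 stays accessible within story 504 until its story
    participation parameter 510 elapses (e.g., 24 hours from posting)."""
    now = time.time() if now is None else now
    return (now - post_time) < story_participation_parameter

def story_accessible(setup_time, story_duration_parameter, message_post_times,
                     story_participation_parameters, now=None):
    """Story 504 is removed when its own duration 508 elapses (unless indefinite,
    modeled as None) or when every message's participation parameter has expired."""
    now = time.time() if now is None else now
    within_duration = (story_duration_parameter is None
                       or (now - setup_time) < story_duration_parameter)
    any_live = any(message_in_story(p, s, now)
                   for p, s in zip(message_post_times, story_participation_parameters))
    return within_duration and any_live
```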
Moreover, any two or more modules of the anamorphosis system124may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices. FIG.7is a flowchart illustrating various operations of the anamorphosis system124in performing a method700for causing display of anamorphic media in a presentation of a space, according to certain example embodiments. Operations of the method700may be performed by the modules described above with respect toFIG.6. As shown inFIG.7, the method700includes one or more operations702,704,706,708,710,712, and714. Operation702may be performed by the presentation module604. At operation702, the presentation module604causes display of a presentation of a space within a graphical user interface (GUI) of a mobile device (e.g., client device102). The mobile device may include a camera that captures images of a surrounding area. The images may thereby be displayed at within a GUI displayed on the mobile device. Operation704may be performed by the location module602. At operation704, the location module602determines a location of the mobile device, wherein the location corresponds to the space displayed in the GUI. The location module602may determine the location based on GPS coordinates, or in some example embodiments, based on the images of the space captured by the camera of the mobile device. For example, the location module602may determine the location of the mobile device based on image recognition. The location module602may compare the images collected by the camera of the mobile device (e.g., client device102) with a catalog of preloaded images depicting locations. Based on the comparison, the location module602determines a location of the mobile device. Operation706may be performed by the identification module606. At operation706, the identification module606identifies a set of features of the space. The set of features may include landmarks or other distinguishing features, such as windows, doors, wall outlets, identifying markings (e.g., a painted “X”), edges of walls, or the like. For example, the identification module606may employ computer vision and/or feature detection techniques known to persons of ordinary skill in the art, wherein the identification module606may collect image data that include visual images (e.g., through a camera element of the client device102). In some example embodiments, the identification module606identifies at least three distinct features. Operation708may be performed by the identification module606. At operation708, the identification module606determines relative positions of each of the set of features identified, based on relative positions of the set of features. In some example embodiments, the identification module606may determine distances between each features, and a position of each feature in the display. For example, the identification module606may apply triangulation techniques to determine relative positions of each of the set of features. Operation710may be performed by the identification module606. At operation710, the identification module determines a perspective of the mobile device based on the relative positions of each of the set of features. 
The perspective of the mobile device indicates a position and vantage point of the mobile device at the location. Operation712may be performed by the anamorphosis module608. At operation712, the anamorphosis module608retrieves anamorphic media based on the location of the mobile device. The anamorphic media includes images and video that may be displayed in a presentation of a space, and which appear distorted unless viewed from a specific viewing point in the location. In some example embodiments, the anamorphosis module608may access a database of anamorphic media that includes anamorphic media categorized based on location, and wherein each anamorphic media item is to be viewed from a specific viewing point at a corresponding location. The anamorphic media may include an image to be displayed at a location, wherein the image is only discernable if viewed from a specific position at the location. For example, the anamorphic media may include a stylized text string wherein the text string is not legible unless viewed from a specific viewing location (e.g., from a specified perspective), or in further embodiments, the anamorphic media may include a video or animation that plays once the user views the anamorphic media from a specific perspective. Operation714may be performed by the presentation module604. At operation714, the presentation module604causes display of the anamorphic media in the presentation of the space based on the perspective of the mobile device. FIG.8is a diagram illustrating various operations of the anamorphosis system124in performing a method800for causing display of the anamorphic media in the presentation of the space, according to certain example embodiments. Operations of the method800may be performed by the modules described above with respect toFIG.6. As shown inFIG.8, the method800includes one or more operations802,804, and806that may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation714of the method700, according to some example embodiments. Operation802may be performed by the anamorphosis module608. At operation802, the anamorphosis module608accesses a surface model of the space based on the location of the mobile device, wherein the surface model includes a three-dimensional representation of the space, such as a wire-mesh form. In some example embodiments, the surface model may be generated by the anamorphosis module608based on computer vision. The surface model may be a geometric representation that includes a three-dimensional representation of a space based on a set of vertices and edges that together form polygons depicting the space. In further embodiments, the anamorphosis module608may access a surface model database that includes a set of pre-generated surface models categorized based on location data. Operation804may be performed by the anamorphosis module608. At operation804, the anamorphosis module608causes display of the anamorphic media in the presentation of the space based on the three-dimensional representation of the space as depicted by the surface model and the perspective of the mobile device. FIG.9is a flowchart illustrating various operations of the anamorphosis system124in performing a method900receiving anamorphic media, according to certain example embodiments. Operations of the method900may be performed by the modules described above with respect toFIG.6. 
As shown inFIG.9, the method900includes one or more operations902,904, and906that may be performed as part (e.g., a precursor task, a subroutine, or a portion) of the method700, according to some example embodiments. At operation902, the anamorphosis system124receives anamorphic media from a client device102(e.g., a second mobile device of a second user). In some example embodiments, a second user may submit media data to be converted into anamorphic media by the anamorphosis system124. For example, the user may provide the anamorphosis system124with media data (e.g., pictures, videos), as well as location data indicating a location in which to assign the media data, and positioning data to indicate a position to display the media data in a presentation of a space corresponding to the location. The positioning data may include a perspective specified by the user, wherein the perspective may be defined by relative positions of a set of features in a space. Consider an illustrative example from a user perspective. A user may provide the anamorphosis system124with media data that includes a media item such as a digital image or video to be converted and displayed as anamorphic media at a specified location. The user may tag a media item with location data, and specify a display configuration of the media item in a presentation of the location. For example, the user may specify that the media item is to be displayed so that it is viewable from a specific viewing location (e.g., based on location data and a specified perspective of the client device102). Upon receiving the display configuration, the anamorphosis system124may apply a transformation to the media item in order to generate the anamorphic media. The transformation may include stretching, distorting, or altering the media item, such that the media item may be projected onto a surface of the space, and be visible from a perspective specified by the user. For example, the anamorphosis system124may apply transformations to the media item such that the media item is projected onto various surfaces on a space. In some example embodiments, the second user may provide an input to the anamorphosis system124specifying that the anamorphic media is only visible/made available to “friends.” or “connections” of the second user within a social media platform. In further embodiments, the second user may specify that the anamorphic media is only available/displayed to a first user (e.g., based on a user identifier of the first user). At operation904, the anamorphosis system124assigns the anamorphic media to the location based on location data such as GPS coordinates. For example, the user may provide the anamorphosis system124with GPS coordinates of the location and in response, the anamorphosis system124may geo-tag the anamorphic media to the location. At operation906, the anamorphosis system124detects a client device102of a user at the location. For example, the anamorphosis system124may maintain a geofence around the location and detect a mobile device as the mobile device transgresses a threshold of the geofence. At operation908, the anamorphosis system124retrieves the anamorphic media in response to detecting the user at the location, and causes display of the anamorphic media in a presentation of the space within the mobile device of the user. The display of the anamorphic media may vary based on the perspective of the user. 
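Operations904through908, geo-tagging anamorphic media and retrieving it when a device crosses the geofence around its location, might be sketched as follows. The record layout, the 50-meter geofence radius, and the distance_m callback (for example, the haversine helper shown earlier) are assumptions.

```python
# Sketch of geo-tagged anamorphic media retrieval on geofence entry, including
# the per-user visibility restriction described above; names are assumptions.
from dataclasses import dataclass, field

@dataclass
class AnamorphicMediaRecord:
    media_id: str
    lat: float
    lon: float
    viewing_perspective: dict              # feature ids -> relative positions
    allowed_user_ids: set[str] = field(default_factory=set)  # empty set = visible to all

def retrieve_for_device(records, user_id, device_lat, device_lon,
                        distance_m, geofence_radius_m=50.0):
    """Return the records whose geofence the device has entered and that the
    user is permitted to view."""
    hits = []
    for rec in records:
        inside = distance_m(device_lat, device_lon, rec.lat, rec.lon) <= geofence_radius_m
        permitted = not rec.allowed_user_ids or user_id in rec.allowed_user_ids
        if inside and permitted:
            hits.append(rec)
    return hits
```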
FIG.10is an example of anamorphic media1020displayed in a presentation1000of a space1010, according to certain example embodiments. The presentation1000may be displayed within a GUI at a client device102according to the method700ofFIG.7. As shown inFIG.10, the anamorphosis system124may display the anamorphic media1020in the presentation1000based on a perspective of a client device102displaying the presentation1000. As the perspective of the client device102changes (e.g., the user moves to a different viewing location), the anamorphosis system124may alter the display of the anamorphic media124based on the changes in the perspective. FIG.11is an example of anamorphic media1120displayed in a presentation1100of a space1110, from a first perspective, according to certain example embodiments. The presentation1100may be displayed within a GUI at a client device102according to the method700ofFIG.7. As shown inFIG.11, the anamorphosis system124may display the anamorphic media1120in the presentation1100based on a perspective of a client device102displaying the presentation1100. As the perspective of the client device102changes (e.g., the user moves to a different viewing location), the anamorphosis system124may alter the display of the anamorphic media124based on the changes in the perspective. FIG.12is an example of anamorphic media1120displayed in a presentation1200of a space1110, from a second perspective, according to certain example embodiments. The presentation1200may be displayed within a GUI at a client device102according to the method700ofFIG.7. As shown inFIG.12, the anamorphosis system124may display the anamorphic media1120in the presentation1200based on a perspective of a client device102displaying the presentation1200. As the perspective of the client device102changes (e.g., the user moves to a different viewing location), the anamorphosis system124may alter the display of the anamorphic media124based on the changes in the perspective. Software Architecture FIG.13is a block diagram illustrating an example software architecture1306, which may be used in conjunction with various hardware architectures herein described.FIG.13is a non-limiting example of a software architecture and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture1306may execute on hardware such as machine1300ofFIG.13that includes, among other things, processors1304, memory1314, and I/O components1318. A representative hardware layer1352is illustrated and can represent, for example, the machine1300ofFIG.13. The representative hardware layer1352includes a processing unit1354having associated executable instructions1304. Executable instructions1304represent the executable instructions of the software architecture1306, including implementation of the methods, components and so forth described herein. The hardware layer1352also includes memory and/or storage modules memory/storage1356, which also have executable instructions1304. The hardware layer1352may also comprise other hardware1358. In the example architecture ofFIG.13, the software architecture1306may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture1306may include layers such as an operating system1302, libraries1320, applications1316and a presentation layer1314. 
Operationally, the applications1316and/or other components within the layers may invoke application programming interface (API) calls1308through the software stack and receive a response to the API calls1308. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special purpose operating systems may not provide a frameworks/middleware1318, while others may provide such a layer. Other software architectures may include additional or different layers. The operating system1302may manage hardware resources and provide common services. The operating system1302may include, for example, a kernel1322, services1324and drivers1326. The kernel1322may act as an abstraction layer between the hardware and the other software layers. For example, the kernel1322may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services1324may provide other common services for the other software layers. The drivers1326are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1326include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. The libraries1320provide a common infrastructure that is used by the applications1316and/or other components and/or layers. The libraries1320provide functionality that allows other software components to perform tasks in an easier fashion than interfacing directly with the underlying operating system1302functionality (e.g., kernel1322, services1324and/or drivers1326). The libraries1320may include system libraries1344(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries1320may include API libraries1346such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries1320may also include a wide variety of other libraries1348to provide many other APIs to the applications1316and other software components/modules. The frameworks/middleware1318(also sometimes referred to as middleware) provide a higher-level common infrastructure that may be used by the applications1316and/or other software components/modules. For example, the frameworks/middleware1318may provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware1318may provide a broad spectrum of other APIs that may be utilized by the applications1316and/or other software components/modules, some of which may be specific to a particular operating system1302or platform. The applications1316include built-in applications1338and/or third-party applications1340. 
Examples of representative built-in applications1338may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. Third-party applications1340may include an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform, and may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. The third-party applications1340may invoke the API calls1308provided by the mobile operating system (such as operating system1302) to facilitate functionality described herein. The applications1316may use built-in operating system functions (e.g., kernel1322, services1324and/or drivers1326), libraries1320, and frameworks/middleware1318to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems interactions with a user may occur through a presentation layer, such as presentation layer1314. In these systems, the application/component “logic” can be separated from the aspects of the application/component that interact with a user. FIG.14is a block diagram illustrating components of a machine1400, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.14shows a diagrammatic representation of the machine1400in the example form of a computer system, within which instructions1410(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1400to perform any one or more of the methodologies discussed herein may be executed. As such, the instructions1410may be used to implement modules or components described herein. The instructions1410transform the general, non-programmed machine1400into a particular machine1400programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine1400operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1400may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1400may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch, head mounted device, VR goggles), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1410, sequentially or otherwise, that specify actions to be taken by machine1400. Further, while only a single machine1400is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1410to perform any one or more of the methodologies discussed herein. 
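To make the layering described in this section concrete, the following toy sketch shows an application call traveling down through a framework, a library, and a driver, and the response traveling back up the stack. The class and method names are invented for illustration and do not correspond to the numbered components of the figure.

```python
# Toy sketch of a layered software architecture: each layer only talks to the
# layer directly beneath it, and responses propagate back up. All names here
# are illustrative assumptions.
class Driver:                      # lowest layer: talks to the hardware
    def write_frame(self, pixels):
        return f"driver wrote {len(pixels)} pixels"


class GraphicsLibrary:             # library wrapping the driver
    def __init__(self, driver):
        self.driver = driver

    def render(self, scene):
        return self.driver.write_frame(scene["pixels"])


class UiFramework:                 # middleware offering higher-level GUI calls
    def __init__(self, lib):
        self.lib = lib

    def show(self, widget):
        return self.lib.render({"pixels": widget * 10})


class Application:                 # top layer: issues API calls downward
    def __init__(self, framework):
        self.framework = framework

    def run(self):
        return self.framework.show([1, 2, 3])


print(Application(UiFramework(GraphicsLibrary(Driver()))).run())
```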
The machine1400may include processors1404, memory/storage1406, and I/O components1418, which may be configured to communicate with each other such as via a bus1402. The memory/storage1406may include a memory1414, such as a main memory, or other memory storage, and a storage unit1416, both accessible to the processors1404such as via the bus1402. The storage unit1416and memory1414store the instructions1410embodying any one or more of the methodologies or functions described herein. The instructions1410may also reside, completely or partially, within the memory1414, within the storage unit1416, within at least one of the processors1404(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1400. Accordingly, the memory1414, the storage unit1416, and the memory of processors1404are examples of machine-readable media. The I/O components1418may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1418that are included in a particular machine1400will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1418may include many other components that are not shown inFIG.14. The I/O components1418are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components1418may include output components1426and input components1428. The output components1426may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components1428may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components1418may include biometric components1430, motion components1434, environment components1436, or position components1438, among a wide array of other components. For example, the biometric components1430may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram based identification), and the like. 
The motion components1434may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environment components1436may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components1438may include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1418may include communication components1440operable to couple the machine1400to a network1432or devices1420via coupling1422and coupling1424, respectively. For example, the communication components1440may include a network interface component or other suitable device to interface with the network1432. In further examples, communication components1440may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1420may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, the communication components1440may detect identifiers or include components operable to detect identifiers. For example, the communication components1440may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1440, such as location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting a NFC beacon signal that may indicate a particular location, and so forth. Glossary “ANAMORPHOSIS” in this context refers to distortions and transformations applied to media items such as images and videos, such that the media items appear normal when viewed from a particular point or through a suitable viewing device, mirror, or lens. “PERSPECTIVE” in this context refers to a viewing angle of a user at a particular location. 
“CARRIER SIGNAL” in this context refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such instructions. Instructions may be transmitted or received over the network using a transmission medium via a network interface device and using any one of a number of well-known transfer protocols. “CLIENT DEVICE” in this context refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smart phones, tablets, ultra books, netbooks, laptops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. “COMMUNICATIONS NETWORK” in this context refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other type of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard setting organizations, other long range protocols, or other data transfer technology. “EPHEMERAL MESSAGE” in this context refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video and the like. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory. “MACHINE-READABLE MEDIUM” in this context refers to a component, device or other tangible media able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Electrically Erasable Programmable Read-Only Memory (EEPROM)) and/or any suitable combination thereof. 
The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., code) for execution by a machine, such that the instructions, when executed by one or more processors of the machine, cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se. “COMPONENT” in this context refers to a device, physical entity or logic having boundaries defined by function or subroutine calls, branch points, application program interfaces (APIs), or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. 
Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In embodiments in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). 
For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an Application Program Interface (API)). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. “PROCESSOR” in this context refers to any circuit or virtual circuit (a physical circuit emulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., “commands”, “op codes”, “machine code”, etc.) and which produces corresponding output signals that are applied to operate a machine. A processor may, for example, be a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC) or any combination thereof. A processor may further be a multi-core processor having two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. “TIMESTAMP” in this context refers to a sequence of characters or encoded information identifying when a certain event occurred, for example giving date and time of day, sometimes accurate to a small fraction of a second.
11861796
Like reference numbers and designations in the various drawings indicate like elements. DETAILED DESCRIPTION FIG.1Aillustrates a scene of a room100in which a head-mounted AR system101is used. The head-mounted AR system101may be a mixed reality device that presents to the user an interface for interacting with and experiencing a mixed reality world. The mixed reality world can include computer-generated content and real world physical objects in the user's physical environment. The head-mounted AR system101can provide images of virtual objects intermixed with physical objects in a field of view of the user, for example. The mixed reality world can be seen by the user through eye piece(s) of the head-mounted AR system101. For example, a monocular eye view102as seen through a right eye piece104of the head-mounted AR system101is shown. The eye pieces (including the right eye piece104) of the head-mounted AR system101can be at least partially transparent and serve as “optical see-through” displays through which the user can view real objects in the user's physical environment. The user's physical environment is the physical surroundings of the user as the user moves about and views real world objects through the head-mounted AR system101. For example, the user's physical environment, as seen in the monocular eye view102, includes a chair106, a table108, and bookshelves110. The bookshelves110are cubicle shelves that include a grid of individual shelves, including a shelf111. The head-mounted AR system101can be configured to present to the user, through the eye pieces of the head-mounted AR system101, virtual content that can be perceived as augmentations to physical reality. For example, the head-mounted AR system101can produce images of virtual objects which are transposed onto partially transparent physical surfaces. For instance, virtual content112can be virtually displayed, through the eye pieces of the head-mounted AR system101, on a top surface114of the table108, when the head-mounted AR system101determines that the table108is within a field of view of the head-mounted AR system101. Virtual displaying of content, as described herein, refers to displaying content on a display system or device (such as the right eye piece104), such that the content appears to the user to be displayed as an overlay at a particular three dimensional location in the physical environment. The head-mounted AR system101can be configured such that when the user turns their head or looks up or down, display devices within the head-mounted AR system101continue to render the virtual content112so that it appears to remain affixed to the top surface114of the table108. Virtual content may include or correspond to web pages, blogs, digital pictures, videos, news articles, newsletters, or music, to name a few examples. Virtual digital content may correspond to content stored on a storage device that is accessible by the head-mounted AR system101or virtual content may be a presentation of streaming content, such as a live video feed. Multiple items of virtual content may be virtually displayed on multiple, different physical surfaces in the room100, to increase an overall workspace area for the user onto which virtual content may be displayed. Various planar surfaces may be designated as virtual displays, for example. The head-mounted AR system101can determine that a new content item is available for virtual display by the head-mounted AR system101. 
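As a concrete illustration of the rendering behavior described above, in which the virtual content112appears to remain affixed to the top surface114of the table108as the user's head moves, the following is a minimal sketch in which the content anchor is stored in world coordinates and re-expressed in the headset's camera frame each frame. The 4x4 pose-matrix convention, the names, and the numbers are assumptions for illustration only.

```python
# Minimal sketch of keeping virtual content "affixed" to a physical surface:
# the content is stored once in world coordinates and, on every frame, is
# re-expressed in the headset's current camera frame from the latest head pose.
import numpy as np


def world_to_view(head_pose_world, point_world):
    """head_pose_world: 4x4 camera-to-world transform for the current frame.
    Returns the anchored point in the camera (view) frame, so the rendered
    overlay follows head motion while appearing fixed in the room."""
    world_to_cam = np.linalg.inv(head_pose_world)
    p = np.append(point_world, 1.0)
    return (world_to_cam @ p)[:3]


anchor_on_table = np.array([1.2, 0.0, 0.75])   # content anchored in world space
head_pose = np.eye(4)
head_pose[:3, 3] = [0.0, 0.0, 1.6]             # user's head pose this frame
print(world_to_view(head_pose, anchor_on_table))  # position to render this frame
```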
The head-mounted AR system101can make an internal determination that new content is available and/or may receive a notice from an external system that new content is available. The head-mounted AR system101may subscribe to a bot or another content source, for example. The new content may be various types of pushed content. Pushed content can be content that can be rendered into the user's environment without the user having to search for or select the content. For example, pushed content may include (a) notifications from various applications such as stock notifications, newsfeeds, etc.; (b) prioritized content, for example, updates and notifications from social media applications, email updates, or messages from contacts; (c) messages targeting broad target groups and/or specific target groups; or (d) other types of content or content streams. In response to determining that a new content item is available for virtual display, a notification116can be displayed to the user. The notification116indicates that a new stock chart content item is available. The notification116also prompts the user to select a location within the physical environment upon which to virtually display the new stock chart. The notification116can be virtually displayed to the user by the head-mounted AR system101. The notification116can be shown on a particular physical surface within the room100, or can be displayed as if appearing “in space.” As another example, the notification116can be displayed within the monocular eye view102relative to one or more edges of the eye piece104. Other examples include the notification116being displayed on a physical display of a computing device (not shown) or as an audio notification to the user (e.g., as played through speakers of the head-mounted AR system101or another connected computing device). The user can configure various settings for controlling the display of the notification116and other new content notices. For instance, the user can configure a particular location (e.g., physical surface, physical computing device display) on which to virtually display new content notifications. As another example, the user can configure notification frequency (e.g., display immediately, display periodically) and content type(s) for which to display notices. The user can interact with the notification116to indicate acceptance/confirmation of the notification116and/or to initiate selection of a location (e.g., within the physical environment of the room100) on which to virtually display the new content or content stream. The user can select a particular location in a variety of ways. One approach for selecting a location is by using a user interface displayed on a mobile device of the user. FIG.1Billustrates an example mobile device user interface140. The user interface140can be displayed on a user device142of a user in response to a user receiving a notification (e.g., the notification116described above with respect toFIG.1A) for a new content item that can be displayed in an AR environment. The user interface140displays a spatial three-dimensional (3D) environment that includes a 3D model144of the room100described above with respect toFIG.1A. The 3D model144can be a simplified geometric model of the room100. For example, the 3D model144can be a digital 3D planar map (e.g., a “mini map”) of the room100. The 3D model144can be a static preconstructed representation of the room100. 
As another example, the 3D model144can be generated in near real time based on information generated by the head-mounted AR system101ofFIG.1A. For instance, the head-mounted AR system101can generate and send 3D model information that can be used by the user device142to display the 3D model144. In some implementations, the head-mounted AR system101can respond to changes in head-pose and/or eye movements of the user and can send updated 3D model information to the user device142, so that an updated 3D model144can be displayed in the user interface140(e.g., with the updated 3D model144representing a view of the room100from the current perspective of the user). The user interface140, which can present a simplified, processed view of the room100, can include representations of physical objects in the room100. For example, the user interface140includes a chair object146, a table object148, and a bookshelves object150corresponding to the chair106, the table108, and the bookshelves110, respectively. The user can zoom and pan the user interface140to focus on a particular area of the 3D model144. The user device142can be used as a controller, control panel, and/or navigator for the AR environment. For instance, the user can interact with the user interface140for placement of virtual content. Existing virtual content can be represented in the user interface140by an object or icon. For instance, the virtual content112ofFIG.1Ais represented in the user interface140as an icon152. The user can select the icon152and drag the icon152to another location within the user interface140. In response to the icon152being moved to another location within the user interface140, the virtual content112can be displayed as an overlay at a corresponding location within the AR environment as seen by the user using the head-mounted AR system101. In some implementations, certain surfaces of objects in the user interface140are identified (by the user device142or the head-mounted AR system101) as suggested surfaces for content placement. For instance, a chair back154of the chair object146can be shown in a highlighted manner (e.g., in a particular color) to indicate that the back of the chair106is a recommended location for content. As another example, certain shelves of the top two rows of the bookshelves object150can be highlighted, such as shelves156,158,160, and162, to indicate that corresponding individual shelves of the bookshelves110are suitable for placing content. The lower row of shelves of the bookshelves110might not be identified as suggested locations for content, since the head-mounted AR system101may determine that the lower row of shelves of the bookshelves110are not currently visible to the user (e.g., the table108may block a view of the lower shelves). In general, suggested surfaces can be determined based on a variety of factors, including field of view with respect to current head-pose or eye gaze, surface contour, surface texture, surface size, type of content that may be placed on the surface (e.g., type(s) of existing or new content), or whether the surface is currently occupied by virtual content. In some implementations, suggested surfaces are highlighted in response to a notification of new content, as surfaces that may be amenable for virtual display of the new content. For example, suggested surfaces can be highlighted in the user interface in response to a notification164that instructs the user to select a location for a new stock chart content item. 
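The following is a hedged sketch of how the suitability factors listed above (field of view, surface contour, surface size, and current occupancy) might be combined into a score used to highlight suggested surfaces. The weights, threshold, and field names are assumptions chosen for illustration, not values given in the description.

```python
# Hedged sketch of scoring candidate surfaces for content placement. The
# scoring weights and the 0.5 threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Surface:
    in_field_of_view: bool
    planarity: float      # 0..1, how flat/regular the surface contour is
    area_m2: float
    occupied: bool        # already hosts virtual content


def suitability(surface: Surface, content_min_area: float = 0.1) -> float:
    """Return 0 for unusable surfaces, otherwise a weighted score in 0..1."""
    if surface.occupied or not surface.in_field_of_view:
        return 0.0
    size_score = min(surface.area_m2 / content_min_area, 1.0)
    return 0.6 * surface.planarity + 0.4 * size_score


shelves = {
    "shelf_111": Surface(True, 0.95, 0.15, False),
    "lower_shelf": Surface(False, 0.95, 0.15, False),  # hidden behind the table
}
suggested = [name for name, s in shelves.items() if suitability(s) > 0.5]
print(suggested)   # ['shelf_111']
```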
The notification164corresponds to the notification116and can be displayed automatically in the user interface140in conjunction with the display of the notification116within the view102or in response to user interaction with (e.g., confirmation of) the notification116. The user can, in response to the notification164, select a surface of a particular object in the user interface140as a location for virtual display of the new stock chart. The user can select a suggested surface, some other surface, or some other location within the 3D model144, such as a location that represents a location “floating in space” in the room100. As shown by a finger166of the user, the user selects the shelf158of the bookshelves object150, to indicate that the shelf111of the bookshelves110is a selected location for virtual display of the new stock chart. An indication of the selection of the shelf158of the bookshelves object150can be transmitted to the head-mounted AR system101. For example, 3D information such as coordinates, region identifier, etc. of the shelf158can be sent to the head-mounted AR system101. In some implementations, in response to selection of the shelf158as the location for virtual display of the new stock chart, a new icon representing the new stock chart is displayed in the user interface140, in/on the shelf158. In some implementations, in response to (or in conjunction with) the display of the notification164, an icon representing the new stock chart is displayed at an initial/default location within the user interface140. The user can move the new icon to a desired location, to finalize selection of the location for the new stock chart. The initial/default location can be, for example, the center of the user interface140, a particular suggested surface, such as a surface that has a highest suitability score, or a surface or location that is selected based on a configured user preference. If a new icon is displayed representing the new stock chart, the new icon can have a different appearance (e.g., different color, flashing, other emphasis) from the icon152, at least until the user finalizes a location for the new stock chart. The user interface140can be configured such that when moving an icon to a particular surface, the icon “snaps” to the surface. As another example, the user can move an icon by using a “drag and drop” operation, and in some cases, a user can confirm an end destination/location after the drag and drop operation has completed. In response to selection of the shelf158, the notification164can be removed from the user interface140. In some implementations, a selection confirmation notice (e.g., “stock chart location confirmed,” “thank you for selecting the location of the stock chart”) is displayed in response to selection of the shelf158. After selection of the shelf158, the 3D model144can remain displayed in the user interface140, to enable the user to perform other interactions with the AR environment, to receive future notifications of new content, etc. FIG.1Cillustrates an updated scene of a room180in which a head-mounted AR system182virtually displays a new content item. The room180corresponds to the room100, and a view184seen through an eye piece186of a head-mounted AR system182has been updated in response to selection of a location at which to virtually display a new content item. For instance, the head-mounted AR system182can receive information indicating user selection of the shelf158in the user interface140ofFIG.1B. 
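As an illustration of the selection information described above (coordinates, a region identifier, and the like sent from the mobile device to the head-mounted AR system101), the following sketch shows one possible payload. The schema and field names are assumptions for illustration only; the description above does not define a specific message format.

```python
# Sketch of the kind of selection message the mobile controller might send back
# to the headset after the user taps a surface. The schema is an assumption.
import json
from dataclasses import asdict, dataclass


@dataclass
class LocationSelection:
    content_id: str          # which new content item this placement is for
    region_id: str           # e.g., an identifier of the selected shelf
    center_xyz: tuple        # selected point in the shared 3D model's frame
    normal_xyz: tuple        # surface normal, useful for orienting the overlay


selection = LocationSelection("stock_chart_01", "shelf_111",
                              (0.4, 1.3, 0.3), (0.0, 0.0, 1.0))
payload = json.dumps(asdict(selection))   # what gets transmitted to the headset
print(payload)
```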
The head-mounted AR system182can, in response to receiving information indicating user selection of the shelf158as a location for a new stock chart content item, identify a shelf188of bookshelves190as corresponding to the shelf158of the bookshelves object150. The head-mounted AR system182can virtually display a virtual stock chart content item192in/on the shelf188. For example, the virtual stock chart content item192can be rendered and superimposed substantially over the real world shelf188, so that the virtual stock chart content item192appears to the user to be displayed as an overlay on top of the shelf188. The head-mounted AR system182can maintain a one to one mapping of the shelf188to the virtual stock chart content item192. Using the head-mounted AR system182, the user can view the virtual stock chart content item192as if it appears on the mapped/matched shelf188. The virtual stock chart content item192can be displayed to appear as physically attached on the shelf188, through projection and as perceived by the user. FIG.2illustrates an example implementation of an AR system200that includes mobile device integration. A head-mounted AR system202can be configured to present to a user virtual content that can be perceived as augmentations to physical reality. For example, an AR environment generator204can provide images of virtual objects that can be intermixed with physical objects in a field of view of the user. The head-mounted AR system202can receive a notification from a new content notifier206that a new content item is available for virtual display by the head-mounted AR system202. A new content subscriber208can subscribe to and receive notifications from the new content notifier206, for example. As another example, the head-mounted AR system202can make an internal determination that new content is available for virtual display to the user. The AR environment generator204can generate a virtual content item for displaying the new content by the head-mounted AR system202. The virtual content item can be displayed by the head-mounted AR system202so that it virtually appears to the user at a default location or the virtual content item can be displayed so that it virtually appears to the user at a user-selected location. The head-mounted AR system202can send information to a mobile device210, using a communication interface212, to enable the user to use the mobile device210to select a location within the environment of the user for virtually displaying the virtual content item. The AR environment generator204can generate 3D information about the environment of the user and the 3D information can be sent to the mobile device210using the communication interface212. The head-mounted AR system202can also send information about the new content item (e.g., a description of the content) and a request for selection of a location for the new content item. The mobile device210can receive, using a communication interface214, the 3D information about the user's environment and the information about the new content item. A 3D model renderer216of an AR controller application218can render a 3D representation of the user's environment within the AR controller application218. The 3D representation can include representations of physical objects within the environment and, if applicable, representations of existing virtual content that is currently being virtually displayed by the head-mounted AR system202in the user's environment. 
The AR controller application218can display an instruction to the user to select a location for virtual display of the new content item. A content location selector220can enable the user to select a location on the 3D representation of the user's environment, as corresponding to a location for virtual display of the new content item. The location can be a surface of a rendered object that corresponds to a physical object in the user's environment or the location can be 3D coordinates that may or may not correspond to a physical object. The AR controller application218can send, using the communication interface214, location information for the selected location of the new content item to the head-mounted AR system202. The location information can be 3D coordinates, region information, object information, or other types of information or identifiers. The AR environment generator204can present or project the virtual content item so that the virtual content item appears to the user to be located at a location in the user's physical environment that corresponds to the location information received from the mobile device210. FIG.3depicts a flow chart of a method300for displaying content within an AR system. A notification regarding availability of new content to display in an augmented reality system is received (310). The augmented reality system is configured to present content on a display system so that the content appears to a user to be affixed at an assigned location in a physical environment of the user. The notification can be received from an external source that is external to the augmented reality system. As another example, the notification can be received as a result of an internal determination by the augmented reality system that new content is available. The notification can be displayed on the display system. A confirmation input that indicates acceptance of the new content is received (320). The confirmation input can be an interaction with the notification, a voice command, or some other type of input. The notification can be removed from the display system in response to receiving the confirmation input. In response to receiving the confirmation input, three dimensional information that describes the physical environment is provided to an external computing device external to the augmented reality system (330). The three dimensional information can be provided to enable the external computing device to be used for selecting an assigned location in the physical environment for the new content. The external computing device can be a mobile device or some other type of computing device. The three dimensional information can include three dimensional information for candidate assigned locations for the new content. The candidate assigned locations can be locations that are determined to be suitable for overlaying a display of the new content. Candidate assigned locations can correspond to physical surfaces in the physical environment. The three dimensional information can include assigned locations of existing content currently being displayed on the display system. The three dimensional representation can correspond to a static representation of the physical environment. As another example, the three dimensional information can correspond to a current perspective of the user as seen through the display system. Updated three dimensional information can be generated and provided to the external computing device in response to detection of a change in user perspective. 
Location information that indicates the assigned location in the physical environment for the new content is received, from the external computing device (340). The location information can be 3D coordinates, region identifier(s), object identifiers, or some other type of location information. The location information can correspond to a location the user selects on a three dimensional representation of the physical environment that is displayed on the external computing device. A display location is determined, based on the location information (350). The display location is a location on the display system at which to display the new content so that the new content appears to the user to be displayed as an overlay at the assigned location in the physical environment. The new content is displayed on the display system at the display location (360). Additionally or alternatively, an updated assigned location for a first existing content item can be received from the external computing device. An updated display location on the display system can be determined, based on the updated assigned location. The first existing content item can be displayed on the display system at the updated display location, so that the first existing content item appears to the user to be displayed as an updated overlay at the updated assigned location in the physical environment. The described systems, methods, and techniques may be implemented in digital electronic circuitry, computer hardware, firmware, software, or in combinations of these elements. Apparatus implementing these techniques may include appropriate input and output devices, a computer processor, and a computer program product tangibly embodied in a machine-readable storage device for execution by a programmable processor. A process implementing these techniques may be performed by a programmable processor executing a program of instructions to perform desired functions by operating on input data and generating appropriate output. The techniques may be implemented using one or more computer programs or non-transitory computer-readable storage media that includes instructions that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. Each computer program may be implemented in a high-level procedural or object-oriented programming language, or in assembly or machine language if desired; and in any case, the language may be a compiled or interpreted language. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, a processor will receive instructions and data from a read-only memory and/or a random access memory. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example, semiconductor memory devices, such as Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and Compact Disc Read-Only Memory (CD-ROM). Any of the foregoing may be supplemented by, or incorporated in, specially designed ASICs (application-specific integrated circuits). 
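The following runnable sketch summarizes the flow of the method300(operations310through360) using stub classes in place of the augmented reality system and the external computing device. All class and method names are invented for illustration; only the ordering of the operations follows the description above.

```python
# Hedged, runnable sketch of the method 300 flow using invented stub classes.
class ARSystem:
    def show_notification(self, content):            # 310: notify the user
        return f"New content available: {content}"

    def wait_for_confirmation(self, notification):   # 320: confirmation input
        return True

    def build_scene_description(self):               # 330: 3D info, incl. candidates
        return {"surfaces": ["shelf_111", "table_top_114"]}

    def project_to_display(self, assigned_location): # 350: map assigned location
        return {"anchor": assigned_location, "screen": "right_eyepiece"}

    def render(self, content, display_location):     # 360: draw the overlay
        print(f"rendering '{content}' at {display_location}")


class ExternalDevice:
    def receive_scene(self, scene):
        self.scene = scene

    def wait_for_selection(self):                     # 340: user picks a surface
        return self.scene["surfaces"][0]


def handle_new_content(ar, phone, content):
    note = ar.show_notification(content)
    if not ar.wait_for_confirmation(note):
        return
    phone.receive_scene(ar.build_scene_description())
    assigned = phone.wait_for_selection()
    ar.render(content, ar.project_to_display(assigned))


handle_new_content(ARSystem(), ExternalDevice(), "stock chart")
```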
Computer-readable medium may be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus may include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus. A computer program, also known as a program, software, software application, script, plug-in, or code, may be written in any form of programming language, including compiled or interpreted languages, and it may be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data in a single file dedicated to the program in question, or in multiple coordinated files. A computer program may be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network. The processes and logic flows described in this specification may be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows may also be performed by, and apparatus may also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. Elements of a computer may include a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer may not have such devices. Moreover, a computer may be embedded in another device, e.g., a tablet computer, a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a VAR system, to name just a few. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. 
The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry. While this specification contains many specifics, these should not be construed as limitations on the scope of the disclosure or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and may even be claimed as such, one or more features from a claimed combination may, in some cases, be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. For example, although the mapping operation is described as a series of discrete operations, the various operations may be divided into additional operations, combined into fewer operations, varied in order of execution, or eliminated, depending on the desired implementation. Similarly, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. For example, although some operations are described as being performed by a processing server, one or more of the operations may be performed by the smart meter or other network components. Terms used herein and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including, but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes, but is not limited to,” etc.). Additionally, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). 
Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” or “one or more of A, B, and C, etc.” is used, in general such a construction is intended to include A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together. The term “and/or” is also intended to be construed in this manner. The terms “first,” “second,” “third,” etc., are not necessarily used herein to connote a specific order or number of elements. Generally, the terms “first,” “second,” “third,” etc., are used to distinguish between different elements as generic identifiers. Absent a showing that the terms “first,” “second,” “third,” etc., connote a specific order, these terms should not be understood to connote a specific order. Furthermore, absent a showing that the terms “first,” “second,” “third,” etc., connote a specific number of elements, these terms should not be understood to connote a specific number of elements. For example, a first widget may be described as having a first side and a second widget may be described as having a second side. The use of the term “second side” with respect to the second widget may be to distinguish such side of the second widget from the “first side” of the first widget and not to connote that the second widget has two sides.
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.

DETAILED DESCRIPTION

The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.

The terms and words used in the following description and claims are not limited to the bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following description of various embodiments of the disclosure is provided for illustration purpose only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.

It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces. For the same reason, in the accompanying drawings, some components are exaggerated, omitted, or schematically illustrated. Furthermore, the size of each element does not entirely reflect an actual size thereof. In the drawings, like reference numerals refer to the same or corresponding elements throughout.

Advantages and features of the disclosure and methods of accomplishing the same will be more readily appreciated by referring to the following description of embodiments and the accompanying drawings. However, the disclosure may be embodied in many different forms and should not be construed as being limited to the embodiments set forth below. Rather, the embodiments are provided so that the disclosure will be made thorough and complete and will fully convey the concept of the disclosure to those of ordinary skill in the art to which the disclosure pertains, and the disclosure will only be defined by the appended claims. Throughout the specification, like reference numerals refer to like elements.

Throughout the disclosure, the expression “at least one of a, b or c” indicates only a, only b, only c, both a and b, both a and c, both b and c, all of a, b, and c, or variations thereof. Examples of a terminal may include a user equipment (UE), a mobile station (MS), a cellular phone, a smartphone, a computer, a multimedia system capable of performing a communication function, or the like. In the disclosure, a controller may also be referred to as a processor. Throughout the specification, a layer (or a layer apparatus) may also be referred to as an entity. It will be understood that each block of the flowchart in the drawings and combinations of blocks of the flowchart may be performed by computer program instructions.
These computer program instructions may be loaded into a processor of a general-purpose computer, special-purpose computer, or other programmable data processing equipment, and thus, the instructions performed by the processor of the computer or another programmable data processing equipment create a unit for performing functions specified in the flowchart block(s). The computer program instructions may also be stored in a computer-executable or computer-readable memory capable of directing a computer or another programmable data processing equipment to implement functions in a specific manner, and thus, the instructions stored in the computer-executable or computer-readable memory are capable of producing items including instruction means for performing the functions described in the flowchart block(s). The computer program instructions may also be loaded into a computer or another programmable data processing equipment, and thus, instructions for operating the computer or the other programmable data processing equipment by generating a computer-executed process when a series of operations are performed in the computer or the other programmable data processing equipment may provide operations for performing the functions described in the flowchart block(s). In addition, each block may represent a portion of a module, segment, or code that includes one or more executable instructions for executing specified logical function(s). It should also be noted that, in some alternative implementations, functions mentioned in blocks may occur out of order. For example, two blocks illustrated in succession may be executed substantially simultaneously, or the blocks may sometimes be executed in reverse order depending on functions corresponding thereto. As used herein, the term “unit” denotes a software element or a hardware element such as a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC), and performs certain functions. However, the term “unit” is not limited to software or hardware. The ‘unit’ may be configured to be in an addressable storage medium or configured to operate one or more processors. Thus, the term ‘unit’ may include, for example, elements such as software elements, object-oriented software elements, class elements, and task elements, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, micro-codes, circuits, data, a database, data structures, tables, arrays, and variables. Functions provided by the elements and “units” may be combined into a smaller number of elements and “units”, or may be further divided into additional elements and “units”. Furthermore, the elements and “units” may be embodied to reproduce one or more central processing units (CPUs) in a device or security multimedia card. In addition, in an embodiment of the disclosure, the “unit” may include one or more processors. The disclosure may be applied to various devices and XR services. For example, the disclosure may be applied to fields such as augmented reality (AR), AR wearable devices (e.g., AR glasses, a head mounted display (HMD), etc.), mobile AR wearable devices, standalone AR wearable devices, three-dimensional (3D) object modeling, 3D teleconferencing, session setup and establishment for an XR service, cloud assisted session management for providing an XR service, virtual reality monitor (VRM) mobile VR, TV VR, etc. 
The fields of extended reality (XR) to which the disclosure may be applied may be variously determined without being limited to the above examples. In the disclosure, the term XR is a term including at least one of VR, AR, or mixed reality (MR). For example, AR glasses, AR objects, and VR services may be respectively referred to as XR glasses, XR objects, and XR services. In the disclosure, XR media content may include various types of media content. For example, XR media content may include 360-degree video content and 3D object based media content (a point cloud and a mesh). In the disclosure, unless otherwise described, XR media, XR media content, XR content, XR services, etc. pertain to 3D content. In the disclosure, a “user's device” refers to one or more devices that are located around a user and obtain, process, or transmit or receive data to provide XR services to the user. In the disclosure, an “XR device” refers to a device that includes a display and provides XR content to a user via the display. The shape and properties of a display of an XR device may be variously determined. For example, the display may be transparent, semi-transparent, or opaque, and may be a flexible display, a foldable display, or a rigid display with display elements being organic light-emitting diodes (OLEDs), LEDs, liquid crystals (LCs), or the like. The shape and properties of the display of the XR device may be variously determined without being limited to the above examples. Furthermore, the XR device may be a wearable device (e.g., a HMD, XR glasses, etc.) that a user is able to wear. In the disclosure, a “component device” refers to a device that performs at least one of “rendering”, “vision”, or “capturing” function to provide an XR service. A component device may be a collective term referring to a rendering device, a vision device, and a capturing device. Each of the functions will be described in detail below with reference toFIG.4. A component device may be an independent device or a device block included in another device. Any one or more of various communication technologies may be used as a communication technology that may be used for communication between component devices and communication between a component device and a UE. For example, device-to-device (D2D) communication technologies such as Wi-Fi, Wi-Fi Direct, D2D communication, 5G sidelink, Bluetooth, tethering, and other short-range communication technologies may be used. Communication technologies that may be used for communication between component devices and communication between a component device and a UE may be variously determined without being limited to the above-described examples. In the disclosure, a “UE” refers to a device having a network capability (e.g., a 5thgeneration (5G) modem capability) to transmit or receive data to or from another user's device via a network. For example, the UE may communicate with another UE via a server, and may include a communication module or communication application for communicating with the server or the other UE. As a communication technology available for use in communication between UEs, any one or more of various communication technologies may be used. For example, a UE may communicate with other UEs by using a communication technology that is compliant with the 3rdGeneration Partnership Project (3GPP) standards, such as long-term evolution (LTE) or 5G, or a communication technology such as Wi-Fi. 
A communication technology that may be used for communication between UEs may be variously determined without being limited to the above-described examples. In the disclosure, device names such as “XR device”, “component device”, and “UE” are used to logically classify a user's device according to its function. Thus, a device may be referred to by one or more device names. For example, when a first device includes a display and is capable of displaying XR content to a first user, transmitting or receiving data to or from a UE of a second user, and capturing an object via a built-in camera, the first device may be referred to as any one of an XR device, a capturing device (component device), or a UE according to circumstances.

A method of providing 3D XR media content proposed by the disclosure includes the following:
- End-to-end (E2E) flow and architecture for XR services. The E2E flow and architecture for XR services may include distribution of media processing across services among multiple devices, and may be composed of processing entities within a cloud (one or more cloud servers, edge clouds, mobile edge computing (MEC) servers, etc.).
- A functional component architecture of a terminal or UE for XR services.
- A UE architecture that supports a configuration including a plurality of devices (hereinafter, referred to as a multi-device configuration). For example, the plurality of devices may be connected (or tethered) to one another via wired tethering, wireless tethering, or other wired/wireless networks. In addition, the multi-device configuration may include a standalone wearable device (e.g., an HMD, XR glasses, etc.).
- UE session setup and establishment procedures for various XR services on various devices.
- Necessary information for enabling use cases of XR conversational services (e.g., device pose information, reference points, functionality types of a device (e.g., vision, capture, etc.), and media properties (e.g., object size, etc.)).
- Definition of UE features and capabilities used to determine session establishment.
- Determination of cloud assistance based on services, UE capabilities, and requirements (according to a UE or service management entity).

However, the above description is merely for convenience of understanding, and embodiments presented in the disclosure will be described throughout the disclosure. Hereinafter, a method and apparatus according to the disclosure will be described with reference to the attached drawings.

FIG. 1A is a diagram for describing two-dimensional (2D) video streaming 101 and a 2D video call according to an embodiment of the disclosure. Referring to FIG. 1A, in the 2D video streaming 101, 2D video content may be directly transmitted to a UE. The 2D video call 102 is a service in which images (or an image) of a first user and/or a second user are displayed as 2D video images on a 2D display of each of first and second UEs when the first user using the first UE makes or receives a video call with the second user using the second UE. In the 2D video call 102, the 2D video image of the first user of the first UE and the 2D video image of the second user of the second UE are simply overlaid on each other in the display of each of the first and second UEs, but the two video images may not be correlated with each other.

FIG. 1B is a diagram for describing a VR content providing method 103 and an AR content providing method 104, according to an embodiment of the disclosure.
Referring toFIG.1B, in the VR content providing method103, a viewport may be created based on a viewpoint of a UE, and VR content may be generated based on the viewport. A viewport refers to a polygon representing a spatial region, and an object may be rendered inside the viewport. In other words, in the VR content providing method103, only an object included in an area viewed by the UE from a viewpoint of the UE may be generated as VR content. The generated VR content may be displayed on a display of the UE or transmitted to another UE. In the AR content providing method104, vision information may be generated based on a pose of the UE, and AR content may be generated based on the vision information. The vision information is information about the surrounding environment of the UE. In other words, in the AR content providing method104, vision information including information about a position and a direction where a first UE is located with respect to the surrounding environment of the first UE may be generated based on a pose of the first UE, and an image of the first UE or an image of an object surrounding the first UE may be provided to a second UE by taking into account the surrounding environment of the first UE. For example, when the first UE lies on a sofa, the second UE may display an image of the first UE lying on the sofa or on any object. In other words, in the AR content providing method104, it may be determined, based on the vision information, where an object existing in the surrounding environment of the first UE is to be displayed on a display of a second UE (where the object will be augmented). In contrast to a use case of 2D media content, the disclosure provides a method for enabling 360-degree video and 3D media related services for XR applications. The 360-degree video and 3D media related services of the disclosure may include XR conversational services. The XR conversational services are services in which AR objects, VR objects, or the like generated in advance or in real-time during a real-time conversation (e.g., a call) between users using XR devices are provided to a user in real-time. For XR conversational services, additional pre/post-processing may be required in end-to-end (E2E) flow to support VR or AR applications. Furthermore, additional information related to configurations and settings of devices that may be used in use cases for XR conversational services may be required for correct rendering and display of objects. For example, requirements for metadata and additional pre-processing and post-processing may be determined depending on the following factors.Application programs and use cases (from a user's perspective)Combinations and form factors of various devices that may be used in a use case (both from a user's perspective and a system perspective)Media coordination that may be needed due to E2E constraints or bottlenecks in a service chain (e.g., constraints on a network bandwidth or processing/functionalities of a device) In order to support real-time services enabled through 360-degree video and 3D media (e.g., services, such as XR conversational services, in which content is captured and processed in real-time and delivered in real-time to a network processing entity or another user), metadata (e.g., a pose, a camera type, etc.) may be required for processing and display of the 360-degree video and 3D media. 
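As a loose illustration of the kind of per-frame metadata mentioned here, the following Python sketch bundles a capture timestamp, a camera type, and pose information with each media frame. The field names and types are assumptions made for illustration only; the actual metadata set and its on-the-wire encoding would be defined by the service.

    # Hypothetical per-frame metadata for real-time 360-degree video / 3D media delivery.
    # Field names are illustrative; a real service would define the metadata set and its
    # encoding (e.g., a timed-metadata track or an RTP header extension).
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Pose:
        position: Tuple[float, float, float]             # x, y, z relative to a reference point
        orientation: Tuple[float, float, float, float]   # quaternion (x, y, z, w)

    @dataclass
    class FrameMetadata:
        timestamp_us: int                    # capture time in microseconds
        camera_type: str                     # e.g., "rgb" or "rgb_depth"
        camera_pose: Pose                    # pose of the capturing camera
        viewer_pose: Optional[Pose] = None   # optional pose of the rendering device, if fed back

    # Metadata accompanying one captured RGB+depth frame.
    meta = FrameMetadata(
        timestamp_us=1_700_000_000_000_000,
        camera_type="rgb_depth",
        camera_pose=Pose((0.0, 1.2, 0.0), (0.0, 0.0, 0.0, 1.0)),
    )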
In addition, there may be a need to optimize processing operations related to metadata across the entire E2E flow so that requirements of devices, services and networks are all met. According to an embodiment of the disclosure, a UE architecture for XR services is defined. The UE architecture of the disclosure may enable various services (e.g., AR conversational services) having various requirements (media type, media quality, latency, etc.) under various network capabilities and environments (e.g., variable network bandwidths, MEC/cloud processing capabilities, etc.) across different device configurations (or settings) and different capabilities (capture, vision, rendering, display and processing capabilities, and processing speed/power). FIG.2Ais a diagram for describing a method, performed by a first UE, of providing or receiving 3D XR media content to or from a second UE, according to an embodiment of the disclosure. Referring toFIG.2A, information related to a UE includes vision information indicating a surrounding environment of the UE and information about an XR object included in the surrounding environment of the UE. In order for the second UE to provide a first XR object included in the surrounding environment of the first UE to user of the second UE as 3D XR media content, information about the first XR object and first vision information about the surrounding environment of the first UE may be required. The first vision information may be used by the second UE to determine where to display the first XR object. In a 3D XR content providing method201according to an embodiment of the disclosure, the first UE may provide first space set information of the first UE to the second UE so that the second UE may obtain the first XR object and the first vision information and display the first XR object thereon. The first space set information is information about a space surrounding the first UE, and the second UE may use the first space set information to recognize the surrounding environment of the first UE and display the first XR object. In addition, the second UE may further use second space set information of the second UE to display the first XR object by taking into account both the surrounding environments of the first and second UEs. Similarly, the first UE may receive the second space set information from the second UE, and display a second XR object based on second vision information. Alternatively, some or all of the vision information may not be shared between UEs, and each UE may display an XR object based on received space set information. The space set information will be described in detail below with reference toFIG.6. FIG.2Bis a diagram for describing a method, performed by a first user211, of sharing XR media content with a second user221, according to an embodiment of the disclosure. The embodiment of the disclosure to be described with reference toFIG.2Bis merely an example, and a method of providing 3D XR media content according to the disclosure is not limited to the embodiment of the disclosure illustrated inFIG.2B.FIG.2Bshows an example in which XR glasses are used as an XR device. The XR glasses may be transparent, semi-transparent, or opaque. Except when the XR glasses are opaque, a user of the XR glasses may see objects actually existing in a user's field of view (FOV) directly through the lenses, and additionally see 3D media object displayed by the XR glasses. 
Referring to FIG. 2B, 202 of FIG. 2B illustrates a situation where an XR call (or AR call) is performed between the first and second users 211 and 221. The XR call may be initiated by a call request and a call response between first and second UEs 214 and 222. The first user 211 may see a 3D video object 215 representing the second user 221 and a shared object 216 via first XR glasses 212, while the second user 221 may see a 3D video object 225 representing the first user 211 and a shared object 226 via second XR glasses (or the second UE) 222.

The first XR glasses 212, a first camera 213, and the first UE 214 may exist around the first user 211 as devices for XR services. The first XR glasses 212 may render an XR object to be displayed on a display thereof. In addition, the first XR glasses 212 may include a vision camera, and may capture the surrounding environment 210 of the first user 211 by using the vision camera. The first camera 213 may capture an image of the first user 211 in real-time, and may be used to transmit a real-time 3D image of the first user 211 to the second user 221. The first UE 214 may control the XR call with the second UE 222, receive and process data from the second UE 222 for transmission to the first XR glasses 212, and receive and process images captured from the first XR glasses 212 and the first camera 213 for transmission to the second UE 222. Similarly, the second XR glasses (or the second UE) 222 and a second camera 223 may exist around the second user 221 as devices for XR services within the second user's environment 220. Such a configuration is different from a configuration of devices surrounding the first user 211 in that the second XR glasses also serve as the second UE 222 capable of transmitting and receiving data to and from the first UE 214 and managing and processing various pieces of data.

The shared object 216 or 226 may be an object that actually exists in the surroundings of the first or second user 211 or 221 or an object that is created virtually or shared by the first or second user 211 or 221. In addition, the first or second user 211 or 221 is capable of manipulating (or interacting with) the shared object 216 or 226. For example, the second user 221 may move or rotate the shared object 226 displayed on the second XR glasses (or the second UE 222), and accordingly, the shared object 216 may also be moved or rotated in the display of the first XR glasses 212.

In the situation where the XR call is performed as shown in 202 and 203 of FIG. 2B, the following features may exist. At least some of the following features may be different from those of the 2D video call of FIG. 1A.
- One or more 3D media objects may be delivered (and/or shared) between two users. Objects may be captured in real-time or pre-captured prior to an AR call.
- A user may see a 3D media object via an XR device.
- When rendered on an XR device, 3D media objects may be realistically augmented to a user's environment or background.
- User interaction such as rotation and placement of 3D media objects by the user in the user's environment is possible.
- A 3D media object may be pre-made and shared between users on the call (e.g., like the shared object 216) or captured and delivered in a real-time live manner (e.g., like a 3D video object representing the first or second user 211 or 221).
- A user's UE may consist of one or more hardware devices with different processing functions and capabilities, or may be connected to the one or more hardware devices. For example, one or more hardware devices may include a capturing camera, a vision camera, rendering XR glasses, a mobile device that performs certain processing and has 5G capabilities, etc.
- One or more hardware devices may be at different locations in the user's environment, and the locations of the hardware devices may be static or dynamic.
- Media processing required for services may be distributed among other devices and entities (e.g., cloud and MEC servers, etc.) within an E2E flow.
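The manipulation of a shared object described above implies that only a small interaction update, rather than the full 3D asset, needs to travel between the two UEs when a user moves or rotates the object. A minimal Python sketch of such an update message is given below; the message fields, the JSON encoding, and the transport are illustrative assumptions and are not prescribed by the disclosure.

    # Hypothetical interaction update for a shared object in an XR call.
    # Only the object's new transform is sent; the 3D asset itself was delivered
    # earlier (pre-made object) or streams separately (live object).
    import json
    from dataclasses import dataclass, asdict
    from typing import Tuple

    @dataclass
    class SharedObjectUpdate:
        object_id: str                                   # identifies the shared object (e.g., "desk-216")
        position: Tuple[float, float, float]             # new position in the sender's space set
        orientation: Tuple[float, float, float, float]   # new orientation as a quaternion (x, y, z, w)

    def encode_update(update: SharedObjectUpdate) -> bytes:
        """Serialize the update for transmission over the XR call's data channel."""
        return json.dumps(asdict(update)).encode("utf-8")

    def apply_update(scene: dict, payload: bytes) -> None:
        """Apply a received update to the local copy of the shared scene."""
        update = json.loads(payload.decode("utf-8"))
        scene[update["object_id"]] = {
            "position": tuple(update["position"]),
            "orientation": tuple(update["orientation"]),
        }

    # The second user rotates the shared desk; the first UE applies the change locally.
    local_scene = {}
    payload = encode_update(SharedObjectUpdate("desk-216", (0.0, 0.0, 1.5), (0.0, 0.707, 0.0, 0.707)))
    apply_update(local_scene, payload)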
According to an embodiment of the disclosure, in order to augment and display a 3D media object in the user's environment, XR glasses, a UE, and a camera may exchange information with one another. The second UE 222 may obtain, via the second camera 223, information about the second user 221, information for generating a second space set, and information about an object (e.g., the second user 221) existing in the second space set. The first UE 214 may receive, from the second UE 222, information about the second space set around the second UE 222 and information about the second user 221. The first UE 214 may transmit the information received from the second UE 222 to the first XR glasses 212 after or without processing the received information. The first XR glasses 212 may augment and display a 3D media object for the second user 221 and the shared object 216 in an environment of the first user 211 by using a display based on the received information. In order for the first XR glasses 212 to augment and display the 3D media object, information about the surroundings of the first user 211 obtained from the first camera 213 may be further used. In the same manner, a second XR device may augment and display the first user 211 and the shared object 226 in the environment of the second user 221.

Reference numeral 203 of FIG. 2B represents a field of view of the first user 211 wearing the first XR glasses 212. A desk that is the shared object 216 and the 3D video object 215 representing the second user 221 may be displayed on the display of the first XR glasses 212. Furthermore, an XR call may be made among three or more users. For example, referring to 203 of FIG. 2B, a total of five users including the first user 211 participate in an XR call, and the second user 221, a third user, and a fourth user are displayed on the first XR glasses 212. Because the desk that is the shared object 216 and a fifth user 217 actually exist in the surroundings of the first user 211, the desk and the fifth user 217 may be directly visible to the first user 211 without being separately displayed on the transparent or semi-transparent display of the first XR glasses 212.

Hereinafter, a 3D XR media content providing method according to the disclosure for providing various 3D XR services such as the XR call described with reference to FIG. 2B will be described with reference to FIGS. 3 through 11. FIG. 3 is a diagram for describing a media flow for providing XR services, and FIG. 4 illustrates various device configurations used to provide XR services. FIG. 5 illustrates an XR service session establishment for providing XR services, and FIG. 6 illustrates a user space set and user space set parameters used for providing XR services. FIG. 7 illustrates a flow of media data and metadata in an XR service session, FIG. 8 illustrates an XR media architecture of a UE for providing XR services, and FIG. 9 is a flowchart of an XR service providing method. FIGS. 10 and 11 illustrate configurations of devices for providing XR services.
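Before turning to FIG. 3, the receiver-side step just described can be pictured as follows: the rendering device combines the received description of a remote 3D object with locally obtained vision information to decide where the object is augmented. The Python sketch below reduces the vision information to a list of detected horizontal surfaces; the surface representation and the placement rule are illustrative assumptions rather than part of the disclosure.

    # Hypothetical placement of a received 3D object using local vision data.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Surface:
        center: Tuple[float, float, float]   # center of a detected horizontal surface (local coordinates)
        width: float                         # extent in meters
        depth: float

    @dataclass
    class RemoteObject:
        object_id: str
        footprint: Tuple[float, float]       # required width/depth in meters

    def choose_anchor(obj: RemoteObject, surfaces: List[Surface]) -> Optional[Tuple[float, float, float]]:
        """Pick the first detected surface large enough to hold the object."""
        for s in surfaces:
            if s.width >= obj.footprint[0] and s.depth >= obj.footprint[1]:
                return s.center
        return None  # no suitable surface; the renderer could fall back to a floating placement

    # Vision processing (e.g., SLAM plane detection) produced two candidate surfaces.
    surfaces = [Surface((0.2, 0.0, 1.0), 0.4, 0.4), Surface((1.0, 0.75, 2.0), 1.2, 0.6)]
    anchor = choose_anchor(RemoteObject("dog-point-cloud", (0.5, 0.5)), surfaces)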
FIG. 3 is a diagram for describing an XR service flow in which an object (hereinafter, referred to as a ‘first object’) existing in an environment of a first user is provided to a second user as a 3D media object, according to an embodiment of the disclosure. Referring to FIG. 3, the first object may be provided to a second user through 3D media processes 300. The 3D media processes 300 may be performed by a first user's device including a first UE and a second user's device including a second UE, and some of the 3D media processes 300 may be performed by a cloud or MEC server. For example, capturing 310 may be performed by the first user's device, and at least some of 3D modeling 320, XR encoding 330, and XR formatting 340 may be performed by the first user's device or a cloud or MEC server. At least one of XR decoding 360 or XR rendering 370 may be performed by the second user's device or a cloud or MEC server. Thus, management of an XR service session may be required in order to distribute service processing between entities, i.e., determine, in a distributed manner, which of the entities is to perform a certain process among the 3D media processes 300.

For example, a UE may determine to request a cloud or MEC server to perform processing (i.e., cloud-assisted processing) for at least some of the 3D media processes 300, based on at least one of a capability of a component device, a capability of the UE, or a capability of an XR device. Furthermore, the cloud or MEC server may receive a device capability report from the UE, and determine whether to perform cloud-assisted processing for at least some of the 3D media processes 300, based on at least one of the capability of the component device, the capability of the UE, or the capability of the XR device. Criteria for evaluating the capability of the user's device may include a storage capacity of the device, the number of processible operations per second, the number of clocks of a processing device, or information about whether hardware equipment specialized for particular processing is included. The criteria for evaluating the capability of the user's device may be variously determined without being limited to the above-described examples.

An example of detailed operations in the 3D media processes 300 is as follows.
- Capturing (or capture) 310: An operation of capturing content (e.g., a scene, an object, a combination of both, or the like depending on a service application) in real-time via one or more cameras. The one or more cameras may include not only an RGB camera (outputting a 2D video) but also a camera capable of capturing a depth property and other properties such as reflectance that may be used to capture data (e.g., a depth map) needed for 3D modeling. Properties that are capturable by a camera are not limited to the above-described examples and may include various other properties. In addition, processing of data captured by a camera may require, in addition to the captured data, other data (e.g., intrinsic and extrinsic parameters of the camera) that may be obtained during the capture.
- 3D modeling 320: Data output in the capturing 310 may be used for performing 3D modeling to generate and output content in the form of a 3D model data bit stream. A 3D model data bit stream, such as polygon file format (PLY) data, may represent 3D media data in the form of a point cloud or a mesh.
For example, data output in the capturing 310 may be processed as PLY data as follows:
  - multiple RGB + depth → a single PLY representing one object
  - multiple RGB + depth → a plurality of PLYs (a plurality of object sub-parts) → a single PLY representing one object
- XR encoding 330: An output in the 3D modeling 320 may be encoded to compress large amounts of raw data. Point cloud encoding or mesh encoding may be performed using various encoding techniques (e.g., moving pictures expert group (MPEG) video based point cloud compression (V-PCC), Google Draco, etc.). The encoding may be lossy encoding or lossless encoding. To support decoding of compressed data, a decoder corresponding to an encoder used in the XR encoding 330 may have to be used.
- XR formatting (or format) 340: For transmission of data over a network such as a 5G network, the compressed data output in the process of XR encoding 330 may need to be formatted and/or encapsulated. For example, an MPEG International Organization for Standardization base media file format (ISOBMFF) for file encapsulation, an MPEG media transport protocol (MMTP) payload format, a real-time transport protocol (RTP) payload format for preparation before delivery of data, or the like may be used as a format technology.
- Delivery 350: Compressed and formatted media may be delivered to the second UE over a 5G network, etc. by using hypertext transfer protocol (HTTP), RTP, MPEG dynamic adaptive streaming over HTTP (DASH), MPEG media transport (MMT), or other delivery mechanisms.
- XR decoding 360: The compressed data may be received by an XR decoding entity, and the XR decoding entity may decapsulate and decode the compressed bitstream to restore a PLY bitstream that has not been compressed.
- XR rendering 370: After the XR decoding 360, the 3D data bitstream may be transmitted to a renderer. The renderer may render a 2D viewport of the 3D data according to intent of the first user or the first UE, which is received from the first UE, or pose information of the second user using the second UE (e.g., a user offset position, a pose, an orientation, a view frustum, and a viewport). The intent of the first user or the first UE may be delivered to the second UE through, for example, some metadata.

The 3D media processes 300 illustrated in FIG. 3 are merely an example of an XR 3D service flow, and 3D media may be provided through media processes that are slightly different from the 3D media processes 300. In addition, the first and second UEs may each include one or more component devices or may be connected to one or more component devices. For example, one or more component devices may be connected or tethered to the first or second UE by using Bluetooth, Wi-Fi Direct, 5G sidelink, or other communication technologies.
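Conceptually, deciding where each of the operations 310 to 370 runs amounts to assigning the stages of a fixed pipeline either to the user's devices or to a cloud/MEC server, subject to the capability criteria discussed above. The Python sketch below expresses one such assignment; the stage names follow FIG. 3, but the capability scores, per-stage costs, and threshold rule are invented purely for illustration.

    # Hypothetical assignment of the FIG. 3 stages to either the UE side or a cloud/MEC server.
    PIPELINE = ["capturing", "3d_modeling", "xr_encoding", "xr_formatting",
                "delivery", "xr_decoding", "xr_rendering"]

    def assign_stages(ue_capability: int, stage_cost: dict) -> dict:
        """Assign each stage to 'ue', 'cloud', or 'network'. Capturing and rendering stay on the user's devices."""
        assignment = {}
        budget = ue_capability          # invented capability score; higher = more processing headroom
        for stage in PIPELINE:
            if stage in ("capturing", "xr_rendering", "delivery"):
                assignment[stage] = "network" if stage == "delivery" else "ue"
                continue
            cost = stage_cost.get(stage, 0)
            if cost <= budget:
                assignment[stage] = "ue"
                budget -= cost
            else:
                assignment[stage] = "cloud"   # request cloud-assisted (network-assisted) processing
        return assignment

    # Example: a lightweight UE keeps encoding/formatting local but offloads 3D modeling and decoding.
    print(assign_stages(ue_capability=30,
                        stage_cost={"3d_modeling": 40, "xr_encoding": 25,
                                    "xr_formatting": 5, "xr_decoding": 20}))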
FIG. 4 is a diagram for describing various device configurations that may be used to provide 3D XR media, according to an embodiment of the disclosure. Before describing the device configurations, a syntax of each device according to an embodiment of the disclosure will be described.

Each device may be specified using the following syntax or identifier: UE ID, device description, and device functionality type description (UEx:device description:device functionality type description). Here, when each device has a network capability (hereinafter referred to as a data network capability) to transmit and receive data to and from another user's device, the corresponding device has a unique UE ID, or otherwise, a syntax for the corresponding device may include a UE ID of another device having a data network capability among devices connected to the device in a wired or wireless manner. For example, in a first device configuration 401 of FIG. 4, because only a mobile phone has a data network capability, UE IDs of all devices surrounding a user are “UE1”, which is the UE ID of the mobile phone. On the other hand, in a third device configuration 403, because standalone XR glasses, a third camera, and a mobile phone each have data network capabilities, they may have “UE1”, “UE2”, and “UE3” as UE IDs, respectively.

According to an embodiment of the disclosure, when the UE is capable of accessing a 5G system, at least one of a subscription permanent identifier (SUPI), a permanent equipment identifier (PEI), or a 5G global unique temporary identifier (5G-GUTI) may be used as the UE ID. In addition, a correlation between the UE ID and each of SUPI, PEI, and 5G-GUTI, which are generated using separate algorithms, may be determined according to a preset algorithm, and the UE may be provided with the correlation. The correlation may be determined, for example, by the UE or a server, and may be provided to the UE or the server.

According to an embodiment of the disclosure, “device functionality type description” corresponding to a device may be classified based on a role in the device configuration and may be defined as follows.
- Rendering: A device corresponding to a rendering functionality type may render an XR object on a display. The rendering device may render an XR object by using metadata/necessary information for functionality processing related to XR rendering. Functionality processing related to rendering may include, for example, 2D/3D media decoding, post-processing, presentation, and rendering to a 2D/3D display. Necessary information for XR rendering may include not only media data but also pose information of the rendering device itself.
- Vision: A device corresponding to a vision functionality type may obtain and provide information about a user's surroundings (i.e., vision information) to enable accurate rendering of 2D or 3D media for XR services. For example, the vision device may obtain essential input data for computer vision processing such as simultaneous localization and mapping (SLAM) by using an RGB camera or other cameras, so that the user's surroundings may be recognized and analyzed. To realistically overlay an XR environment onto the user's environment, accurate analysis of the user's surroundings as well as 3D media objects may be required. Use cases in which the overlay is realistically represented may be, for example, placing a 3D media point cloud of a dog (a 3D media object) on a surface of a floor (the user's surroundings) or on a sofa in a user's living room (the user's surroundings).
- Capturing: A device corresponding to a capturing functionality type may obtain and provide essential input data for capturing a 3D object in the user's environment (e.g., 3D models of the user's head, body, or other object).
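Combining the identifier syntax and the functionality types above, a component device can be referenced by a compact string such as "UE1:camera1:vision". The Python helper below parses and builds such identifiers; it is only a sketch of the convention, and the separator, the glasses label, and the exact set of identifiers are assumptions rather than normative definitions.

    # Hypothetical helpers for the "UE ID : device description : functionality type" identifier.
    from dataclasses import dataclass

    FUNCTIONALITY_TYPES = {"rendering", "vision", "capturing"}

    @dataclass(frozen=True)
    class DeviceId:
        ue_id: str          # UE that provides network connectivity for this device (e.g., "UE1")
        description: str    # device description (e.g., "camera1", "glasses", "phone")
        functionality: str  # one of "rendering", "vision", "capturing"

    def parse_device_id(identifier: str) -> DeviceId:
        ue_id, description, functionality = identifier.split(":")
        if functionality not in FUNCTIONALITY_TYPES:
            raise ValueError(f"unknown functionality type: {functionality}")
        return DeviceId(ue_id, description, functionality)

    def format_device_id(device: DeviceId) -> str:
        return f"{device.ue_id}:{device.description}:{device.functionality}"

    # The first device configuration of FIG. 4: every device reports UE1 because only
    # the mobile phone has a data network capability ("glasses" is an assumed label).
    config1 = [parse_device_id(s) for s in (
        "UE1:glasses:rendering", "UE1:camera1:vision",
        "UE1:camera3:capturing", "UE1:camera4:capturing", "UE1:camera5:capturing",
        "UE1:phone:rendering", "UE1:camera2:capturing")]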
FIG.4illustrates three device configurations capable of providing 3D media content to a user via XR devices according to an embodiment of the disclosure. Relative positions of various devices shown inFIG.4in relation to one another may be static or dynamic. Referring toFIG.4, the first device configuration401consists of XR glasses that are tethered to the mobile phone, a first camera included in the XR glasses, and three external cameras. A vision camera UE1:camera1:vision may be located on or inside the XR glasses (rendering device). Capturing cameras UE1:camera3:capturing, UE1:camera4:capturing, and UE1:camera5:capturing may be located outside the XR glasses to capture objects around the user. The mobile phone having a data network capability may be connected to the XR glasses in a wired manner or according to a wireless communication method (e.g., Bluetooth, tethering, etc.). In addition, the mobile phone may render an XR object on the mobile phone's display (and thus, may be identified as “UE1:phone:rendering”) and capture objects around the mobile phone via its camera (and thus, may be identified as “UE1:camera2:capturing”). A second device configuration402consists of standalone XR glasses, a first camera included in the standalone XR glasses, a second camera that is dockable, and an external camera. Unlike in the first device configuration401, the standalone XR glasses are referred to as such because they have a data network capability to transmit and receive data to and from another users' device without a separate mobile phone. The second camera that is dockable is a camera detachable from the standalone XR glasses. In other words, the first camera may be attached to or included in the standalone XR glasses as a component thereof, while the second camera may be attached to the standalone XR glasses or detached therefrom to be located at a different position. The first camera may perform a vision function, and the second camera may perform both a vision function and a capturing function. The third device configuration403consists of the standalone XR glasses, two external cameras, and the mobile phone. The third device configuration403consists of a plurality of devices (the standalone XR glasses, the third camera, and the mobile phone) having data network capabilities. Thus, each of the devices having data network capabilities may transmit data related to XR services to a target destination (e.g., another user's device or server) via other devices or directly to the target destination without going through the other devices. Moreover,FIG.4only illustrates three examples of various device configurations, and the disclosure is not limited to the examples ofFIG.4and may include various other device configurations. For example, whether each device has a data network capability may be determined in various ways. Even when a first device has a data network capability, the first device may transmit data to a second device having a data network capability, and the second device may process the received data and transmit it to a server or another device. In addition, the number of devices that may be included in a device configuration is not limited to the examples ofFIG.4and may be variously determined. Types of functionalities (capturing, vision, and rendering) that each device has may also be variously determined. FIG.5is a diagram for describing a process by which an XR service session is established and an XR service is provided, according to an embodiment of the disclosure. 
Referring to FIG. 5, a UE 51 may communicate with one or more component devices 52 and an XR service provider 53 belonging to an environment of a user of the UE. An XR service session may be established based on the communication among the UE, the one or more component devices, and the XR service provider. The user of the UE may transmit or receive XR 3D media content to or from a user of another UE in real-time by using the established XR service session. The XR service provider may include at least one server, and transmit XR service related data or metadata to the UE. For example, the XR service provider may include a cloud, an MEC server, etc.

In operation 501, each component device may transmit, to the UE, its device description as an initial capability report. The one or more component devices may include, for example, AR glasses, a camera, etc. The initial capability report may be transmitted to the UE when the corresponding component device is initially installed/connected to the UE.

In operation 502, the UE may request the XR service provider to transmit information associated with an XR service list. For example, a request for the XR service list may be initiated when the user of the UE requests an XR call from another user via the UE or receives an XR call request from another user. The UE may assume that the XR service may provide one or more representations of an object or scene according to device capabilities or network capabilities.

In operation 503, the XR service provider may provide the XR service list to the UE as a response. The XR service list may include capability requirements for each XR service. XR services that may be included in the XR service list may be variously determined. For example, the XR services may include an XR conference, an AR conference, a video call, etc. Furthermore, the XR services may include a plurality of services (e.g., a high-capability XR call service and a low-capability XR call service) having different capability requirements for the same type of service (e.g., an XR call). In addition, for a given XR service, the XR service provider may perform network media processing to support a UE having an insufficient processing capability. For example, the XR service provider may perform processing such as encoding or decoding of XR media data instead of the UE and transmit the resulting data to the UE. The XR service list may also include information about whether network-assisted media processing is available or required for each XR service.

In operation 504, the UE may request a device status report from each component device. In operation 505, the corresponding component device may transmit a device status report to the UE. For example, the device status report may include the following device status information or device capability information:
- A physical location and a facing direction of a device (e.g., a camera pose)
- Hardware capabilities of the device (e.g., for a camera, an RGB resolution, a depth resolution, and an FOV; for XR glasses, encoder and decoder functions, a 3D modeling function, a display resolution, a display FOV, etc.)

In operation 506, the UE may select at least one XR service from the XR service list based on initial capability reports received in operation 501, the XR service list received in operation 503, and device status reports received in operation 505.
The UE may collect device status reports received from the one or more component devices in operation505, and select, from the XR service list, an XR service having capability requirements that match a status or capability of each component device. In operation507, the UE may determine, based on the initial capability report received in operation501and the device status report received in operation505, capability information and status information of a corresponding component device related to the selected XR service, and transmit, to the XR service provider, the determined capability information and status information of the component device as a device capability report. The device capability report may include camera information, processing performance of the component device, position and orientation information of the component device, etc. According to an embodiment of the disclosure, the device capability report may include user space set parameters. The UE may determine the user space set parameters based on the initial capability report received in operation501and the device status report received in operation505. A syntax and semantics for the user space set parameters will be described in detail below with reference toFIG.6. Furthermore, the UE may determine to request the XR service provider to perform processing (i.e., network-assisted processing) for at least some processes from among 3D media processes related to the selected XR service, based on at least one of a capability of the corresponding component device, a capability of the UE, or a capability of an XR device. According to an embodiment of the disclosure, the device capability report may include information for requesting network-assisted processing for at least some of the 3D media processes related to the selected XR service. In operation508, the XR service provider may provide the UE with device configuration information and a service entry point (e.g., a manifest in the form of dynamic adaptive streaming over HTTP (DASH) media presentation description (MPD), etc.). The device configuration information may include operation-related configuration information (e.g., a display resolution, an uplink media profile, necessary metadata, etc.) of the component device related to the selected XR service. The service entry point may include identification information (e.g., address) of a data network that is accessible by the UE to receive the selected XR service. In addition, the XR service provider may determine to perform at least some of the 3D media processes related to the selected XR service, based on a UE's request or capabilities of the user's devices (the capability of the component device, the capability of the UE, or the capability of the XR device) included in the device capability report received in operation507. According to an embodiment of the disclosure, the XR service provider may transmit, to the UE, information about which of the 3D media processes a network will support in operation508. In operation509, the UE may transmit device configuration information to each component device. Each component device may transmit a configuration acknowledgment (ACK) response to the UE (operation510). The configuration ACK response may include details of a response indicating that the corresponding component device has configured or is able to configure itself according to the received device configuration information. 
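Operations 502 through 510 amount to a small negotiation between the component devices, the UE, and the XR service provider. The Python sketch below condenses the UE-side selection logic of operations 503 to 507: the capability requirements attached to each entry of the XR service list are compared against the collected device status reports. The requirement fields, the aggregation rule, and the best-first ordering are illustrative assumptions; the disclosure only requires that requirements and reported capabilities be matched in some way.

    # Hypothetical matching of XR services against reported device capabilities.
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    @dataclass
    class XrService:
        name: str
        requirements: Dict[str, float]       # e.g., {"display_resolution": 1080, "capture_depth": 1}
        network_assisted: bool = False       # True if the provider can offload processing for this service

    @dataclass
    class DeviceStatus:
        device_id: str
        capabilities: Dict[str, float] = field(default_factory=dict)

    def aggregate(reports: List[DeviceStatus]) -> Dict[str, float]:
        """Merge per-device capabilities, keeping the best value seen for each key."""
        merged: Dict[str, float] = {}
        for report in reports:
            for key, value in report.capabilities.items():
                merged[key] = max(merged.get(key, 0.0), value)
        return merged

    def select_service(service_list: List[XrService], reports: List[DeviceStatus]) -> Optional[XrService]:
        caps = aggregate(reports)
        for service in service_list:         # assume the list is ordered best-first
            if all(caps.get(k, 0.0) >= v for k, v in service.requirements.items()):
                return service
            if service.network_assisted:     # insufficient local capability, but the network can assist
                return service
        return None

    services = [XrService("xr_call_high", {"display_resolution": 2160, "capture_depth": 1}),
                XrService("xr_call_low", {"display_resolution": 1080}, network_assisted=True)]
    reports = [DeviceStatus("UE1:glasses:rendering", {"display_resolution": 1440}),
               DeviceStatus("UE1:camera3:capturing", {"capture_depth": 1})]
    chosen = select_service(services, reports)   # -> xr_call_low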
In addition, the component device may transmit, to the UE, media data and metadata required for a session of the selected XR service. In operation511, the UE may establish an XR service session via access to the service entry point received in operation508. When the XR service session is established, in operation512, the UE and the XR service provider may proceed with uplink/downlink streaming of media data and metadata. According to an embodiment of the disclosure, operations501through505may be performed only when the UE is initially connected to each component device. In other words, after the initial connection, the UE establishes a first XR service session to provide a first XR service to the user, and when a second XR service session needs to be established after the first XR service session ends, operations501through505may be skipped. In addition, the device description initially reported in operation501may instead be reported in operation505. Due to not only the importance of a physical environment itself but also diversity of user device configurations that depends on the user's physical environment, device capability information and metadata related to an environment of a user of a component device may be required for high quality XR experiences. The disclosure defines device capability information and metadata related to an environment of a user of a component device, which are required in an XR service session. The device capability information and metadata may be used by entities participating in the XR service session to provide XR services to a user. In addition, the disclosure proposes a “user space set” used to take into account a user's environment in defining device capability information and metadata. The user space set may be an information set including at least one of information about positions and orientations of various devices located around the user and used to provide XR services, capability information of the devices, or information about a physical environment surrounding the user. The various devices located around the user may be used to define the user space set together with the physical environment surrounding the user. A user space set may exist for each user. In other words, there may be a user space set corresponding to each user. FIG.6is a diagram for describing a user space set according to an embodiment of the disclosure. According to an embodiment of the disclosure, a user space set may include various parameters indicating an environment around a user (hereinafter, referred to as ‘user space set parameters’). The user space set may include information about a space and information about various devices that are located around the user and used to provide XR services. At least some of the devices (i.e., a UE) participating in an XR service session may obtain or process pieces of information necessary for providing the XR services based on various parameters included in the user space set. For example, the UE may receive captured data or vision data from a nearby camera, and process the received captured or vision data based on a user space set. The processed data may be transmitted to a server or another UE, together with the user space set, and may be used to provide other users with 3D media data regarding the user's surrounding environment. 
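One way to picture the role of the user space set at this point is that the UE folds the poses and functionality types reported by its component devices into a single space set description and attaches that description to the media it uploads, so a receiving UE can interpret the media relative to a common reference point. The Python structure below is a loose, non-normative mirror of that idea; the formal parameter syntax appears in the following paragraphs.

    # Hypothetical in-memory form of a user space set used to tag uplink media.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Quaternion = Tuple[float, float, float, float]
    Vector3 = Tuple[float, float, float]

    @dataclass
    class ComponentPose:
        device_id: str
        functionality: str          # "rendering", "vision", or "capturing"
        position: Vector3           # relative to the space set reference point
        orientation: Quaternion     # quaternion (x, y, z, w)

    @dataclass
    class UserSpaceSet:
        reference_device: str                        # device whose pose serves as the reference point
        components: List[ComponentPose] = field(default_factory=list)

    def tag_media(media_payload: bytes, space_set: UserSpaceSet) -> Dict:
        """Bundle encoded media with the space set metadata for uplink streaming."""
        return {"space_set": space_set, "media": media_payload}

    space_set = UserSpaceSet(
        reference_device="UE1:glasses:rendering",
        components=[ComponentPose("UE1:glasses:rendering", "rendering", (0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0)),
                    ComponentPose("UE1:camera3:capturing", "capturing", (1.0, 0.2, -0.5), (0.0, 1.0, 0.0, 0.0))])
    packet = tag_media(b"...encoded PLY fragment...", space_set)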
Referring to FIG. 6, the user space set may be represented using a right-handed Cartesian coordinate system in which an origin is defined as a reference point and an x-axis direction is defined as a space set reference orientation. However, this is merely an example, and the user space set may be represented according to various other representation techniques for representing a 3D space. In the disclosure, for convenience, the user space set will be described using a right-handed Cartesian coordinate system as an example of a representation technique. According to the example of FIG. 6, there may be XR glasses 601, which perform a functionality as a UE, a vision functionality, and a rendering functionality, as well as a first capture camera 602 and a second capture camera 603 in the user space set.

Furthermore, the user space set may include one or more subspace sets. According to an embodiment of the disclosure, a vision subspace set defines a space in which 3D media is rendered within a vision subspace and realistically augmented so that the 3D media may be experienced by the user as being a realistic part of scene/background that exists in the vision subspace. One or more vision subspace sets may exist within a single user space set. A vision subspace set may be implemented using one or more vision cameras whose FOVs may overlap or not. In addition, there may be a vision subspace set corresponding to each component device that performs a vision functionality. According to an embodiment of the disclosure, a capture subspace set defines a space in which a real 3D object may be captured volumetrically by one or more capture cameras. When only a part of the real 3D object exists within the capture subspace set, only the part of the real 3D object may be captured. One or more capture subspace sets may exist within a single user space set. A capture subspace set may be implemented using one or more capture cameras whose FOVs may overlap or not. Furthermore, there may be a capture subspace set corresponding to each component device that performs a capturing functionality.

In addition, although it is described below for convenience that a user space set and a subspace set are each in the shape of a cuboid, the shapes of a user space set and a subspace set are not limited to a cuboid but may be variously determined. Furthermore, a user space set or a subspace set may be static or may be dynamically changed. For example, a shape, size, etc. of a user space set or subspace set may be modified for various reasons (e.g., relocation of a user or device, etc.), and a location of a user space set or subspace set may be changed.

According to an embodiment of the disclosure, a subspace set may be provided for each device or for each functionality type of a device. For example, a vision subspace set 610 may exist for the XR glasses 601, a first capture subspace set 620 for the first capture camera 602, and a second capture subspace set 630 for the second capture camera 603. In addition, the XR glasses 601 may further perform a capturing functionality, and in this case, a separate capture subspace set may be provided for the XR glasses 601 as well. Position and orientation information in the user space set 600 may be determined relative to a reference point 605 of the user space set 600. In addition, position and orientation information in a subspace set may be determined relative to a reference point in the subspace set, and the reference point in the subspace set may be determined relative to the reference point 605 of the user space set 600. For example, a reference point 611 in the vision subspace set 610, a reference point 621 in the first capture subspace set 620, and a reference point 631 in the second capture subspace set 630 may be determined relative to the reference point 605 in the user space set 600.
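Because every pose in a subspace set is expressed relative to that subspace set's reference point, and that reference point is itself expressed relative to the user space set reference point, turning a subspace-relative pose into user-space coordinates is a composition of the two transforms. The Python arithmetic below sketches that composition; it assumes unit quaternions in (x, y, z, w) order, matching the orientation parameters defined later, and the numeric example is invented for illustration.

    # Hypothetical composition of a subspace-set pose into user-space-set coordinates.
    from typing import Tuple

    Quat = Tuple[float, float, float, float]   # (x, y, z, w), unit quaternion
    Vec3 = Tuple[float, float, float]

    def quat_mul(a: Quat, b: Quat) -> Quat:
        ax, ay, az, aw = a
        bx, by, bz, bw = b
        return (aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw,
                aw*bw - ax*bx - ay*by - az*bz)

    def rotate(q: Quat, v: Vec3) -> Vec3:
        """Rotate vector v by unit quaternion q (computes q * v * q^-1)."""
        qx, qy, qz, qw = q
        x, y, z, w = quat_mul(q, (v[0], v[1], v[2], 0.0))
        cx, cy, cz, _ = quat_mul((x, y, z, w), (-qx, -qy, -qz, qw))
        return (cx, cy, cz)

    def to_user_space(ref_pos: Vec3, ref_ori: Quat, local_pos: Vec3, local_ori: Quat) -> Tuple[Vec3, Quat]:
        """Express a pose given in a subspace set in user-space-set coordinates."""
        rx, ry, rz = rotate(ref_ori, local_pos)
        position = (ref_pos[0] + rx, ref_pos[1] + ry, ref_pos[2] + rz)
        orientation = quat_mul(ref_ori, local_ori)
        return position, orientation

    # A point captured 0.5 m in front of the second capture camera, whose subspace reference
    # point sits at (2.0, 0.0, 1.0) with a 180-degree rotation about the (upward) y-axis.
    print(to_user_space((2.0, 0.0, 1.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 0.5), (0.0, 0.0, 0.0, 1.0)))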
According to an embodiment of the disclosure, pieces of information necessary for providing XR services may include at least one of pieces of the following information:
- The number of devices in a user space set
- A device description for each device
- A device functionality type of each device. A device functionality type may be used as a basis for using pose information of each device. A device functionality type of each device may include at least one of rendering, vision, or capturing.
- A position and an orientation of each device (e.g., a position and orientation of the front of a camera). In other words, a pose of each device relative to a reference point defined in the user space set.
- A reference point. The reference point may be one of the following:
  - arbitrary absolute position coordinates based on real-world coordinates such as global positioning system (GPS) coordinates
  - a reference point bound to one of the devices in the user space set
- An indication of whether a position and/or an orientation of a device is static or dynamic
- For a camera device:
  - FOV/angle of view
  - minimum and maximum values of a sensor depth sensitivity
  - intrinsic parameters
  - extrinsic parameters

According to an embodiment of the disclosure, user space set parameters may include pieces of necessary information for XR services as described above, and may be represented using the following syntax. First, the user space set parameters may be classified into a first parameter group and a second parameter group. According to an embodiment of the disclosure, the second parameter group may or may not be obtained based on the first parameter group.

According to an embodiment of the disclosure, a syntax for representing the first parameter group is as follows. The first parameter group may include all or some of the parameters described below. In other words, some parameters may be omitted. In addition, the syntax for the first parameter group below is merely an example, and parameters having the same or similar semantics as those in the first parameter group may also be represented according to a different syntax.
class NonDerivedParameters( ) {
   SpaceSetReferenceStruct( )
   unsigned int( )  num_components;
   for (i=0; i<num_components; i++) {
      InitialPose( )
      unsigned int( )  device_id;
      unsigned int( )  device_description;
      unsigned int( )  dynamic_pose;
      unsigned int( )  num_functionalities;
      for (j=0; j<num_functionalities; j++) {
         unsigned int( )  pose_functionality_type;
         InitialPose( )
         if (pose_functionality_type=1,2) {
            signed int( )  hor_field_of_view;
            signed int( )  ver_field_of_view;
            signed int( )  minimum_depth;
            signed int( )  maximum_depth;
         }
         if (pose_functionality_type=2) {
            CameraParameters( )
         }
      }
   }
}

class SpaceSetReferenceStruct( ) {
   unsigned int( )  space_set_reference_type;
   if (space_set_reference_type=0) {
      signed int( )  gps_latitude;
      signed int( )  gps_longitude;
      signed int( )  world_orientation;
   }
   if (space_set_reference_type=1) {
      unsigned int( )  device_id;
   }
   unsigned int( )  dynamic_reference;
}

class InitialPose( ) {
   signed int( )  initial_position_x;
   signed int( )  initial_position_y;
   signed int( )  initial_position_z;
   signed int( )  initial_orientation_x;
   signed int( )  initial_orientation_y;
   signed int( )  initial_orientation_z;
   signed int( )  initial_orientation_w;
}

class CameraParameters( ) {
   IntrinsicParameters( )
   ExtrinsicParameters( )
}

According to an embodiment of the disclosure, the semantics of each parameter represented according to the above syntax is as follows.
- num_components: It specifies the number of components (component devices) in a user space set.
- initial_position_x, initial_position_y, initial_position_z: They specify x, y, and z coordinate values corresponding to coordinates of an initial position of a component device with respect to a user space set reference point. A unit in which coordinate values are expressed may be, for example, centimeter or millimeter, but is not limited thereto and may be variously determined. When a component device is used as a reference device in the user space set (when device_id of the component device matches the device_id value specified for space_set_reference_type=1 in the SpaceSetReferenceStruct), the three coordinate values are all set to 0.
- initial_orientation_x, initial_orientation_y, initial_orientation_z, initial_orientation_w: They respectively specify x, y, z, and w elements of an orientation quaternion (or Hamilton number) indicating an initial orientation of a component device. w is the real part of the quaternion, and x, y, and z are the imaginary parts of the quaternion. When the component device is used as a reference device in the user space set, values of these parameters define a unit quaternion with zero rotation in an orientation of the component device. In this case, initial_orientation_x may indicate a direction of an x-axis of a space set coordinate system (e.g., a right-handed Cartesian coordinate system), and initial_orientation_y may indicate a direction of a y-axis pointing vertically upwards.
- space_set_reference_type: It specifies how to define a reference point in the user space set, i.e., an origin (0, 0, 0) and a reference orientation. All the other pieces of pose information for the first parameter group may be defined with the reference point as an origin. The reference orientation may define a direction of an x-axis of a space set coordinate system (e.g., the right-handed Cartesian coordinate system). When a value of space_set_reference_type is 0, the reference point (ground level) and the reference orientation may be respectively defined as real-world GPS coordinates and a real-world orientation.
For a component device with the value of space_set_reference_type set to 1, a pose (position coordinates and an orientation) of the component device may be used as a reference point and a reference orientation for the user space set.
- gps_latitude, gps_longitude: They specify, in units of decimal degrees (DD), lines of latitude and longitude for GPS coordinates of a reference point (origin) of the user space set coordinate system.
- world_orientation: It specifies a world compass orientation in degrees, which is defined as a reference orientation of a space set coordinate system (e.g., an x-axis of the right-handed Cartesian coordinate system) (0 degree corresponds to true north in the real world). The coordinate system may be the right-handed Cartesian coordinate system with a y-axis perpendicular to the x-axis and pointing upwards. A default direction (x-axis) may be true north.
- device_id: It specifies a unique identifier of a component device.
- dynamic_reference: A flag that specifies whether a reference point in the user space set is static (when a flag value is 0) or dynamic (when the flag value is 1).
- device_description: It specifies a description of a component device. The description of the component device may be specified as 1) a description in a predefined list (e.g., "0=glasses, 1=mobile phone, 2=camera") or 2) a description string entry.
- dynamic_pose: A flag that specifies whether a pose of a component device is static (when a flag value is 0) or dynamic (when the flag value is 1).
- num_functionalities: It designates the number of functionalities (functionalities defined by pose_functionality_type) for which a component device and pose information of the component device are used. A component device identified by one device_id may include one or more functionalities. In other words, a component device may include only one functionality, both capturing and vision functionalities, both capturing and rendering functionalities, both vision and rendering functionalities, or all of the capturing, vision, and rendering functionalities.
- pose_functionality_type: It specifies a pose functionality type of a component device. A value of 0 indicates a pose functionality for rendering, a value of 1 specifies a pose functionality for vision, and a value of 2 indicates a pose functionality for capturing.
- hor_field_of_view, ver_field_of_view: They respectively specify horizontal and vertical FOV capture or vision capabilities of a component device (e.g., a camera). A unit of FOV may be, for example, radians.
- minimum_depth, maximum_depth: They respectively specify minimum and maximum values of a depth capture or vision capability of a component device (e.g., a camera) for a designated functionality. A unit of depth may be, for example, millimeters.
- IntrinsicParameters( ), ExtrinsicParameters( ): They respectively specify a list of internal parameters and a list of external parameters for each component device (camera). For example, the internal parameters are parameters for a camera device itself and may include a focal length, a principal point, a skew coefficient, etc., while the external parameters are parameters for describing a transformation relationship between a camera coordinate system and a real-world coordinate system and may include rotation or translation parameters between the two coordinate systems.

Next, according to an embodiment of the disclosure, a syntax for representing the second parameter group is as follows.
The second parameter group may include all or some of the parameters described below. In other words, some parameters may be omitted. In addition, the syntax for the second parameter group below is merely an example, and parameters having the same or similar semantics as those in the second parameter group may also be represented according to a different syntax.

class SpaceSetSizeStruct( ) {
   unsigned int( )  spacesetsize_cuboid_dx;
   unsigned int( )  spacesetsize_cuboid_dy;
   unsigned int( )  spacesetsize_cuboid_dz;
}

class VisionSubSpaceStruct( ) {
   SubSpaceReferencePointStruct( )
   unsigned int( )  visionsubspacesize_cuboid_dx;
   unsigned int( )  visionsubspacesize_cuboid_dy;
   unsigned int( )  visionsubspacesize_cuboid_dz;
}

class CaptureSubSpaceStruct( ) {
   SubSpaceReferencePointStruct( )
   unsigned int( )  capturesubspacesize_cuboid_dx;
   unsigned int( )  capturesubspacesize_cuboid_dy;
   unsigned int( )  capturesubspacesize_cuboid_dz;
}

class SubSpaceReferencePointStruct( ) {
   signed int( )  offset_x;
   signed int( )  offset_y;
   signed int( )  offset_z;
}

According to an embodiment of the disclosure, the semantics of each parameter represented according to the above syntax is as follows:
- spacesetsize_cuboid_dx, spacesetsize_cuboid_dy, spacesetsize_cuboid_dz: They specify sizes of a user space set having the form of a cuboid in directions of the x-, y-, and z-axes of the Cartesian coordinate system. A reference point in the user space set may be, for example, a center of the cuboid when space_set_reference_type=0, and a center of a bottom face of the cuboid when space_set_reference_type=1. However, this is merely an example, and the position of the reference point may be variously determined.
- visionsubspacesize_cuboid_dx, visionsubspacesize_cuboid_dy, visionsubspacesize_cuboid_dz: They specify sizes of a vision subspace set having the form of a cuboid in directions of the x-, y-, and z-axes of the Cartesian coordinate system. The sizes in the x-, y-, and z-axis directions are specified relative to a reference point in the vision subspace set. The reference point in the vision subspace set may be defined by SubSpaceReferencePointStruct( ) included in a vision subspace structure. For example, the reference point (or anchor point) in the vision subspace set may be determined by an edge closest to the reference point in the user space set among edges of the cuboid representing a vision subspace set. The anchor point in the vision subspace set is not limited to the above-described example but may be variously determined.
- capturesubspacesize_cuboid_dx, capturesubspacesize_cuboid_dy, capturesubspacesize_cuboid_dz: They specify sizes of a capture subspace set having the form of a cuboid in directions of the x-, y-, and z-axes of the Cartesian coordinate system. The sizes in the x-, y-, and z-axis directions are specified relative to a reference point in the capture subspace set. The reference point in the capture subspace set may be defined by SubSpaceReferencePointStruct( ) included in a capture subspace structure. For example, the reference point (or anchor point) in the capture subspace set may be determined by an edge closest to the reference point in the user space set among edges of the cuboid representing a capture subspace. The anchor point in the capture subspace set is not limited to the above-described example but may be variously determined.

Next, according to an embodiment of the disclosure, a third parameter group representing a 3D media object captured by a capture camera in a user space set is described.
The third parameter group may be determined based on at least one of the first or second parameter group for the user space set. According to an embodiment of the disclosure, a syntax for representing the third parameter group is as follows. The third parameter group may include all or some of the parameters described below. That is, some parameters may be omitted. In addition, the syntax for the third parameter group below is merely an example, and parameters having the same or similar semantics as those in the third parameter group may also be represented according to a different syntax.

class ObjectSizeStruct( ) {
   unsigned int( )  real_size_dx;
   unsigned int( )  real_size_dy;
   unsigned int( )  real_size_dz;
}

class DefaultOrientationStruct( ) {
   unsigned int( )  object_default_orientation_x;
   unsigned int( )  object_default_orientation_y;
   unsigned int( )  object_default_orientation_z;
   unsigned int( )  object_default_orientation_w;
}

class DefaultRenderingParamStruct( ) {
   unsigned int( )  min_rendering_distance;
   unsigned int( )  max_rendering_distance;
   unsigned int( )  default_rendering_distance;
}

According to an embodiment of the disclosure, the semantics of each parameter represented according to the above syntax is as follows:
- real_size_dx, real_size_dy, real_size_dz: They respectively specify real sizes of a 3D media object in the x-, y-, and z-directions, corresponding to a coded cuboid used to represent 3D media data (e.g., a 10-bit bounding box for a V-PCC compressed point cloud). A unit of size may be, for example, millimeter.
- object_default_orientation_x, object_default_orientation_y, object_default_orientation_z, object_default_orientation_w: They specify elements of an orientation quaternion representing default rendering orientations of a 3D media object in relation to the coded cuboid used to represent 3D media data (e.g., a 10-bit bounding box for a V-PCC compressed point cloud). For V-PCC coded data, a default rendering orientation may match a pi_front[d] supplemental enhancement information (SEI) message (in the V-PCC specification, pi_front[d] indicates a value of a d-axis of a unit vector representing a front direction of a reconstructed point cloud sequence in units of 2^-16). When pi_front[d] does not exist, a default rendering orientation may be assumed to represent a unit vector (0.0, 1.0, 0.0).
- min_rendering_distance: It specifies a minimum distance between a user display and a 3D media object, at which the 3D media object may be rendered and presented to the user. A unit of distance may be, for example, centimeter or millimeter.
- max_rendering_distance: It specifies a maximum distance between a user display and a 3D media object, at which the 3D media object may be rendered and presented to the user. A unit of distance may be, for example, centimeter or millimeter.
- default_rendering_distance: It specifies a default rendering distance between a user display and a 3D media object, at which the 3D media object is rendered and presented to the user upon initial playback. A unit of distance may be, for example, centimeter or millimeter.

The first parameter group, the second parameter group, or the third parameter group described with reference toFIG.6are shared among a first UE, a server, and/or a second UE as user space set parameters or subspace set parameters, so that the first UE/second UE or the server may understand a space surrounding the second UE/first UE and process an object around the second UE/first UE to control it to be displayed as a 3D XR media object on an XR device.
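As noted above, the second parameter group may or may not be derived from the first parameter group. Purely as a hypothetical illustration of such a derivation, and not as part of the disclosure, the short Python sketch below estimates cuboid space set extents from component-device positions and their maximum capture/vision depths; the function name, field names, and centimeter units are assumptions made for the example.

def derive_space_set_size(devices):
    # devices: list of dicts with 'position' = (x, y, z) and an optional 'maximum_depth'.
    # Returns cuboid extents (dx, dy, dz) enclosing every device position, padded on
    # each axis by that device's maximum capture/vision depth.
    mins = [float("inf")] * 3
    maxs = [float("-inf")] * 3
    for dev in devices:
        reach = dev.get("maximum_depth", 0)
        for axis in range(3):
            mins[axis] = min(mins[axis], dev["position"][axis] - reach)
            maxs[axis] = max(maxs[axis], dev["position"][axis] + reach)
    return tuple(maxs[axis] - mins[axis] for axis in range(3))

devices = [
    {"position": (0, 0, 0)},                           # XR glasses at the reference point
    {"position": (120, 0, 50), "maximum_depth": 300},  # first capture camera, 3 m depth range
    {"position": (-80, 0, 50), "maximum_depth": 250},  # second capture camera
]
print(derive_space_set_size(devices))  # -> (750, 600, 600)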
FIG.7is a diagram for describing a flow of media data and metadata according to an embodiment of the disclosure.

FIG.7illustrates a flow of media data and metadata, a flow of user space set parameters, and a flow of additional media metadata for XR services (e.g., object size, default orientation, or some or all of the user space set parameters), the flows being among entities participating in an XR service session. Referring toFIG.7, for convenience, a cloud, a server, an MEC server, and the like are collectively referred to as a cloud. According to an embodiment of the disclosure, all or some parameters included in the first, second, or third parameter group may be selected and transmitted between UEs according to the flow of metadata illustrated inFIG.7.

According to an embodiment of the disclosure, the flow of media data and metadata may be described using the following syntax.

UE1: Source Device→UE2: Target Device

Here, although each user may have one or more UEs (i.e., a device having a network capability (e.g., a 5G modem capability) to transmit and receive data to and from another user's device over a network), it is assumed for convenience of description that the first and second users each have one UE. Thus, UE1 and UE2 refer to a first UE of the first user and a second UE of the second user, respectively. According to an embodiment of the disclosure, each flow of media data and metadata described using the syntax is as follows.

(Operation701) Pose Information of First User
- UE1:glasses→cloud [Purpose: Split rendering]
- UE1:glasses→UE2:phone [Purpose: view dependent partial capturing & delivery/rate adaptation]
- user_pose_parameters: SpaceSetReferenceStruct(pose_functionality_type=0)
Split rendering is the process of performing some rendering operations in the cloud.

(Operation701a) Pose Information of Second User
- UE2:glasses→UE1:phone [Purpose: View dependent partial capturing & delivery/rate adaptation]
- UE2:glasses→cloud [Purpose: Split rendering]
- user_pose_parameters: SpaceSetReferenceStruct(pose_functionality_type=0)

(Operation702) 3D/2D Media Data
- UE2:phone→UE1:phone [3D data]
- cloud→UE1:glasses [2D data, Purpose: Split rendering]

(Operation702a) 2D Media Data
- UE1:camera→UE1:phone
- UE1:camera→cloud [Purpose: 3D modeling in cloud]

(Operation702b) 3D Media Data
- UE1:phone→UE2:phone
- cloud→UE2:phone

(Operation703) Vision Information of First User
- UE1:phone→UE2:phone/glasses [Purpose: Support of rendering and rate adaptation in UE2]
- UE1:phone→cloud [Purpose: Support of cloud-based 3D modeling and split rendering]
- vision_cam_parameters: SpaceSetReferenceStruct(pose_functionality_type=1) [unprocessed data or first parameter group]
- space_set_size: SpaceSetSizeStruct( ) [vision-processed data]
- space_set_reference_point: SpaceSetReferenceStruct( ) [vision-processed data]
- light_source_direction [vision-processed data]
- augmentation_type [vision-processed data]

(Operation703a) Vision information of second user: It may be inferred from the vision information of the first user obtained in operation703by replacing UE1 and UE2 with each other.
(Operation704) 3D Modeling Parameter
- UE1:camera→UE1:phone [Passing information between user's devices]
- UE1:camera/phone→cloud [Purpose: Cloud 3D modeling]
- capture_cam: SpaceSetReferenceStruct(pose_functionality_type=2)
- Intrinsic_param: IntrinsicParameters( )
- Extrinsic_param: ExtrinsicParameters( )

(Operation705) 3D Model Information
- UE1:phone→UE2:phone/glasses [When 3D modeled in UE]
- Cloud→UE2:phone/glasses [When 3D modeled in cloud]
- Object size, default orientation, default rendering size, priority

When movement of the first user in the first user's space set needs to be mapped correctly to the second user's space set (whether scaled or unscaled), pose information (or space set information) of the first user, such as the first or second parameter group for the first user, may be transmitted directly to the second user and used for rendering by the second user. Furthermore, when an object is shared between two users and is visible to both of them, each user is able to know exactly in which FOV (from which direction and distance) the other user is viewing the shared object, through knowledge about the pose information and vision information (space set, etc.) of the other user. Sharing pose information of each user with the other may be useful in real-time use cases, such as a case where the two users need to view the shared object at the same distance and angle.
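The purposes noted in operations701and701a(view dependent partial capturing & delivery/rate adaptation) can be pictured with a small, hypothetical Python sketch that is not described in this form in the disclosure: a sending UE uses the remote user's shared position, viewing direction, and horizontal FOV to decide which patches of a captured 3D object to deliver at full quality. All names and values are illustrative assumptions, and only a 2D horizontal test is shown.

import math

def visible_patches(viewer_pos, viewer_dir, hor_fov_rad, patch_centers):
    # Return indices of patches whose center lies within the viewer's horizontal FOV;
    # viewer_dir is assumed to be a unit vector in the horizontal plane.
    selected = []
    for i, center in enumerate(patch_centers):
        to_patch = (center[0] - viewer_pos[0], center[1] - viewer_pos[1])
        norm = math.hypot(*to_patch)
        if norm == 0:
            selected.append(i)
            continue
        cos_angle = (to_patch[0] * viewer_dir[0] + to_patch[1] * viewer_dir[1]) / norm
        if math.acos(max(-1.0, min(1.0, cos_angle))) <= hor_fov_rad / 2:
            selected.append(i)
    return selected

# UE2 looks along +x with a 90-degree horizontal FOV; only patch 0 is in view, so it
# could be sent at full quality while the other patches are reduced or skipped.
print(visible_patches((0, 0), (1, 0), math.radians(90), [(5, 1), (5, 6), (-3, 0)]))  # -> [0]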
For example, the capability negotiation may involve determining, based on capabilities of a user's device and requirements of an XR service, an XR service with a level of quality that may be supported based on the capabilities of the user's device between a UE and a server, or determining to perform, in the user's device, only processes which are supportable based on the capabilities of the user's device from among 3D media processes related to the XR service while performing the rest of them in the server.XR interaction controller830: It may manage interaction-based services by communicating with the server (or the second UE)82. The XR interaction controller830may provide relevant data to the XR-aware application810for interaction control, provide relevant data to the XR media session handler820for interaction report control, provide relevant data to an XR media player840for vision-based interaction playback, and provide the relevant data to an XR media generator850for media generation.XR media player840: It may receive XR media content by communicating with the server (or the second UE)82. The XR media player840may provide relevant data to the XR aware application810for media playback (media access, depacketization, decapsulation, decoding, rendering, etc.), provide relevant data to the XR media session handler820for media session control, and provide relevant data to the XR interaction controller830for session interaction.XR media generator850: It may produce XR media content by communicating with the server (or the second UE)82. The XR media generator850may provide relevant data to the XR-aware application810for media generation (capture, 3D modeling and pre-processing, encoding, encapsulation, packetization, etc.), provide relevant data to the XR media session handler820for media session control, and provide relevant data to the XR interaction controller830for XR media generation interaction. According to an embodiment of the disclosure, interface parameters (including metadata that may be shared among interfaces) that may be transmitted by an interface between entities in the XR media architecture are as follows. However, the following is only an example of possible metadata. For example, XR media processing, XR media flow, XR services, etc. according to an embodiment of the disclosure may be enabled via interface parameters transmitted between entities.Parameters of a first interface (801): metadata required to process data in the server82. Because both remote rendering and remote content creation may be supported by the server82, both metadata about the first user and metadata about the second user may be included therein.User pose information (Purpose: remote rendering):NonDerivedParameters(pose_functionality_type=0);Vision camera information (for remote rendering):NonDerivedParameters(pose_functionality_type=1);Capture camera information (for cloud based 3D modeling and encoding):NonDerivedParameters(pose_functionality_type=2);User space set information (for remote rendering/3Dmodeling):SpaceSetReferenceStruct( )SpaceSetSizeStruct( )VisionSubSpaceStruct( )CaptureSubSpaceStruct( )SubSpaceReferencePointStruct( )Media object capture information (uplink: for when most processing done on devices; downlink: for when most rendering is done on device)ObjectSizeStruct( )DefaultOrientationStruct( )DefaultRenderingParamStruct( )Parameters of a second interface (802): metadata transmitted between the XR interaction controller830and the XR media player840. 
The metadata is generally metadata information related to the second user. However, it may not be necessary for the XR media player840to reproduce the metadata related to the second user. According to an embodiment of the disclosure, the XR media player840may not generally have space set related processing capabilities, and may have vision information processing capabilities. However, processing capabilities may be flexibly shared between the XR interaction controller830and the XR media player840. In other words, information is shared between the XR interaction controller830and the XR media player840, so that data that cannot be processed individually may be processed in a collaborative manner.User pose information (for media playback):NonDerivedParameters(pose_functionality_type=0);Vision camera information (for media playback):NonDerivedParameters(pose_functionality_type=1);User space set information (for media playback):SpaceSetReferenceStruct( )SpaceSetSizeStruct( )VisionSubSpaceStruct( )(CaptureSubSpaceStruct( ))SubSpaceReferencePointStruct( )Media object capture information (uplink: for when most processing done on a user's device rather than the server; downlink: for when most rendering is done on a user's device rather than the server)ObjectSizeStruct( )DefaultOrientationStruct( )DefaultRenderingParamStruct( )Parameters of a third interface (803): According to an embodiment of the disclosure, the XR media generator850may not have powerful processing capabilities. Accordingly, the XR media generator850may offload 3D media generation and encoding, etc. According to an embodiment of the disclosure, the metadata may be transmitted directly to the server (or the second UE)82via a fifth interface (805), or may be transmitted thereto via the first interface (801) after going through the XR interaction controller830via the third interface (803). The pose information and vision information of the second user, which are input to the XR media generator850via the first interface (801) and the third interface (803), may be used to perform view based partial capturing, generation, delivery, or rendering on a second user's media data for the first user.Capture camera information (for remote 3D modeling and encoding, etc.):NonDerivedParameters(pose_functionality_type=2);User space set information (optional):SpaceSetReferenceStruct( )SpaceSetSizeStruct( )(VisionSubSpaceStruct( ))CaptureSubSpaceStruct( )SubSpaceReferencePointStruct( )Media object capture information (if all processing is performed by the XR media generator850)ObjectSizeStruct( )DefaultOrientationStruct( )DefaultRenderingParamStruct( )Parameters of a fourth interface (804): commonly received media manifests such as DASH MPD.Parameters of the fifth interface (805): When latency is important, particular metadata may be transmitted directly between the XR media generator850and the server (or the second UE)82. In other words, media data may be transmitted directly to the server (or the second UE)82via the fifth interface (805) without going through the XR interaction controller830. FIG.9is a diagram for describing a method, performed by a first UE, of transmitting 3D XR media data to a second UE, according to an embodiment of the disclosure. Referring toFIG.9, in operation910, the first UE may receive, from at least one component device, a capability and status report on the at least one component device. 
In operation920, the first UE may transmit a device capability report regarding an XR service to a server based on the capability and status report. In operation930, the first UE may receive device configuration information for the XR service from the server. In operation940, the first UE may establish an XR service session based on the device configuration information. In operation950, the first UE may process 3D media data and metadata related to the XR service, which are obtained by controlling the at least one component device. In operation960, the first UE may transmit the processed 3D media data and metadata to the second UE via the server. FIG.10is a diagram for describing a configuration of a UE or component device according to an embodiment of the disclosure. The UEs, XR devices, or component devices described with reference toFIGS.1A,1B,2A,2B, and3to9may each have a configuration as illustrated inFIG.10. Alternatively, some component devices may include components that are different from those inFIG.10(e.g., a camera, a low-power processor, a display, a short-range communication module, etc.). Hereinafter, for convenience of description, a UE will be described as an example. Referring toFIG.10, the UE may include a transceiver1020, a memory1030, and a processor1010. However, the components of the UE are not limited thereto. For example, the UE may include more or fewer components than the components described above. For example, the UE may not include the memory1030. Furthermore, the transceiver1020, the memory1030, and the processor1010may be implemented as a single chip. Furthermore, the processor1010may include one or more processors. The transceiver1020collectively refers to a receiver and a transmitter, and may transmit and receive signals to and from a server, a component device, an XR device, or another UE. For example, the transceiver1020may transmit and receive control signals, media data, and metadata. To achieve this, the transceiver1020may include an RF transmitter for up-converting and amplifying a frequency of a signal to be transmitted and an RF receiver for low-noise amplifying a received signal and down-converting its frequency. However, this is merely an example of the transceiver1020, and components of the transceiver1020are not limited to the RF transmitter and the RF receiver. Furthermore, the transceiver1020may receive a signal via a radio channel and output the signal to the processor1010and transmit a signal output from the processor1010via a radio channel. The memory1030may store data and programs necessary for operations of the UE. Furthermore, the memory1030may store control information or data included in a signal obtained by the UE. The memory1030may include storage media such as read-only memory (ROM), random access memory (RAM), hard discs, compact disc (CD)-ROM, and digital versatile discs (DVDs), or a combination thereof. In addition, the memory1030may not exist separately and may be included in the processor1010. The processor1010may control a series of processes so that the UE may operate according to the embodiments of the disclosure. For example, the processor1010may receive control signals, media data, and metadata through the transceiver1020, and process the received control signals, media data, and metadata. In addition, the processor1010may transmit the processed control signals, media data, and metadata through the transceiver1020. 
The processor1010may include a plurality of processors and execute a program stored in the memory1030to perform an operation of controlling the components of the UE. FIG.11is a diagram for describing a configuration of a server according to an embodiment of the disclosure. The cloud, server, or MEC server described with reference toFIGS.1A,1B,2A,2B, and3to9may have a configuration as illustrated inFIG.11. Hereinafter, for convenience of description, a server will be described as an example. Referring toFIG.11, the server may include a transceiver1120, a memory1130, and a processor1110. However, the components of the server are not limited thereto. For example, the server may include more or fewer components than those described above. For example, the server may not include the memory1130. Furthermore, the transceiver1120, the memory1130, and the processor1110may be implemented as a single chip. Furthermore, the processor1110may include one or more processors. The transceiver1120collectively refers to a receiver and a transmitter, and may transmit and receive signals to and from a UE, a component device, an XR device, or another server. For example, the transceiver1120may transmit and receive control signals, media data, and metadata. To achieve this, the transceiver1120may include an RF transmitter for up-converting and amplifying a frequency of a signal to be transmitted and an RF receiver for low-noise amplifying a received signal and down-converting its frequency. However, this is merely an example of the transceiver1120, and components of the transceiver1120are not limited to the RF transmitter and the RF receiver. Furthermore, the transceiver1120may receive a signal via a radio channel and output the signal to the processor1110and transmit a signal output from the processor1110via a radio channel. The memory1130may store data and programs necessary for operations of the server. Furthermore, the memory1130may store media data or metadata included in a signal obtained by the server. The memory1130may include storage media such as ROM, RAM, hard discs, CD-ROM, and DVDs, or a combination thereof. In addition, the memory1130may not exist separately and may be included in the processor1110. The processor1110may control a series of processes such that the server may operate according to the embodiments of the disclosure. For example, the processor1110may receive control signals, media data, and metadata through the transceiver1120, and process the received control signals, media data, and metadata. In addition, the processor1110may transmit the processed control signals, media data, and metadata through the transceiver1120. The processor1110may include a plurality of processors and execute a program stored in the memory1130to perform an operation of controlling the components of the server. FIG.12is a flow chart illustrating a method performed by a first terminal according to an embodiment of the disclosure. Referring toFIG.12, in operation1201, the first terminal may identify capabilities of the first terminal connected to at least one component device. For example, the at least one component device may include at least one of a camera, a speaker, a display and a sensor. In operation1203, the first terminal may establish via a server, a session associated with an augmented reality (AR) service based on the capabilities of the first terminal. 
For example, the first terminal may communicate with the server to establish the session, and the AR service may include an AR call between the first terminal and a second terminal. In an embodiment, a type of the session and a configuration of the session are identified based on the capabilities of the first terminal. During the establishment of the session, a format associated with the 3D media data is determined. In operation1205, the first terminal may perform pre-processing on 3D media data acquired by the at least one component device. For example, the pre-processing may include a format conversion. In an embodiment, the pre-processed 3D media data is encoded before being transmitted to the second terminal. In operation1207, the first terminal may transmit, to the second terminal, the pre-processed 3D media data in real time.

FIG.13is a flow chart illustrating a method performed by a second terminal according to an embodiment of the disclosure. Referring toFIG.13, in operation1301, the second terminal may identify capabilities of the second terminal connected to at least one component device. For example, the at least one component device may include at least one of a camera, a sensor, a display and a speaker. In operation1303, the second terminal may establish, via a server, a session associated with an augmented reality (AR) service based on the capabilities of the second terminal. For example, the second terminal may communicate with the server to establish the session, and the AR service may include an AR call between a first terminal and the second terminal. In an embodiment, a type of the session and a configuration of the session are identified based on the capabilities of the second terminal. During the establishment of the session, a format associated with the 3D media data is determined. In operation1305, the second terminal may receive, from the first terminal, 3D media data in real time. In operation1307, the second terminal may perform post-processing on the 3D media data. For example, the post-processing may include a format conversion. In operation1309, the second terminal may render the post-processed 3D media data on the second terminal. In an embodiment, the post-processed 3D media data is decoded before the rendering.

According to an embodiment of the disclosure, a method, performed by a first user equipment (UE), of transmitting 3D XR media data to a second UE includes: receiving, from at least one component device, a capability and status report on the at least one component device; transmitting, to a server, a device capability report regarding an XR service based on the capability and status report; receiving device configuration information for the XR service from the server; establishing an XR service session based on the device configuration information; processing 3D media data and metadata related to the XR service, which are obtained by controlling the at least one component device; and transmitting the processed 3D media data and metadata to the second UE via the server. The at least one component may include: one or more vision camera devices configured to obtain 3D information about a surrounding environment of a first user of the first UE; one or more capturing camera devices configured to obtain 3D information about an object surrounding the first user; a rendering device configured to render 3D media data related to an XR service of the second UE; and an XR device displaying the rendered 3D media data.
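The first-UE method summarized above can be pictured as a simple control flow. The Python pseudo-flow below is only an illustrative sketch: the server and device objects, their method names, and the payload fields are all hypothetical and are not defined by the disclosure.

def run_first_ue(component_devices, server, second_ue_id):
    # 1. Receive a capability and status report from each component device.
    reports = [device.get_capability_and_status() for device in component_devices]

    # 2. Transmit a device capability report regarding the XR service to the server.
    server.send_device_capability_report({"reports": reports})

    # 3. Receive device configuration information for the XR service from the server.
    device_config = server.receive_device_configuration()

    # 4. Establish an XR service session based on the device configuration information.
    session = server.establish_xr_session(device_config)

    # 5. Process 3D media data and metadata obtained by controlling the component devices.
    media_data, metadata = session.capture_and_process(component_devices)

    # 6. Transmit the processed 3D media data and metadata to the second UE via the server.
    server.forward_to_peer(second_ue_id, media_data, metadata)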
The capability and status report may include at least one of position information, orientation information, or hardware capability information of the at least one component device. The device capability report may include user space set parameters, and the user space set parameters may include information about a space surrounding the first user of the first UE and information about a position and an orientation of the at least one component device within the surrounding space. The user space set parameters may include at least one subspace set parameter, and the at least one subspace set parameter may include at least one of one or more vision subspace set parameters or one or more capturing subspace set parameters. The one or more vision subspace set parameters may represent a target space where one or more vision camera devices from among the at least one component device obtain 3D information about a surrounding environment of the first user, and the one or more capturing subspace set parameters may represent a target space where one or more capturing camera devices from among the at least one component device obtain 3D information about an object surrounding the first user. The method may further include: receiving, from the server, a list of a plurality of XR services including requirement information regarding each XR service; selecting, based on the capability and status report, one or more XR services from the list of the plurality of XR services; and transmitting a device capability report regarding the selected one or more XR services to the server. The method may further include transmitting, to the server, a request for at least some of a plurality of 3D media processes for processing the 3D media data to be performed by the server, based on the capability and status report and requirement information regarding the selected one or more XR services. The method may further include receiving, from the server, information for configuring at least some of a plurality of 3D media processes for processing the 3D media data to be processed by the server. The method may further include: receiving, from the second UE, 3D media data related to a second user of the second UE and user space set parameters associated with the second user; generating a 3D media object by processing the 3D media data related to the second user based on the user space set parameters associated with the second user; and controlling a display of the first UE or an XR device to display the 3D media object. According to another embodiment of the disclosure, a first UE for transmitting 3D XR media data to a second UE includes: a transceiver; and at least one processor configured to: control the transceiver to receive, from at least one component device, a capability and status report on the at least one component device; control the transceiver to transmit, to a server, a device capability report regarding an XR service based on the capability and status report; control the transceiver to receive device configuration information for the XR service from the server; establish an XR service session based on the device configuration information; process 3D media data and metadata related to the XR service, which are obtained by controlling the at least one component device; and control the transceiver to transmit the processed 3D media data and metadata to the second UE via the server. 
The methods according to the embodiments of the disclosure described in the appended claims or specification thereof may be implemented in hardware, software, or a combination of hardware and software. When the methods are implemented in software, a computer-readable storage medium storing one or more programs (software modules) may be provided. The one or more programs stored in the computer-readable storage medium are configured for execution by one or more processors within an electronic device. The one or more programs may include instructions that cause the electronic device to execute the methods according to the embodiments of the disclosure described in the claims or specification thereof. Furthermore, a computer program product storing one or more programs may be provided. These programs (software modules or software) may be stored in RAM, non-volatile memory including a flash memory, ROM, electrically erasable programmable ROM (EEPROM), a magnetic disc storage device, CD-ROM, DVDs or other types of optical storage devices, and a magnetic cassette. Alternatively, the programs may be stored in a memory that is configured as a combination of some or all of the memories. Furthermore, multiple such memories may be included. Furthermore, the programs may be stored in an attachable storage device that may be accessed through a communication network such as the Internet, Intranet, a local area network (LAN), a wide LAN (WLAN), or a storage area network (SAN), or a communication network configured in a combination thereof. The storage device may access a device for performing operations according to the embodiments of the disclosure via an external port. Furthermore, a separate storage device on a communication network may also access a device for performing the operations according to the embodiments of the disclosure. In the above-described specific embodiments of the disclosure, a component included in the disclosure is expressed in a singular or plural form depending on a presented embodiment of the disclosure. However, singular or plural expressions are selected to be suitable for situations presented for convenience of description, and the disclosure is not limited to the singular or plural form. An element expressed in a plural form may be configured as a single element, or an element expressed in a singular form may be configured as a plurality of elements. The embodiments of the disclosure presented in the specification and the accompanying drawings have been provided only as particular examples in order to easily describe technical details according to the disclosure and assist in understanding the disclosure and are not intended to limit the scope of the disclosure. In other words, it is obvious to those of ordinary skill in the art that other modifications may be implementable based on the technical spirit of the disclosure. Furthermore, the embodiments of the disclosure may be combined with one another for operation when necessary. For example, parts of an embodiment of the disclosure and other embodiments of the disclosure are combined with one another so that a UE, a component device, an XR device, and a server may be operated. Furthermore, embodiments of the disclosure may be applicable to other communication systems as well, and other modifications based on the technical spirit of the embodiments of the disclosure may also be implementable. 
While the disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents.
101,837
11861798
Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

Mobile devices have realized a revolution in imagery and motion picture entertainment. Mobile devices may feature sophisticated image capture, positional and orientation systems, and high quality displays. Together, these devices can be used to support emerging applications, often labelled augmented reality, that render virtual objects on a display in the real-world physical environment of the user operating an augmented reality system (e.g., a wireless phone or head-mounted display). These augmented reality applications may be configured to achieve varying degrees of reality, as if the virtual objects were actually present in the physical environment of the user. Alternatively, the virtual objects may be modified to reflect certain aspects of the local environment while also including some distinguishing aspects that support safety and other objectives.

An educational environment may be supported through various applications that present a user experience from other remote locations. For example, a classroom of school children may use augmented reality technologies in order to bring animals from a safari experience (e.g., the great migration) into a classroom. Alternatively or in addition, portions of a classroom can be brought into portions of Tanzanian landscape replete with animals. In this example, the virtual animals brought into this location may be generated from a detailed and sophisticated model previously developed by a photographer in the field. This model may include rich imagery that is assembled to develop a three-dimensional structure for each creature (object). The object may have texture and color developed from the underlying imagery and video footage. The model also may capture and model behavior from creatures (objects) in the field environment. However, the underlying capture may not account for lighting conditions (e.g., location, time of day, and atmospheric settings) that reflect similar lighting useful for rendering realistic images in a recreation.

The model of a creature (object) may be specified to reflect the ambient lighting as captured (e.g., location, time of day, and atmospheric settings). Alternatively or in addition, the captured imagery may be transformed into a normative model. This normative model may genericize the underlying object to a neutral rendering. The neutral rendering then may be further modified so that later a relatively simple transformation may be performed relative to the neutral rendering in order to achieve location-specific rendering. This may be used to reduce the computational complexity of transforming an object in a first environment to accurately reflect the conditions of a second environment. Such a transformation may reduce the number of operations that are later performed. In some configurations, the transformation to a genericized model may reduce the likelihood of inaccuracies or discrepancies tied to circumstances of the initial capture.

This specification describes a system that generates composite images depicting one or more virtual objects in an environment.

FIG.1is a diagram of an example system100. The system100is an example of a system implemented as computer programs on one or more computers in one or more locations, in which the systems, components, and techniques described below can be implemented. The system100includes a physical object110(in this example, a human).
Images of a scene that includes the physical object110can be captured by a first device130and/or a second device140. That is, the "scene" referenced in the below descriptions is a scene of the physical object110from the perspective of the first device130or the second device140. The first device130and/or the second device140can capture a still image of the scene or a video of the scene composed of multiple frames. The first device130and the second device140can then send the captured images to an image processing system150to generate composite images. That is, the image processing system150is configured to generate composite images131aand131bfor display on the first device130and a composite image141for display on the second device140. In some implementations, one device can capture images of the scene, and a different device can display the images. That is, the first device130and/or the second device140can each be composed of two different devices, one of which captures images and the other of which displays images.

The first device130is a stereoscopic device. That is, the first device130captures images of the scene and displays the captured images (or composite images generated from the captured images) in stereo. In other words, the first device130captures images from two different perspectives, each corresponding to a respective eye of the user. The first device130includes a first display132aand a second display132b, and a first camera134aand a second camera134b. In some implementations, the first device130can have more than two cameras. The first camera134aand the second camera134bare separated by a distance on the first device130so that the two cameras can capture the scene in stereo, correlating to the two distinct perspectives of the eyes of the user. The first composite image131adepicts the scene from the perspective of the first camera134a, and the second composite image131bdepicts the scene from the perspective of the second camera134b.

The second device140is a monoscopic device. That is, the second device140captures images of the scene and displays the captured images (or composite images generated from the captured images) monoscopically. The second device140includes a display142and a camera144that perform similar functions to the displays and cameras of the first device130. The third image141depicts the scene from the perspective of the camera144.

The first device130can include a tracker component136and the second device140can include a tracker component146. Each tracker component can be used to track the location and orientation of the corresponding device in a common coordinate system of the system100. For example, the tracker components can use a global positioning system (GPS) or a cellular network to determine the location and/or orientation of the corresponding device. As another example, the tracking components136and146can interact with a tracking base station160to determine the location and orientation of the devices continuously in real-time. The tracking base station160, optionally included in the system100, is a master tracking device that allows the location of every object in the system100that has a tracker component to have its position and/or orientation determined. In some implementations, the tracking base station160determines the location of each object; in some other implementations, each object determines its own location using the tracking base station160.
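As one way to picture how a tracked pose in the common coordinate system relates to the two stereo cameras134aand134b, the following hypothetical Python sketch derives the cameras' world-space positions from the device pose, assuming a fixed baseline along the device's local x-axis. The baseline value and helper names are assumptions, not details from the text.

import numpy as np

def stereo_camera_positions(device_position, device_rotation, baseline_m=0.06):
    # device_position: (x, y, z) in the common coordinate system.
    # device_rotation: 3x3 rotation matrix mapping device-local axes to world axes.
    # Returns world-space positions of the left and right cameras.
    device_position = np.asarray(device_position, dtype=float)
    local_offset = np.array([baseline_m / 2.0, 0.0, 0.0])
    right = device_position + device_rotation @ local_offset
    left = device_position - device_rotation @ local_offset
    return left, right

left, right = stereo_camera_positions([1.0, 1.5, 0.0], np.eye(3))
print(left, right)  # -> approximately [0.97 1.5 0.] and [1.03 1.5 0.]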
The first device130and the second device140can send the respective captured images of the scene to the image processing system150. In some implementations, the image processing system150is on-site, e.g., in the same building or in the same room as the devices130and140. In some other implementations, the image processing system150is off-site, e.g., on the cloud. In some other implementations, the image processing system150is a component of the first device130and/or the second device140. In other words, each of the devices can include a respective version of the image processing system150, so that the initial images of the scene can be processed on-device. The image processing system150can insert a first virtual object122(in this example, a dog) and a second virtual object124(in this example, a cat) into the images captured by the first device130and the second device140. In particular, the image processing system150can maintain data characterizing the location and orientation of the virtual objects122and124within the common coordinate system of the system100. The image processing system150can then process the respective captured images to insert depictions of the virtual objects122and124into the positions in the captured images corresponding to the locations of the virtual objects122and124within the common coordinate system of the system100. In some implementations, the depictions of the virtual objects122and124in the composite images131a-band141can depend on the respective locations of the virtual objects within the common coordinate system of the system100. This process is described in more detail below with reference toFIG.2AandFIG.2B. In some implementations, the depictions of the virtual objects122and124in the composite images131a-band141can depend on the respective locations of the devices130and140. This process is described in more detail below with reference toFIG.2AandFIG.2B. In some implementations, the image processing system150can further process the composite images131a-band141to change the depiction of the entire scene according to the respective locations of the devices130and140and/or according to the common coordinate system of the system100. This process is described in more detail below with reference toFIG.2AandFIG.2B. In some implementations, the image processing system150can determine that a distance between one of the devices130or140and one of the virtual objects122or124satisfies a threshold distance, and trigger the virtual object122or124to execute an animation in the corresponding composite image. This process is discussed in more detail below with reference toFIG.2AandFIG.2B. The image processing system150can provide the composite images131a-band141to the devices130and140, respectively, for display to users of the devices130and140. The image processing system150can perform this process repeatedly in order to generate a sequence of composite images. For example, the image processing system150can perform this process repeatedly in order to generate a video sequence of composite images in real-time or in pseudo-real-time, i.e., so that the video sequence of composite images is perceived by the users of the devices130and140as being real-time. In particular, if a user moves the first device130or the second device140, the image processing system150can continuously generate images that depict the scene from the updated different locations and orientations within the common coordinate system of the system100. 
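A core step implied by the description above is mapping a virtual object's location in the common coordinate system to a pixel position in a captured image so that its depiction can be composited there. The Python sketch below shows one conventional way to do this with a pinhole camera model; it is an illustration under assumed camera pose and intrinsics, not the patent's specific algorithm.

import numpy as np

def project_to_pixel(object_world, cam_position, cam_rotation, fx, fy, cx, cy):
    # cam_rotation: 3x3 matrix mapping world coordinates to camera coordinates.
    # Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    p_cam = cam_rotation @ (np.asarray(object_world, float) - np.asarray(cam_position, float))
    if p_cam[2] <= 0:
        return None  # behind the image plane; nothing to composite
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# A virtual object 3 m in front of a camera at the origin looking down +z lands at
# roughly pixel (773, 360) in a 1280x720 image with these example intrinsics.
print(project_to_pixel([0.5, 0.0, 3.0], [0, 0, 0], np.eye(3), 800, 800, 640, 360))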
FIGS.2A and2Bare diagrams of example environments200and250, respectively. InFIG.2A, the environment200is defined by a common coordinate system having two dimensions212and214. InFIG.2B, the environment250is defined by a common coordinate system having three dimensions222,224, and226. Referring toFIG.2A, the environment200includes a device210, a first virtual object220, and a second virtual object230. The device210is configured to capture images of the environment200and send the captured images to an image processing system, e.g., the image processing system150depicted inFIG.1, for processing to generate composite images that include the virtual objects220and230. The image processing system can generate different composite images according to the respective locations, within the two-dimensional common coordinate system of the environment200, of (i) the device210, (ii) the virtual objects220and230, or (iii) both.

As a first example, the depictions of the virtual objects220and230in the composite images can depend on the respective locations of the virtual objects within the two-dimensional common coordinate system of the environment200. For example, the depiction of the virtual objects220and230can depend on the position of the virtual objects along the first dimension212and/or the second dimension214of the environment200. In some implementations, the respective depictions of the virtual objects220and230depend only on a single dimension. For example, as the virtual object220moves along the first dimension212, the depiction of the virtual object220can change (e.g., the virtual object220can be depicted at a different time of day), but as the virtual object220moves along the second dimension214, the depiction of the virtual object can remain constant.

The dimension along which the depictions of the virtual objects220and230change can be selected to be any appropriate dimension of the environment200. For example, given a coordinate system of the environment200defined by the first dimension212and the second dimension214, a third dimension can be defined on which the depictions of the virtual objects220and230depend, e.g., a third dimension that is diagonal with respect to the first dimension212and the second dimension214(i.e., is a weighted mean between the first dimension212and the second dimension214). As a particular example, the dimension can be defined with reference to a light source in the environment200, e.g., by defining the dimension to be in the same direction as the light source. This process is described below with reference toFIG.3.

In some other implementations, the respective depictions of the virtual objects220and230depend on both dimensions212and214. For example, as the virtual object220moves along the first dimension212, the depiction of the virtual object can change (e.g., the virtual object220can be depicted at a different time of day), and as the virtual object220moves along the second dimension214, the depiction of the virtual object220can change in a different way than the first dimension212(e.g., the virtual object220can be depicted at a different time of year).

In some implementations, the image processing system maintains multiple different models of the virtual objects220or230, e.g., by storing the multiple different models in a data store of the image processing system.
Then, when generating a depiction of a virtual object220or230according to the location of the virtual object220or230within the environment200, the image processing system can obtain the model of the virtual object220or230that corresponds to the location of the virtual object within the environment200. In some other implementations, the image processing system maintains a single respective model of the virtual objects220or230. Then, when generating a depiction of a virtual object220or230according to the location of the virtual object220or230within the environment200, the image processing system can obtain the single model of the virtual object220or230and process the single model to update the depiction of the virtual object according to the location of the virtual object within the environment200. In some implementations, the respective models representing the virtual objects220and/or230can be generated using sensor data characterizing a real-world object corresponding to the virtual object. For example, a model generation system can be configured to generate the models of the virtual object220using sensor data characterizing a real-world dog, e.g., using one or more of: one or more videos of a real-world dog, one or more sets of LIDAR data of a real-world dog, one or more audio recordings of a real-world dog, and so on. As a particular example, the model generation system can be configured to receive image data (e.g., RGB images or LIDAR images) depicting the dog from multiple different angles, and to process the image data to generate a model of the virtual object220. The sensor data can further characterize the dog performing one or more different actions, e.g., walking towards the sensor, walking away from the sensor, and so on. The model generation system can use the sensor data to generate animations for the model of the virtual object220. In some such implementations, the respective models representing the virtual objects220and/or230can be generated according to one or more stereoscopic videos of the corresponding real-world object, where each stereoscopic video includes a sequence of stereoscopic frames that each depict the real-world object from multiple slightly different points of view, as described above with reference toFIG.1. In one example, the environment200can include a ruin of an ancient building, and the depiction of each of multiple components of the ancient building in the composite image can depend on the location of the component in the environment200. As a particular example, each component of the ancient building can be depicted as the component would have looked in a time period corresponding to the location of the component, so that the user can view, in a single composite image, both how the building used to look and how the ruin currently looks. In a second example, the respective depictions of the virtual objects220and230in the composite images can depend on the location of the device210in the common coordinate system of the environment200. For example, the respective depictions of the virtual objects220and230can depend on the position of the device210along (i) the first dimension212of the environment200, (ii) the second dimension214of the environment200, or (iii) both. In some implementations, the respective depictions of the virtual objects220and230depend only on the position of the device210along a single dimension.
For example, as the device210moves along the first dimension212, the depiction of the virtual object220can change (e.g., such that the virtual object220is depicted at a different time of day), but as the device210moves along the second dimension214, the depiction of the virtual object220can remain constant. For example, as a user walks into the Roman Colosseum using an augmented reality display configured as described above, the display can gradually augment the imagery so that the user is transported to perceive the early days of the venue during the Roman Empire. The imagery can be augmented using location-based triggers so that the transition becomes more immersive as the user progresses further into the venue. For example, a first model showing the construction of the Colosseum can be displayed to the user during the first 10 meters. As the user progresses another 10 meters, a completed venue can be rendered. When the user reaches a viewing platform more than 20 meters into the venue, one of the historic gladiator fights or naval battles can be rendered. The model and systems can be configured to support one or more safety protocols. For example, animated action can be stopped (or limited) until the user stops moving or the user's velocity falls below a predetermined threshold. These safety features can also reduce the intensity or brightness of the display as the user walks along a safety rail or near other users. When the system conveys safety and position information allowing a computational determination that the user is safe or unlikely to collide with other objects, the intensity can be increased. Similarly, the model can render animated action when the system determines that there is less than a threshold likelihood of collision (or that the user is in a “safe” location). As another example, the system can maintain data characterizing one or more locations within the environment200as “dangerous” locations, i.e., locations that the user is disallowed or discouraged from entering. For example, the set of dangerous locations can include a ledge off of which the user might fall or a boundary of the environment200that the user is not permitted to pass. The system can then present one or more alerts to the user when the user approaches a dangerous location to inform the user of the danger. As a particular example, the system can display a first alert on the screen of an augmented reality display device, e.g., the display device130or140depicted inFIG.1, when the distance between the user and the dangerous location passes below a first threshold (e.g., the system can display a pop-up alert on the display device). That is, the system can continuously obtain the current location of the user in the environment200, and compare the current location of the user against the predetermined dangerous location within the environment. Instead of or in addition to displaying a visual alert, the system can emit an audible alert, e.g., a warning beep, when the user passes below the first threshold distance. The system can then display a second alert to the user when the distance between the user and the dangerous location passes below a second threshold that is lower than the first threshold. The second alert can be more attention-grabbing (e.g., a larger visual alert or a louder audible alert) than the first alert, in order to inform the user that the user is even closer to the dangerous location than before. The system can display any number of alerts corresponding to different thresholds.
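A minimal sketch of the escalating, distance-threshold alert logic described above, including a most severe tier that disables rendering as discussed in the next paragraph. The tier distances, the action names and the escalation-only behaviour are illustrative assumptions, not values from the disclosure.

```python
import math

# Alert tiers ordered from farthest (mildest) to closest (most severe); the
# distances and actions below are placeholder values.
ALERT_TIERS = [
    (3.0, "show_popup_warning"),
    (1.5, "play_warning_tone"),
    (0.5, "disable_rendering"),   # final tier: show the unaugmented environment
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def check_danger_zones(user_location, dangerous_locations, previous_tier=-1):
    """Return (tier_index, action) for the most severe alert tier the user has
    entered with respect to any dangerous location, or (previous_tier, None)
    if no new, more severe tier has been crossed since the last check."""
    if not dangerous_locations:
        return previous_tier, None
    nearest = min(distance(user_location, d) for d in dangerous_locations)
    triggered = -1
    for i, (threshold, _) in enumerate(ALERT_TIERS):
        if nearest < threshold:
            triggered = i                 # thresholds shrink, so keep the last match
    if triggered > previous_tier:
        return triggered, ALERT_TIERS[triggered][1]
    return previous_tier, None
```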
A final alert (corresponding to the lowest threshold) can be to disable the augmented reality system altogether so that the user can view the environment without any additional virtual objects or animations, thus allowing the user to have a more complete view of the true environment and, in particular, the dangerous location. The augmented reality display can include navigational assistance to illustrate the triggering point or direction for the next display. An arrow or spatial boundary can be used to identify where the user can perceive or trigger the next depiction. A label on the arrow (or spatial boundary) can be used to illustrate which experience will be triggered. An immersive audio experience can accompany one or more of the models of the virtual objects in the environment. For example, the construction of the Colosseum can be associated with audio simulating construction of the venue. Similarly, the roar of a full crowd can be rendered when the user enters a simulated gladiator fight. In some other implementations, the respective depictions of the virtual objects220and230depend on the respective positions of the device210along both dimensions212and214. For example, as the device210moves along the first dimension212, the depiction of the virtual object220can change (e.g., the virtual object220can be depicted at a different time of day), and as the device210moves along the second dimension214, the depiction of the virtual object220can change in a different way than the first dimension212(e.g., the virtual object220can be depicted at a different time of year). As a particular example, the environment200can include a ruin of an ancient building, and the depiction of the building can change according to the location of the device210within the environment200, so that the user can move through the environment200and witness the change of the building through history, as described above. As another particular example, the image processing system can generate a composite image that depicts the environment200in different weather according to the location in the environment200. For example, locations along the first dimension212of the common coordinate system can be depicted as having different temperatures, and locations along the second dimension214of the common coordinate system can be depicted as having different magnitudes of precipitation. As a particular example, the object220can be depicted as relatively cold and subject to a relatively large amount of precipitation, while the object230can be depicted as relatively hot and subject to a relatively small amount of precipitation. In a third example, the image processing system can process the composite images to change the depiction of the entire environment200according to the location of the device210. For example, in addition to changing the depiction of the virtual objects to reflect a different time of day according to the location of the device210, the image processing system can process the entire composite image so that the entire environment200appears as it would during the time of day corresponding to the location of the device210. In a fourth example, the image processing system can process the composite images to change the depiction of the entire environment200according to the common coordinate system of the environment200. 
For example, in addition to changing the depiction of the virtual objects to reflect a different time of day according to the respective location of the virtual objects within the environment200, the image processing system can process the entire composite image so that each component of the environment200appears as it would during the time of day corresponding to the location of the component in the environment200. An example composite image generated in this way is described below with reference toFIG.3. In a fifth example, the image processing system can determine that a distance between the device210and one of the virtual objects220or230satisfies a predetermined threshold (e.g., is above or below the predetermined threshold), and trigger the virtual object220or230to execute an animation in the composite image. For example, as the device210approaches the virtual object220(in this example, a dog), the dog might look up at the user of the device210and wag its tail. That is, if the device210is outside of the threshold distance to the virtual object220, the image processing system generates a composite image without such an animation; when the device210comes within the threshold distance, the image processing system generates a composite image with the animation. As another example, the image processing system can generate a composite sound that includes a dog's bark when the device210moves outside of the threshold distance of the virtual object220. In some implementations, the environment200is a virtual environment; that is, composite images of the environment200that include the objects220and230can be generated by a virtual reality system. In some other implementations, the environment200is a real environment; that is, composite images of the environment200that include the objects220and230can be generated by an augmented reality system. In some such implementations, the augmented reality system can determine the two-dimensional common coordinate system according to the environment200, e.g., according to the dimensions of the environment200or according to one or more obstructions within the environment200. That is, the two-dimensional common coordinate system can be adaptable based on limitations of the environment200. For example, the common coordinate system can be adaptable based on a size of available space in the environment200. In some implementations, the augmented reality system can determine the common coordinate system by processing sensor data characterizing the environment200(e.g., one or more RGB images or LIDAR images of the environment200) using a machine learning model that is configured to process images of environments and to generate model outputs characterizing an optimal configuration of a common coordinate system. Referring toFIG.2B, the image processing system can generate different composite images according to the respective locations of the device210and the virtual objects220and230within a three-dimensional common coordinate system of the environment250. As described above with reference to the environment200ofFIG.2A, the environment250can be either a virtual environment or a real-world physical environment. In a first example, the depictions of the virtual objects220and230in the composite images can depend on the respective locations of the virtual objects within the three-dimensional common coordinate system of the environment250.
For example, the depictions of the virtual objects220and230can depend on the position of the virtual objects along one or more of: a first dimension222, a second dimension224, or a third dimension226of the environment250. For example, the depictions of the virtual objects220and230can depend on a single dimension of the environment250, e.g., a fourth dimension that is defined with respect to the first dimension222, the second dimension224, and the third dimension226, as described above. As a particular example, the fourth dimension can be skew relative to the three dimensions222,224, and226, e.g., diagonal relative to the three dimensions222,224, and226. As another example, as the virtual object220moves along the first dimension222, the depiction of the virtual object can change (e.g., the virtual object220can be depicted at a different time of day); as the virtual object moves along the second dimension224, the depiction of the virtual object can change in a different way than the first dimension222(e.g., the virtual object220can be depicted at a different time of year); and as the virtual object moves along the third dimension226, the depiction of the virtual object can change in a different way than the first dimension222and second dimension224(e.g., the virtual object220can be depicted in a different year or century). As another example, the depictions of the virtual objects220and230can depend on four different dimensions: the three dimensions222,224, and226and a fourth dimension defined relative to the three dimensions222,224, and226(e.g., a fourth dimension that is diagonal relative to the other dimensions, as described above). As described above with reference toFIG.2A, in some implementations, the image processing system can obtain different models of the virtual objects220or230according to the respective location of the virtual objects within the environment250. In some other implementations, the image processing system can obtain a single respective model of the virtual objects220or230and process the single model according to the respective location of the virtual objects within the environment250. In a second example, the depictions of the virtual objects220and230in the composite images can depend on the location of the device210. For example, the depiction of the virtual objects220and230can depend on the position of the device210along one or more of: the first dimension222, the second dimension224, or the third dimension226of the environment250. For example, as the device210moves along the first dimension222, the depiction of the virtual object220can change (e.g., the virtual object220can be depicted at a different time of day); as the device210moves along the second dimension224, the depiction of the virtual object220can change in a different way than the first dimension222(e.g., the virtual object220can be depicted at a different time of year); and as the device210moves along the third dimension226, the depiction of the virtual object220can change in a different way than the first dimension222and the second dimension224(e.g., the virtual object220can be depicted in a different year or century). In a third example, the image processing system can process the composite images to change the depiction of the entire environment250according to the location of the device210. 
For example, in addition to changing the depictions of the virtual objects to reflect a different time of day according to the location of the device210, the image processing system can process the entire composite image so that the entire environment250appears as it would during the time of day corresponding to the location of the device210. In a fourth example, the image processing system can process the composite images to change the depiction of the entire environment250according to the common coordinate system of the environment250. For example, in addition to changing the depictions of the virtual objects to reflect a different time of day according to the respective locations of the virtual objects within the environment250, the image processing system can process the entire composite image so that each component of the environment250(e.g., corresponding to respective pixels of the composite image) appears as it would during the time of day corresponding to the location of the component in the environment250. An example composite image generated in this way is described below with reference toFIG.3. FIG.3illustrates an example composite image300. The composite image depicts a scene that includes multiple objects (in this example, multiple animals). The depiction of each point in the scene, including each object in the scene, depends on the location of the point in a coordinate system of the scene. In particular, each point in the scene is rendered according to a different time of day, corresponding to the location of the point in the coordinate system. The composite image300can be generated by an image processing system by processing an initial image of the scene depicted in the composite image300. In particular, for each location within the scene and for each of one or more virtual objects at a respective location within the scene, the image processing system can render the location and virtual object at the location to appear as it would at a time of day corresponding to the location. In the example depicted inFIG.3, the depictions of the objects in the scene depend on a single dimension, which is approximately in the diagonal direction from the top-left of the composite image300to the bottom-right of the composite image300. The dimension along which the depictions of the objects change can be defined according to a light source within the environment of the composite image300. In particular, points in the composite image300that are further towards the bottom-right (e.g., towards the eastern horizon) are depicted to be earlier in the day than points further towards the top-left (e.g., towards the western horizon). This specification uses the term “configured” in connection with systems and computer program components. For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.
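Returning to the example ofFIG.3, the following sketch illustrates one way the per-location time-of-day mapping described above could be computed: a location in the common coordinate system is projected onto a single designated dimension (for example, a diagonal direction defined with reference to a light source), and the resulting scalar coordinate is converted to an hour of the day. The function name, the base hour and the hours-per-unit scale are illustrative assumptions.

```python
import numpy as np

def time_of_day_for_location(location, origin, direction, hours_per_unit=1.0, base_hour=6.0):
    """Map a location in the common coordinate system to a time of day by
    projecting it onto a single designated dimension (e.g., a diagonal
    direction defined toward a light source). Returns an hour in [0, 24)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    offset = np.asarray(location, dtype=float) - np.asarray(origin, dtype=float)
    # Scalar coordinate of the location along the designated dimension.
    coordinate = float(np.dot(offset, d))
    return (base_hour + hours_per_unit * coordinate) % 24.0

# Example: a point further along the light-source direction is rendered at a
# later hour than the origin.
print(time_of_day_for_location((4.0, 4.0), origin=(0.0, 0.0), direction=(1.0, 1.0)))
```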
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A computer program, which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code, can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network. In this specification, the term “database” is used broadly to refer to any collection of data: the data does not need to be structured in any particular way, or structured at all, and it can be stored on storage devices in one or more locations. Thus, for example, the index database can include multiple collections of data, each of which may be organized and accessed differently. Similarly, in this specification the term “engine” is used broadly to refer to a software-based system, subsystem, or process that is programmed to perform one or more specific functions. 
Generally, an engine will be implemented as one or more software modules or components, installed on one or more computers in one or more locations. In some cases, one or more computers will be dedicated to a particular engine; in other cases, multiple engines can be installed and running on the same computer or computers. The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers. Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's device in response to requests received from the web browser. Also, a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone that is running a messaging application, and receiving responsive messages from the user in return. 
Data processing apparatus for implementing machine learning models can also include, for example, special-purpose hardware accelerator units for processing common and compute-intensive parts of machine learning training or production, i.e., inference, workloads. Machine learning models can be implemented and deployed using a machine learning framework, e.g., a TensorFlow framework, a Microsoft Cognitive Toolkit framework, an Apache Singa framework, or an Apache MXNet framework. Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client. Data generated at the user device, e.g., a result of the user interaction, can be received at the server from the device. In addition to the embodiments described above, the following embodiments are also innovative:
Embodiment 1 is a method comprising:
maintaining first data associating each location within an environment with a particular time;
obtaining an image depicting the environment from a point of view of a display device;
obtaining second data characterizing one or more virtual objects; and
processing the obtained image and the second data to generate a composite image depicting the one or more virtual objects at respective locations in the environment from the point of view of the display device, wherein the composite image depicts each virtual object according to the particular time that the first data associates with the location of the virtual object in the environment.
Embodiment 2 is the method of embodiment 1, wherein:
the first data associates each location in the environment with a particular time of day, and
the composite image depicts, for each virtual object, how the virtual object would appear at the time of day associated with the location of the virtual object in the environment.
Embodiment 3 is the method of embodiment 1, wherein:
the first data associates each location in the environment with a particular date in history, and
the composite image depicts, for each virtual object, how the virtual object would appear on the date in history associated with the location of the virtual object in the environment.
Embodiment 4 is the method of embodiment 1, wherein:
the first data associates each location in the environment with a particular time of year, and
the composite image depicts, for each virtual object, how the virtual object would appear at the time of year associated with the location of the virtual object in the environment.
Embodiment 5 is the method of any one of embodiments 1-4, wherein obtaining second data characterizing one or more virtual objects comprises:
obtaining third data characterizing the one or more virtual objects at a same time; and
processing, for each virtual object, the third data according to the particular time associated with the location of the virtual object in the environment to generate the second data.
Embodiment 6 is a method comprising:
maintaining first data associating each location within an environment with a particular time;
determining a current location of a display device in the environment;
determining the time associated with the determined location in the maintained first data;
obtaining an image depicting the environment from a point of view of the display device; and
processing the obtained image to generate a composite image for display on the display device according to the determined time.
Embodiment 7 is the method of embodiment 6, wherein:
the first data associates each location in the environment with a particular time of day, and
the composite image depicts how the environment would appear at the time of day associated with the determined location.
Embodiment 8 is the method of embodiment 6, wherein:
the first data associates each location in the environment with a particular date in history, and
the composite image depicts how the environment would appear on the date in history associated with the determined location.
Embodiment 9 is the method of embodiment 6, wherein:
the first data associates each location in the environment with a particular time of year, and
the composite image depicts how the environment would appear at the time of year associated with the determined location.
Embodiment 10 is the method of any one of embodiments 6-9, wherein generating the composite image comprises obtaining second data characterizing one or more virtual objects at the determined time.
Embodiment 11 is the method of any one of embodiments 6-9, wherein generating the composite image comprises:
obtaining second data characterizing one or more virtual objects, and
processing the second data according to the determined time to generate third data characterizing the one or more virtual objects at the determined time.
Embodiment 12 is a method comprising:
obtaining an image depicting an environment from a point of view of a display device;
obtaining data characterizing a virtual object;
determining a location of the display device in a common coordinate system of the environment;
determining a location corresponding to the virtual object in the common coordinate system of the environment;
determining whether a distance in the common coordinate system of the environment between the display device and the virtual object is below a predetermined threshold;
in response to determining that the distance between the display device and the virtual object is below the predetermined threshold, processing the obtained image to generate a composite image for display on the display device, wherein the composite image depicts the virtual object executing a first animation; and
in response to determining that the distance between the display device and the virtual object is not below the predetermined threshold, processing the obtained image to generate a composite image for display on the display device, wherein the composite image depicts the virtual object executing a second animation that is different from the first animation.
Embodiment 13 is the method of embodiment 12, further comprising:
in response to determining that the distance between the display device and the virtual object is below the predetermined threshold, generating a composite sound that comprises a first sound associated with the virtual object.
Embodiment 14 is the method of any one of embodiments 12 or 13, wherein:
the virtual object is a model of an animal;
the first animation characterizes a reaction of the animal to a user of the display device; and
the second animation characterizes the animal unaware of the user of the display device.
Embodiment 15 is a system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform the method of any one of embodiments 1 to 14.
Embodiment 16 is one or more non-transitory computer storage media encoded with a computer program, the program comprising instructions that are operable, when executed by a data processing apparatus, to cause the data processing apparatus to perform the method of any one of embodiments 1 to 14.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings and recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
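As an illustrative sketch only (and not a statement of the claimed method), the distance-threshold behaviour of Embodiment 12 above could be expressed as follows; the animation names and the threshold value are placeholder assumptions.

```python
import math

def select_animation(display_location, object_location, threshold,
                     near_animation="react_to_user", far_animation="idle"):
    """Choose which animation the virtual object should execute in the next
    composite image, based on the distance between the display device and the
    virtual object in the common coordinate system. The animation names are
    placeholders; a real renderer would map them to stored animation clips."""
    d = math.dist(display_location, object_location)
    return near_animation if d < threshold else far_animation

# Example: a dog model reacts (e.g., looks up and wags its tail) once the
# device comes within 2 units of it, and is otherwise depicted as idle.
print(select_animation((0.0, 0.0, 0.0), (1.0, 1.0, 0.0), threshold=2.0))  # react_to_user
print(select_animation((0.0, 0.0, 0.0), (5.0, 0.0, 0.0), threshold=2.0))  # idle
```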
DEFINITIONS
“artificial environment” may be something that has been recorded or generated.
“virtual visual space” refers to a fully or partially artificial environment that may be viewed, which may be three dimensional.
“virtual visual scene” refers to a representation of the virtual visual space viewed from a particular point of view (position) within the virtual visual space.
“virtual visual object” is a visible virtual object within a virtual visual scene.
“sound space” (or “virtual sound space”) refers to an arrangement of sound sources in a three-dimensional space. A sound space may be defined in relation to recording sounds (a recorded sound space) and in relation to rendering sounds (a rendered sound space).
“sound scene” (or “virtual sound scene”) refers to a representation of the sound space listened to from a particular point of view (position) within the sound space.
“sound object” refers to a sound source that may be located within the sound space. A source sound object represents a sound source within the sound space, in contrast to a sound source associated with an object in the virtual visual space. A recorded sound object represents sounds recorded at a particular microphone or location. A rendered sound object represents sounds rendered from a particular location.
“virtual space” may mean a virtual visual space, mean a sound space or mean a combination of a virtual visual space and corresponding sound space. In some examples, the virtual space may extend horizontally up to 360° and may extend vertically up to 180°.
“virtual scene” may mean a virtual visual scene, mean a sound scene or mean a combination of a virtual visual scene and corresponding sound scene.
“virtual object” is an object within a virtual scene; it may be an artificial virtual object (e.g. a computer-generated virtual object) or it may be an image of a real object in a real space that is live or recorded. It may be a sound object and/or a virtual visual object.
“virtual position” is a position within a virtual space. It may be defined using a virtual location and/or a virtual orientation. It may be considered to be a movable ‘point of view’.
“Correspondence” or “corresponding” when used in relation to a sound space and a virtual visual space means that the sound space and virtual visual space are time and space aligned, that is they are the same space at the same time.
“Correspondence” or “corresponding” when used in relation to a sound scene and a virtual visual scene (or visual scene) means that the sound space and virtual visual space (or visual scene) are corresponding and a notional (virtual) listener whose point of view defines the sound scene and a notional (virtual) viewer whose point of view defines the virtual visual scene (or visual scene) are at the same location and orientation, that is they have the same point of view (same virtual position).
“real space” (or “physical space”) refers to a real environment, which may be three dimensional.
“real scene” refers to a representation of the real space from a particular point of view (position) within the real space.
“real visual scene” refers to a visual representation of the real space viewed from a particular real point of view (position) within the real space.
“extended reality” or “mediated reality” in this document refers to a user experiencing, for example visually, a fully or partially artificial environment (a virtual space) as a virtual scene at least partially rendered by an apparatus to a user.
The virtual scene is determined by a point of view (virtual position) within the virtual space. Displaying the virtual scene means providing a virtual visual scene in a form that can be perceived by the user.
“augmented reality” in this document refers to a form of mediated reality in which a user experiences a partially artificial environment (a virtual space) as a virtual scene comprising a real scene, for example a real visual scene, of a physical real environment (real space) supplemented by one or more visual or audio elements rendered by an apparatus to a user. The term augmented reality implies a mixed reality or hybrid reality and does not necessarily imply the degree of virtuality (vs reality) or the degree of mediality.
“virtual reality” in this document refers to a form of mediated reality in which a user experiences a fully artificial environment (a virtual visual space) as a virtual scene displayed by an apparatus to a user.
“virtual content” is content, additional to real content from a real scene, if any, that enables extended (mediated) reality by, for example, providing one or more artificial virtual objects.
“extended reality content” or “mediated reality content” is virtual content which enables a user to experience, for example visually, a fully or partially artificial environment (a virtual space) as a virtual scene. Mediated reality content could include interactive content such as a video game or non-interactive content such as motion video.
“augmented reality content” is a form of mediated reality content which enables a user to experience, for example visually, a partially artificial environment (a virtual space) as a virtual scene. Augmented reality content could include interactive content such as a video game or non-interactive content such as motion video.
“virtual reality content” is a form of mediated reality content which enables a user to experience, for example visually, a fully artificial environment (a virtual space) as a virtual scene. Virtual reality content could include interactive content such as a video game or non-interactive content such as motion video.
“perspective-mediated” as applied to mediated reality, augmented reality or virtual reality means that user actions determine the point of view (virtual position) within the virtual space, changing the virtual scene.
“first person perspective-mediated” as applied to mediated reality, augmented reality or virtual reality means perspective mediated with the additional constraint that the user's real point of view (location and/or orientation) determines the point of view (virtual position) within the virtual space of a virtual user.
“third person perspective-mediated” as applied to mediated reality, augmented reality or virtual reality means perspective mediated with the additional constraint that the user's real point of view does not determine the point of view (virtual position) within the virtual space.
“user interactive” as applied to mediated reality, augmented reality or virtual reality means that user actions at least partially determine what happens within the virtual space.
“displaying” means providing in a form that is perceived visually (viewed) by the user.
“rendering” means providing in a form that is perceived by the user.
“virtual user” defines the point of view (virtual position-location and/or orientation) in virtual space used to generate a perspective-mediated sound scene and/or visual scene. A virtual user may be a notional listener and/or a notional viewer.
“notional listener” defines the point of view (virtual position-location and/or orientation) in virtual space used to generate a perspective-mediated sound scene, irrespective of whether or not a user is actually listening.
“notional viewer” defines the point of view (virtual position-location and/or orientation) in virtual space used to generate a perspective-mediated visual scene, irrespective of whether or not a user is actually viewing.
Three degrees of freedom (3 DoF) describes mediated reality where the virtual position is determined by orientation only (e.g. the three degrees of three-dimensional orientation). In relation to first person perspective-mediated reality, only the user's orientation determines the virtual position.
Six degrees of freedom (6 DoF) describes mediated reality where the virtual position is determined by both orientation (e.g. the three degrees of three-dimensional orientation) and location/movement (e.g. the three degrees of three-dimensional location/movement). In relation to first person perspective-mediated reality, both the user's orientation and the user's location/movement in the real space determine the virtual position.
Three degrees of freedom plus (3 DoF+) describes mediated reality where the virtual position is determined by both orientation (e.g. the three degrees of three-dimensional orientation) and restricted location/movement of the user (e.g. the three degrees of three-dimensional location/movement). In relation to first person perspective-mediated reality, both the user's orientation and the user's location/movement in the real space determine the virtual position.
DETAILED DESCRIPTION
The following Figures illustrate rendering of extended (mediated) reality using virtual content. In this context, rendering extended (mediated) reality means rendering virtual content for the purposes of achieving extended (mediated) reality, for example augmented reality or virtual reality. In these examples, the extended (mediated) reality is first person perspective-mediated reality. It may or may not be user interactive. It may be 3 DoF, 3 DoF+ or 6 DoF, for example. FIGS.1A,2A,3Aillustrate at a first time a real space50, a virtual sound space20and a virtual visual space60. There is correspondence between the virtual sound space20and the virtual visual space60. A user51in the real space50has a position (point of view)57defined by a location52and an orientation53. The location is a three-dimensional location and the orientation is a three-dimensional orientation. In extended (mediated) reality there is a correspondence between a position (point of view)57of the user51and a virtual position (point of view)77of a virtual user71. The position (point of view)57of the user51has an associated location52and orientation53. The virtual position (point of view)77of the virtual user71has an associated virtual location72(corresponding to location52) and an associated virtual orientation73(corresponding to orientation53). In 3 DoF extended (mediated) reality, an orientation53of the user51controls a virtual orientation73of a virtual user71. There is a correspondence between the orientation53and the virtual orientation73such that a change in the orientation53produces the same change in the virtual orientation73. The virtual orientation73of the virtual user71in combination with a virtual field of view74defines a virtual visual scene75within the virtual visual space60. In some examples, it may also define a virtual sound scene76.
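A minimal sketch of the 3 DoF first-person correspondence just described, in which a change in the user's real orientation produces the same change in the virtual orientation while changes in the user's real location are ignored. The yaw/pitch/roll representation and the function name are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Orientation:
    """Three-dimensional orientation as yaw/pitch/roll in degrees (a simplifying
    representation; a production system would typically use quaternions)."""
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

def apply_3dof_update(virtual_orientation, real_orientation_delta):
    """In 3 DoF first-person mediated reality, a change in the user's real
    orientation produces the same change in the virtual user's orientation;
    changes in the user's real location are not applied."""
    return Orientation(
        yaw=virtual_orientation.yaw + real_orientation_delta.yaw,
        pitch=virtual_orientation.pitch + real_orientation_delta.pitch,
        roll=virtual_orientation.roll + real_orientation_delta.roll,
    )
```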
A virtual visual scene75is that part of the virtual visual space60that is displayed to a user. A virtual sound scene76is that part of the virtual sound space20that is rendered to a user. The virtual sound space20and the virtual visual space60correspond in that a position within the virtual sound space20has an equivalent position within the virtual visual space60. In 3 DoF mediated reality, a change in the location52of the user51does not change the virtual location72or virtual orientation73of the virtual user71. In the example of 6 DoF mediated reality, the situation is as described for 3 DoF and in addition it is possible to change the rendered virtual sound scene76and the displayed virtual visual scene75by movement of a location52of the user51. For example, there may be a mapping between the location52of the user51and the virtual location72of the virtual user71. A change in the location52of the user51produces a corresponding change in the virtual location72of the virtual user71. A change in the virtual location72of the virtual user71changes the rendered virtual sound scene76and also changes the rendered virtual visual scene75. This may be appreciated fromFIGS.1B,2B and3B, which illustrate the consequences of a change in location52and orientation53of the user51on, respectively, the rendered virtual sound scene76(FIG.2B) and the rendered virtual visual scene75(FIG.3B). In the following examples, reference will be made to the virtual space60; this comprises the virtual visual space. It additionally comprises a corresponding virtual sound space20if one exists. In the following examples reference will be made to a participant or participants. A participant is a (virtual) user who is participating in extended reality. In the context of the real space50, the participant is the user51. In the context of the virtual space60, the participant is the virtual user71. The term ‘participant’ therefore refers, depending on context, to the user51and/or the virtual user71. The examples ofFIGS.4A,4B,4Cillustrate, respectively, before, during and after joining40a joining participant712to at least an existing participant711to enable a shared extended reality. Joining a joining participant712to an existing participant711to enable shared extended reality means that the existing participant711and the joining participant712are co-located42in a virtual space60and share, at least visually, the virtual space60. FIG.4Aillustrates an existing participant711in extended reality in a virtual space601. The existing participant711is at a location721and has a point of view771. FIG.4Billustrates joining40, to at least the existing participant711in extended reality, who is at the location721in the virtual space601, a joining participant712in extended reality to enable a shared extended reality. The joined participants711,712in the shared extended reality are at least co-located42in the virtual space60and can share at least visually the virtual space60i. The shared virtual space60ican be the original virtual space601or some other portion of the virtual space60. The participants711,712are co-located, in the shared virtual space60i, in that they can share, at least visually, the virtual space60in a shared extended reality. The existing participant711has a location721in the shared virtual space60iand the joining participant712has a location722in the same shared virtual space60i. The location721and the location722are within a threshold distance of each other.
Thus in at least some examples, the joining participant712is located, in the shared virtual space60i, within a threshold distance of a location721of the existing participant711in the shared virtual space60i. If we have a 3 DoF experience (such as a multi-viewpoint 3 DoF experience) then the two or more participants do not necessarily see each other since they would occupy the same position in space (with freedom of having different view directions). But they can have a visual indication of each other's presence, and they would typically hear each other. If we have a 6 DoF experience, then the users can generally see each other and hear each other. The participants51(71) can have the same or different points of view57(77). In this example, the existing participant711has a point of view771and the joining participant712has a point of view772. The point of view771and the point of view772can be co-dependent. For example, in this case the point of view771is towards the joining participant712and the point of view772is towards the existing participant711. In other examples, the point of view771and the point of view772are towards a particular portion of the virtual space60isuch as a virtual object. In at least some examples, when the joining participant712and the existing participant711are joined to share the virtual space60(FIG.4C), the orientation of the point of view772of the joining participant712in the virtual space60is selected so that the existing participant711is immediately visible to the joining participant712. This can be achieved by selecting the orientation of the point of view772of the joining participant712to be substantially aligned (within the margin of the field of view) with the location721of the existing participant711in the virtual space60. In at least some examples, when the joining participant712and the existing participant711are joined to share the virtual space60(FIG.4C), the location722of the joining participant712in the virtual space60is selected so that the joining participant712is immediately visible to the existing participant711within the virtual visual scene75. This can be achieved by selecting the location722of the joining participant712in the virtual space60to be substantially aligned (within the margin of the field of view) with the orientation of the point of view771of the existing participant711. FIG.4Cillustrates participants711,712sharing extended reality within the shared virtual space60i. In this example, but not necessarily all examples, the participants711,712can independently experience the extended reality within the shared virtual space60iby changing their respective points of view771,772. In at least some examples, the participants51(71) have different points of view57(77). For example, the first virtual user711has a first point of view771that corresponds to a first point of view571of the first real user511and the second virtual user712has a second point of view772that corresponds to a second point of view572of the second real user512. The first point of view771is controlled independently of the second point of view772. In the example illustrated, the first point of view571of the first real user511is tracked using a head mounted apparatus101worn by the first real user511and the second point of view572of the second real user512is tracked using a head mounted apparatus102worn by the second real user512.
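A minimal sketch of the placement logic described above: the joining participant is located within the threshold distance of the existing participant, and the two points of view are selected so that each participant is immediately visible to the other. The two-dimensional geometry, the 0.8 spawn factor and the function name are illustrative assumptions.

```python
import math

def place_joining_participant(existing_location, existing_yaw_deg, threshold_distance):
    """Choose a location and orientation for a joining participant so that the
    joined participants are co-located and immediately visible to each other.

    The joining participant is placed along the existing participant's view
    direction, just inside the threshold distance, facing back toward the
    existing participant. 2D (x, y) coordinates are used for simplicity.
    """
    yaw = math.radians(existing_yaw_deg)
    spawn_distance = 0.8 * threshold_distance
    joining_location = (
        existing_location[0] + spawn_distance * math.cos(yaw),
        existing_location[1] + spawn_distance * math.sin(yaw),
    )
    # Orient the joining participant's point of view back toward the existing
    # participant so that the existing participant falls within the field of view.
    joining_yaw_deg = (existing_yaw_deg + 180.0) % 360.0
    return joining_location, joining_yaw_deg
```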
After joining, the existing participant711participates in extended reality and can control the point of view771within virtual space601and the new participant712participates in extended reality and can control the point of view772within virtual space601. The point of view771associated with the existing participant711and the point of view772associated with the joining participant712are independently controllable by the respective existing participant711and the respective joining participant712. Although the example illustrated inFIGS.4B and4C illustrates 6 DoF, with different locations for the points of view771,772, it is also relevant to 3 DoF where the locations for the points of view771,772can be the same and fixed. The extended reality rendered to a participant71is determined, at least in part, by extended reality content. The extended reality content has a visual element, and can also have an audio element. It can also have an interactive element. The audio element can be an input channel (if and how a participant is heard) and/or an output channel (if and how a participant hears). The visual element can be an input channel (if and how a participant is seen) and/or an output channel (how a participant sees). The interactive element can control whether the participant is an observer (passive) or an interactor (active). When the participant71is an interactor, the participant can interact with objects within the virtual space60. The extended reality content rendered to a participant51,71is dependent upon the point of view57,77of the participant51,71. At least the visual element of the extended reality content rendered to a participant51,71(the virtual visual scene) is dependent upon the point of view57,77of the participant51,71. Changing the point of view57,77changes the virtual visual scene. The audio element of the extended reality content rendered to a participant51,71(the virtual sound scene) can, in some examples, be dependent upon the point of view57,77of the participant51,71. Changing the point of view57,77changes the virtual sound scene. The interactive element of the extended reality content rendered to a participant51,71(the virtual interactive scene) can, in some examples, be dependent upon the point of view57,77of the participant51,71. Changing the point of view57,77changes the availability of virtual objects, in the virtual visual scene, for interaction. A content configuration is used to control at least visual and/or audio interaction between participants of a shared extended reality. A content configuration controls, for participants511,512(711,712) in extended reality, what is heard, what is seen and interactivity with the virtual space60. The content configuration30ican control, for participant71iin extended reality, what is heard, what is seen and interactivity with the virtual space60(as appropriate) including whether the participant71iis seen by another participant71jor other participants71j, whether the participant71iis heard by another participant71jor other participants71j, whether the participant71isees another participant71jor other participants71j, whether the participant71ihears another participant71jor other participants71j. The content configuration30controls visual and/or audio communication between participants71. The example ofFIG.5illustrates that at least part of a content configuration301is inherited24from the existing participant511(711) by the joining participant512(712).
Thus, the content configuration301controls, what the existing participant711sees and/or hears. It can also control how the existing participant711sees and/or hears the joining participant712. The content configuration301is inherited by the joining participant712. It controls what the joining participant712sees and/or hears. It can control how the joining participant712sees and/or hears the existing participant711. The content configuration301of the existing participant711that is inherited by the joining participant712can in some examples be dependent upon the content rendered in the virtual visual scene and/or setting for the existing participant711or the apparatus101associated with the existing participant711and/or the capability of the apparatus101associated with the existing participant711. FIG.5also illustrates that at least part of a join configuration32is inherited24between the existing participant511(711) and the joining participant512(712), The join configuration32controls joining of participants in the shared extended reality to the participant51(71) associated with that join configuration. The join configuration32controls when one participant71is joined to another participant71in a shared extended reality experience. The join configuration32can for example define a set of conditions for joining. The join configuration32can, for example, define when to refuse a join, when to allow a join unconditionally and when to allow a join with conditions. The joining participant712can join the existing participant711in the extended reality in response to acceptance of a request to join made by the joining participant712. The join configuration of the existing participant711determines whether the acceptance of the request to join made by the joining participant712is automatic or is dependent upon an affirmation by the existing participant711. The join configuration applied to a join to a particular existing participant711can, for example, depend upon whether the particular existing participant711is alone or in a group of existing participants711. The join configuration can, for example, be modified by inheritance from other participants. The join configuration can for example be dependent upon a privacy setting of the existing participant(s)711. The join configuration can for example be dependent upon whether or not a communication channel exists between the existing participants711. Inheritance means that parameters of a content configuration30and join configuration32are transferred from a (source) configuration to another (target) configuration where they are used. A new parameter may be added to the configuration or an existing parameter may be replaced or removed. The group of configurations including the source and target configurations will, after inheritance, have at least some common parameters. Hierarchy of inheritance determines a direction of inheritance. Hierarchy can be defined for all inherited parameters or separately for parameters or groups of parameters. The direction of inheritance can, in some examples, be unidirectional between the joining participant712and the existing participant(s)711. The direction of inheritance can, in some examples, be bi-directional between the joining participant712and the existing participant(s)711. The inheritance24of the content configuration30is different to the inheritance24of the join configuration32. 
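To make the distinction between the two configurations concrete, the following is a minimal sketch of the kind of data structures involved. The class and field names are illustrative assumptions, not terms from the disclosure: a content configuration controls what a participant sees, hears and can interact with (and whether they are seen and heard), while a join configuration controls how other participants may join.

```python
from dataclasses import dataclass, field
from enum import Enum


class JoinPolicy(Enum):
    # Ordered from least to most restrictive.
    OPEN_DOOR = "join allowed without affirmation"
    CLOSED_DOOR = "join allowed only after affirmation"
    PRIVATE = "joining prevented"


@dataclass
class ContentConfiguration:
    # Output channels: what this participant sees / hears / can do.
    visual_focus: str = "default scene"
    hearing_focus: str = "default scene"
    volume_level: float = 1.0
    interactor: bool = True          # observer (False) or interactor (True)
    # Input channels: whether this participant is seen / heard by others.
    visible_to_others: bool = True
    audible_to_others: bool = True


@dataclass
class JoinConfiguration:
    policy: JoinPolicy = JoinPolicy.CLOSED_DOOR
    notify_on_request: bool = True   # notify when someone requests to join
    notify_on_join: bool = True      # notify when someone actually joins
    consent_required_from: set[str] = field(default_factory=set)
```

As described above, the content configuration is inherited from the existing participant to the joining participant, whereas the join configuration is inherited between the two participants.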
The inheritance24of the content configuration30is hierarchical with a direction of inheritance24from the existing participant711to the joining participant712. The inheritance24of the join configuration32is hierarchical with a direction of inheritance24between the existing participant711to the joining participant712based on the respective content configurations20of the existing participant711and the joining participant712. What is inherited, by whom and from whom is based on the respective content configurations20of the existing participant(s)711and the joining participant712. The inheritance24of the join configuration32is dependent upon one or more parameters of the join configuration321for the existing participant711and one or more parameters of the join configuration322for the joining participant712. In at least some examples, audio and/or visual feedback is provided to the participant71who is a destination of the inheritance of the content configuration30and/or feedback is provided to the participant71who is the source of the inheritance of the content configuration30. In at least some examples, audio and/or visual feedback is provided to the participant who is a destination of the inheritance of a join configuration32and/or feedback is provided to the participant who is the source of the inheritance of a joining configuration. For the inheritance of the content configuration30, the existing participant711is dominant and the inheritance is unidirectional. Some or all of the parameters of the source content configuration301(the content configuration of the existing participant711) are transferred to the target configuration (the content configuration of the joining participant712). For the inheritance of the content configuration, the existing participant711is dominant so that the joining participant712has, at least initially, substantially the same content experience as the existing participant711. The joining participant712inherits at least one or more audio parameters that control what is heard within the extended reality. The audio parameters of the content configuration can for example define a hearing focus, volume levels, etc. The joining participant712inherits at least one or more visual parameters that control what is seen within the extended reality. The visual parameters can, for example, define a visual focus. Optionally the joining participant712inherits at least one or more interaction parameters that control what interactivity is possible with objects within the virtual space60. After inheritance of the content configuration30, there is equality of content configuration—each participant71has the same common content configuration30, at least initially. Each participant71therefore has the same shared experience of the extended reality content in the shared extended reality. In at least some examples, commonality between the content configuration30of participants71in a shared extended reality can be enforced. For example, participant modification of the content configuration30may not be allowed or only allowed within defined limits to enforce commonality of experience and prevent significant differences in content experience. In at least some examples, if a participant attempts to modify or to significantly modify their content configuration30they are instead offered an option to un-join from the shared extended reality. For the inheritance of the join configuration, the existing participant711is not necessarily dominant and the inheritance is not necessarily unidirectional. 
Some or all of the parameters of a source join configuration32are transferred to a target join configuration32. The source join configuration32can be the join configuration321of the existing participant711and the target join configuration32can be the join configuration322of the joining participant712, and/or the source join configuration32can be the join configuration322of the joining participant712and the target join configuration32can be the join configuration321of the existing participant711. The inheritance of the join configuration32is dependent upon one or more parameters of the join configuration321for the existing participant711and one or more parameters of the join configuration322for the joining participant712. The most restrictive parameters are dominant. In some examples, the existing participant711is dominant, acting as a host, and the joining participant712is a guest. In some examples, the existing participant711is not dominant, acting as a member, and the joining participant712is an equal member. For the inheritance of the join configuration32, the existing participant711and the joining participant712provide substantially the same joining experience to another joining participant. The joining participant712and/or the existing participant711inherits at least one or more parameters that control whether and/or from whom to have explicit consent to join extended reality, whether and/or to whom to give notification of a request to join extended reality, and whether and/or to whom to give notification of a joining participant joining extended reality. For the inheritance of the join configuration32, there can be inherited equality so that, at least initially, the existing participant711and the joining participant712share the same control of joining. At least one of the joining participant712and/or the existing participant711inherits at least one or more parameters that control joining that are more restrictive than its current parameters. The more restrictive parameters are dominant and are inherited. In at least some examples, the join configuration for an existing participant711adapts, when the existing participant711has an open audio communication channel, to require explicit authorization for a join from the existing participant711and the other user or users who are communicating in the communication channel. In at least some examples, the joining participant712is provided with a preview of a shared extended reality before the joining participant712requests to join the shared extended reality, wherein the preview comprises existing participants711at the location of the join. The preview can give a visual indication of the join configurations321of the existing participant(s)711. In at least some examples, the joining controlled by the join configuration32is dependent upon a content of the extended reality experienced by the existing participant711. For example, a join can be auto-rejected based on extended reality content. The extended reality content can for example have associated metadata that defines allowed/disallowed join points. The narrative of the extended reality content can, for example, be protected for the existing participant711and/or the joining participant712. Access to content is, for example, allowed/disallowed based on a content history of the joining participant712. Access to content is, for example, allowed/disallowed based on the importance of current content to a narrative for the existing participant711. 
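Stepping back to the inheritance rules themselves, the two rules described above (existing-dominant, unidirectional copying of the content configuration, and most-restrictive-wins merging of the join configuration, adapted when an open audio communication channel exists) can be sketched as follows. This is an illustrative sketch only, reusing the hypothetical ContentConfiguration, JoinConfiguration and JoinPolicy types from the earlier sketch.

```python
import dataclasses


def inherit_content_configuration(existing: ContentConfiguration) -> ContentConfiguration:
    """Unidirectional inheritance: the joining participant receives a copy of
    the existing participant's content configuration, so both participants
    initially have substantially the same content experience."""
    return dataclasses.replace(existing)


def merge_join_configurations(existing: JoinConfiguration,
                              joining: JoinConfiguration,
                              open_audio_channel_members: frozenset[str] = frozenset()) -> JoinConfiguration:
    """Bidirectional inheritance: the most restrictive parameters dominate and
    are inherited by both participants."""
    order = list(JoinPolicy)  # declared least to most restrictive
    policy = max(existing.policy, joining.policy, key=order.index)
    merged = JoinConfiguration(
        policy=policy,
        notify_on_request=existing.notify_on_request or joining.notify_on_request,
        notify_on_join=existing.notify_on_join or joining.notify_on_join,
        consent_required_from=existing.consent_required_from | joining.consent_required_from,
    )
    # Adaptation: an open audio communication channel requires explicit
    # authorization from every user communicating in that channel.
    if open_audio_channel_members:
        if merged.policy is JoinPolicy.OPEN_DOOR:
            merged.policy = JoinPolicy.CLOSED_DOOR
        merged.consent_required_from |= set(open_audio_channel_members)
    return merged
```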
Access to content is, for example, allowed/disallowed based on an importance of content to a narrative for the joining participant712. FIG.6illustrates an example of a controller80suitable for use in an apparatus10. Implementation of a controller80may be as controller circuitry. The controller80may be implemented in hardware alone, have certain aspects in software including firmware alone or can be a combination of hardware and software (including firmware). As illustrated inFIG.6the controller80may be implemented using instructions that enable hardware functionality, for example, by using executable instructions of a computer program86in a general-purpose or special-purpose processor92. The processor92is configured to read from and write to the memory84. The processor92may also comprise an output interface via which data and/or commands are output by the processor92and an input interface via which data and/or commands are input to the processor92. The memory84stores a computer program86comprising computer program instructions (computer program code) that controls the operation of the apparatus10when loaded into the processor92. The computer program instructions, of the computer program86, provide the logic and routines that enables the apparatus to perform the methods illustrated in the various FIGs. The processor92by reading the memory84is able to load and execute the computer program86. The apparatus10therefore comprises:at least one processor92; andat least one memory84including computer program codethe at least one memory84and the computer program code configured to, with the at least one processor92, cause the apparatus10at least to perform:initiating joining to at least an existing participant in extended reality at a location in a virtual space, a joining participant in extended reality to enable a shared extended reality;causing a content configuration to be inherited from the existing participant by the joining participant, wherein the content configuration controls, for participants, what is heard, what is seen and interactivity with the virtual space;causing at least part of a join configuration to be inherited between the existing participant and the joining participant, wherein the join configuration controls joining of other joining participants in the shared extended reality.completing joining to at least the existing participant in extended reality the joining participant in extended reality to enable a shared extended reality, wherein the existing participant and the joining participant, collectively the joined participants, are initially, at least co-located in the virtual space and can share at least visually the virtual space. As illustrated in the example ofFIG.7, the computer program86may arrive at the apparatus10via any suitable delivery mechanism88. The delivery mechanism88may be, for example, a machine readable medium, a computer-readable medium, a non-transitory computer-readable storage medium, a computer program product, a memory device, a record medium such as a Compact Disc Read-Only Memory (CD-ROM) or a Digital Versatile Disc (DVD) or a solid state memory, an article of manufacture that comprises or tangibly embodies the computer program86. The delivery mechanism may be a signal configured to reliably transfer the computer program86. The apparatus10may propagate or transmit the computer program86as a computer data signal. 
Computer program instructions for causing an apparatus to perform at least the following or for performing at least the following:joining to at least an existing participant in extended reality at a location in a virtual space, a joining participant in extended reality to enable a shared extended reality,wherein the joined participants in the shared extended reality are at least co-located in the virtual space and can share at least visually the virtual space;wherein at least part of a content configuration is inherited from the existing participant by the joining participant, wherein the content configuration controls, for participants, what is heard, what is seen and interactivity with the virtual space;wherein at least part of a join configuration is inherited between the existing participant and the joining participant, wherein the join configuration controls joining of other joining participants in the shared extended reality. The computer program instructions may be comprised in a computer program, a non-transitory computer readable medium, a computer program product, a machine readable medium. In some but not necessarily all examples, the computer program instructions may be distributed over more than one computer program. Although the memory84is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable and/or may provide permanent/semi-permanent/dynamic/cached storage. Although the processor92is illustrated as a single component/circuitry it may be implemented as one or more separate components/circuitry some or all of which may be integrated/removable. The processor92may be a single core or multi-core processor. References to ‘computer-readable storage medium’, ‘computer program product’, ‘tangibly embodied computer program’ etc. or a ‘controller’, ‘computer’, ‘processor’ etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc. As used in this application, the term ‘circuitry’ may refer to one or more or all of the following:(a) hardware-only circuitry implementations (such as implementations in only analog and/or digital circuitry) and(b) combinations of hardware circuits and software, such as (as applicable):(i) a combination of analog and/or digital hardware circuit(s) with software/firmware and(ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions and(c) hardware circuit(s) and or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g. firmware) for operation, but the software may not be present when it is not needed for operation. 
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device. The blocks illustrated in the FIGs may represent steps in a method and/or sections of code in the computer program86. The illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks and the order and arrangement of the block may be varied. Furthermore, it may be possible for some blocks to be omitted. The apparatus10can, for example, be a head-mounted apparatus of a participant. The apparatus10can, for example, be a head-mounted apparatus of a joining participant. The apparatus10can, for example, be a head-mounted apparatus of an existing participant. A head mounted apparatus can, for example, comprise a head-up display and headphones. The apparatus10can, for example, be a control device or system for communicating with head-mounted apparatus of participants. The apparatus10comprises means80for:joining40to at least an existing participant711in extended reality at a location in a virtual space60, a joining participant712in extended reality to enable a shared extended reality,wherein the joined participants711,712in the shared extended reality are at least co-located42in the virtual space60and can share at least visually the virtual space60;wherein at least part of a content configuration30is inherited24from the existing participant711by the joining participant712, wherein the content configuration30controls, for participants71, what is heard, what is seen and interactivity with the virtual space60;wherein at least part of a join configuration32is inherited24between the existing participant711and the joining participant712, wherein the join configuration32controls joining of other joining participants in the shared extended reality. FIG.8illustrates an example of a method100. The method comprises, at block102, initiating joining to at least an existing participant in extended reality at a location in a virtual space, a joining participant in extended reality to enable a shared extended reality. The method comprises, at block104, causing a content configuration to be inherited from the existing participant by the joining participant, wherein the content configuration controls, for participants, what is heard, what is seen and interactivity with the virtual space; The method comprises, at block106, causing at least part of a join configuration to be inherited between the existing participant and the joining participant, wherein the join configuration controls joining of other joining participants in the shared extended reality. The method comprises, at block108, completing joining to at least the existing participant in extended reality the joining participant in extended reality to enable a shared extended reality, wherein the existing participant and the joining participant, collectively the joined participants, are initially, at least co-located in the virtual space and can share at least visually the virtual space. 
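The four blocks of method100 can also be summarized as a procedural outline. The sketch below is illustrative only: it assumes hypothetical participant and virtual-space objects (with pose, content_configuration and join_configuration attributes, and initiate_join/complete_join helpers) and reuses the helper functions from the earlier sketches; block numbers are noted in comments.

```python
def join_shared_extended_reality(existing, joining, virtual_space):
    """Illustrative outline of method 100 (blocks 102-108)."""
    # Block 102: initiate joining of the joining participant to the existing
    # participant at a location in the virtual space.
    request = virtual_space.initiate_join(existing, joining)

    # Block 104: the content configuration is inherited from the existing
    # participant, so it controls what the joining participant hears and sees
    # and their interactivity with the virtual space.
    joining.content_configuration = inherit_content_configuration(existing.content_configuration)

    # Block 106: at least part of the join configuration is inherited between
    # the participants; it controls joining of other joining participants.
    merged = merge_join_configurations(existing.join_configuration, joining.join_configuration)
    existing.join_configuration = joining.join_configuration = merged

    # Block 108: complete the join; the joined participants are initially
    # co-located and can share at least visually the virtual space.
    virtual_space.complete_join(request, spawn_pose=join_pose(existing.pose))
    return merged
```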
Use Cases The extended reality content can in some examples be 3 DoF (three degrees of freedom) content. A participant51,71in the extended reality enabled by the extended reality content can rotate their head to change their point of view57,77(change of orientation only) and experience different virtual visual scenes (and in some examples different sound scenes). The 3 DoF extended reality content can define different and distinct virtual visual scenes (and optionally associated sound scenes) that are not adjacent in the virtual space60, but which can be accessed by changing the point of view57,77by changing orientation. For example, this can be called multi-viewpoint 3 DoF content. While the transition from a first point of view to an adjacent second point of view is continuous, the transition in virtual space60is a discontinuous jump in space or in space and time, a teleport. The extended reality content can in some examples be 6 DoF (six degrees of freedom) or 3 DoF+ content (e.g., multi-viewpoint 3 DoF+ content). A participant51,71in the extended reality enabled by the extended reality content can rotate to change their point of view57,77and change their position to change their point of view57,77(change in orientation and/or position). Changing the point of view57,77changes the virtual visual scenes (and in some examples the sound scene). The 6 DoF extended reality content can define different and distinct virtual visual scenes (and optionally associated sound scenes) that are not adjacent in the virtual space60, but which can be accessed by changing the point of view57,77by changing location and/or orientation. While a transition from a first point of view to an adjacent second point of view is continuous, the transition in virtual space60can be a discontinuous jump in space or in space and time, a teleport. The extended reality content can also be consumed by more than one participant71at the same time such that the users can share a common extended reality. Such participants71can, in at least some examples, observe each other in the virtual space60and communicate with each other. This type of use case may be called "social" extended reality. The participants71can be in physical proximity in the real space50or remote from each other. A participant71in extended reality is generally provided with means to enter and leave the extended reality at their own time, join one or more other participants71that are already in extended reality, and invite participants71to extended reality. A participant71can also, in at least some examples, control privacy. For example, an 'open-door' mode can allow the existing participant711to be joined in the extended reality without a need for affirmation from the existing participant711. For example, a 'closed-door' mode can allow the existing participant711to be joined in the extended reality but only after affirmation by the existing participant711. For example, a 'private' mode can prevent the existing participant711being joined in the extended reality. Privacy can be a parameter in a join configuration32. In the following examples, joining a participant (or user) refers to a user moving/being moved into close proximity to another user in the virtual space60via a teleport and the two users becoming mutually audible and visible. The examples are relevant for any type of extended reality including 3 DoF, 3 DoF+, 6 DoF or some other variant. 
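The privacy modes above determine how a potential teleport target is presented to a prospective joiner in the examples that follow (available, conditionally available or unavailable regions). A minimal sketch of that mapping is given below, under the assumption that privacy is carried as a parameter of the join configuration; the enum and function names are illustrative, not taken from the disclosure.

```python
from enum import Enum


class Privacy(Enum):
    OPEN_DOOR = "open-door"      # joinable without affirmation
    CLOSED_DOOR = "closed-door"  # joinable only after affirmation
    PRIVATE = "private"          # cannot be joined


class TeleportRegion(Enum):
    AVAILABLE = "available"                  # e.g. region 82'
    CONDITIONALLY_AVAILABLE = "conditional"  # e.g. region 82'''
    UNAVAILABLE = "unavailable"              # e.g. region 82''


def classify_region(privacy: Privacy) -> TeleportRegion:
    """Map an existing participant's privacy setting to the teleport-region
    category shown (e.g. by color) to a prospective joining participant."""
    if privacy is Privacy.OPEN_DOOR:
        return TeleportRegion.AVAILABLE
    if privacy is Privacy.CLOSED_DOOR:
        return TeleportRegion.CONDITIONALLY_AVAILABLE
    return TeleportRegion.UNAVAILABLE
```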
In the following examples, User A is in extended reality and wants to change to a different part of the virtual space60. This can be done by teleporting. User A is the joining participant712. A participant712in extended reality (User A) is in a virtual space602that is distinct from another virtual space601such that a teleport is required for the participant712in extended reality (User A) to transfer from the virtual space602to participate in extended reality at the virtual space601. Teleporting may, for example, be implemented such that User A moves a teleportation target to a place where they wish to move to, and upon confirmation they are immediately transferred there. Typically, this is done using a controller, and possible and unallowed targets are shown in different colors. In the example illustrated inFIG.9, the joining participant712(User A) points to different teleport regions82that may indicate, e.g., by color whether it is possible to transfer there or not. The teleport regions include, in this example, available teleport regions82′, conditionally available teleport regions82′″ and unavailable teleport regions82″. Referring to the example ofFIG.10, the participant712in extended reality (User A) wishes to teleport from virtual space602to virtual space601to experience multi-user extended reality. His friends Jussi and Arto are existing participants711in extended reality in the same virtual space601and are experiencing the same content at different nearby positions. Jussi711and Arto711have join configurations32[not illustrated] that allow other users to join them. For example, Jussi711has an associated join configuration32that allows direct joining without confirmation. Arto711has an associated join configuration32that conditionally allows joining. In this example, it requires that the joining participant712(User A) sends a request to join to Arto711and also requires that Arto711replies to the request confirming the join before the join occurs. The fact that Arto711and Jussi711have different join configurations32is indicated in this example via the teleport regions82. Jussi711is presented as located within an available teleport region82′ and Arto711is presented as located within a conditionally available teleport region82′″. Following on fromFIG.10, in the example ofFIG.11, the joining participant712(User A) previews Jussi's711teleport region82′. The previewing triggers a first audio indication to Jussi711that can be characterized as a teleport ring tone for immersive media. This way Jussi711is made aware that a user is considering joining him. This may allow, in various embodiments, for Jussi711to communicate back or simply to be prepared that another user (User A)712may soon appear in his virtual space601. In 3 DoF, the appearance of User A712can be limited to a new audio communication channel being opened between Jussi711and User A712and/or a change in the viewport visualization, for example indicating that User A712has the same point of view771as Jussi711. In 6 DoF, the appearance of User A712could occur within the rendered content that is within the virtual visual scene (and corresponding sound scene). The joining participant712(User A) could for example become visible in front of the existing participant711and could move virtual objects that are mutually visible to Jussi711and User A712. 
The teleport ring tone for immersive media may be, for example, one of:modification of an audio object in the virtual space601within the existing participant's711(Jussi's) virtual visual scene;rendering to the existing participant711(Jussi) an audio object from the sound scene of the joining participant712(User A);a system-generated audio object;an audio object that the joining participant712(User A) has created for this purpose, for example, their personalized greeting message;an audio object that the existing participant711(e.g. Jussi) has created, for example, for each of his friends. Jussi might have a specific track that starts playing for one or a group of contacts. In some cases, the teleport ring tone for immersive media can be accompanied by a visual indication or visual content. For example, such visual content could become visible only when the existing participant711(Jussi) reacts to the ring tone that appears in his area, for example, by performing an activation gesture. After previewing the virtual space601, the joining participant712(User A) can indicate an intent to teleport and join the virtual space601to share the extended reality with the existing participant711(Jussi). If the existing participant711(Jussi) is experiencing extended reality by himself at the virtual space601, then the existing participant's711(Jussi's) join configuration321enables joining to the existing participant711(Jussi) without affirmation by the existing participant711(Jussi). If, however, the existing participant711(Jussi) is experiencing extended reality with existing participant711(Arto), who has a more restrictive join configuration321, then Jussi's711join configuration is inherited from Arto711and only enables joining to Jussi711with affirmation. Following on fromFIG.10, in the example ofFIG.12, the joining participant712(User A) previews an existing participant's711(Arto's) teleport region82′″. The join configuration321associated with Arto711means that Arto711accepts requests for joining from other users, but does not allow direct joining. The previewing triggers a first audio indication to Arto711that can be characterized as a teleport ring tone for immersive media. This way Arto711is made aware that a second user is considering joining him. Arto711is required to respond in order for the teleport to occur. The joining participant712(User A) now requires a "permission" from the existing participant711(Arto) to join with the existing participant711(Arto). However, the permission from the existing participant711(Arto) may not directly trigger the teleportation, since the joining participant712(User A) was only previewing the content. Thus, if Arto711accepts the request, in this example, Arto711will continue to hear an audio indication from User A's712preview action. This may be a different audio indication than the one Arto711heard before his response. As the joining participant712(User A) is previewing content where they require a permission to join, a second audio indication may be provided to the joining participant712(User A) himself. This can use a similar selection mechanism as the first audio indication. The waiting for the response and the response itself can be indicated to the joining participant712(User A) by a second and third immersive audio indicator. It can furthermore be indicated that the existing participant711(Arto) has been made aware of the request and is considering whether to accept, using a further audio indication to the joining participant712(User A). 
This audio may be similarly chosen, as the first teleport ring tone. The joining participant712(User A) joins the existing participant711(Arto) in the extended reality in response to acceptance of a request to join made by the joining participant712(User A). The join configuration321of the existing participant711(Arto) determines whether the acceptance of a request to join made by the joining participant712(User A) is automatic or is dependent upon an affirmation by the existing participant711(Arto). Referring to the example ofFIG.13, a joining participant712(User A) has previewed an existing participant's711(Arto's) region and the existing participant711(Arto) has not responded to the request. Joining participant712(User A) hears a second audio indication. The joining participant712(User A) is not aware that Jussi711would be available, and it is taking Arto711a long time to provide a response. The system begins, after a time threshold T1, to provide the joining participant712(User A) with alternative, less restricted teleport regions82. In particular, an existing participant711(Jussi) is presented as available with direct joining. The audio indication to joining participant712(User A) begins to move in Jussi's711direction and introduce audio elements from Jussi's711part of the scene. The joining participant712(User A) can now gesture to preview Jussi's711scene or directly join it based on the “pre-preview” provided by the audio modification (introduction of audio elements from Jussi's711part of the scene). Thus, the joining participant712(User A) may initiate the multi-user extended reality by indicating their wish to teleport to a proximity of an existing participant711. The existing participant711has the ability to accept or reject based on the existing participant's711preferences. Referring to the example ofFIG.14, the joining participant712(User A) has requested to join an existing participant's711(Arto's) extended reality i.e., to teleport from virtual space602to be co-located with the existing participant711(Arto) in the virtual space601and to share the extended reality. In theFIG.14, the existing participant711(Arto) has accepted the request and the joining participant712(User A) has confirmed the teleportation (or teleported automatically). The two participants711,712now experience a shared extended reality at the virtual space601. TheFIG.14illustrates joining40an existing participant711in extended reality at a location in a virtual space601, and a joining participant712in extended reality to enable a shared extended reality. The joined participants711,712in the shared extended reality are co-located42in the virtual space601and share at least visually the virtual space601. At least part of a content configuration301is inherited24(not illustrated) from the existing participant711by the joining participant712. The content configuration301controls, for participant712in the shared extended reality, what is heard, what is seen and interactivity with the virtual space601. At least part of a join configuration32is inherited24between the existing participant711and the joining participant712. The join configuration32controls joining of other joining participants in the shared extended reality at the virtual space601. In this example, the inherited content configuration301causes the participants711,712to be able to see each other and to be able to communicate using audio with each other. 
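The timeout behavior of FIG.13 described above can be expressed as a small waiting loop. The sketch below is illustrative only: the request and region objects, and their pending(), offer_alternative(), outcome() and allows_direct_join() methods, are hypothetical placeholders standing in for whatever signalling the system actually uses; the time threshold T1 is a tunable parameter.

```python
import time


def await_join_response(request, alternative_regions, t1_seconds: float = 30.0):
    """Wait for the requested existing participant to respond; after the time
    threshold T1 elapses, surface alternative, less restricted teleport
    regions (e.g. participants whose join configuration allows direct joining)
    as a "pre-preview" to the requesting participant.
    """
    deadline = time.monotonic() + t1_seconds
    while request.pending():
        if time.monotonic() >= deadline:
            # Offer alternatives, e.g. by introducing audio elements from the
            # alternative region into the requester's scene.
            for region in alternative_regions:
                if region.allows_direct_join():
                    request.offer_alternative(region)
            break
        time.sleep(0.1)
    return request.outcome()
```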
In some examples, the joining40can be accompanied by the start of a common audio-visual communication channel, which the existing participant711can accept or reject. Referring to the example ofFIG.15, the joining participant712(User A) wishes to join an existing participant711(Arto) at virtual space601. The existing participant711(Arto) declines the request. After the existing participant711(Arto) has declined the request, the joining participant712(User A) still has the opportunity to teleport to the virtual space601. The joining participant712(User A) chooses to teleport. The teleport is carried out according to a private mode of teleportation. The joining participant712(User A) is teleported to the virtual space601, but to an instance of the virtual space601that does not include the existing participant711. The joining participant712(User A) is not joined to a shared extended reality but teleports to the virtual space601(only). Both participants711,712are now in distinct, separate instances of the virtual space601. The participant712is now in a new instance60′1of the virtual space601. The two participants711,712experience the extended reality content in different independent instances of the same virtual space601. The participants711,712are not visible or audible to each other. In some implementations, there may be interactive elements in the virtual space that are common to the instances of that virtual space. The system can allow for an existing participant711to choose whether they wish to experience the extended reality content together as shared extended reality content (e.g., Social VR) or to remain private. Referring to the examples ofFIGS.16A &16B, a joining participant712(User A) wishes to teleport to share an extended reality with existing participant711(Arto) at the virtual space601. InFIG.16A, the joining participant712(User A) is previewing the existing participant's711(Arto's) virtual space601. A request to join existing participant711(Arto) is, at least temporarily, rejected based on the content properties of the extended reality content associated with virtual space601. In this example, the existing participant711(Arto) has previously interacted with extended reality content in such a way that the shared extended reality experience between the joining participant712(User A) and the existing participant711(Arto) would be substantially compromised if the joining participant712(User A) were allowed to join existing participant711(Arto) in a shared extended reality experience at the virtual space601. The existing participant711(Arto) has, in this example, just interacted with a virtual object defined by the extended reality content and caused an outcome. In some cases, this interaction could be saved as a state of the virtual space601. The virtual space601therefore has a pre-interaction state and a post-interaction state. If the joining participant712(User A) were to teleport into the virtual space601after the interaction, the joining participant712(User A) would enter the virtual space601in the post-interaction state. This may be undesirable. The system can prevent teleporting in this situation and others based on the state of the virtual space601, which is in turn dependent upon content. Another example of not allowing teleporting based on content would be if there is something later on in the content which requires that the two users have not seen something together (because, for example, they see it differently). 
For example, let us consider cinematic immersive media content, where one participant's user-selected narrative is one where the hero gets the job in the interview and another participant's user-selected narrative is one where the hero does not get the job. In this case, the two participants711,712cannot be seeing the same interview in proximity to each other. A further example of cinematic content is where teleporting would be temporarily disabled if there is a substantial cut scene or animation ongoing. During this, it could appear unnatural and disturbing for a user to receive an incoming teleporting participant712in their extended reality experience. According to this embodiment, there can be, for example, a temporary decline of a teleportation request based on the content consumption state of the existing participant and/or the requesting participant712. After a subsequent trigger within the content, the two users can, for example, be automatically joined in the same shared extended reality (according to a request to do so). Thus, the joining of the joining participant712can be dependent upon a content of the extended reality experienced by the existing participant711. For example, a join can be auto-rejected based on content. Referring to the example ofFIGS.17A &17B, in various embodiments of multi-user extended reality, participants71can have means to communicate with each other privately, in parallel to the shared extended reality. For example, this can be considered in MPEG-I 6 DoF Social VR use cases. Thus, for example, a voice communications channel90can be made available for users regardless of being joined in the same audio-visual extended reality, when the users are users of the same multi-user extended reality system. InFIG.17A, a requesting participant712(User A) wishes to teleport to a shared extended reality with existing participants711(Arto or Jussi), who are located at different parts of the virtual space601but who share a voice communications channel90that the requesting participant712(User A) is not part of. An existing participant's711(e.g. Arto's) join configuration allows the existing participant711to accept or reject the join request and, if accepting, whether or not to join the joining participant712to the voice communications channel90. The other existing participant's711(e.g. Jussi's) join configuration32is, in the absence of a shared extended reality between Arto and Jussi, less restrictive than Arto's and automatically accepts a join request while joining the joining participant712to the voice communications channel90. The presence of the voice communications channel90, however, causes Jussi's join configuration32to inherit the more restrictive aspects of Arto's join configuration32. If the requesting participant712(User A) wishes to join an existing participant711(e.g. Arto) in shared extended reality, the existing participant's711(e.g. Arto's) permission is sufficient. This is because the voice communications channel90can be kept private by Arto's response. The voice communications channel90can be kept private because the existing participant's711(e.g. Arto's) join configuration32allows the existing participant711(e.g. Arto) to accept or reject the join request and, if accepting, whether or not to join the joining participant712to the voice communications channel90. InFIG.17B, the requesting participant712(User A) wishes to join an existing participant711(e.g. 
Jussi's) permission level in the associated join configuration32is modified based on the existence of the voice communications channel90, and at least Arto's711permission is required. The system can also require the requesting participant712(User A) to receive permission from Jussi711. The voice communications channel90may have privacy settings, or at least a minimum level of privacy is implied by inheritance between the join configurations32of the existing participants711in the voice communications channel90. Referring to the examples ofFIGS.18A and18B, the joining participant712(User A) wishes to teleport to a joint experience with existing participants711(Arto and Jussi), who share extended reality at the virtual space601. Arto and Jussi are in a common location; they can see and hear each other. InFIG.18A, the teleportation is straightforward if the existing participants711(Arto and Jussi) do not have a private voice communications channel90. In this case, the highest permission level is inherited between the join configurations32of the existing participants711(Arto and Jussi) and is applied to control the join. Thus, the joining participant712(User A) will require the existing participant711(Arto) to approve the teleportation. The highest permission level is inherited between the join configurations32of the participants711,712. That is, permission is required from Arto711, irrespective of whether the join request is made to Arto711or to Jussi711. After the join, the three participants711,712(Arto and Jussi, User A) share extended reality at the virtual space601. The highest permission level is inherited between the join configurations32of the participants711,712(Arto and Jussi, User A). For example, permission may be required from Arto711, irrespective of whether a new join request, for a new joining participant, is made to User A712, Arto711or to Jussi711. InFIG.18B, Arto711and Jussi711have a private voice communication channel90activated. The voice communication channel90temporarily modifies the highest permission level of the join configuration shared by the existing participants711(Arto and Jussi). The permission of all the participants in the voice communications channel90is required. That is, permission is required from both Arto711and Jussi711, irrespective of whether the join request is made to Arto711or to Jussi711, to enable teleportation. Thus, the join configuration32for an existing participant711adapts, when the existing participant711has an open voice communication channel90, to require explicit authorization for a join from the existing participant711and the other user or users who are communicating in the voice communication channel90. In the example ofFIG.19A, the joining participant712(User A) can preview the virtual space601used for a shared extended reality by existing participants711(Arto and Jussi) in the shared extended reality. The permissions of the join configurations32are as before. Jussi's711join configuration (without inheritance) allows joining without affirmation from Jussi711. Arto's711join configuration allows joining but only with affirmation from Arto711. Jussi's711join configuration (after inheritance from Arto711on sharing the extended reality at the virtual space601) allows joining but only with affirmation from Arto711. 
Therefore, a request from User A712to join the shared extended reality made to Arto711requires permission from Arto711, and a request to join the shared extended reality made to Jussi711requires permission from Arto711(because of inheritance of the more restrictive permission requirement into Jussi's711join configuration). In the example ofFIG.19D, a request from User A712to join the extended reality shared by Arto711and Jussi711is made to either Arto711or Jussi711. An audio indication is made to Arto711as permission is required from Arto711. Permission is received from Arto711and there is a successful join, for example, as described with reference toFIG.18A. User A712, Arto711and Jussi711share an extended reality. If permission for the join is withheld by Arto711, then in some examples no teleportation occurs, as illustrated in the example ofFIG.19C. However, in some examples, if the request to join is made to Jussi711, then if Arto711withholds permission for User A712to join a shared extended reality comprising User A712, Arto711and Jussi711, a new instance of a shared extended reality can be created comprising User A712and Jussi711but not Arto711. This is illustrated in the example ofFIG.19B. There may be a first audio indication to Jussi711denoting the preview by User A712before they are joined. Jussi711sees in his environment both User A712and Arto711, while User A712and Arto711only see Jussi711. After the teleportation to the instance of the shared extended reality that includes Jussi711but not Arto711, Arto711may be played a modified audio indication letting him know that User A712is at the same location (just in a parallel instance), and similarly a modified audio indication may be provided to User A712. Thus, different instances of the same virtual space601are created for User A712and Arto711, for example, as described previously with reference toFIG.15. Referring to the example ofFIG.20A, the joining participant712(User A), who is at the virtual space602, wishes to teleport to share extended reality with existing participant711(Arto) at the virtual space601. In this example, the requested existing participant711(e.g., Arto) does not wish to have another participant712(e.g. User A) join him at his current location (virtual space601). However, the requested existing participant711(Arto) wants to teleport to share extended reality with the requesting joining participant712(User A) at the virtual space602. Therefore, Arto711responds with a "call back" action. User A712receives an indication that Arto711will join User A712instead, when he is ready. As illustrated in the example ofFIG.20B, User A712can receive a periodic audio indication that Arto711is still expected to join. Either User A712or Arto711is able to terminate the call back at any time, e.g., User A712could change their permission level. In the example ofFIG.20C, User A712has rejected the call back request, which is indicated to Arto711. If User A712does not reject the call back request, then when Arto711is available, Arto711will teleport to join User A712at virtual space602, as illustrated in the example ofFIG.20D. From the foregoing, it can be appreciated that in at least some examples, the joining participant712joins the existing participant711in the extended reality in response to acceptance of a request to join made by the joining participant712. 
The join configuration321of the existing participant711determines whether the acceptance of a request to join made by the joining participant712is automatic or is dependent upon an affirmation by the existing participant711. When a request by the joining participant712to join an existing participant711in extended reality at a location of the existing participant711is not accepted, a contention resolution process can determine a location in the virtual space60at which the requesting participant712experiences extended reality and whether or not the requesting participant712experiences a shared extended reality with another participant71. For example, depending upon the contention resolution, the requesting joining participant712can be:i) located at a location of the requested existing participant, without that existing participant [FIG.15]ii) co-located at a location of the requested existing participant, with a different existing participant(s) [FIG.13]iii) co-located at a location of the requesting participant, with the requested existing participant [FIG.20D]iv) co-located with the requested existing participant but not all the existing participants at a virtual space [FIG.19B] In examples i) and iv), there are split worlds, that is different instances of extended reality. It is possible for one participant to have extended reality in respect of a particular part of a virtual space and for another participant to have extended reality in respect of that same particular part of a virtual space but for the two participants to be unaware of each other. One participant occupies one instance of that virtual space and the other participant occupies another independent instance of that virtual space. The two instances are separated wholly or partially. In example i) and ii) the teleport is prioritized to location. The process enables a location-only teleport for the requesting joining participant712which does not join the existing participant711in extended reality at the location in the virtual space601, and the requesting joining participant712in extended reality to enable a shared extended reality at the virtual space601, but instead enables the requesting joining participant712to experience extended reality at a location in the virtual space601, that is not shared with the existing participant711. In example iii) the teleport prioritizes the requested existing participant over location. In example iv) the teleport prioritizes the requested existing participant over other nearby participants. The join request can imply priorities to help with contention e.g. whether joining the location is key, joining a user is key, or joining multiple users is key. If location is key, outcomes i) and ii) are more likely. If joining a particular user is key, outcomes ii) and iv) are more likely. When a joining participant712requests to join an existing participant711in an extended reality at a location of the existing participant a response to the request can be inter alia:join with the requested existing participant at requested location [FIG.14];join with requested existing participant but not at requested location [FIG.20];join without requested existing participant at requested location [FIG.15,FIG.20];at least partial rejection of request dependent upon existing user [FIG.15];at least partial rejection of request dependent upon content rendered for shared extended reality [FIG.16A,16B];timeout [FIG.13];alternative join option [FIG.13,20]. 
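The contention-resolution outcomes i) to iv) listed above can be summarized as a small decision function. The sketch below is a simplified illustration under assumed priority flags; a real resolution would weigh further factors such as content state, privacy settings and explicit responses from the requested participants.

```python
from enum import Enum, auto


class Outcome(Enum):
    LOCATION_ONLY = auto()         # i) requested location, without the requested participant (FIG. 15)
    LOCATION_WITH_OTHERS = auto()  # ii) requested location, with different existing participant(s) (FIG. 13)
    CALL_BACK = auto()             # iii) requested participant joins at the requester's location (FIG. 20D)
    PARTIAL_JOIN = auto()          # iv) requested participant only, split from other participants (FIG. 19B)


def resolve_contention(prioritize_location: bool,
                       requested_participant_available: bool,
                       other_participants_available: bool) -> Outcome:
    """Illustrative mapping from the implied priorities of a join request to
    the outcomes i)-iv) when the request is not accepted outright."""
    if prioritize_location:
        # Joining the location is key: outcomes i) and ii) are more likely.
        return (Outcome.LOCATION_WITH_OTHERS if other_participants_available
                else Outcome.LOCATION_ONLY)
    # Joining a particular participant is key.
    if requested_participant_available:
        # Join the requested participant, split from the other existing participants.
        return Outcome.PARTIAL_JOIN
    # The requested participant instead calls back at the requester's location.
    return Outcome.CALL_BACK
```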
Where a structural feature has been described, it may be replaced by means for performing one or more of the functions of the structural feature whether that function or those functions are explicitly or implicitly described. The above described examples find application as enabling components of:automotive systems; telecommunication systems; electronic systems including consumer electronic products; distributed computing systems; media systems for generating or rendering media content including audio, visual and audio visual content and mixed, mediated, virtual and/or augmented reality; personal systems including personal health systems or personal fitness systems; navigation systems; user interfaces also known as human machine interfaces; networks including cellular, non-cellular, and optical networks; ad-hoc networks; the internet; the internet of things; virtualized networks; and related software and services. The term ‘comprise’ is used in this document with an inclusive not an exclusive meaning. That is any reference to X comprising Y indicates that X may comprise only one Y or may comprise more than one Y. If it is intended to use ‘comprise’ with an exclusive meaning then it will be made clear in the context by referring to “comprising only one . . . ” or by using “consisting”. In this description, reference has been made to various examples. The description of features or functions in relation to an example indicates that those features or functions are present in that example. The use of the term ‘example’ or ‘for example’ or ‘can’ or ‘may’ in the text denotes, whether explicitly stated or not, that such features or functions are present in at least the described example, whether described as an example or not, and that they can be, but are not necessarily, present in some of or all other examples. Thus ‘example’, ‘for example’, ‘can’ or ‘may’ refers to a particular instance in a class of examples. A property of the instance can be a property of only that instance or a property of the class or a property of a sub-class of the class that includes some but not all of the instances in the class. It is therefore implicitly disclosed that a feature described with reference to one example but not with reference to another example, can where possible be used in that other example as part of a working combination but does not necessarily have to be used in that other example. Although examples have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the claims. Features described in the preceding description may be used in combinations other than the combinations explicitly described above. Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not. Although features have been described with reference to certain examples, those features may also be present in other examples whether described or not. The term ‘a’ or ‘the’ is used in this document with an inclusive not an exclusive meaning. That is any reference to X comprising a/the Y indicates that X may comprise only one Y or may comprise more than one Y unless the context clearly indicates the contrary. If it is intended to use ‘a’ or ‘the’ with an exclusive meaning then it will be made clear in the context. 
In some circumstances, 'at least one' or 'one or more' may be used to emphasise an inclusive meaning, but the absence of these terms should not be taken to infer any exclusive meaning. The presence of a feature (or combination of features) in a claim is a reference to that feature (or combination of features) itself and also to features that achieve substantially the same technical effect (equivalent features). The equivalent features include, for example, features that are variants and achieve substantially the same result in substantially the same way. The equivalent features include, for example, features that perform substantially the same function, in substantially the same way, to achieve substantially the same result. In this description, reference has been made to various examples using adjectives or adjectival phrases to describe characteristics of the examples. Such a description of a characteristic in relation to an example indicates that the characteristic is present in some examples exactly as described and is present in other examples substantially as described. Whilst endeavoring in the foregoing specification to draw attention to those features believed to be of importance, it should be understood that the Applicant may seek protection via the claims in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings, whether or not emphasis has been placed thereon.
77,729
11861800
DETAILED DESCRIPTION A messaging system typically allows users to exchange content items (e.g., messages, images and/or video) with one another in a message thread. A messaging system may implement one or more content feeds for surfacing media content to end users. The disclosed embodiments provide for a messaging system to include different user interfaces for presenting available augmented reality content items (e.g., Lenses) in association with a multi-video clip camera mode. The camera mode corresponds to capturing multiple video clips which are combinable to generate a media content item (e.g., for sending to a friend, broadcasting to others, etc.). The user interfaces for presenting available augmented reality content items include a carousel interface and an explorer user interface. The carousel interface presents a first set of available augmented reality content items while the device is capturing video in the multi-video clip camera mode. The explorer user interface is a separate interface with a tile view for presenting a second set of available augmented reality content items. In a case where the user selects an augmented reality content item via the explorer user interface, the messaging system stores an indication of the selected augmented reality content item. The stored indication is usable to persistently present the selected augmented reality content item within the carousel interface, for example, until the session for multi-video clip capture is complete. Indications for position within the tiled view and text-based searching via the explorer user interface may also be stored for persistence between user interfaces, until the session is complete. FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple instances of a client device102, each of which hosts a number of applications, including a messaging client104and other applications106. Each messaging client104is communicatively coupled to other instances of the messaging client104(e.g., hosted on respective other client devices102), a messaging server system108and third-party servers110via a network112(e.g., the Internet). A messaging client104can also communicate with locally-hosted applications106using Application Program Interfaces (APIs). A messaging client104is able to communicate and exchange data with other messaging clients104and with the messaging server system108via the network112. The data exchanged between messaging clients104, and between a messaging client104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). The messaging server system108provides server-side functionality via the network112to a particular messaging client104. While certain functions of the messaging system100are described herein as being performed by either a messaging client104or by the messaging server system108, the location of certain functionality either within the messaging client104or the messaging server system108may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108but to later migrate this technology and functionality to the messaging client104where a client device102has sufficient processing capacity. 
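The session-persistence behavior described above (storing an indication of the explorer selection, the tile-view position and the last search, until multi-video clip capture completes) can be sketched as a small state object. The class and field names below are illustrative assumptions for this sketch, not identifiers from the disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class CaptureSessionState:
    """Illustrative per-session state for the multi-video clip camera mode."""
    selected_ar_item_id: Optional[str] = None  # item chosen via the explorer UI
    explorer_scroll_position: int = 0          # position within the tiled view
    explorer_search_query: str = ""            # last text-based search
    captured_clips: list[str] = field(default_factory=list)

    def select_from_explorer(self, ar_item_id: str) -> None:
        # Store the indication so the item can be persistently presented in
        # the carousel interface for the rest of the capture session.
        self.selected_ar_item_id = ar_item_id

    def carousel_items(self, first_set: list[str]) -> list[str]:
        # The carousel presents its first set of items, plus the persisted
        # explorer selection (if any) while the session is active.
        if self.selected_ar_item_id and self.selected_ar_item_id not in first_set:
            return [self.selected_ar_item_id] + first_set
        return list(first_set)

    def complete_session(self) -> None:
        # Persistence ends when the multi-video clip capture session completes.
        self.selected_ar_item_id = None
        self.explorer_scroll_position = 0
        self.explorer_search_query = ""
        self.captured_clips.clear()
```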
The messaging server system108supports various services and operations that are provided to the messaging client104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client104. This data may include message content, client device information, geolocation information, media augmentation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client104. Turning now specifically to the messaging server system108, an Application Program Interface (API) server116is coupled to, and provides a programmatic interface to, application servers114. The application servers114are communicatively coupled to a database server120, which facilitates access to a database126that stores data associated with messages processed by the application servers114. Similarly, a web server128is coupled to the application servers114, and provides web-based interfaces to the application servers114. To this end, the web server128processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols. The Application Program Interface (API) server116receives and transmits message data (e.g., commands and message payloads) between the client device102and the application servers114. Specifically, the Application Program Interface (API) server116provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client104in order to invoke functionality of the application servers114. The Application Program interface (API) server116exposes various functions supported by the application servers114, including account registration, login functionality, the sending of messages, via the application servers114, from a particular messaging client104to another messaging client104, the sending of media files (e.g., images or video) from a messaging client104to a messaging server118, and for possible access by another messaging client104, the settings of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the addition and deletion of entities (e.g., friends) to an entity graph (e.g., a social graph), the location of friends within a social graph, and opening an application event (e.g., relating to the messaging client104). The application servers114host a number of server applications and subsystems, including for example a messaging server118, an image processing server122, and a social network server124. The messaging server118implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available to the messaging client104. Other processor and memory intensive processing of data may also be performed server-side by the messaging server118, in view of the hardware requirements for such processing. 
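The enumeration of server-exposed functions above suggests the general shape of such a programmatic interface. The sketch below is only an illustration of that shape; the method names and payload types are assumptions, not the actual API of any messaging server.

```typescript
// Illustrative sketch of the kind of programmatic interface an API server
// might expose to a messaging client. Method names and payload shapes are
// assumed for clarity; they do not describe a real service.

interface MessagingApi {
  registerAccount(username: string, password: string): Promise<{ userId: string }>;
  login(username: string, password: string): Promise<{ sessionToken: string }>;
  sendMessage(
    senderId: string,
    recipientId: string,
    payload: { text?: string; mediaId?: string },
  ): Promise<void>;
  uploadMedia(senderId: string, bytes: Uint8Array, kind: "image" | "video"): Promise<{ mediaId: string }>;
  setCollection(ownerId: string, collectionName: string, mediaIds: string[]): Promise<void>;
  getFriends(userId: string): Promise<string[]>;
  addFriend(userId: string, friendId: string): Promise<void>;
  removeFriend(userId: string, friendId: string): Promise<void>;
  openApplicationEvent(userId: string, eventName: string): Promise<void>;
}
```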
The application servers114also include an image processing server122that is dedicated to performing various image processing operations, typically with respect to images or video within the payload of a message sent from or received at the messaging server118. The social network server124supports various social networking functions and services and makes these functions and services available to the messaging server118. To this end, the social network server124maintains and accesses an entity graph304(as shown inFIG.3) within the database126. Examples of functions and services supported by the social network server124include the identification of other users of the messaging system100with which a particular user has relationships or is “following,” and also the identification of other entities and interests of a particular user. Returning to the messaging client104, features and functions of an external resource (e.g., an application106or applet) are made available to a user via an interface of the messaging client104. In this context, “external” refers to the fact that the application106or applet is external to the messaging client104. The external resource is often provided by a third party but may also be provided by the creator or provider of the messaging client104. The messaging client104receives a user selection of an option to launch or access features of such an external resource. The external resource may be the application106installed on the client device102(e.g., a “native app”), or a small-scale version of the application (e.g., an “applet”) that is hosted on the client device102or remote of the client device102(e.g., on third-party servers110). The small-scale version of the application includes a subset of features and functions of the application (e.g., the full-scale, native version of the application) and is implemented using a markup-language document. In one example, the small-scale version of the application (e.g., an “applet”) is a web-based, markup-language version of the application and is embedded in the messaging client104. In addition to using markup-language documents (e.g., a .*ml file), an applet may incorporate a scripting language (e.g., a .*js file or a .json file) and a style sheet (e.g., a .*ss file). In response to receiving a user selection of the option to launch or access features of the external resource, the messaging client104determines whether the selected external resource is a web-based external resource or a locally-installed application106. In some cases, applications106that are locally installed on the client device102can be launched independently of and separately from the messaging client104, such as by selecting an icon, corresponding to the application106, on a home screen of the client device102. Small-scale versions of such applications can be launched or accessed via the messaging client104and, in some examples, no or limited portions of the small-scale application can be accessed outside of the messaging client104. The small-scale application can be launched by the messaging client104receiving, from a third-party server110for example, a markup-language document associated with the small-scale application and processing such a document. In response to determining that the external resource is a locally-installed application106, the messaging client104instructs the client device102to launch the external resource by executing locally-stored code corresponding to the external resource.
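The branch between a locally-installed application and a web-based applet described above can be sketched as follows. The function and type names are hypothetical, and the launching, fetching, and rendering steps are passed in as placeholders rather than real platform calls.

```typescript
// Minimal sketch of deciding how to launch an external resource: a locally
// installed application runs from local code, while a small-scale (applet)
// version is fetched as a markup-language document and rendered in-client.
// Everything here is illustrative, including the function names.

type ExternalResource =
  | { kind: "native"; appId: string }
  | { kind: "applet"; markupUrl: string };

async function launchExternalResource(
  resource: ExternalResource,
  launchNativeApp: (appId: string) => void,
  fetchMarkup: (url: string) => Promise<string>,
  renderInClient: (markup: string) => void,
): Promise<void> {
  if (resource.kind === "native") {
    // Locally installed application: execute locally stored code.
    launchNativeApp(resource.appId);
  } else {
    // Web-based applet: obtain the markup-language document and process it
    // inside the messaging client's own user interface.
    const markup = await fetchMarkup(resource.markupUrl);
    renderInClient(markup);
  }
}
```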
In response to determining that the external resource is a web-based resource, the messaging client104communicates with the third-party servers110(for example) to obtain a markup-language document corresponding to the selected external resource. The messaging client104then processes the obtained markup-language document to present the web-based external resource within a user interface of the messaging client104. The messaging client104can notify a user of the client device102, or other users related to such a user (e.g., “friends”), of activity taking place in one or more external resources. For example, the messaging client104can provide participants in a conversation (e.g., a chat session) in the messaging client104with notifications relating to the current or recent use of an external resource by one or more members of a group of users. One or more users can be invited to join in an active external resource or to launch a recently-used but currently inactive (in the group of friends) external resource. The external resource can provide participants in a conversation, each using respective messaging clients104, with the ability to share an item, status, state, or location in an external resource with one or more members of a group of users into a chat session. The shared item may be an interactive chat card with which members of the chat can interact, for example, to launch the corresponding external resource, view specific information within the external resource, or take the member of the chat to a specific location or state within the external resource. Within a given external resource, response messages can be sent to users on the messaging client104. The external resource can selectively include different media items in the responses, based on a current context of the external resource. The messaging client104can present a list of the available external resources (e.g., applications106or applets) to a user to launch or access a given external resource. This list can be presented in a context-sensitive menu. For example, the icons representing different ones of the application106(or applets) can vary based on how the menu is launched by the user (e.g., from a conversation interface or from a non-conversation interface). FIG.2is a block diagram illustrating further details regarding the messaging system100, according to some examples. Specifically, the messaging system100is shown to comprise the messaging client104and the application servers114. The messaging system100embodies a number of subsystems, which are supported on the client-side by the messaging client104and on the server-side by the application servers114. These subsystems include, for example, an ephemeral timer system202, a collection management system204, an augmentation system208, a map system210, an external resource system212, and a camera mode system214. The ephemeral timer system202is responsible for enforcing the temporary or time-limited access to content by the messaging client104and the messaging server118. The ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story), selectively enable access (e.g., for presentation and display) to messages and associated content via the messaging client104. Further details regarding the operation of the ephemeral timer system202are provided below.
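As a small illustration of the ephemeral timer behavior just described, the check below grants access to a message or collection only while its display duration has not elapsed. The field names are assumptions used for clarity.

```typescript
// Simple sketch of an ephemeral-access check: a message (or story) remains
// viewable only while its display duration has not elapsed. Field names are
// illustrative, not the actual timer parameters.

interface EphemeralItem {
  postedAtMs: number; // when the content became available
  durationMs: number; // how long it should remain accessible
}

function isAccessible(item: EphemeralItem, nowMs: number = Date.now()): boolean {
  return nowMs - item.postedAtMs < item.durationMs;
}

// Usage: a message with a 10-second display parameter.
const message = { postedAtMs: Date.now() - 5_000, durationMs: 10_000 };
console.log(isAccessible(message)); // true for roughly another 5 seconds
```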
The collection management system204is responsible for managing sets or collections of media (e.g., collections of text, image, video, and audio data). A collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “story” for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client104. The collection management system204furthermore includes a curation interface206that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface206enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain examples, compensation may be paid to a user for the inclusion of user-generated content into a collection. In such cases, the collection management system204operates to automatically make payments to such users for the use of their content. The augmentation system208provides various functions that enable a user to augment (e.g., annotate or otherwise modify or edit) media content associated with a message. For example, the augmentation system208provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The augmentation system208operatively supplies a media overlay or augmentation (e.g., an image filter) to the messaging client104based on a geolocation of the client device102. In another example, the augmentation system208operatively supplies a media overlay to the messaging client104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. For example, the media overlay may include text or image that can be overlaid on top of a photograph taken by the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the augmentation system208uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database126and accessed through the database server120. In some examples, the augmentation system208provides a user-based publication platform that enables users to select a geolocation on a map and upload content associated with the selected geolocation.
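A geolocation-based overlay lookup of the kind described above might be sketched as follows, using a standard haversine distance to decide whether the capturing device falls within an overlay's radius. The data shape and radius rule are illustrative assumptions rather than the actual selection logic.

```typescript
// Sketch of selecting candidate media overlays by geolocation: overlays tied
// to a location are offered when the capturing device is within a radius of
// that location. Field names and the radius rule are assumptions.

interface GeoOverlay {
  name: string;         // e.g., a neighborhood or merchant overlay
  lat: number;
  lon: number;
  radiusMeters: number;
}

function haversineMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const R = 6_371_000; // mean Earth radius in meters
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

function overlaysForLocation(all: GeoOverlay[], lat: number, lon: number): GeoOverlay[] {
  return all.filter(o => haversineMeters(lat, lon, o.lat, o.lon) <= o.radiusMeters);
}
```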
The user may also specify circumstances under which a particular media overlay should be offered to other users. The augmentation system208generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In other examples, the augmentation system208provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the augmentation system208associates the media overlay of the highest bidding merchant with a corresponding geolocation for a predefined amount of time. In other examples, as discussed below with respect toFIG.3, the augmentation system208provides for presenting augmented reality content in association with an image or a video captured by a camera of the client device102. The augmentation system208may implement or otherwise access augmented reality content items (e.g., corresponding to applying Lenses or augmented reality experiences) for providing real-time special effect(s) and/or sound(s) that may be added to the image or video. To facilitate the presentation of augmented reality content, the augmentation system208may implement or otherwise access object recognition algorithms (e.g., including machine learning algorithms) configured to scan an image or video, and to detect/track the movement of objects within the image or video. The map system210provides various geographic location functions, and supports the presentation of map-based media content and messages by the messaging client104. For example, the map system210enables the display of user icons or avatars (e.g., stored in profile data302) on a map to indicate a current or past location of “friends” of a user, as well as media content (e.g., collections of messages including photographs and videos) generated by such friends, within the context of a map. For example, a message posted by a user to the messaging system100from a specific geographic location may be displayed within the context of a map at that particular location to “friends” of a specific user on a map interface of the messaging client104. A user can furthermore share his or her location and status information (e.g., using an appropriate status avatar) with other users of the messaging system100via the messaging client104, with this location and status information being similarly displayed within the context of a map interface of the messaging client104to selected users. The external resource system212provides an interface for the messaging client104to communicate with remote servers (e.g. third-party servers110) to launch or access external resources, i.e. applications or applets. Each third-party server110hosts, for example, a markup language (e.g., HTML5) based application or small-scale version of an application (e.g., game, utility, payment, or ride-sharing application). The messaging client104may launch a web-based resource (e.g., application) by accessing the HTML5 file from the third-party servers110associated with the web-based resource. In certain examples, applications hosted by third-party servers110are programmed in JavaScript leveraging a Software Development Kit (SDK) provided by the messaging server118. The SDK includes Application Programming Interfaces (APIs) with functions that can be called or invoked by the web-based application.
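The merchant bidding step described above can be illustrated with a small sketch that awards the overlay slot for a geolocation to the highest bidder for a predefined time window. The structures and the expiry rule are assumptions for illustration only.

```typescript
// Sketch of a bidding step: among merchants bidding on the same geolocation,
// the highest bid wins the overlay slot for a predefined amount of time.
// Purely illustrative structures; not the actual publication platform.

interface OverlayBid {
  merchant: string;
  geolocationId: string;
  amountCents: number;
}

function awardOverlay(
  bids: OverlayBid[],
  geolocationId: string,
  durationMs: number,
  nowMs: number = Date.now(),
): { merchant: string; geolocationId: string; expiresAtMs: number } | undefined {
  const relevant = bids.filter(b => b.geolocationId === geolocationId);
  if (relevant.length === 0) return undefined;
  // Highest bid wins the slot for the predefined window.
  const winner = relevant.reduce((best, b) => (b.amountCents > best.amountCents ? b : best));
  return { merchant: winner.merchant, geolocationId, expiresAtMs: nowMs + durationMs };
}
```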
In certain examples, the messaging server118includes a JavaScript library that provides a given external resource access to certain user data of the messaging client104. HTML5 is used as an example technology for programming games, but applications and resources programmed based on other technologies can be used. In order to integrate the functions of the SDK into the web-based resource, the SDK is downloaded by a third-party server110from the messaging server118or is otherwise received by the third-party server110. Once downloaded or received, the SDK is included as part of the application code of a web-based external resource. The code of the web-based resource can then call or invoke certain functions of the SDK to integrate features of the messaging client104into the web-based resource. The SDK stored on the messaging server118effectively provides the bridge between an external resource (e.g., applications106or applets) and the messaging client104. This provides the user with a seamless experience of communicating with other users on the messaging client104, while also preserving the look and feel of the messaging client104. To bridge communications between an external resource and a messaging client104, in certain examples, the SDK facilitates communication between third-party servers110and the messaging client104. In certain examples, a WebViewJavaScriptBridge running on a client device102establishes two one-way communication channels between an external resource and the messaging client104. Messages are sent between the external resource and the messaging client104via these communication channels asynchronously. Each SDK function invocation is sent as a message and callback. Each SDK function is implemented by constructing a unique callback identifier and sending a message with that callback identifier. By using the SDK, not all information from the messaging client104is shared with third-party servers110. The SDK limits which information is shared based on the needs of the external resource. In certain examples, each third-party server110provides an HTML5 file corresponding to the web-based external resource to the messaging server118. The messaging server118can add a visual representation (such as a box art or other graphic) of the web-based external resource in the messaging client104. Once the user selects the visual representation or instructs the messaging client104through a GUI of the messaging client104to access features of the web-based external resource, the messaging client104obtains the HTML5 file and instantiates the resources necessary to access the features of the web-based external resource. The messaging client104presents a graphical user interface (e.g., a landing page or title screen) for an external resource. During, before, or after presenting the landing page or title screen, the messaging client104determines whether the launched external resource has been previously authorized to access user data of the messaging client104. In response to determining that the launched external resource has been previously authorized to access user data of the messaging client104, the messaging client104presents another graphical user interface of the external resource that includes functions and features of the external resource.
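The callback-identifier pattern described above, in which each SDK invocation is sent as a message over one channel and matched to a response arriving on the other, could look roughly like the sketch below. This is not the actual SDK; the class and message shapes are hypothetical.

```typescript
// Sketch of bridging SDK function invocations as messages with unique
// callback identifiers, so responses arriving asynchronously over a
// one-way channel can be matched back to the original call. Hypothetical API.

type BridgeMessage = { callbackId: string; fn: string; args: unknown[] };
type BridgeResponse = { callbackId: string; result: unknown };

class MessageBridge {
  private pending = new Map<string, (result: unknown) => void>();
  private nextId = 0;

  constructor(private send: (msg: BridgeMessage) => void) {}

  // Invoke an SDK function: send a message and remember the callback.
  invoke(fn: string, ...args: unknown[]): Promise<unknown> {
    const callbackId = `cb-${this.nextId++}`;
    return new Promise(resolve => {
      this.pending.set(callbackId, resolve);
      this.send({ callbackId, fn, args });
    });
  }

  // Called when a response arrives on the other one-way channel.
  onResponse(response: BridgeResponse): void {
    const resolve = this.pending.get(response.callbackId);
    if (resolve) {
      this.pending.delete(response.callbackId);
      resolve(response.result);
    }
  }
}
```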
In response to determining that the launched external resource has not been previously authorized to access user data of the messaging client104, after a threshold period of time (e.g., 3 seconds) of displaying the landing page or title screen of the external resource, the messaging client104slides up (e.g., animates a menu as surfacing from a bottom of the screen to a middle of or other portion of the screen) a menu for authorizing the external resource to access the user data. The menu identifies the type of user data that the external resource will be authorized to use. In response to receiving a user selection of an accept option, the messaging client104adds the external resource to a list of authorized external resources and allows the external resource to access user data from the messaging client104. In some examples, the external resource is authorized by the messaging client104to access the user data in accordance with an OAuth 2 framework. The messaging client104controls the type of user data that is shared with external resources based on the type of external resource being authorized. For example, external resources that include full-scale applications (e.g., an application106) are provided with access to a first type of user data (e.g., only two-dimensional avatars of users with or without different avatar characteristics). As another example, external resources that include small-scale versions of applications (e.g., web-based versions of applications) are provided with access to a second type of user data (e.g., payment information, two-dimensional avatars of users, three-dimensional avatars of users, and avatars with various avatar characteristics). Avatar characteristics include different ways to customize a look and feel of an avatar, such as different poses, facial features, clothing, and so forth. The camera mode system214implements various functions for providing different camera modes within the context of the messaging system100. For example, the camera mode system214provides for first and second camera modes, and for providing the user with the option to select between the first and second camera modes. The first camera mode corresponds with capturing a single video clip in order to generate a media content item. The camera mode system214provides a second camera mode for capturing multiple videos for combining to generate the media content item. In addition, the camera mode system214is configured to adjust user interfaces (e.g., a capture user interface for capturing video clip(s) and/or a preview user interface for previewing captured video clip(s)) based on which camera mode is enabled. FIG.3is a schematic diagram illustrating data structures300, which may be stored in the database126of the messaging server system108, according to certain examples. While the content of the database126is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database126includes message data stored within a message table306. This message data includes, for any particular one message, at least message sender data, message recipient (or receiver) data, and a payload. Further details regarding information that may be included in a message, and included within the message data stored in the message table306is described below with reference toFIG.4. An entity table308stores entity data, and is linked (e.g., referentially) to an entity graph304and profile data302. 
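The tiering of shared user data by resource type described above can be sketched as a simple lookup, shown below with illustrative tier contents (full-scale applications receiving a narrower set than small-scale, web-based applications). The constant and function names are assumptions.

```typescript
// Sketch of tiering shared user data by the kind of external resource being
// authorized. The tier contents loosely mirror the example above; they are
// illustrative, not a real policy.

type ResourceKind = "full-scale" | "small-scale";

const SHARED_DATA_BY_KIND: Record<ResourceKind, string[]> = {
  // Full-scale applications: first type of user data (e.g., 2D avatars only).
  "full-scale": ["avatar-2d"],
  // Small-scale (web-based) applications: second, broader type of user data.
  "small-scale": ["avatar-2d", "avatar-3d", "avatar-characteristics", "payment-info"],
};

function authorize(kind: ResourceKind, authorized: Set<string>, resourceId: string): string[] {
  authorized.add(resourceId);       // remember the accepted external resource
  return SHARED_DATA_BY_KIND[kind]; // data types it may access
}
```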
Entities for which records are maintained within the entity table308may include individuals, corporate entities, organizations, objects, places, events, and so forth. Regardless of entity type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph304stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based or activity-based, merely for example. The profile data302stores multiple types of profile data about a particular entity. The profile data302may be selectively used and presented to other users of the messaging system100, based on privacy settings specified by a particular entity. Where the entity is an individual, the profile data302includes, for example, a user name, telephone number, address, settings (e.g., notification and privacy settings), as well as a user-selected avatar representation (or collection of such avatar representations). A particular user may then selectively include one or more of these avatar representations within the content of messages communicated via the messaging system100, and on map interfaces displayed by messaging clients104to other users. The collection of avatar representations may include “status avatars,” which present a graphical representation of a status or activity that the user may select to communicate at a particular time. Where the entity is a group, the profile data302for the group may similarly include one or more avatar representations associated with the group, in addition to the group name, members, and various settings (e.g., notifications) for the relevant group. The database126also stores augmentation data, such as overlays or filters, in an augmentation table310. The augmentation data is associated with and applied to videos (for which data is stored in a video table314) and images (for which data is stored in an image table316). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a set of filters presented to a sending user by the messaging client104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters), which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client104, based on geolocation information determined by a Global Positioning System (GPS) unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device102, or the current time. Other augmentation data that may be stored within the image table316includes augmented reality content items (e.g., corresponding to applying Lenses or augmented reality experiences).
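For orientation, the record shapes below mirror the entity, profile, and filter data just described. They are illustrative type definitions only, not the actual schema of the database tables referenced above.

```typescript
// Illustrative type definitions mirroring the kinds of records described
// above (entities, profiles, and filters). These shapes are assumptions.

interface EntityRecord {
  entityId: string; // unique identifier
  entityType: "individual" | "group" | "organization" | "place" | "event" | "object";
}

interface ProfileRecord {
  entityId: string;
  userName?: string;
  phoneNumber?: string;
  avatarIds: string[]; // user-selected avatar representations, including status avatars
  notificationSettings?: Record<string, boolean>;
}

type Filter =
  | { kind: "user-selected"; filterId: string }
  | { kind: "geolocation"; filterId: string; neighborhood: string }
  | { kind: "data"; filterId: string; source: "temperature" | "speed" | "battery" | "time" };
```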
An augmented reality content item may provide a real-time special effect and/or sound that may be added to an image or a video. As described above, augmentation data includes augmented reality content items, overlays, image transformations, AR images, and similar terms that refer to modifications that may be applied to image data (e.g., videos or images). This includes real-time modifications, which modify an image as it is captured using device sensors (e.g., one or multiple cameras) of a client device102and then displayed on a screen of the client device102with the modifications. This also includes modifications to stored content, such as video clips in a gallery that may be modified. For example, in a client device102with access to multiple augmented reality content items, a user can use a single video clip with multiple augmented reality content items to see how the different augmented reality content items will modify the stored clip. For example, multiple augmented reality content items that apply different pseudorandom movement models can be applied to the same content by selecting different augmented reality content items for the content. Similarly, real-time video capture may be used with an illustrated modification to show how video images currently being captured by sensors of a client device102would modify the captured data. Such data may simply be displayed on the screen and not stored in memory, or the content captured by the device sensors may be recorded and stored in memory with or without the modifications (or both). In some systems, a preview feature can show how different augmented reality content items will look within different windows in a display at the same time. This can, for example, enable multiple windows with different pseudorandom animations to be viewed on a display at the same time. Data and various systems using augmented reality content items or other such transform systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked. In various examples, different methods for achieving such transformations may be used. Some examples may involve generating a three-dimensional mesh model of the object or objects, and using transformations and animated textures of the model within the video to achieve the transformation. In other examples, tracking of points on an object may be used to place an image or texture (which may be two dimensional or three dimensional) at the tracked position. In still further examples, neural network analysis of video frames may be used to place images, models, or textures in content (e.g., images or frames of video). Augmented reality content items thus refer both to the images, models, and textures used to create transformations in content, as well as to additional modeling and analysis information needed to achieve such transformations with object detection, tracking, and placement. Real-time video processing can be performed with any kind of video data (e.g., video streams, video files, etc.) saved in a memory of a computerized system of any kind. For example, a user can load video files and save them in a memory of a device, or can generate a video stream using sensors of the device.
Additionally, any objects can be processed using a computer animation model, such as a human's face and parts of a human body, animals, or non-living things such as chairs, cars, or other objects. In some examples, when a particular modification is selected along with content to be transformed, elements to be transformed are identified by the computing device, and then detected and tracked if they are present in the frames of the video. The elements of the object are modified according to the request for modification, thus transforming the frames of the video stream. Transformation of frames of a video stream can be performed by different methods for different kinds of transformation. For example, for transformations of frames mostly referring to changing forms of object's elements, characteristic points for each element of an object are calculated (e.g., using an Active Shape Model (ASM) or other known methods). Then, a mesh based on the characteristic points is generated for each of the at least one element of the object. This mesh is used in the following stage of tracking the elements of the object in the video stream. In the process of tracking, the mentioned mesh for each element is aligned with a position of each element. Then, additional points are generated on the mesh. A first set of first points is generated for each element based on a request for modification, and a set of second points is generated for each element based on the set of first points and the request for modification. Then, the frames of the video stream can be transformed by modifying the elements of the object on the basis of the sets of first and second points and the mesh. In such method, a background of the modified object can be changed or distorted as well by tracking and modifying the background. In some examples, transformations changing some areas of an object using its elements can be performed by calculating characteristic points for each element of an object and generating a mesh based on the calculated characteristic points. Points are generated on the mesh, and then various areas based on the points are generated. The elements of the object are then tracked by aligning the area for each element with a position for each of the at least one element, and properties of the areas can be modified based on the request for modification, thus transforming the frames of the video stream. Depending on the specific request for modification, properties of the mentioned areas can be transformed in different ways. Such modifications may involve changing color of areas; removing at least some part of areas from the frames of the video stream; including one or more new objects into areas which are based on a request for modification; and modifying or distorting the elements of an area or object. In various examples, any combination of such modifications or other similar modifications may be used. For certain models to be animated, some characteristic points can be selected as control points to be used in determining the entire state-space of options for the model animation. In some examples of a computer animation model to transform image data using face detection, the face is detected on an image with use of a specific face detection algorithm (e.g., Viola-Jones). Then, an Active Shape Model (ASM) algorithm is applied to the face region of an image to detect facial feature reference points. Other methods and algorithms suitable for face detection can be used.
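The per-frame transformation flow described above (characteristic points, mesh generation, alignment, and modification) can be summarized in a short skeleton. The helper functions are placeholders for the detection, meshing, and modification steps; nothing here is the actual transform system.

```typescript
// Skeleton of the per-frame transformation flow described above: compute
// characteristic points for each tracked element, build a mesh from them,
// align the mesh to the element's position, and apply the modification.
// All helpers are placeholders supplied by the caller.

interface Point { x: number; y: number }
interface Mesh { vertices: Point[] }

// Minimal stand-in for frame pixel data so the sketch stays self-contained.
interface ImageDataLike { width: number; height: number; pixels: Uint8ClampedArray }

function transformFrame(
  frame: ImageDataLike,
  elements: string[],
  detectPoints: (frame: ImageDataLike, element: string) => Point[],
  buildMesh: (points: Point[]) => Mesh,
  alignMesh: (mesh: Mesh, frame: ImageDataLike) => Mesh,
  modify: (frame: ImageDataLike, mesh: Mesh) => ImageDataLike,
): ImageDataLike {
  let out = frame;
  for (const element of elements) {
    const characteristicPoints = detectPoints(out, element); // e.g., ASM-style points
    const mesh = alignMesh(buildMesh(characteristicPoints), out);
    out = modify(out, mesh); // change color, distort, overlay, etc.
  }
  return out;
}
```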
For example, in some examples, features are located using a landmark, which represents a distinguishable point present in most of the images under consideration. For facial landmarks, for example, the location of the left eye pupil may be used. If an initial landmark is not identifiable (e.g., if a person has an eyepatch), secondary landmarks may be used. Such landmark identification procedures may be used for any such objects. In some examples, a set of landmarks forms a shape. Shapes can be represented as vectors using the coordinates of the points in the shape. One shape is aligned to another with a similarity transform (allowing translation, scaling, and rotation) that minimizes the average Euclidean distance between shape points. The mean shape is the mean of the aligned training shapes. In some examples, a search for landmarks from the mean shape aligned to the position and size of the face determined by a global face detector is started. Such a search then repeats the steps of suggesting a tentative shape by adjusting the locations of shape points by template matching of the image texture around each point and then conforming the tentative shape to a global shape model until convergence occurs. In some systems, individual template matches are unreliable, and the shape model pools the results of the weak template matches to form a stronger overall classifier. The entire search is repeated at each level in an image pyramid, from coarse to fine resolution. A transformation system can capture an image or video stream on a client device (e.g., the client device102) and perform complex image manipulations locally on the client device102while maintaining a suitable user experience, computation time, and power consumption. The complex image manipulations may include size and shape changes, emotion transfers (e.g., changing a face from a frown to a smile), state transfers (e.g., aging a subject, reducing apparent age, changing gender), style transfers, graphical element application, and any other suitable image or video manipulation implemented by a convolutional neural network that has been configured to execute efficiently on the client device102. In some examples, a computer animation model to transform image data can be used by a system where a user may capture an image or video stream of the user (e.g., a selfie) using a client device102having a neural network operating as part of a messaging client104operating on the client device102. The transformation system operating within the messaging client104determines the presence of a face within the image or video stream and provides modification icons associated with a computer animation model to transform image data, or the computer animation model can be present as associated with an interface described herein. The modification icons include changes that may be the basis for modifying the user's face within the image or video stream as part of the modification operation. Once a modification icon is selected, the transform system initiates a process to convert the image of the user to reflect the selected modification icon (e.g., generate a smiling face on the user). A modified image or video stream may be presented in a graphical user interface displayed on the client device102as soon as the image or video stream is captured, and a specified modification is selected. The transformation system may implement a complex convolutional neural network on a portion of the image or video stream to generate and apply the selected modification. 
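The iterative landmark search just outlined (template-matching suggestions conformed to a global shape model, repeated across an image pyramid from coarse to fine) is sketched below. The helper functions and the convergence test are assumptions standing in for the ASM details.

```typescript
// Compact sketch of the iterative shape search outlined above: start from
// the mean shape placed on the detected face, repeatedly nudge each point by
// template matching and then project the result back onto the shape model,
// repeating from coarse to fine pyramid levels. Helpers are placeholders.

type Shape = Array<{ x: number; y: number }>;

function searchLandmarks(
  pyramid: unknown[],                                     // image pyramid, coarse -> fine
  meanShapeOnFace: Shape,                                 // mean shape aligned to the face box
  suggestByTemplates: (level: unknown, shape: Shape) => Shape,
  conformToShapeModel: (shape: Shape) => Shape,
  maxIterations = 20,
  epsilon = 0.5,
): Shape {
  let shape = meanShapeOnFace;
  for (const level of pyramid) {
    for (let i = 0; i < maxIterations; i++) {
      const tentative = suggestByTemplates(level, shape); // weak, local matches
      const conformed = conformToShapeModel(tentative);   // pooled, global constraint
      const moved = Math.max(
        ...conformed.map((p, k) => Math.hypot(p.x - shape[k].x, p.y - shape[k].y)),
      );
      shape = conformed;
      if (moved < epsilon) break;                         // converged at this level
    }
  }
  return shape;
}
```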
That is, the user may capture the image or video stream and be presented with a modified result in real-time or near real-time once a modification icon has been selected. Further, the modification may be persistent while the video stream is being captured, and the selected modification icon remains toggled. Machine taught neural networks may be used to enable such modifications. The graphical user interface, presenting the modification performed by the transform system, may supply the user with additional interaction options. Such options may be based on the interface used to initiate the content capture and selection of a particular computer animation model (e.g., initiation from a content creator user interface). In various examples, a modification may be persistent after an initial selection of a modification icon. The user may toggle the modification on or off by tapping or otherwise selecting the face being modified by the transformation system and store it for later viewing or browse to other areas of the imaging application. Where multiple faces are modified by the transformation system, the user may toggle the modification on or off globally by tapping or selecting a single face modified and displayed within a graphical user interface. In some examples, individual faces, among a group of multiple faces, may be individually modified, or such modifications may be individually toggled by tapping or selecting the individual face or a series of individual faces displayed within the graphical user interface. A story table312stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table308). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and are at a common location event at a particular time may, for example, be presented with an option, via a user interface of the messaging client104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client104, based on his or her location. The end result is a “live story” told from a community perspective. A further type of content collection is known as a “location story,” which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some examples, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus).
As mentioned above, the video table314stores video data that, in one example, is associated with messages for which records are maintained within the message table306. Similarly, the image table316stores image data associated with messages for which message data is stored in the entity table308. The entity table308may associate various augmentations from the augmentation table310with various images and videos stored in the image table316and the video table314. FIG.4is a schematic diagram illustrating a structure of a message400, according to some examples, generated by a messaging client104for communication to a further messaging client104or the messaging server118. The content of a particular message400is used to populate the message table306stored within the database126, accessible by the messaging server118. Similarly, the content of a message400is stored in memory as “in-transit” or “in-flight” data of the client device102or the application servers114. A message400is shown to include the following example components:
message identifier402: a unique identifier that identifies the message400.
message text payload404: text, to be generated by a user via a user interface of the client device102, and that is included in the message400.
message image payload406: image data, captured by a camera component of a client device102or retrieved from a memory component of a client device102, and that is included in the message400. Image data for a sent or received message400may be stored in the image table316.
message video payload408: video data, captured by a camera component or retrieved from a memory component of the client device102, and that is included in the message400. Video data for a sent or received message400may be stored in the video table314.
message audio payload410: audio data, captured by a microphone or retrieved from a memory component of the client device102, and that is included in the message400.
message augmentation data412: augmentation data (e.g., filters, stickers, or other annotations or enhancements) that represents augmentations to be applied to message image payload406, message video payload408, or message audio payload410of the message400. Augmentation data for a sent or received message400may be stored in the augmentation table310.
message duration parameter414: parameter value indicating, in seconds, the amount of time for which content of the message (e.g., the message image payload406, message video payload408, message audio payload410) is to be presented or made accessible to a user via the messaging client104.
message geolocation parameter416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. Multiple message geolocation parameter416values may be included in the payload, each of these parameter values being associated with respect to content items included in the content (e.g., a specific image within the message image payload406, or a specific video in the message video payload408).
message story identifier418: identifier values identifying one or more content collections (e.g., “stories” identified in the story table312) with which a particular content item in the message image payload406of the message400is associated.
For example, multiple images within the message image payload406may each be associated with multiple content collections using identifier values.
message tag420: each message400may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload406depicts an animal (e.g., a lion), a tag value may be included within the message tag420that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.
message sender identifier422: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102on which the message400was generated and from which the message400was sent.
message receiver identifier424: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102to which the message400is addressed.
The contents (e.g., values) of the various components of message400may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload406may be a pointer to (or address of) a location within an image table316. Similarly, values within the message video payload408may point to data stored within a video table314, values stored within the message augmentations412may point to data stored in an augmentation table310, values stored within the message story identifier418may point to data stored in a story table312, and values stored within the message sender identifier422and the message receiver identifier424may point to user records stored within an entity table308. FIG.5is a diagram illustrating a user interface arrangement500configured to capture, combine and preview multiple video clips, in accordance with some example embodiments. For explanatory purposes, the user interface arrangement500is primarily described herein with reference to the messaging client104ofFIG.1, and the camera mode system214ofFIG.2. Not all of the depicted and described interfaces/components may be used in all implementations, and one or more embodiments may include additional or different interfaces/components than those shown and described with respect to the figure. Variations in the arrangement and type of the interfaces/components may be made without departing from the spirit or scope of the claims as set forth herein. The user interface arrangement500may be implemented at least in part by the camera mode system214. As noted above, the camera mode system214may correspond to a subsystem of the messaging system100, and may be supported on the client side by the messaging client104and/or on the server side by the application servers114. In one or more embodiments, the capturing, combining and previewing of video clip(s) as described herein may be implemented client side, server side and/or a combination of client side and server side. As shown inFIG.5, the capture user interface502includes a camera selection button506, which is user-selectable for switching between the rear-facing and front-facing camera of the client device102. The capture user interface502further includes a flash button508for activating or deactivating a flash with respect to captured image data512(or a captured image). The capture user interface502further includes a camera mode selection button510.
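The message components and table pointers enumerated above (before the discussion of FIG. 5) can be summarized as an illustrative record shape. The field names follow the description, but the exact types are assumptions.

```typescript
// Illustrative shape of a message assembled from the components enumerated
// above; not the actual message structure, and the pointer fields are shown
// simply as string references into the corresponding tables.

interface Message {
  messageId: string;                  // unique identifier
  textPayload?: string;
  imagePayloadRef?: string;           // pointer into an image table
  videoPayloadRef?: string;           // pointer into a video table
  audioPayloadRef?: string;
  augmentationRefs?: string[];        // pointers into an augmentation table
  durationSeconds?: number;           // how long content is presented or accessible
  geolocations?: Array<{ lat: number; lon: number }>;
  storyIds?: string[];                // content collections the content belongs to
  tags?: string[];                    // e.g., derived via image recognition
  senderId: string;                   // points to a user record
  receiverId: string;                 // points to a user record
}
```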
In addition, the capture user interface502includes a carousel launch button522for launching a carousel interface, as discussed below with respect toFIG.6D. Moreover, the capture user interface502includes a capture button520which is user-selectable to capture video (e.g., video clips) and/or images (e.g., pictures). As described herein, a “video clip” corresponds to a series of video frames that runs for an uninterrupted period of time. For example, a video clip corresponds with the video captured from the moment that the camera starts recording until the moment the camera stops recording. In one or more embodiments, the messaging client104in conjunction with the camera mode system214provides for a user to select between a first camera mode and a second camera mode for video capture. For example, the first camera mode corresponds with capturing a single video clip which is usable to generate a media content item. The second camera mode corresponds with capturing multiple video clips which may be combined to generate the media content item. In this regard, the camera mode selection button510is user-selectable for switching between the first camera mode and the second camera mode. In one or more embodiments, the messaging client104defaults to the first camera mode. For example, upon startup of the messaging client104, the messaging client104activates the camera of the client device102to display captured image data512in real time, and to default to the first camera mode with respect to the capture user interface502. In response to user selection of the camera mode selection button510, the messaging client104in conjunction with the camera mode system214provides for switching from the first camera mode to the second camera mode. Switching to the second camera mode may also be effected via a predefined touch gesture (e.g., a left drag gesture starting from the capture button520while in the first camera mode). In one or more embodiments, a tutorial (e.g., a modal or overlay) may be presented the first time the second camera mode is launched, to teach the user about features related to the second camera mode. In the first camera mode, the capture button520is selectable to capture a single video clip via a predefined gesture (e.g., a press-and-hold gesture, where video is recorded for the duration of the hold). In addition, the capture button520is selectable to capture a picture via another predefined gesture (e.g., tap gesture). In the second camera mode, the behavior of the capture button520may differ from that of the first camera mode in order to facilitate capturing multiple video clips. In one or more embodiments, the capture button520is responsive to different types of touch input for capturing video clips. In a first example, the capture button520is selectable to capture a video clip via a press-and-hold gesture (e.g., where video is recorded for the duration of the hold). In another example, the capture button520is selectable to capture a video clip via first and second tap gestures, with the first tap gesture initiating video capture and the second tap gesture ending video capture for the video clip (e.g., corresponding to hands-free recording). In one or more embodiments, a predefined touch region for the capture button520for the second tap gesture may be smaller than for the first tap gesture (e.g., to reduce likelihood of the user inadvertently stopping video capture). For example, the touch region may correspond to a predefined region within the center of the displayed capture button520.
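The different capture-button behaviors in the second camera mode described above (press-and-hold recording, two-tap hands-free recording, and a reduced touch region for the stopping tap) are sketched below. The gesture events, class name, and radius check are hypothetical.

```typescript
// Sketch of capture-button gesture handling in a multi-clip mode: a
// press-and-hold records for the duration of the hold, while a first tap
// starts and a second tap (within a smaller touch region) stops hands-free
// recording. Event shapes and the region check are illustrative.

type Gesture =
  | { type: "press"; at: number }
  | { type: "release"; at: number }
  | { type: "tap"; at: number; distanceFromCenter: number };

class CaptureButtonController {
  private recording = false;

  constructor(
    private startClip: () => void,
    private stopClip: () => void,
    private secondTapRadius = 20, // px; smaller than the full button area
  ) {}

  handle(gesture: Gesture): void {
    if (gesture.type === "press" && !this.recording) {
      this.recording = true;
      this.startClip(); // hold-to-record begins
    } else if (gesture.type === "release" && this.recording) {
      this.recording = false;
      this.stopClip(); // hold-to-record ends
    } else if (gesture.type === "tap") {
      if (!this.recording) {
        this.recording = true;
        this.startClip(); // first tap: start hands-free recording
      } else if (gesture.distanceFromCenter <= this.secondTapRadius) {
        this.recording = false;
        this.stopClip(); // second tap inside the reduced region: stop recording
      }
      // Taps outside the reduced region while recording are ignored,
      // reducing the chance of inadvertently stopping capture.
    }
  }
}
```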
In the second camera mode, the camera mode system214provides for capturing the multiple video clips in a sequential manner, such that the first video clip is followed by the second video clip, the second video clip is followed by the third video clip, and so on. Each of the video clips may have been captured in response to respective touch inputs via the capture button520(e.g., press-and-hold gestures, first/second taps, or combinations thereof). In one or more embodiments, the camera mode system214provides for displaying updates to the timeline progress bar514in real-time, to depict video clips as they are captured. As shown in the example ofFIG.5, display of the timeline progress bar514may be accompanied by display of the undo button516and the preview button518. In one or more embodiments, the camera mode system214provides for displaying the undo button516, the timeline progress bar514and the preview button518in the second camera mode only. As such, the undo button516, the timeline progress bar514and the preview button518are not displayed while the first camera mode is active. As shown in the example ofFIG.5, the timeline progress bar514depicts video clips as respective segments, with the length of each segment being proportional to the duration of the respective video clip. The segments may be added and/or updated in real-time. The length of each segment may appear to increase in real-time as each respective video clip is being captured. For illustrative purposes, the expanded view524(which is not necessarily shown by the capture user interface502) depicts example video clips15. In one or more embodiments, the timeline progress bar514is configured to update in real-time based on passing preset time thresholds with respect to the combined duration of all currently-captured video clips. For example, the initial timeline length for the timeline progress bar514may be preset to a first time threshold (e.g., 10 seconds) such that the timeline progress bar514is depicted to fill up upon reaching the first time threshold. Once the combined duration of currently-captured video clips reaches the first time threshold, the timeline length is adjusted to a second time threshold (e.g., 30 seconds), with the current progress (e.g., segment(s)) being depicted to collapse relative to the adjusted timeline length. Once the combined duration of currently-captured video clips reaches the second time threshold, the timeline length is adjusted to a third time threshold (e.g., 60 seconds), with the current progress (e.g., segment(s)) being depicted to collapse relative to the adjusted timeline length. In one or more embodiments, the camera mode system214provides for limiting or capping the combined duration for all currently-captured video clips. For example, the camera mode system214may set a maximum duration to 60 seconds (e.g., corresponding to the above-mentioned third time threshold). The capture user interface502may display a notification if the total recording time reaches the maximum duration, to prevent the recording of subsequent video clips to include in the media content item. The capture user interface502further includes the undo button516. As noted above, the undo button516may be presented while the second camera mode is active (and not while the first camera mode is active). The undo button516is selectable to delete the most recent video clip (e.g., corresponding to the last, or right-most, segment of the timeline progress bar514). 
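The timeline behavior described above, in which the bar is scaled against successive time thresholds and capture is capped at a maximum combined duration, can be sketched as follows using the example 10/30/60-second values. The function names and the fill-fraction calculation are assumptions.

```typescript
// Sketch of timeline-length adjustment: the progress bar is scaled against
// the smallest threshold not yet reached by the combined clip duration, and
// capture is capped at the maximum. Thresholds mirror the example values.

const TIME_THRESHOLDS_SECONDS = [10, 30, 60];
const MAX_DURATION_SECONDS = 60;

function timelineLengthSeconds(combinedSeconds: number): number {
  for (const threshold of TIME_THRESHOLDS_SECONDS) {
    if (combinedSeconds < threshold) return threshold;
  }
  return MAX_DURATION_SECONDS;
}

function timelineFillFraction(clipDurationsSeconds: number[]): number {
  const combined = clipDurationsSeconds.reduce((a, b) => a + b, 0);
  return Math.min(combined / timelineLengthSeconds(combined), 1);
}

function canRecordMore(clipDurationsSeconds: number[]): boolean {
  return clipDurationsSeconds.reduce((a, b) => a + b, 0) < MAX_DURATION_SECONDS;
}

// Example: 12 seconds captured so far -> timeline rescaled to the 30-second length.
console.log(timelineLengthSeconds(12));    // 30
console.log(timelineFillFraction([7, 5])); // 0.4
console.log(canRecordMore([30, 30]));      // false: maximum duration reached
```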
In a case where no video clips are in the timeline progress bar514, the undo button516may be replaced with a close button (depicted as an “x” and discussed further below with respect toFIG.7), which is selectable to exit the second camera mode and revert to the first camera mode. Reverting from the second camera mode to the first camera mode may also be effected by user selection of the camera mode selection button510. In response to user selection of the camera mode selection button510while in the second camera mode, the messaging client104may prompt the user to confirm that any captured video clips will be removed. The capture user interface502further includes the preview button518. The preview button518is selectable to switch from the capture user interface502to the preview user interface504. On the other hand, the first camera mode in example embodiments may not include the preview button518and may instead automatically present a preview interface following capture of the single video clip (or picture). In the second camera mode, the preview user interface504provides for previewing the captured video clips (e.g., clips16) as captured. In addition, the preview user interface504provides user-selectable elements for generating the media content item based on the captured video clips. In one or more embodiments, the preview user interface504includes a user-selectable button (a “+” button, which is depicted and discussed further below with respect toFIG.7) for adding video clips to the captured video clips. Selection of this button may cause the camera mode system214to switch from the preview user interface504back to the capture user interface502, with all video clips and edits being preserved. For example, the camera mode system214may facilitate preserving the clips in local memory in association with the collection management system204, and may facilitate preserving the edits in local memory in association with the augmentation system208. In addition to preserving video clips and/or edits with respect to the user-selectable button (the “+” button), the camera mode system214may preserve and re-present the video clips and/or edits with respect to the user switching between other interfaces and/or applications. For example, video clips and/or edits are preserved when returning to the camera selection button506or preview user interface504from one or more of: another interface within the messaging client104(e.g., a chat interface, a reply interface); an application other than the messaging client104(e.g., with the selected camera mode and/or timeline progress also being preserved as facilitated by camera mode system214); and/or killing of the messaging client104(e.g., with the selected camera mode and/or timeline progress also being preserved). Referring back toFIG.5, the preview user interface504includes editing tools526for modifying/annotating (e.g., drawing on, adding text to, adding stickers to, cropping, and the like) the captured video clips.
While not shown inFIG.5, the preview user interface504may further include interface elements (e.g., buttons) for one or more of: saving the captured video clips (e.g., with modifications/annotations) as a media content item; creating or updating a Story based on the captured video clips (e.g., with modifications/annotations); modifying audio signal(s) associated with the captured video clips; sending a media content item which includes the captured video clips (e.g., with modifications/annotations) to a contact/friend; and/or broadcasting the media content item in association with a feed interface (e.g., for viewing by other users who are not necessarily contacts/friends). As noted, the preview user interface504provides for a media content item to be generated based on the multiple video clips. In one or more embodiments, the messaging client104(e.g., in conjunction with the messaging server system108) is configured to combine the multiple video clips, together with modifications or annotations, to generate the media content item based on the combined video clips. The media content item may correspond to a single entity (e.g., video, message) which includes all of the clips (with modifications/annotations). In one or more embodiments, the media content item is configured to be played (e.g., with respect to a viewing user) continuously, so as to loop back to the first video clip after the last video clip is played. FIGS.6A-6Eillustrate a user interface (e.g., a capture user interface602) configured to capture multiple video clips for including into a media content item, in accordance with some example embodiments.FIGS.6A-6Edepict example scenarios in which the user selects the above-mentioned second camera mode (FIG.6A), captures a first video clip (FIGS.6B-6C), launches a carousel interface (e.g.,FIG.6D), and continues to capture video clips (FIG.6E). Similar to the capture user interface502ofFIG.5, the capture user interface602ofFIGS.6A-6Eincludes one or more of: a camera selection button604(e.g., for switching between rear-facing and front-facing cameras), a flash button606(e.g., for activating and deactivating flash), a camera mode selection button608(e.g., for switching between the first and second camera modes), a capture button610, a carousel launch button612(e.g., for launching the carousel interface624), a timeline progress bar616(e.g., for displaying progress in capturing video clips), a close button614(e.g., for switching from the second camera mode back to the first camera mode), a preview button618(e.g., for previewing, editing and generating a media content item based on captured video clip(s)), and/or an undo button622(e.g., to delete the most recent video clip). In the example ofFIG.6A, the user selects the camera mode selection button608. In one or more embodiments, the capture user interface602may default to the first camera mode for capturing a single video clip. In response to selection of the camera mode selection button608, the messaging client104in conjunction with the camera mode system214provides for switching from the first camera mode to the second camera mode. As noted above, such switching may include adjusting the capture button520to be responsive to different types of touch input for capturing video, and/or adding the undo button516, the timeline progress bar514and the preview button518to the capture user interface502. The close button614is a user-selectable button for closing out of the second camera mode. 
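By way of a non-limiting illustration, switching between the two camera modes and reverting via the close button, as described above, might be modeled as follows. The element names and the CameraMode values in this Kotlin sketch are assumptions for illustration only.

// Illustrative sketch of toggling between the single-clip (first) camera mode and the
// multi-clip (second) camera mode, and of reverting when the close button is selected.
enum class CameraMode { SINGLE_CLIP, MULTI_CLIP }

class CaptureUiState {
    var mode: CameraMode = CameraMode.SINGLE_CLIP
        private set
    val visibleElements = mutableSetOf("cameraSelection", "flash", "modeToggle", "capture", "carouselLaunch")

    // Selecting the camera mode selection button enables the second camera mode and
    // adds the timeline, undo and preview elements to the capture user interface.
    fun enableMultiClipMode() {
        mode = CameraMode.MULTI_CLIP
        visibleElements += listOf("timelineProgressBar", "undoOrClose", "preview")
    }

    // Selecting the close button (shown when no clips are captured) reverts to the
    // first camera mode and removes the multi-clip-only elements.
    fun closeMultiClipMode() {
        mode = CameraMode.SINGLE_CLIP
        visibleElements -= setOf("timelineProgressBar", "undoOrClose", "preview")
    }
}

fun main() {
    val ui = CaptureUiState()
    ui.enableMultiClipMode()
    println(ui.visibleElements)   // includes timelineProgressBar, undoOrClose, preview
    ui.closeMultiClipMode()
    println(ui.mode)              // SINGLE_CLIP
}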
In response to user selection of the close button614, the camera mode system214provides for exiting the second camera mode and reverting to the first camera mode. In one or more embodiments, the close button614is presented when there are no captured video clips (e.g., no video clips have been captured, or all captured video clip(s) have been removed via the undo button516). The capture user interface602also includes a preview button618, which is selectable to preview, edit and/or generate a media content item which includes the captured video clip(s). In one or more embodiments, the preview button618is enabled after a first video clip has been captured. Alternatively or in addition, the camera mode system214may implement a minimum video duration (e.g., 5 seconds) in order to enable the preview button618. In the example ofFIG.6A, the preview button618is disabled since no video clips have yet been captured (e.g., the timeline progress bar616is empty). In one or more embodiments, display of the preview button618changes when switching from disabled (e.g., a grayed-out checkmark) to enabled (e.g., a yellow checkmark). A tool tip (e.g., a message indicating to “preview your media content item”) may direct user attention to the enabled preview button618. The tool tip may be displayed only once (e.g., a first time), to advise the user that selection of the preview button618directs to the preview user interface504. FIG.6Billustrates an example when the user initiates capture of a first video clip. For example, the user initiates capture of the first video clip based on touch input620(e.g., a press-and-hold gesture, or a first tap gesture as described above) via the capture button610. As shown in the example ofFIG.6B, the timeline progress bar616is updated in real-time to display a first segment corresponding to the first video clip. The length of the first segment may appear to increase in real-time as the first video clip is being captured. FIG.6Cillustrates when the user completes capture of the first video clip (e.g., release of the press-and-hold gesture, or a second tap gesture as described above). In one or more embodiments, upon completion of capturing the first video clip, the camera mode system214provides for updating the capture user interface602by replacing the close button614with the undo button622(e.g., which is selectable to remove the first video clip from the timeline progress bar616), and/or by enabling the preview button618. As noted above, the carousel launch button612is user-selectable to launch the carousel interface624. In response to selection of the carousel launch button612, the capture user interface602is updated (e.g., by the camera mode system214) to display the carousel interface624as shown inFIG.6D. In one or more embodiments, the carousel interface624allows the user to cycle through and/or select different augmented reality content items (e.g., Lenses) to apply/display with respect to images currently being captured by the device camera and being displayed on the device screen. Each of the available augmented reality content items is represented by an icon which is user-selectable for switching to the respective augmented reality content item. In one or more embodiments, the icon corresponding to an active augmented reality content item (e.g., active AR icon626) is displayed in a different manner relative to (e.g., larger than) the remaining icons. Behavior of the active AR icon626in the second camera mode is similar to that of the capture button610.
For example, the user may select the active AR icon626to capture subsequent video clip(s) via respective press-and-hold gestures and/or first and second tap gestures. The corresponding augmented reality content item (e.g., Lens) is applied to the subsequently-captured video clip(s). In addition, the user may select to apply different augmented reality content items to different video clips as they are captured. In one or more embodiments, a viewing user of the media content item, which includes augmented reality content, may be presented with an interface to apply (e.g., unlock) corresponding augmented reality content item(s) for modifying captured image/video from their end. In the example ofFIG.6E, the user has captured four video clips, as depicted by respective segments in the timeline progress bar616. As noted above, the undo button622is selectable to remove video clip(s) from the timeline progress bar616(e.g., with each tap gesture for removing the most recent video clip). The capture user interface602further includes a preview button618, which is selectable to preview, edit and/or generate a media content item based on the captured video clips via a preview user interface702as discussed below with respect toFIG.7. FIG.7illustrates the preview user interface702for previewing multiple video clips for combining into a media content item, in accordance with some example embodiments. For example,FIG.7depicts an example scenario in which the user selects to preview the multiple video clips (e.g., 4 video clips) captured in association withFIG.6E. Similar to the preview user interface504ofFIG.5, the preview user interface702ofFIG.7includes editing tools704. For example, the editing tools704include user-selectable icons (e.g., buttons) for modifying/annotating (e.g., drawing on, adding text to, adding stickers to, cropping, and the like) the captured video clips. The user-selectable icons may include an option for selecting between looping, bouncing (e.g., switching between forward and reverse playback) and/or single playback with respect to the resulting media content item. In addition, the preview user interface702includes: a save button714which is selectable to save the captured video clips (e.g., with modifications/annotations) as a media content item; a story button716which is selectable to create a Story based on the captured video clips (e.g., with modifications/annotations); an audio button712which is selectable to modify audio signal(s) associated with the captured video clips; and/or a send button718which is selectable to send a media content item which combines the captured video clips (e.g., including any modifications/annotations) to a recipient (e.g., a contact/friend) and/or to broadcast the media content item to other users of the messaging system100. Moreover, the preview user interface702provides for looping playback (e.g., for preview purposes) of the captured video clip(s), as shown by looped playback722. The preview user interface702further includes a video preview708in which each video clip is represented as a respective thumbnail and in which a position indicator720indicates a current playback position for the looped playback722. The thumbnails are depicted as combined together (e.g., as a combined video clip). In one or more embodiments, the thumbnails are individually selectable for editing/deleting (e.g., in conjunction with one or more of the editing tools704).
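The looped playback and position indicator behavior of the video preview708described above can be illustrated with a small sketch. The Kotlin fragment below is an assumption about one way to map a looping playback time onto the clip currently under the indicator; Clip, PreviewTimeline and positionAt are hypothetical names.

// Illustrative sketch of how a preview might map a looping playback position to the
// clip (thumbnail) currently playing.
data class Clip(val uri: String, val durationSec: Double)

class PreviewTimeline(private val clips: List<Clip>) {
    private val totalSec = clips.sumOf { it.durationSec }

    // Given an elapsed playback time, loop it into the combined duration and return
    // the index of the clip under the position indicator plus the offset within it.
    fun positionAt(elapsedSec: Double): Pair<Int, Double> {
        require(clips.isNotEmpty() && totalSec > 0.0)
        var t = elapsedSec % totalSec            // looping playback wraps around
        for ((index, clip) in clips.withIndex()) {
            if (t < clip.durationSec) return index to t
            t -= clip.durationSec
        }
        return clips.lastIndex to clips.last().durationSec
    }

    // Removing a thumbnail removes the corresponding clip from the combined preview.
    fun withClipRemoved(index: Int): PreviewTimeline =
        PreviewTimeline(clips.filterIndexed { i, _ -> i != index })
}

fun main() {
    val preview = PreviewTimeline(
        listOf(Clip("a.mp4", 3.0), Clip("b.mp4", 5.0), Clip("c.mp4", 2.0))
    )
    println(preview.positionAt(11.5))   // 11.5s wraps to 1.5s -> clip 0 at offset 1.5
}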
In addition, the preview user interface702includes an add video button710for adding video clips to the captured video clips (e.g., which are viewable via the video preview708). In response to user selection of the add video button710(or, alternatively, in response to a predefined gesture such as a swipe down gesture within a predefined region of the preview user interface702), the camera mode system214provides for switching from the preview user interface702back to the capture user interface502, with all video clips and edits being preserved. A tool tip (e.g., a message indicating to “go back to camera to add more”) may direct user attention to the add video button710. The tool tip may be displayed only once (e.g., a first time), to advise the user that selection of the add video button710directs to the capture user interface502. With respect to preserving video clips and edits, the camera mode system214may facilitate preserving the clips in local memory in association with the collection management system204, and may facilitate preserving the edits (e.g., via the editing tools704) in local memory in association with the augmentation system208. In one or more embodiments, the preview user interface702further includes a close button706which is selectable to exit the preview user interface702and return to the capture user interface502without video clips and/or edits being preserved. In one or more embodiments, user selection of the close button706may prompt the user to confirm deletion of the video clips and/or edits. FIGS.8A-8Cillustrate switching between a carousel interface816(included within a capture user interface802) and an explorer interface804for selecting augmented reality content items in association with multi-video clip capture, in accordance with some example embodiments.FIGS.8A-8Cdepict example scenarios in which: a user is presented with a carousel interface816and selects an explore tab826to switch to an explorer interface804for browsing available augmented reality content items (e.g.,FIG.8A), the user selects an augmented reality content item via the explorer interface804(FIG.8B), and the user is presented with an updated carousel interface816which is configured to persistently include the selected augmented reality content item (e.g.,FIG.8C). Similar to the capture user interface602ofFIG.6D, the capture user interface802ofFIGS.8A-8Cincludes one or more of the following interface elements: a camera selection button806(e.g., for switching between rear-facing and front-facing cameras); a flash button808(e.g., for activating and deactivating flash); a camera mode selection button810(e.g., for switching between the first and second camera modes); captured image data812(e.g., corresponding to real-time video/images captured by the device camera); a carousel interface816(e.g., for cycling through and/or selecting different augmented reality content items, with the active AR icon814corresponding to a currently-selected augmented reality content item); a timeline progress bar820(e.g., for displaying progress in capturing video clips); an undo button818(e.g., for deleting the most recent video clip); and/or a preview button822(e.g., for previewing, editing and generating a media content item based on captured video clip(s)). These interface elements in the capture user interface802ofFIGS.8A-8Care configured to perform functions similar to those described above with respect to the capture user interface602ofFIG.6D.
As shown inFIG.8A, the camera mode selection button810is highlighted, thereby indicating that the second camera mode is enabled for capturing multiple video clips. In the example ofFIG.8A, one video clip has been captured, as depicted by the timeline progress bar820including a single segment. The preview button822is enabled (e.g., not grayed out). Similar toFIG.6Das described above, the carousel interface816ofFIG.8Aallows the user to cycle through and/or select different augmented reality content items (e.g., Lenses) to apply/display with respect to the captured image data812. Each augmented reality content item in the carousel interface816is represented by an icon which is user-selectable for switching to the respective augmented reality content item. The active AR icon814is displayed in a different manner relative to (e.g., larger than) the remaining icons. In the example ofFIG.8A, the active AR icon814is blank, indicating that an augmented reality content item has not been selected (e.g., corresponding to no AR being applied to the captured image data812). The user may select the active AR icon814to capture additional video clip(s) via respective press-and-hold gestures and/or first and second tap gestures. In addition, the user may select to apply different augmented reality content items to different video clips as they are captured (e.g., such that the generated media content item includes different augmented reality content for different video clips). In one or more embodiments, the augmented reality content items presented within the carousel interface816correspond to a first set of available augmented reality content items. For example, the messaging client104in conjunction with the augmentation system208is configured to determine the first set of augmented reality content items based on one or more of: a geolocation of the device (e.g., where the augmented reality content items relate to the geolocation); an object detected in the captured image data812(e.g., where the augmented reality content items relate to the detected object, such as a face or scenery); a rear-facing or front-facing camera status of the device (e.g., where the augmented reality content items are associated with front or rear capture); user history associated with augmented reality content items (e.g., previously-selected augmented reality content items); and/or user preferences associated with augmented reality content items (e.g., user-specified augmented reality content items such as favorites). An indication of the first set of augmented reality content items may be stored by and accessible via the augmentation system208. In one or more embodiments, the carousel interface816is presented in association with selection of a browse tab824included within the capture user interface802. In the example ofFIG.8A, the browse tab824is selected and as such, the carousel interface816is presented for browsing through the first set of augmented reality content items. The messaging client104also provides for browsing a second set of augmented reality content items. In this regard, the capture user interface802includes an explore tab826which is user-selectable to display the explorer interface804shown inFIG.8B. As shown inFIG.8B, the explorer interface804includes a tiled interface830corresponding to the second set of augmented reality content items.
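As a purely illustrative sketch, the determination of the first set of augmented reality content items from the signals described above (geolocation, detected object, camera facing, user history and user preferences) might look as follows. The data model and ranking in this Kotlin fragment are assumptions, not a description of the augmentation system208.

// Hypothetical sketch of assembling the first set of augmented reality content items
// from capture-time signals; all names and fields are illustrative.
data class ArContentItem(
    val id: String,
    val geofences: Set<String> = emptySet(),       // regions the item is tied to, if any
    val objectTypes: Set<String> = emptySet(),     // e.g., "face", "scenery"
    val frontFacing: Boolean? = null               // null = either camera
)

data class CaptureContext(
    val geolocation: String?,
    val detectedObject: String?,
    val frontFacingCamera: Boolean,
    val userHistory: List<String>,                 // previously selected item ids
    val userFavorites: Set<String>
)

fun firstSetFor(available: List<ArContentItem>, ctx: CaptureContext): List<ArContentItem> {
    val geo = ctx.geolocation
    val obj = ctx.detectedObject
    return available
        .filter { item ->
            (item.geofences.isEmpty() || (geo != null && geo in item.geofences)) &&
            (item.objectTypes.isEmpty() || (obj != null && obj in item.objectTypes)) &&
            (item.frontFacing == null || item.frontFacing == ctx.frontFacingCamera)
        }
        // Favorites first, then recently used, then the rest.
        .sortedBy { item ->
            when {
                item.id in ctx.userFavorites -> 0
                item.id in ctx.userHistory -> 1
                else -> 2
            }
        }
}

fun main() {
    val items = listOf(
        ArContentItem("faceLens", objectTypes = setOf("face")),
        ArContentItem("cityLens", geofences = setOf("downtown"))
    )
    val ctx = CaptureContext("downtown", "face", frontFacingCamera = true,
        userHistory = listOf("faceLens"), userFavorites = emptySet())
    println(firstSetFor(items, ctx).map { it.id })   // both match; "faceLens" ranked first by history
}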
In one or more embodiments, the messaging client104in conjunction with the augmentation system208is configured to determine the second set of augmented reality content items. The second set of augmented reality content items may correspond to augmented reality content items created by users of the messaging system100. For example, a user (creator) may design and submit augmented reality content item(s) for use by others within the messaging system100. The submitted augmented reality content item(s) may be subject to an approval process (e.g., including administrator approval) in order to be included within the second set of augmented reality content items. An indication of the second set of augmented reality content items may be stored by and accessible via the augmentation system208. In one or more embodiments, the messaging client104in conjunction with the augmentation system208facilitates populating the tiled interface830with the second set of augmented reality content items. Each augmented reality content item within the second set is represented as a respective tile. Each tile includes a sample photo (or sample gif) of the corresponding augmented reality content, an icon representing the augmented reality content item (e.g., as discussed above with respect to the carousel interface816), and a name of the augmented reality content item. The tiled interface830provides for scrolling (e.g., vertically scrolling) through the augmented reality content items of the second set in response to a predefined gesture (e.g., a swipe or drag gesture). The user may select a particular augmented reality content item presented within the tiled interface830via a predefined gesture (e.g., by tapping the corresponding augmented reality content item). In the example ofFIG.8B, the user selects the AR content item832which is represented by the AR icon834. In one or more embodiments, the augmented reality content items within the second set may be grouped via AR tabs828. In the example ofFIG.8B, the AR tabs828include a For You tab (e.g., which may be based at least in part on user history and/or preference), a Trending tab (e.g., based on popularity among users), a Holidays tab (e.g., for relevant holidays and/or seasons), a Face tab (e.g., with face-based effects), a World tab (e.g., for scenic effects) and a Music tab (e.g., for music-related augmented reality content items). Moreover, the augmented reality content items within the second set may be searched for and/or filtered via the search interface836. For example, the search interface836is configured to receive text-based search input based on one or more of the name, types of detected objects, type of content, and the like. The augmented reality content items may be searchable by one or more of these terms, for example, based on metadata associated with the augmented reality content items. In response to text-based search input entered within the search interface836(e.g., including partial and/or complete search terms), the messaging client104provides for the tiled interface830to present augmented reality content items that match the entered text. Thus, the messaging client104provides for a user to browse through different sets of augmented reality content items. In particular, the browse tab824within the capture user interface802is selectable to browse the first set of augmented reality content items via the carousel interface816.
In addition, the explore tab826within the capture user interface802is selectable to surface the explorer interface804for browsing and/or searching the second set of augmented reality content items. In response to user selection of the AR content item832within the tiled interface830, the messaging client104switches from the explorer interface804back to the capture user interface802as shown inFIG.8C. In switching, the messaging client104in conjunction with the augmentation system208may provide for unlocking the AR content item832, for example, by activating the AR content item832within the capture user interface802. As shown in the example ofFIG.8C, the captured image data812is modified (e.g., in real-time) to include augmented reality content (e.g., effect) corresponding to the AR content item832. In addition, the carousel interface816is updated to indicate that the AR icon834is the active AR icon814. In addition, the messaging client104in conjunction with the augmentation system208provides for updating the first set of augmented reality content items to include the AR content item832in a persistent manner. The carousel interface816may therefore persistently present the corresponding AR icon834while the second camera mode is active (e.g., while the camera mode selection button810is highlighted) in the capture user interface802. In this regard, the messaging client104is configured to manage different camera instances with respect to messaging. For example, the messaging client104is configured to manage or otherwise maintain a main camera instance, and one or more modular camera instances. As described herein, the main camera instance corresponds with an active session of the capture user interface802as presented inFIGS.8A-8C. The main camera instance may be associated with the active session for capturing video (e.g., upon startup of the messaging client104). On the other hand, the messaging client104may invoke or otherwise create one or more modular camera instances in association with other user interfaces provided by the messaging client104. By way of non-limiting example, the messaging client104may create a modular camera instance in association with a reply interface. The reply interface may be surfaced when a user responds to a message (e.g., from a friend), a Story, a chat or the like. The reply interface may activate the device camera in order to capture video/pictures to include in a reply. As such, the reply interface may include interface elements (e.g., similar to one or more of the elements806-822within the capture user interface802) for capturing and/or augmenting video. The messaging client104is configured to create a modular camera instance with respect to the reply interface. Thus, the messaging client104is configured to maintain separate camera instances, including a main camera instance and one or more modular camera instances with respect to capturing video and/or pictures. By virtue of maintaining such camera instances, it is possible for the messaging client104to preserve settings and/or preferences with respect to images captured across different user interfaces of the messaging client104. Regarding the second camera mode (e.g., for multi-video clip capture), the messaging client104is configured to save an indication of selections and/or input within the explorer interface804in association with the main camera instance. In doing so, the selections and/or input persist when switching between the explorer interface804and the capture user interface802.
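The separation between a main camera instance and modular camera instances, and the association of explorer selections with the main instance, can be illustrated with a brief sketch. The Kotlin fragment below is hypothetical; CameraInstance, CameraManager and unlockFromExplorer are invented names used only to show one plausible structure.

// Illustrative sketch of keeping per-camera-instance state so that selections made in
// the explorer interface persist with the main camera instance, while modular camera
// instances (e.g., for a reply interface) keep their own state.
class CameraInstance(val name: String) {
    val unlockedArItems = mutableListOf<String>()   // icons added to the carousel, newest first
    var explorerScrollPosition: Int = 0
    var explorerSearchTerm: String? = null

    // Selecting an item in the explorer unlocks it for this instance and records it
    // so the carousel can present it persistently when switching back to capture.
    fun unlockFromExplorer(itemId: String) {
        unlockedArItems.remove(itemId)
        unlockedArItems.add(0, itemId)              // placed immediately after the active icon
    }
}

class CameraManager {
    val mainInstance = CameraInstance("main")
    private val modularInstances = mutableMapOf<String, CameraInstance>()

    // A modular instance is created per surface (e.g., "reply") and kept separate.
    fun modularInstance(surface: String): CameraInstance =
        modularInstances.getOrPut(surface) { CameraInstance(surface) }
}

fun main() {
    val cameras = CameraManager()
    cameras.mainInstance.unlockFromExplorer("lens_42")
    cameras.mainInstance.explorerSearchTerm = "holiday"
    // The reply interface's modular instance is unaffected by selections in the main instance.
    println(cameras.modularInstance("reply").unlockedArItems)   // []
    println(cameras.mainInstance.unlockedArItems)               // [lens_42]
}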
As noted above with respect toFIG.8C, the AR content item832is unlocked (e.g., activated and/or presented) within the capture user interface802in response to user selection of the AR content item832within the explorer interface804. In one or more embodiments, the unlocking by the messaging client104is performed with respect to the main camera instance, to facilitate saving the indication of selections and/or input within the explorer interface804. Thus, one example of persisting selections and/or input between the explorer interface804and the capture user interface802corresponds to user selection of the AR content item832via the explorer interface804(e.g., via the tiled interface830or the search interface836). An indication of the AR content item832is stored (e.g., in local memory as an update to the first set of augmented reality content items) and used to populate the carousel interface816when switching thereto. In one or more embodiments, the user may switch away from the capture user interface802multiple times (e.g., by switching to different user interfaces within the messaging client104and/or by switching between the messaging client104and other applications). However, the messaging client104is configured to persistently present the AR icon834within the carousel interface816when the user returns to the capture user interface802. In one or more embodiments, the messaging client104provides for positioning the AR icon834in a first position (e.g., immediately to the right of the active AR icon814) upon subsequently returning to the capture user interface802. The user may further continue to select additional augmented reality content items via the explorer interface804(e.g., via the tiled interface830and/or the search interface836). The messaging client104may store respective indications for each augmented reality content item selected via the explorer interface804, for persistently presenting (e.g., in front of previously unlocked augmented reality content items) within the carousel interface816. In one or more embodiments, the messaging client104is configured to remove the stored indication of the selected AR content item832(e.g., from local memory) upon completion of a session corresponding to the second camera mode. In one or more embodiments, removing the stored indication causes the messaging client104to no longer present the selected AR content item832within the carousel interface816. The session may be determined to be completed when the user opts to disable the second camera mode (e.g., by tapping the camera mode selection button810or selecting a close button as described above), and/or when the user selects to send and/or upload the media content item including the captured video from the session (e.g., by selecting a save button, send button, and/or Story button as described above). In a case where the user sends and/or uploads the media content item, the messaging client104may in example embodiments delay removal of the stored indication and continue to persist the stored indication for a preset period of time (e.g., 48 hours after selecting to send and/or broadcast). In such instance, the indication may be stored in the database126for the preset period of time, and removed thereafter. In addition to persisting the selected AR content item832, the messaging client104in example embodiments is configured to save an indication of a position (e.g., a most recent position) within the tiled interface830and AR tabs828of the explorer interface804.
Thus, in a case of subsequently switching back to the explorer interface804, the most recent position (e.g., the vertical position within the tiled interface830and/or the active tab within AR tabs828) is persisted in conjunction with the main camera instance. The stored indication may be removed upon completion of the session corresponding to the second camera mode, such that the position no longer persists (e.g., the tiled interface830starts from a default position) when subsequently navigating to the explorer interface804. Further, the messaging client104in example embodiments is configured to save an indication of text-based search term(s) input (e.g., most recently input) by the user within the search interface836of the explorer interface804. Thus, in a case of subsequently switching back to the explorer interface804, the most recently-entered search terms (e.g., together with the active tab within AR tabs828) may persist within the search interface836. The stored indication may be removed upon completion of the session corresponding to the second camera mode, such that the search term no longer persists (e.g., the search interface836is instead empty) when subsequently navigating to the explorer interface804. Thus, the messaging client104provides for persisting indications of user selections between the explorer interface804and the capture user interface802. As a result, it is possible for the messaging client104to unlock, present and navigate through augmented reality content items in a more persistent manner. FIG.9is a flowchart illustrating a process900for presenting available augmented reality content items in association with multi-video clip capture, in accordance with some example embodiments. For explanatory purposes, the process900is primarily described herein with reference to the messaging client104ofFIG.1. However, one or more blocks (or operations) of the process900may be performed by one or more other components, and/or by other suitable devices. Further for explanatory purposes, the blocks (or operations) of the process900are described herein as occurring in serial, or linearly. However, multiple blocks (or operations) of the process900may occur in parallel or concurrently. In addition, the blocks (or operations) of the process900need not be performed in the order shown and/or one or more blocks (or operations) of the process900need not be performed and/or can be replaced by other operations. The process900may be terminated when its operations are completed. In addition, the process900may correspond to a method, a procedure, an algorithm, etc. The messaging client104(e.g., in conjunction with the augmentation system208) displays a capture user interface in accordance with a camera mode configured to capture multiple video clips for combining to generate a media content item (block902). Display of the capture user interface in accordance with the camera mode may be associated with a main camera instance of the messaging application, the main camera instance being separate from one or more modular camera instances associated with other user interfaces of the messaging application. The messaging client104displays a carousel interface within the capture user interface, the carousel interface for presenting a first set of augmented reality content items, each augmented reality content item within the first set of augmented reality content items being selectable to apply respective augmented reality content to captured video (block904).
The first set of augmented reality content items may be based on one or more of a geolocation of the device, an object detected in the captured video, a rear-facing or front-facing camera status of the device, user history associated with augmented reality content items, or user preferences associated with augmented reality content items. The messaging client104receives first user input selecting an explore tab included within the capture user interface, the explore tab being selectable to switch to an explorer user interface for presenting a second set of augmented reality content items (block906). The second set of augmented reality content items may correspond to augmented reality content items created by users of the messaging application. The carousel interface may be displayed in association with a browse tab included within the capture user interface, such that the browse tab is selectable to browse the first set of augmented reality content items via the carousel interface and the explore tab is selectable to browse or search the second set of augmented reality content items via the explorer user interface. The messaging client104switches, in response to receiving the first user input, from the capture user interface to the explorer user interface (block908). The messaging client104receives, via the explorer user interface, second user input selecting an augmented reality content item from among the second set of augmented reality content items (block910). The messaging client104may unlock, in response to receiving the second user input, the selected augmented reality content item for use with respect to the main camera instance. The messaging client104, in response to receiving the second user input, switches from the explorer user interface to the capture user interface based on the selected augmented reality content item, and updates the first set of augmented reality content items to include the selected augmented reality content item, such that the carousel interface persistently presents the selected augmented reality content item as part of the first set of augmented reality content items (block912). Persistently presenting the selected augmented reality content item as part of the first set of augmented reality content items may include storing an indication of the selected augmented reality content item in association with the main camera instance. The messaging client104may remove the stored indication of the selected augmented reality content item upon completion of a session corresponding to the camera mode. The messaging client104may store a position within the explorer user interface in association with the main camera instance, and navigate to the stored position within the explorer user interface in response to subsequent user input selecting the explore tab. The explorer user interface may include a search interface for text-based searching of the second set of augmented reality content items. Storing the position may further include storing a text-based search term provided within the search interface. Navigating to the stored position may further include pre-populating the search interface with the text-based search term in response to the subsequent user input. FIG.10is a schematic diagram illustrating an access-limiting process1000, in terms of which access to content (e.g., an ephemeral message1002, and associated multimedia payload of data) or a content collection (e.g., an ephemeral message group1004) may be time-limited (e.g., made ephemeral). 
An ephemeral message1002is shown to be associated with a message duration parameter1006, the value of which determines an amount of time that the ephemeral message1002will be displayed to a receiving user of the ephemeral message1002by the messaging client104. In one example, an ephemeral message1002is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter1006. The message duration parameter1006and the message receiver identifier424are shown to be inputs to a message timer1010, which is responsible for determining the amount of time that the ephemeral message1002is shown to a particular receiving user identified by the message receiver identifier424. In particular, the ephemeral message1002will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter1006. The message timer1010is shown to provide output to a more generalized ephemeral timer system202, which is responsible for the overall timing of display of content (e.g., an ephemeral message1002) to a receiving user. The ephemeral message1002is shown inFIG.10to be included within an ephemeral message group1004(e.g., a collection of messages in a personal story, or an event story). The ephemeral message group1004has an associated group duration parameter1008, a value of which determines a time duration for which the ephemeral message group1004is presented and accessible to users of the messaging system100. The group duration parameter1008, for example, may be the duration of a music concert, where the ephemeral message group1004is a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the group duration parameter1008when performing the setup and creation of the ephemeral message group1004. Additionally, each ephemeral message1002within the ephemeral message group1004has an associated group participation parameter1012, a value of which determines the duration of time for which the ephemeral message1002will be accessible within the context of the ephemeral message group1004. Accordingly, a particular ephemeral message1002may “expire” and become inaccessible within the context of the ephemeral message group1004, prior to the ephemeral message group1004itself expiring in terms of the group duration parameter1008. The group duration parameter1008, group participation parameter1012, and message receiver identifier424each provide input to a group timer1014, which operationally determines, firstly, whether a particular ephemeral message1002of the ephemeral message group1004will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message group1004is also aware of the identity of the particular receiving user as a result of the message receiver identifier424. Accordingly, the group timer1014operationally controls the overall lifespan of an associated ephemeral message group1004, as well as an individual ephemeral message1002included in the ephemeral message group1004. In one example, each and every ephemeral message1002within the ephemeral message group1004remains viewable and accessible for a time period specified by the group duration parameter1008. In a further example, a certain ephemeral message1002may expire, within the context of ephemeral message group1004, based on a group participation parameter1012.
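By way of illustration only, the interplay of the message duration parameter1006, the group duration parameter1008and the group participation parameter1012described above can be sketched as follows. The Kotlin fragment is an assumption about one way to express these timing rules; the field and function names are hypothetical and the millisecond-based representation is illustrative.

// Illustrative sketch of the timing rules described above: a message is accessible inside
// a group only while both its own participation window and the group's duration are
// unexpired, and its on-screen display time is bounded by the message duration parameter.
data class EphemeralMessage(
    val postedAtMs: Long,
    val messageDurationMs: Long,        // how long it is displayed to a receiving user
    val groupParticipationMs: Long      // how long it stays accessible within the group
)

data class EphemeralMessageGroup(
    val createdAtMs: Long,
    val groupDurationMs: Long,
    val messages: List<EphemeralMessage>
)

// A message is accessible within the group only if neither the group duration nor the
// message's group participation window has elapsed.
fun accessibleMessages(group: EphemeralMessageGroup, nowMs: Long): List<EphemeralMessage> {
    val groupExpired = nowMs >= group.createdAtMs + group.groupDurationMs
    if (groupExpired) return emptyList()
    return group.messages.filter { nowMs < it.postedAtMs + it.groupParticipationMs }
}

// The on-screen display time of an accessible message remains bounded by its own
// message duration parameter, regardless of group context.
fun displayTimeMs(message: EphemeralMessage): Long = message.messageDurationMs

fun main() {
    val hour = 3_600_000L
    val group = EphemeralMessageGroup(
        createdAtMs = 0L,
        groupDurationMs = 72 * hour,
        messages = listOf(
            EphemeralMessage(postedAtMs = 0L, messageDurationMs = 10_000L, groupParticipationMs = 24 * hour),
            EphemeralMessage(postedAtMs = 30 * hour, messageDurationMs = 5_000L, groupParticipationMs = 24 * hour)
        )
    )
    // At 36 hours, the first message has exceeded its 24-hour participation window.
    println(accessibleMessages(group, nowMs = 36 * hour).size)   // 1
}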
Note that a message duration parameter1006may still determine the duration of time for which a particular ephemeral message1002is displayed to a receiving user, even within the context of the ephemeral message group1004. Accordingly, the message duration parameter1006determines the duration of time that a particular ephemeral message1002is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message1002inside or outside the context of an ephemeral message group1004. The ephemeral timer system202may furthermore operationally remove a particular ephemeral message1002from the ephemeral message group1004based on a determination that it has exceeded an associated group participation parameter1012. For example, when a sending user has established a group participation parameter1012of 24 hours from posting, the ephemeral timer system202will remove the relevant ephemeral message1002from the ephemeral message group1004after the specified 24 hours. The ephemeral timer system202also operates to remove an ephemeral message group1004when either the group participation parameter1012for each and every ephemeral message1002within the ephemeral message group1004has expired, or when the ephemeral message group1004itself has expired in terms of the group duration parameter1008. In certain use cases, a creator of a particular ephemeral message group1004may specify an indefinite group duration parameter1008. In this case, the expiration of the group participation parameter1012for the last remaining ephemeral message1002within the ephemeral message group1004will determine when the ephemeral message group1004itself expires. In this case, a new ephemeral message1002, added to the ephemeral message group1004, with a new group participation parameter1012, effectively extends the life of an ephemeral message group1004to equal the value of the group participation parameter1012. Responsive to the ephemeral timer system202determining that an ephemeral message group1004has expired (e.g., is no longer accessible), the ephemeral timer system202communicates with the messaging system100(and, for example, specifically the messaging client104) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message group1004to no longer be displayed within a user interface of the messaging client104. Similarly, when the ephemeral timer system202determines that the message duration parameter1006for a particular ephemeral message1002has expired, the ephemeral timer system202causes the messaging client104to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message1002. FIG.11is a diagrammatic representation of the machine1100within which instructions1110(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1100to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions1110may cause the machine1100to execute any one or more of the methods described herein. The instructions1110transform the general, non-programmed machine1100into a particular machine1100programmed to carry out the described and illustrated functions in the manner described. The machine1100may operate as a standalone device or may be coupled (e.g., networked) to other machines. 
In a networked deployment, the machine1100may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1100may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1110, sequentially or otherwise, that specify actions to be taken by the machine1100. Further, while only a single machine1100is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1110to perform any one or more of the methodologies discussed herein. The machine1100, for example, may comprise the client device102or any one of a number of server devices forming part of the messaging server system108. In some examples, the machine1100may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side. The machine1100may include processors1104, memory1106, and input/output (I/O) components1102, which may be configured to communicate with each other via a bus1140. In an example, the processors1104(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1108and a processor1112that execute the instructions1110. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.11shows multiple processors1104, the machine1100may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof. The memory1106includes a main memory1114, a static memory1116, and a storage unit1118, each accessible to the processors1104via the bus1140. The main memory1114, the static memory1116, and the storage unit1118store the instructions1110embodying any one or more of the methodologies or functions described herein. The instructions1110may also reside, completely or partially, within the main memory1114, within the static memory1116, within machine-readable medium1120within the storage unit1118, within at least one of the processors1104(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1100. The I/O components1102may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on.
The specific I/O components1102that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1102may include many other components that are not shown inFIG.11. In various examples, the I/O components1102may include user output components1126and user input components1128. The user output components1126may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The user input components1128may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further examples, the I/O components1102may include biometric components1130, motion components1132, environmental components1134, or position components1136, among a wide array of other components. For example, the biometric components1130include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components1132include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope). The environmental components1134include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. With respect to cameras, the client device102may have a camera system comprising, for example, front cameras on a front surface of the client device102and rear cameras on a rear surface of the client device102. The front cameras may, for example, be used to capture still images and video of a user of the client device102(e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above.
The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device102may also include a 360° camera for capturing 360° photographs and videos. Further, the camera system of a client device102may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad or penta rear camera configurations on the front and rear sides of the client device102. These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera and a depth sensor, for example. The position components1136include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1102further include communication components1138operable to couple the machine1100to a network1122or devices1124via respective coupling or connections. For example, the communication components1138may include a network interface component or another suitable device to interface with the network1122. In further examples, the communication components1138may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1124may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components1138may detect identifiers or include components operable to detect identifiers. For example, the communication components1138may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1138, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory1114, static memory1116, and memory of the processors1104) and storage unit1118may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions1110), when executed by processors1104, cause various operations to implement the disclosed examples.
The instructions1110may be transmitted or received over the network1122, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components1138) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions1110may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices1124. FIG.12is a block diagram1200illustrating a software architecture1204, which can be installed on any one or more of the devices described herein. The software architecture1204is supported by hardware such as a machine1202that includes processors1220, memory1226, and I/O components1238. In this example, the software architecture1204can be conceptualized as a stack of layers, where each layer provides a particular functionality. The software architecture1204includes layers such as an operating system1212, libraries1210, frameworks1208, and applications1206. Operationally, the applications1206invoke API calls1250through the software stack and receive messages1252in response to the API calls1250. The operating system1212manages hardware resources and provides common services. The operating system1212includes, for example, a kernel1214, services1216, and drivers1222. The kernel1214acts as an abstraction layer between the hardware and the other software layers. For example, the kernel1214provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1216can provide other common services for the other software layers. The drivers1222are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1222can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., USB drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries1210provide a common low-level infrastructure used by the applications1206. The libraries1210can include system libraries1218(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1210can include API libraries1224such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render two-dimensional (2D) and three-dimensional (3D) graphic content on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1210can also include a wide variety of other libraries1228to provide many other APIs to the applications1206. The frameworks1208provide a common high-level infrastructure that is used by the applications1206. For example, the frameworks1208provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services.
The frameworks1208can provide a broad spectrum of other APIs that can be used by the applications1206, some of which may be specific to a particular operating system or platform. In an example, the applications1206may include a home application1236, a contacts application1230, a browser application1232, a book reader application1234, a location application1242, a media application1244, a messaging application1246, a game application1248, and a broad assortment of other applications such as a third-party application1240. The applications1206are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications1206, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). In a specific example, the third-party application1240(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application1240can invoke the API calls1250provided by the operating system1212to facilitate functionality described herein.
Glossary
“Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, portable digital assistants (PDAs), smartphones, tablets, ultrabooks, netbooks, laptops, multi-processor systems, microprocessor-based or programmable consumer electronics, game consoles, set-top boxes, or any other communication device that a user may use to access a network. “Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third
Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. 
For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors1004or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations. 
“Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. “Ephemeral message” refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video and the like. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory. “Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a matter as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
130,771
11861801
DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative examples of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various examples of the inventive subject matter. It will be evident, however, to those skilled in the art, that examples of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. Reading is still an important and popular activity. In some examples, printed codes are added that enable an enhanced reading experience, where the printed codes are printed in traditional printed media (e.g., paper books, magazines, etc.) or included in electronic media (e.g., electronic books, electronic magazines, etc.) and enable the reader to seamlessly experience virtual objects overlaid in the physical world while reading. The reader uses a wearable electronic device such as AR glasses that identify the printed codes included in the text and seamlessly provide virtual objects within a mixed reality environment for the reader to interact with while reading. A code module is associated with each printed code, and the AR glasses match the printed code to the code module and execute the code module. The code module is often highly contextualized to text or an image included in a written material (e.g., on the page of a book). The code module provides virtual objects and a mixed reality interaction for the reader of the written material. For example, if a reader is reading a textbook on gases, a code module may provide virtual gas molecules where the user can interact with the supersized gas molecules to understand the relationship between volume and pressure. AR glasses, though, have a limited battery life compared with the time that many people spend reading materials such as books. For example, the battery life of AR glasses that are actively in use with their displays on is often 30 minutes or less. Additionally, readers often find that it is burdensome to turn AR glasses on and off many times while reading. Accordingly, there is a technical problem of how to provide a mixed reality experience to a reader of printed material that includes printed codes, using AR glasses that have limited battery life. Examples address this technical problem by providing a reading mode for the AR glasses that causes the AR glasses, or portions of the AR glasses, to go to sleep and wake up at intervals to capture images of subsequent sections of a written material (e.g., pages of the book). The AR glasses then identify any printed codes in each section of the written material and execute associated code modules. The AR glasses may adjust a sleep time to learn the reading habits of the reader. By putting displays and other components of the AR glasses to sleep and waking them up to scan for codes, the battery life of the AR glasses may be extended to last for several hours or more, which is typically long enough for an average user when reading. As a result, the functioning of the AR glasses is greatly improved. Moreover, the AR glasses limit other features during the reading mode in order to reduce energy consumption.
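The reading mode can be pictured as a periodic sleep/wake loop. The following is a minimal sketch, assuming hypothetical device functions (capture_image, detect_printed_codes, run_code_module) and an illustrative interval-adjustment rule; it is not the patented implementation:

```python
import time

# Hypothetical sketch of a reading-mode loop for AR glasses; the passed-in
# functions stand in for device capabilities that are not specified here.

def reading_mode(capture_image, detect_printed_codes, run_code_module,
                 sleep_time: float = 20.0, reading: callable = lambda: True):
    last_codes = set()
    while reading():
        time.sleep(sleep_time)              # displays/components sleep between scans
        image = capture_image()             # wake briefly to image the open section
        codes = set(detect_printed_codes(image))
        for code in codes - last_codes:     # only newly seen printed codes trigger modules
            run_code_module(code)
        # Illustrative adaptation to the reader's pace: scan more often while new
        # codes keep appearing, back off while the same section stays open.
        sleep_time = max(5.0, sleep_time * (0.8 if codes - last_codes else 1.2))
        last_codes = codes
```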
Additionally, the code modules may require a significant amount of energy to download to the AR glasses from a backend server. The communication hardware and software of the AR glasses consume a great deal of energy. Examples address this technical problem by managing the downloading of the code modules. In some examples, the code modules for an entire written material are downloaded to the AR glasses at once when the AR glasses have a good signal with a host or backend, and then the communication hardware of the AR glasses is put into a sleep mode to conserve energy. In some examples, the AR glasses download the code modules for an entire written material so that the AR glasses do not have to connect with another device while a user is reading the written material. In some examples, the AR glasses use low-energy communication protocols to download the code modules to conserve energy. In some examples, the communications hardware is kept in a sleep mode and awakened to download code modules when needed. Networked Computing Environment FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple instances of a client device102, each of which hosts a number of applications, including a messaging client104and other applications106. Each messaging client104is communicatively coupled to other instances of the messaging client104(e.g., hosted on respective other client devices102), a messaging server system108and third-party servers110via a network112(e.g., the Internet). A messaging client104can also communicate with locally-hosted applications106using Application Program Interfaces (APIs). A messaging client104is able to communicate and exchange data with other messaging clients104and with the messaging server system108via the network112. The data exchanged between messaging clients104, and between a messaging client104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data). The messaging server system108provides server-side functionality via the network112to a particular messaging client104. While certain functions of the messaging system100are described herein as being performed by either a messaging client104or by the messaging server system108, the location of certain functionality either within the messaging client104or the messaging server system108may be a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108but to later migrate this technology and functionality to the messaging client104where a client device102has sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client104. This data may include message content, client device information, geolocation information, media augmentation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client104.
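A rough sketch of the download-management idea follows, under assumed names (fetch_module, signal_strength, and radio are placeholders, not components defined in the description): download every code module for the written material while the link is good, then sleep the communication hardware, waking it only if a module was missed.

```python
# Hypothetical sketch of bulk-downloading code modules while signal is good,
# then sleeping the communication hardware; all names are assumptions.

def prefetch_code_modules(book_code_ids, fetch_module, signal_strength, radio,
                          min_signal: float = 0.6):
    cache = {}
    if signal_strength() >= min_signal:
        for code_id in book_code_ids:       # fetch the whole written material's modules at once
            cache[code_id] = fetch_module(code_id)
        radio.sleep()                        # comms hardware sleeps for the rest of the session
    return cache

def get_module(code_id, cache, fetch_module, radio):
    # Fallback path: wake the radio only when a module was not prefetched.
    if code_id not in cache:
        radio.wake()
        cache[code_id] = fetch_module(code_id)
        radio.sleep()
    return cache[code_id]
```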
Turning now specifically to the messaging server system108, an Application Program Interface (API) server116is coupled to, and provides a programmatic interface to, application servers114. The application servers114are communicatively coupled to a database server120, which facilitates access to a database126that stores data associated with messages processed by the application servers114. Similarly, a web server128is coupled to the application servers114, and provides web-based interfaces to the application servers114. To this end, the web server128processes incoming network requests over the Hypertext Transfer Protocol (HTTP) and several other related protocols. The Application Program Interface (API) server116receives and transmits message data (e.g., commands and message payloads) between the client device102and the application servers114. Specifically, the Application Program Interface (API) server116provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client104in order to invoke functionality of the application servers114. The Application Program Interface (API) server116exposes various functions supported by the application servers114, including account registration, login functionality, the sending of messages, via the application servers114, from a particular messaging client104to another messaging client104, the sending of media files (e.g., images or video) from a messaging client104to a messaging server118, and for possible access by another messaging client104, the settings of a collection of media data (e.g., story), the retrieval of a list of friends of a user of a client device102, the retrieval of such collections, the retrieval of messages and content, the addition and deletion of entities (e.g., friends) to an entity graph (e.g., a social graph), the location of friends within a social graph, and opening an application event (e.g., relating to the messaging client104). The application servers114host a number of server applications and subsystems, including for example a messaging server118, an image processing server122, and a social network server124. The messaging server118implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories OF galleries). These collections are then made available to the messaging client104. Other processor and memory intensive processing of data may also be performed server-side by the messaging server118, in view of the hardware requirements for such processing. The application servers114also include an image processing server122that is dedicated to performing various image processing operations, typically with respect to images or video within the payload of a message sent from or received at the messaging server118. The social network server124supports various social networking functions and services and makes these functions and services available to the messaging server118. To this end, the social network server124maintains and accesses an entity graph308(as shown inFIG.3) within the database126. 
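For orientation only, the functions exposed by the Application Program Interface (API) server can be listed as method stubs; the names below paraphrase the description and do not represent an actual API surface.

```python
# Orientation-only sketch of the kinds of functions the API server exposes;
# the method names paraphrase the description and are not a real API.

class ApiServer:
    def register_account(self, user): ...
    def login(self, credentials): ...
    def send_message(self, sender, recipient, payload): ...
    def send_media_file(self, sender, media): ...
    def set_collection(self, user, story): ...
    def get_friends(self, user): ...
    def get_collections(self, user): ...
    def get_messages_and_content(self, user): ...
    def add_entity_to_graph(self, user, friend): ...
    def remove_entity_from_graph(self, user, friend): ...
    def locate_friends(self, user): ...
    def open_application_event(self, client): ...
```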
Examples of functions and services supported by the social network server124include the identification of other users of the messaging system100with which a particular user has relationships or is “following,” and also the identification of other entities and interests of a particular user. Returning to the messaging client104, features and functions of an external resource (e.g., an application106or applet) are made available to a user via an interface of the messaging client104. In this context, “external” refers to the fact that the application106or applet is external to the messaging client104. The external resource is often provided by a third party but may also be provided by the creator or provider of the messaging client104. The messaging client104receives a user selection of an option to launch or access features of such an external resource. The external resource may be the application106installed on the client device102(e.g., a “native app”), or a small-scale version of the application (e.g., an “applet”) that is hosted on the client device102or remote of the client device102(e.g., on third-party servers110). The small-scale version of the application includes a subset of features and functions of the application (e.g., the full-scale, native version of the application) and is implemented using a markup-language document. In one example, the small-scale version of the application (e.g., an “applet”) is a web-based, markup-language version of the application and is embedded in the messaging client104. In addition to using markup-language documents (e.g., a .*ml file), an applet may incorporate a scripting language (e.g., a .*js file or a .json file) and a style sheet (e.g., a .*ss file). In response to receiving a user selection of the option to launch or access features of the external resource, the messaging client104determines whether the selected external resource is a web-based external resource or a locally-installed application106. In some cases, applications106that are locally installed on the client device102can be launched independently of and separately from the messaging client104, such as by selecting an icon, corresponding to the application106, on a home screen of the client device102. Small-scale versions of such applications can be launched or accessed via the messaging client104and, in some examples, no or limited portions of the small-scale application can be accessed outside of the messaging client104. The small-scale application can be launched by the messaging client104receiving, from a third-party server110for example, a markup-language document associated with the small-scale application and processing such a document. In response to determining that the external resource is a locally-installed application106, the messaging client104instructs the client device102to launch the external resource by executing locally-stored code corresponding to the external resource. In response to determining that the external resource is a web-based resource, the messaging client104communicates with the third-party servers110(for example) to obtain a markup-language document corresponding to the selected external resource. The messaging client104then processes the obtained markup-language document to present the web-based external resource within a user interface of the messaging client104. The messaging client104can notify a user of the client device102, or other users related to such a user (e.g., “friends”), of activity taking place in one or more external resources. 
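A compact sketch of the launch decision described above, with hypothetical helper names (execute_local_code, get_markup_document, and present_web_resource are assumptions):

```python
# Hypothetical sketch of the external-resource launch branching described above.

def launch_external_resource(resource, device, third_party_server, ui):
    if resource.is_locally_installed:
        # Native application: run locally stored code, independently of the messaging client.
        device.execute_local_code(resource.app_id)
    else:
        # Web-based applet: fetch the markup-language document and render it in-client.
        markup_document = third_party_server.get_markup_document(resource.resource_id)
        ui.present_web_resource(markup_document)
```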
For example, the messaging client104can provide participants in a conversation (e.g., a chat session) in the messaging client104with notifications relating to the current or recent use of an external resource by one or more members of a group of users. One or more users can be invited to join in an active external resource or to launch a recently-used but currently inactive (in the group of friends) external resource. The external resource can provide participants in a conversation, each using respective messaging clients104, with the ability to share an item, status, state, or location in an external resource with one or more members of a group of users into a chat session. The shared item may be an interactive chat card with which members of the chat can interact, for example, to launch the corresponding external resource, view specific information within the external resource, or take the member of the chat to a specific location or state within the external resource. Within a given external resource, response messages can be sent to users on the messaging client104. The external resource can selectively include different media items in the responses, based on a current context of the external resource. The messaging client104can present a list of the available external resources (e.g., applications106or applets) to a user to launch or access a given external resource. This list can be presented in a context-sensitive menu. For example, the icons representing different ones of the application106(or applets) can vary based on how the menu is launched by the user (e.g., from a conversation interface or from a non-conversation interface) System Architecture FIG.2is a block diagram illustrating further details regarding the messaging system100, according to some examples. Specifically, the messaging system100is shown to comprise the messaging client104and the application servers114. The messaging system100embodies a number of subsystems, which are supported on the client-side by the messaging client104and on the server-side by the application servers114. These subsystems include, for example, an ephemeral timer system202, a collection management system204, an augmentation system208, a map system210, a game system212, an external resource system214, and an enhanced reading system216. The ephemeral timer system202is responsible for enforcing the temporary or time-limited access to content by the messaging client104and the messaging server118. The ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story), selectively enable access (e.g., for presentation and display) to messages and associated content via the messaging client104. Further details regarding the operation of the ephemeral timer system202are provided below. The collection management system204is responsible for managing sets or collections of media (e.g., collections of text, image video, and audio data). A collection of content (e.g., messages, including images, video, text, and audio) may be organized into an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a “story” for the duration of that music concert. 
The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client104. The collection management system204furthermore includes a curation interface206that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface206enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain examples, compensation may be paid to a user for the inclusion of user-generated content into a collection. In such cases, the collection management system204operates to automatically make payments to such users for the use of their content. The augmentation system208provides various functions that enable a user to augment (e.g., annotate or otherwise modify or edit) media content associated with a message. For example, the augmentation system208provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The augmentation system208operatively supplies a media overlay or augmentation (e.g., an image filter) to the messaging client104based on a geolocation of the client device102. In another example, the augmentation system208operatively supplies a media overlay to the messaging client104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo, a digital object,) at the client device102. For example, the media overlay may include text or image that can be overlaid on top of a photograph taken by the client device102. In another example, the media overlay includes an identification of a location overlay (e.g., Venice beach), a name of a live event, or a name of a merchant overlay (e.g., Beach Coffee House). In another example, the augmentation system208uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database126and accessed through the database server120. In some examples, the augmentation system208provides a user-based publication platform that enables users to select a geolocation on a map and upload content associated with the selected geolocation. The user may also specify circumstances under which a particular media overlay should be offered to other users. The augmentation system208generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In other examples, the augmentation system208provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. 
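As a toy illustration of supplying a media overlay based on geolocation (the overlay records, the 1 km radius, and the bid field are assumptions; the bid tie-in follows the merchant bidding process mentioned above and in the next paragraph):

```python
import math

# Toy sketch of geolocation-based overlay selection; distances use an
# equirectangular approximation, which is adequate over short ranges.

def pick_overlay(device_lat, device_lon, overlays, radius_km=1.0):
    def km(lat1, lon1, lat2, lon2):
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return 6371.0 * math.hypot(x, y)

    nearby = [o for o in overlays
              if km(device_lat, device_lon, o["lat"], o["lon"]) <= radius_km]
    # Illustrative tie-break: prefer the highest-bidding merchant overlay.
    return max(nearby, key=lambda o: o.get("bid", 0), default=None)
```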
For example, the augmentation system208associates the media overlay of the highest bidding merchant with a corresponding geolocation for a predefined amount of time. The map system210provides various geographic location functions and supports the presentation of map-based media content and messages by the messaging client104. For example, the map system210enables the display of user icons or avatars (e.g., stored in profile data316) on a map to indicate a current or past location of “friends” of a user, as well as media content (e.g., collections of messages including photographs and videos) generated by such friends, within the context of a map. For example, a message posted by a user to the messaging system100from a specific geographic location may be displayed within the context of a map at that particular location to “friends” of a specific user on a map interface of the messaging client104. A user can furthermore share his or her location and status information (e.g., using an appropriate status avatar) with other users of the messaging system100via the messaging client104, with this location and status information being similarly displayed within the context of a map interface of the messaging client104to selected users. The game system212provides various gaming functions within the context of the messaging client104. The messaging client104provides a game interface providing a list of available games that can be launched by a user within the context of the messaging client104and played with other users of the messaging system100. The messaging system100further enables a particular user to invite other users to participate in the play of a specific game, by issuing invitations to such other users from the messaging client104. The messaging client104also supports both voice and text messaging (e.g., chats) within the context of gameplay, provides a leaderboard for the games, and also supports the provision of in-game rewards (e.g., coins and items). The external resource system214provides an interface for the messaging client104to communicate with remote servers (e.g., third-party servers110) to launch or access external resources, e.g., applications or applets. Each third-party server110hosts, for example, a markup language (e.g., HTML5) based application or small-scale version of an application (e.g., game, utility, payment, or ride-sharing application). The messaging client104may launch a web-based resource (e.g., application) by accessing the HTML5 file from the third-party servers110associated with the web-based resource. In certain examples, applications hosted by third-party servers110are programmed in JavaScript leveraging a Software Development Kit (SDK) provided by the messaging server118. The SDK includes Application Programming Interfaces (APIs) with functions that can be called or invoked by the web-based application. In certain examples, the messaging server118includes a JavaScript library that provides a given external resource access to certain user data of the messaging client104. HTML5 is used as an example technology for programming games, but applications and resources programmed based on other technologies can be used. In order to integrate the functions of the SDK into the web-based resource, the SDK is downloaded by a third-party server110from the messaging server118or is otherwise received by the third-party server110. Once downloaded or received, the SDK is included as part of the application code of a web-based external resource. 
The code of the web-based resource can then call or invoke certain functions of the SDK to integrate features of the messaging client104into the web-based resource. The SDK stored on the messaging server118effectively provides the bridge between an external resource (e.g., applications106or applets and the messaging client104. This provides the user with a seamless experience of communicating with other users on the messaging client104, while also preserving the look and feel of the messaging client104. To bridge communications between an external resource and a messaging client104, in certain examples, the SDK facilitates communication between third-party servers110and the messaging client104. In certain examples, a WebViewJavaScriptBridge running on a client device102establishes two one-way communication channels between an external resource and the messaging client104. Messages are sent between the external resource and the messaging client104via these communication channels asynchronously. Each SDK function invocation is sent as a message and callback. Each SDK function is implemented by constructing a unique callback identifier and sending a message with that callback identifier. By using the SDK, not all information from the messaging client104is shared with third-party servers110. The SDK limits which information is shared based on the needs of the external resource. In certain examples, each third-party server110provides an HTML5 file corresponding to the web-based external resource to the messaging server118. The messaging server118can add a visual representation (such as a box art or other graphic) of the web-based external resource in the messaging client104. Once the user selects the visual representation or instructs the messaging client104through a GUI of the messaging client104to access features of the web-based external resource, the messaging client104obtains the HTML5 file and instantiates the resources necessary to access the features of the web-based external resource. The messaging client104presents a graphical user interface (e.g., a landing page or title screen) for an external resource. During, before, or after presenting the landing section of reading materials such as a page or title screen, the messaging client104determines whether the launched external resource has been previously authorized to access user data of the messaging client104. In response to determining that the launched external resource has been previously authorized to access user data of the messaging client104, the messaging client104presents another graphical user interface of the external resource that includes functions and features of the external resource. In response to determining that the launched external resource has not been previously authorized to access user data of the messaging client104, after a threshold period of time (e.g., 3 seconds) of displaying the landing page or title screen of the external resource, the messaging client104slides up (e.g., animates a menu as surfacing from a bottom of the screen to a middle of or other portion of the screen) a menu for authorizing the external resource to access the user data. The menu identifies the type of user data that the external resource will be authorized to use. In response to receiving a user selection of an accept option, the messaging client104adds the external resource to a list of authorized external resources and allows the external resource to access user data from the messaging client104. 
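The message-and-callback plumbing described above, in which each SDK function invocation is sent as a message carrying a unique callback identifier over one-way channels, can be sketched roughly as follows; this is a hedged illustration with invented names, not the WebViewJavaScriptBridge API itself.

```python
import uuid

# Rough sketch of message/callback plumbing between an external resource and a
# messaging client over one-way channels; all names are illustrative.

class BridgeEndpoint:
    def __init__(self, send_channel):
        self.send_channel = send_channel          # one-way channel toward the peer
        self.pending_callbacks = {}               # callback identifier -> handler

    def invoke(self, function_name, params, on_result):
        callback_id = str(uuid.uuid4())           # unique callback identifier per invocation
        self.pending_callbacks[callback_id] = on_result
        self.send_channel({"function": function_name,
                           "params": params,
                           "callback_id": callback_id})

    def on_message(self, message):
        # Asynchronous reply: route the result back to the registered callback.
        handler = self.pending_callbacks.pop(message["callback_id"], None)
        if handler is not None:
            handler(message.get("result"))
```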
In some examples, the external resource is authorized by the messaging client104to access the user data in accordance with an OAuth 2 framework. The messaging client104controls the type of user data that is shared with external resources based on the type of external resource being authorized. For example, external resources that include full-scale applications (e.g., an application106) are provided with access to a first type of user data (e.g., only two-dimensional avatars of users with or without different avatar characteristics). As another example, external resources that include small-scale versions of applications (e.g., web-based versions of applications) are provided with access to a second type of user data (e.g., payment information, two-dimensional avatars of users, three-dimensional avatars of users, and avatars with various avatar characteristics). Avatar characteristics include different ways to customize a look and feel of an avatar, such as different poses, facial features, clothing, and so forth. The enhanced reading system216provides functions and routines for providing enhanced augmented reality book reading. The enhanced reading system216provides the functions and routines as described herein and inFIG.6. The enhanced reading system216captures images of sections of reading material, e.g., pages of a book, as it is being read and detects quick response codes or other codes in the book that correspond to code modules that provide a mixed reality experience for the sections of reading material, e.g., a page of the book. The enhanced reading system216retrieves the code modules and executes the code modules. The enhanced reading system216provides power-saving features such as going into a sleep mode and waking up to execute the code modules. The enhanced reading system216may retrieve all the code modules for a book prior to reading and then cause the communications hardware and software to go into a sleep mode to save power. Data Architecture FIG.3is a schematic diagram illustrating data structures300, which may be stored in the database126of the messaging server system108, according to certain examples. While the content of the database126is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database126includes message data stored within a message table302. This message data includes, for any particular one message, at least message sender data, message recipient (or receiver) data, and a payload. Further details regarding information that may be included in a message, and included within the message data stored in the message table302, are described below with reference toFIG.4. An entity table306stores entity data, and is linked (e.g., referentially) to an entity graph308and profile data316. Entities for which records are maintained within the entity table306may include individuals, corporate entities, organizations, objects, places, events, and so forth. Regardless of entity type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph308stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based, or activity-based, merely for example.
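The tiered data-sharing rule amounts to a small policy table; the following is a hypothetical sketch in which the field names paraphrase the examples given in the text.

```python
# Hypothetical sketch of the user-data sharing policy described above.

USER_DATA_POLICY = {
    "full_scale_app": {"two_dimensional_avatar"},
    "small_scale_applet": {"payment_information",
                           "two_dimensional_avatar",
                           "three_dimensional_avatar",
                           "avatar_characteristics"},
}

def shared_user_data(resource_type, requested_fields):
    allowed = USER_DATA_POLICY.get(resource_type, set())
    return set(requested_fields) & allowed       # share only what the policy permits
```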
The profile data316stores multiple types of profile data about a particular entity. The profile data316may be selectively used and presented to other users of the messaging system100, based on privacy settings specified by a particular entity. Where the entity is an individual, the profile data316includes, for example, a user name, telephone number, address, settings (e.g., notification and privacy settings), as well as a user-selected avatar representation (or collection of such avatar representations). A particular user may then selectively include one or more of these avatar representations within the content of messages communicated via the messaging system100, and on map interfaces displayed by messaging clients104to other users. The collection of avatar representations may include “status avatars,” which present a graphical representation of a status or activity that the user may select to communicate at a particular time. Where the entity is a group, the profile data316for the group may similarly include one or more avatar representations associated with the group, in addition to the group name, members, and various settings (e.g., notifications) for the relevant group. The database126also stores augmentation data, such as overlays or filters, in an augmentation table310. The augmentation data is associated with and applied to videos (for which data is stored in a video table304) and images (for which data is stored in an image table312). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a set of filters presented to a sending user by the messaging client104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters), which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client104, based on geolocation information determined by a Global Positioning System (GPS) unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device102, or the current time. Other augmentation data that may be stored within the image table312includes augmented reality content items (e.g., corresponding to applying Lenses or augmented reality experiences). An augmented reality content item may be a real-time special effect and sound that may be added to an image or a video. As described above, augmentation data includes augmented reality content items, overlays, image transformations, AR images, and similar terms refer to modifications that may be applied to image data (e.g., videos or images). This includes real-time modifications, which modify an image as it is captured using device sensors (e.g., one or multiple cameras) of a client device102and then displayed on a screen of the client device102with the modifications. This also includes modifications to stored content, such as video clips in a gallery that may be modified. 
For example, in a client device102with access to multiple augmented reality content items, a user can use a single video clip with multiple augmented reality content items to see how the different augmented reality content items will modify the stored clip. For example, multiple augmented reality content items that apply different pseudorandom movement models can be applied to the same content by selecting different augmented reality content items for the content. Similarly, real-time video capture may be used with an illustrated modification to show how video images currently being captured by sensors of a client device102would modify the captured data, Such data may simply be displayed on the screen and not stored in memory, or the content captured by the device sensors may be recorded and stored in memory with or without the modifications (or both). In some systems, a preview feature can show how different augmented reality content items will look within different windows in a display at the same time. This can, for example, enable multiple windows with different pseudorandom animations to be viewed on a display at the same time. Data and various systems using augmented reality content items or other such transform systems to modify content using this data can thus involve detection of objects (e.g., faces, hands, bodies, cats, dogs, surfaces, objects, etc.), tracking of such objects as they leave, enter, and move around the field of view in video frames, and the modification or transformation of such objects as they are tracked. In various examples, different methods for achieving such transformations may be used. Some examples may involve generating a three-dimensional mesh model of the object or objects, and using transformations and animated textures of the model within the video to achieve the transformation. In other examples, tracking of points on an object may be used to place an image or texture (which may be two dimensional or three dimensional) at the tracked position. In still further examples, neural network analysis of video frames may be used to place images, models, or textures in content (e.g., images or frames of video). Augmented reality content items thus refer both to the images, models, and textures used to create transformations in content, as well as to additional modeling and analysis information needed to achieve such transformations with object detection, tracking, and placement. Real-time video processing can be performed with any kind of video data (e.g., video streams, video files, etc.) saved in a memory of a computerized system of any kind. For example, a user can load video files and save them in a memory of a device, or can generate a video stream using sensors of the device. Additionally, any objects can be processed using a computer animation model, such as a human's face and parts of a human body, animals, or non-living things such as chairs, cars, or other objects. In some examples, when a particular modification is selected along with content to be transformed, elements to be transformed are identified by the computing device, and then detected and tracked if they are present in the frames of the video. The elements of the object are modified according to the request for modification, thus transforming the frames of the video stream. Transformation of frames of a video stream can be performed by different methods for different kinds of transformation. 
For example, for transformations of frames mostly referring to changing forms of object's elements characteristic points for each element of an object are calculated (e.g., using an Active Shape Model (ASM) or other known methods). Then, a mesh based on the characteristic points is generated for each of the at least one element of the object. This mesh is used in the following stage of tracking the elements of the object in the video stream. In the process of tracking, the mentioned mesh for each element is aligned with a position of each element. Then, additional points are generated on the mesh. A first set of first points is generated for each element based on a request for modification, and a set of second points is generated for each element based on the set of first points and the request for modification. Then, the frames of the video stream can be transformed by modifying the elements of the object on the basis of the sets of first and second points and the mesh. In such method, a background of the modified object can be changed or distorted as well by tracking and modifying the background. In some examples, transformations changing some areas of an object using its elements can be performed by calculating characteristic points for each element of an object and generating a mesh based on the calculated characteristic points. Points are generated on the mesh, and then various areas based on the points are generated. The elements of the object are then tracked by aligning the area for each element with a position for each of the at least one element, and properties of the areas can be modified based on the request for modification, thus transforming the frames of the video stream. Depending on the specific request for modification properties of the mentioned areas can be transformed in different ways. Such modifications may involve changing color of areas; removing at least some part of areas from the frames of the video stream; including one or more new objects into areas which are based on a request for modification; and modifying or distorting the elements of an area or object. In various examples, any combination of such modifications or other similar modifications may be used. For certain models to be animated, some characteristic points can be selected as control points to be used in determining the entire state-space of options for the model animation. In some examples of a computer animation model to transform image data using face detection, the face is detected on an image with use of a specific face detection algorithm (e.g., Viola-Jones). Then, an Active Shape Model (ASM) algorithm is applied to the face region of an image to detect facial feature reference points. Other methods and algorithms suitable for face detection can be used. For example, in some examples, features are located using a landmark, which represents a distinguishable point present in most of the images under consideration. For facial landmarks, for example, the location of the left eye pupil may be used. If an initial landmark is not identifiable (e.g., if a person has an eyepatch), secondary landmarks may be used. Such landmark identification procedures may be used for any such objects. In some examples, a set of landmarks forms a shape. Shapes can be represented as vectors using the coordinates of the points in the shape. One shape is aligned to another with a similarity transform (allowing translation, scaling, and rotation) that minimizes the average Euclidean distance between shape points. 
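A schematic outline of the mesh-based transformation loop described above follows; every helper function is a placeholder for a step named in the text, not a real library call.

```python
# Schematic outline of per-frame, mesh-based element transformation.

def transform_stream(frames, elements, modification_request,
                     compute_characteristic_points, build_mesh, align_mesh,
                     first_points, second_points, warp_element):
    for frame in frames:
        for element in elements:
            points = compute_characteristic_points(frame, element)   # e.g., via ASM
            mesh = build_mesh(points)                                  # one mesh per element
            mesh = align_mesh(mesh, frame, element)                    # track the element's position
            p1 = first_points(mesh, modification_request)              # first point set from the request
            p2 = second_points(p1, modification_request)               # second point set derived from it
            frame = warp_element(frame, element, mesh, p1, p2)         # modify the element in the frame
        yield frame
```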
The mean shape is the mean of the aligned training shapes. In some examples, a search for landmarks from the mean shape aligned to the position and size of the face determined by a global face detector is started. Such a search then repeats the steps of suggesting a tentative shape by adjusting the locations of shape points by template matching of the image texture around each point and then conforming the tentative shape to a global shape model until convergence occurs. In some systems, individual template matches are unreliable, and the shape model pools the results of the weak template matches to form a stronger overall classifier. The entire search is repeated at each level in an image pyramid, from coarse to fine resolution. A transformation system can capture an image or video stream on a client device (e.g., the client device102) and perform complex image manipulations locally on the client device102while maintaining a suitable user experience, computation time, and power consumption. The complex image manipulations may include size and shape changes, emotion transfers (e.g., changing a face from a frown to a smile), state transfers (e.g., aging a subject, reducing apparent age, changing gender), style transfers, graphical element application, and any other suitable image or video manipulation implemented by a convolutional neural network that has been configured to execute efficiently on the client device102. In some examples, a computer animation model to transform image data can be used by a system where a user may capture an image or video stream of the user (e.g., a selfie) using a client device102having a neural network operating as part of a messaging client104operating on the client device102. The transformation system operating within the messaging client104determines the presence of a face within the image or video stream and provides modification icons associated with a computer animation model to transform image data, or the computer animation model can be present as associated with an interface described herein. The modification icons include changes that may be the basis for modifying the user's face within the image or video stream as part of the modification operation. Once a modification icon is selected, the transform system initiates a process to convert the image of the user to reflect the selected modification icon (e.g., generate a smiling face on the user). A modified image or video stream may be presented in a graphical user interface displayed on the client device102as soon as the image or video stream is captured, and a specified modification is selected. The transformation system may implement a complex convolutional neural network on a portion of the image or video stream to generate and apply the selected modification. That is, the user may capture the image or video stream and be presented with a modified result in real-time or near real-time once a modification icon has been selected. Further, the modification may be persistent while the video stream is being captured, and the selected modification icon remains toggled. Machine taught neural networks may be used to enable such modifications. The graphical user interface, presenting the modification performed by the transform system, may supply the user with additional interaction options. Such options may be based on the interface used to initiate the content capture and selection of a particular computer animation model (e.g., initiation from a content creator user interface). 
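The landmark search summarized above can be outlined as a loop that starts from the mean shape placed by the global face detector and alternates template matching with conformance to the shape model at each pyramid level; the helpers below are placeholders, not a working Active Shape Model.

```python
# Outline of the coarse-to-fine Active Shape Model search described above;
# each helper is a placeholder for a step named in the text.

def asm_search(image_pyramid, mean_shape, face_box,
               place_shape, match_templates, conform_to_shape_model,
               converged, max_iterations=50):
    # Start from the mean shape aligned to the detected face position and size.
    shape = place_shape(mean_shape, face_box)
    for level in reversed(range(len(image_pyramid))):    # assumed coarse-to-fine ordering
        image = image_pyramid[level]
        for _ in range(max_iterations):
            tentative = match_templates(image, shape)        # adjust points by template matching
            new_shape = conform_to_shape_model(tentative)    # pool weak matches via the shape model
            if converged(shape, new_shape):
                break
            shape = new_shape
    return shape
```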
In various examples, a modification may be persistent after an initial selection of a modification icon. The user may toggle the modification on or off by tapping or otherwise selecting the face being modified by the transformation system and store it for later viewing or browse to other areas of the imaging application. Where multiple faces are modified by the transformation system, the user may toggle the modification on or off globally by tapping or selecting a single face modified and displayed within a graphical user interface. In some examples, individual faces, among a group of multiple faces, may be individually modified, or such modifications may be individually toggled by tapping or selecting the individual face or a series of individual faces displayed within the graphical user interface. A story table314stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table306). A user may create a “personal story” in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a “live story,” which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and are at a common location event at a particular time may, for example, be presented with an option, via a user interface of the messaging client104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client104, based on his or her location. The end result is a “live story” told from a community perspective. A further type of content collection is known as a “location story,” which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some examples, a contribution to a location story may require a second degree of authentication to verify that the end-user belongs to a specific organization or other entity (e.g., is a student on the university campus). As mentioned above, the video table304stores video data that, in one example, is associated with messages for which records are maintained within the message table302. Similarly, the image table312stores image data associated with messages for which message data is stored in the entity table306. The entity table306may associate various augmentations from the augmentation table310with various images and videos stored in the image table312and the video table304. Data Communications Architecture FIG.4is a schematic diagram illustrating a structure of a message400, according to some examples, generated by a messaging client104for communication to a further messaging client104or the messaging server118. The content of a particular message400is used to populate the message table302stored within the database126, accessible by the messaging server118.
Similarly, the content of a message400is stored in memory as “in-transit” or “in-flight” data of the client device102or the application servers114. A message400is shown to include the following example components:

- message identifier402: a unique identifier that identifies the message400.
- message text payload404: text, to be generated by a user via a user interface of the client device102, and that is included in the message400.
- message image payload406: image data, captured by a camera component of a client device102or retrieved from a memory component of a client device102, and that is included in the message400. Image data for a sent or received message400may be stored in the image table312.
- message video payload408: video data, captured by a camera component or retrieved from a memory component of the client device102, and that is included in the message400. Video data for a sent or received message400may be stored in the video table304.
- message audio payload410: audio data, captured by a microphone or retrieved from a memory component of the client device102, and that is included in the message400.
- message augmentation data412: augmentation data (e.g., filters, stickers, or other annotations or enhancements) that represents augmentations to be applied to the message image payload406, message video payload408, or message audio payload410of the message400. Augmentation data for a sent or received message400may be stored in the augmentation table310.
- message duration parameter414: parameter value indicating, in seconds, the amount of time for which content of the message (e.g., the message image payload406, message video payload408, message audio payload410) is to be presented or made accessible to a user via the messaging client104.
- message geolocation parameter416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. Multiple message geolocation parameter416values may be included in the payload, each of these parameter values being associated with respect to content items included in the content (e.g., a specific image within the message image payload406, or a specific video in the message video payload408).
- message story identifier418: identifier values identifying one or more content collections (e.g., “stories” identified in the story table314) with which a particular content item in the message image payload406of the message400is associated. For example, multiple images within the message image payload406may each be associated with multiple content collections using identifier values.
- message tag420: each message400may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload406depicts an animal (e.g., a lion), a tag value may be included within the message tag420that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.
- message sender identifier422: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102on which the message400was generated and from which the message400was sent.
- message receiver identifier424: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102to which the message400is addressed.
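The components listed above can be illustrated with a simple data structure. The following sketch is an assumed representation only; the field names mirror the listed components, but the types, defaults, and table keys are illustrative and not the disclosure's schema. Payload fields hold table references rather than raw media, mirroring the pointer approach described in the following paragraph.

```python
# A minimal sketch (assumed types, not the disclosure's schema) of message400.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Message400:
    message_identifier: str                          # unique id for the message
    message_sender_identifier: str                   # user/device that sent it
    message_receiver_identifier: str                 # addressee
    message_text_payload: str = ""                   # text entered by the user
    message_image_payload: Optional[str] = None      # key into the image table 312
    message_video_payload: Optional[str] = None      # key into the video table 304
    message_audio_payload: Optional[str] = None      # key into an audio store
    message_augmentation_data: Optional[str] = None  # key into augmentation table 310
    message_duration_parameter: int = 10             # seconds content stays viewable
    message_geolocation_parameter: List[Tuple[float, float]] = field(default_factory=list)
    message_story_identifier: List[str] = field(default_factory=list)  # story table 314 keys
    message_tag: List[str] = field(default_factory=list)               # e.g. ["lion"]

# Example usage with placeholder identifiers.
msg = Message400(message_identifier="m-001",
                 message_sender_identifier="user-a",
                 message_receiver_identifier="user-b",
                 message_image_payload="img-42",
                 message_tag=["lion"])
```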
The contents (e.g., values) of the various components of message400may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload406may be a pointer to (or address of) a location within an image table312. Similarly, values within the message video payload408may point to data stored within a video table304, values stored within the message augmentation data412may point to data stored in an augmentation table310, values stored within the message story identifier418may point to data stored in a story table314, and values stored within the message sender identifier422and the message receiver identifier424may point to user records stored within an entity table306.

Time-Based Access Limitation Architecture

FIG.5is a schematic diagram illustrating an access-limiting process500, in terms of which access to content (e.g., an ephemeral message502, and associated multimedia payload of data) or a content collection (e.g., an ephemeral message group504) may be time-limited (e.g., made ephemeral). An ephemeral message502is shown to be associated with a message duration parameter506, the value of which determines an amount of time that the ephemeral message502will be displayed to a receiving user of the ephemeral message502by the messaging client104. In one example, an ephemeral message502is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter506. The message duration parameter506and the message receiver identifier424are shown to be inputs to a message timer510, which is responsible for determining the amount of time that the ephemeral message502is shown to a particular receiving user identified by the message receiver identifier424. In particular, the ephemeral message502will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter506. The message timer510is shown to provide output to a more generalized ephemeral timer system202, which is responsible for the overall timing of display of content (e.g., an ephemeral message502) to a receiving user.

The ephemeral message502is shown inFIG.5to be included within an ephemeral message group504(e.g., a collection of messages in a personal story, or an event story). The ephemeral message group504has an associated group duration parameter508, a value of which determines a time duration for which the ephemeral message group504is presented and accessible to users of the messaging system100. The group duration parameter508, for example, may be the duration of a music concert, where the ephemeral message group504is a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the group duration parameter508when performing the setup and creation of the ephemeral message group504. Additionally, each ephemeral message502within the ephemeral message group504has an associated group participation parameter512, a value of which determines the duration of time for which the ephemeral message502will be accessible within the context of the ephemeral message group504. Accordingly, a particular ephemeral message502may “expire” and become inaccessible within the context of the ephemeral message group504, prior to the ephemeral message group504itself expiring in terms of the group duration parameter508.
The group duration parameter508, group participation parameter512, and message receiver identifier424each provide input to a group timer514, which operationally determines, firstly, whether a particular ephemeral message502of the ephemeral message group504will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message group504is also aware of the identity of the particular receiving user as a result of the message receiver identifier424.

Accordingly, the group timer514operationally controls the overall lifespan of an associated ephemeral message group504, as well as an individual ephemeral message502included in the ephemeral message group504. In one example, each and every ephemeral message502within the ephemeral message group504remains viewable and accessible for a time period specified by the group duration parameter508. In a further example, a certain ephemeral message502may expire, within the context of ephemeral message group504, based on a group participation parameter512. Note that a message duration parameter506may still determine the duration of time for which a particular ephemeral message502is displayed to a receiving user, even within the context of the ephemeral message group504. Accordingly, the message duration parameter506determines the duration of time that a particular ephemeral message502is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message502inside or outside the context of an ephemeral message group504.

The ephemeral timer system202may furthermore operationally remove a particular ephemeral message502from the ephemeral message group504based on a determination that it has exceeded an associated group participation parameter512. For example, when a sending user has established a group participation parameter512of 24 hours from posting, the ephemeral timer system202will remove the relevant ephemeral message502from the ephemeral message group504after the specified 24 hours. The ephemeral timer system202also operates to remove an ephemeral message group504when either the group participation parameter512for each and every ephemeral message502within the ephemeral message group504has expired, or when the ephemeral message group504itself has expired in terms of the group duration parameter508.

In certain use cases, a creator of a particular ephemeral message group504may specify an indefinite group duration parameter508. In this case, the expiration of the group participation parameter512for the last remaining ephemeral message502within the ephemeral message group504will determine when the ephemeral message group504itself expires. In this case, a new ephemeral message502, added to the ephemeral message group504with a new group participation parameter512, effectively extends the life of an ephemeral message group504to equal the value of the group participation parameter512.

Responsive to the ephemeral timer system202determining that an ephemeral message group504has expired (e.g., is no longer accessible), the ephemeral timer system202communicates with the messaging system100(and, for example, specifically the messaging client104) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message group504to no longer be displayed within a user interface of the messaging client104.
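The timing rules described above can be summarized in a small sketch: an ephemeral message within a group remains viewable only while the group duration, the message's group participation window, and its own message duration all permit it. The function below and its epoch-second timestamps are illustrative assumptions and are not the ephemeral timer system202itself.

```python
# A simplified sketch (not the ephemeral timer system 202) of the timing rules
# described above. Timestamps are assumed to be epoch seconds.
import time

def message_viewable(now, group_created_at, group_duration,
                     message_posted_at, group_participation,
                     view_started_at, message_duration):
    group_alive = now < group_created_at + group_duration          # group duration parameter 508
    in_participation_window = now < message_posted_at + group_participation  # parameter 512
    within_display_time = now < view_started_at + message_duration # message duration parameter 506
    return group_alive and in_participation_window and within_display_time

# Example: a 24-hour group, a 24-hour participation window, a 10-second view.
now = time.time()
print(message_viewable(now,
                       group_created_at=now - 3600, group_duration=86400,
                       message_posted_at=now - 600, group_participation=86400,
                       view_started_at=now - 3, message_duration=10))
```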
Similarly, when the ephemeral timer system202determines that the message duration parameter506for a particular ephemeral message502has expired, the ephemeral timer system202causes the messaging client104to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message502.

Enhanced Book Reading with AR Glasses

FIG.6illustrates a system600for enhanced book reading with AR glasses, in accordance with some examples. The system600includes wearable electronic device602and a backend604. The wearable electronic device602may be the glasses1100. The VR module621of the wearable electronic device602provides a mixed reality or augmented reality experience for the user. For example, images may be projected on the lenses or display620of the glasses1100. Additionally, the camera612, for example cameras1169ofFIG.11, captures images of the real environment and the VR module621uses this information to provide a VR interactive environment. For example, the hands of the person using the wearable electronic device602are projected on the display620and the person may interact with the virtual images presented on the display620. The display620includes display hardware and software and/or firmware to control the display. The display620is transparent so the user can see both the images presented on the display as well as the real world objects that are visible through the display.

The module executor624runs or executes modules such as the books module606on the wearable electronic device602. The module presenter626is a module that presents other modules for execution by the module executor624. The communications622is wireless communications hardware and software for communicating with the backend604. For example, the communications622includes an antenna and software or hardware to implement communication protocols such as IEEE 802.11 or Bluetooth®. The communications622may include processing circuitry that is configured to implement one or more communication protocols and control transceiver circuitry that is coupled to the processing circuitry and an antenna.

The reading module606adds enhancements for reading physical books. The reading module606includes or may call code detection module608, camera module614, display module616, navigation module618, and code module fetcher610. The code detection module608takes an image and determines whether there is a depiction of a code613such as code902,1002, and1004within the image611or whether the image611contains a code613, which may be termed a printed code, QR code, Snapcode®, or another name. The camera module614controls or causes the camera612to capture an image and make it available for other modules. The camera612is camera hardware and software or firmware to control the camera612. The display module616displays or causes to be displayed text and images for a user of the wearable electronic device602. The navigation module618manages the execution of code modules630, which may reside on the wearable electronic device602and/or the backend604. The code modules630correspond to codes613that the code detection module608detects. The code to code module map609provides a mapping from codes613to code modules630. The code modules630include a recommended sleep time or duration between sections of reading material, e.g., pages, in accordance with some embodiments. The reading module606causes the camera module614to capture an image611. The reading module606causes the code detection module608to determine if there are codes613in the image611.
If there are codes613in the image611, then the reading module606determines which code module630to cause the module executor624to execute in response to the code613being present in the image611. In some examples, the reading module606decodes the code613and sends the information encoded in the code613to the backend604. The backend604determines which code module630should be executed based on the information encoded in the code613. In some examples, the code613indicates an entire piece of reading material, such as an entire book, which corresponds to a number of code modules630.

In some examples, the reading module606refrains from turning on display hardware unless the code613is identified or a physical user interface item629is selected. The camera612, display620, and communications622have a low-energy consumption mode where either a portion of the circuitry has power lowered or turned off or the entire circuitry has its power lowered or turned off. In some examples, the camera612, display620, and communications622each include at least two different circuits, where one circuit has power kept to it for turning the remaining circuitry on and off. The camera612, display620, and communications622can be placed in a low power or lower power mode by turning off the circuitry except for the low power circuit. The low power mode puts the hardware in a low energy consumption mode.

In some examples, the code modules630provide a mixed reality experience where an initial location of virtual reality objects is related to the objects depicted on the section of the reading material, such as a page of the book, in the image611. For example, referring toFIG.9, the code module630corresponding to code902initially depicts a virtual reality dragon in the location of the dragon906where the VR or AR dragon then flies off the page of the book. The code module630may access the camera module614to provide a live image of the section of the reading material, e.g., a page of a book, which is augmented with VR objects and interactive UI elements or items. In other embodiments, the code module630presents AR objects and interactive UI elements or items so that the user sees them on a screen as well as seeing the real world objects through the screen. In some embodiments, the AR objects and interactive UI elements or items are projected on the eye of the user.

In response to detecting that the code613corresponds to a piece of reading material, e.g., a book, the code module fetcher610checks if the code modules630associated with the code are preloaded to the wearable electronic device602. If so, the code module fetcher610may access the code modules630locally from the wearable electronic device602. Alternatively, if the code modules630are not preloaded, the code module fetcher610may make a request for the code modules630from the backend604. In some examples, the code module fetcher610requests some or all of the code modules630for a piece of reading material, e.g., a book, when the code module fetcher610determines that a particular book is being read, which may be inferred from the image611comprising a code613that indicates the piece of reading material, e.g., the book. In some examples, the code module fetcher610waits to load a code module630from the backend604until the code613corresponding to the code module630has been detected in the image611. The code module fetcher610may cache code modules630at the wearable electronic device602based on knowing which piece of reading material, e.g., book, is being read.
For example, the code module fetcher610may request the next several code modules630after a code module630corresponding to a code613that has been detected. The module executor624takes the selected code module630and executes it. The navigation module618may be executed with the code module630to manage the code module630exiting and returning control to the reading module606. The code module630provides a mixed reality experience for the reader of a piece of reading material, e.g., a book. The piece of reading material, e.g., book, may be physical, such as paper, or may be reading material that is being read electronically. The mixed reality experience includes six degrees of freedom in accordance with some examples. The mixed reality experience includes interactive objects such as user interface objects. The user may terminate the code module630or the code module630may timeout. The navigation module618determines when there is a timeout and returns control to the reading module606when there is a timeout.

User data627is collected for how the user is reading the reading material, e.g., book. Some of the data that may be collected is how often a section of reading material is changed, e.g., a page is turned for a book, whether the user interacted with a code module630, how often the user reads, and so forth. The user data627is stored collectively to protect privacy, in accordance with some examples. The user data627is encrypted to protect privacy, in accordance with some examples.

The reading module606detects when the reading material, e.g., book, is no longer being read based on the images611. For example, the images611may no longer include the reading material, e.g., book, which may be used to infer that the reading material, e.g., book, is no longer being read. The image611may be of a different piece of reading material, e.g., a different book, in which case the reading module606may determine to exit or to scan the image611for codes613. The reading module606may be executed or selected by the user. The user interface item629is a means for the user of the wearable electronic device602to interact with the wearable electronic device602. In some examples, the user interface item629is an external button.

A reading mode, e.g., a book reading mode, can be selected from a module carousel1214and causes the remaining user interface to stop being displayed. The wearable electronic device602enters a mode where the display620is turned off to conserve the battery631. The reading module606is executing and non-essential services are shut down. The sleep module615will periodically request that the camera module614control the camera612to capture an image611or frame. In some examples, the sleep module615will request that the camera module614capture an image611from the camera every 15 seconds, which corresponds to ⅛th of the time spent on average on a page of text of a book. The code detection module608then checks the image611for codes613. If no codes613are found, the image611is discarded. If the code detection module608detects a code613, then the code to code module map609is consulted to see if a corresponding code module630is loaded in the wearable electronic device602. If the code module630is not loaded, then the code module fetcher610manages retrieving the code module630from the backend604via the communications622by sending packets or communications628to the backend604.
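The capture, detect, map, fetch, and execute cycle described above may be sketched as follows. The class and its callables (camera, detect_code, backend.fetch) are hypothetical names used only to illustrate the flow among the camera module614, code detection module608, code to code module map609, code module fetcher610, and backend604; they are not the disclosed API.

```python
# An illustrative sketch (names are assumptions, not the disclosure's API) of
# the periodic capture/detect/fetch/execute cycle: the device wakes roughly
# once per page, looks for a printed code, resolves it to a code module, and
# loads that module from a local cache or the backend before executing it.
import time

class ReadingLoop:
    def __init__(self, camera, detect_code, code_to_module, backend,
                 wake_interval=15.0):
        self.camera = camera                  # callable returning an image
        self.detect_code = detect_code        # image -> code id or None
        self.code_to_module = code_to_module  # dict: code id -> module id
        self.backend = backend                # object with fetch(module_id)
        self.cache = {}                       # locally cached code modules
        self.wake_interval = wake_interval    # ~ time to read one page

    def get_module(self, module_id):
        if module_id not in self.cache:       # cache miss: ask the backend
            self.cache[module_id] = self.backend.fetch(module_id)
        return self.cache[module_id]

    def run_once(self):
        image = self.camera()
        code = self.detect_code(image)
        if code is None:
            return None                       # discard the frame, keep sleeping
        module_id = self.code_to_module.get(code)
        if module_id is None:
            return None
        module = self.get_module(module_id)
        return module.execute()               # the AR experience runs here

    def run(self):
        while True:
            self.run_once()
            time.sleep(self.wake_interval)    # sleep roughly one page's worth
```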
In some examples, the sleep module615places the hardware of the AR glasses into a sleep state for a sleep duration or sleep time where user interface item629selections are still recognized and where a timer or interrupt circuit is still powered to wake the wearable electronic device602. In some examples, the sleep module615places the display620and camera612hardware into a low power consumption state. The sleep time or sleep duration is based on an amount of time to read a section of reading material, e.g., a page of a book, with a goal of waking once for every section of the reading material, e.g., page of the book. The sleep time or sleep duration is an estimate of the reading time of a section of reading material, e.g., a page of the book, and may be from less than a second to several minutes. The sleep time or sleep duration may be adjusted depending on the reading habits of the user.

The sleep module615may determine to enter a sleep mode based on the code module630reporting an explicit user input to exit the code module630, a timeout period where the user has not interacted with the code module630for a predetermined duration of time, or the code module630detecting that the user has turned to a different section of reading material, e.g., page, or is no longer focused on the book. The code module630may use the camera612during operation to provide a mixed reality experience and may be responsible for ensuring that the user is still focused on the same section of the reading material, e.g., page of the book.

The sleep module615puts the communications622into a sleep mode after the code modules630have been retrieved for a piece of reading material, e.g., a book, until the reading mode, e.g., book reading mode, is exited or another piece of reading material, e.g., a book, is determined to be within an image611. The sleep module615may power down the communications622and then power up the communications622when the reading mode, e.g., book reading mode, is exited. In some examples, the communications622have a low power mode where power is maintained to a portion of the circuitry to permit high-priority communications but the remainder of the communications622must be powered up prior to use.

If a code module630is matched to the code613, then the display module616turns the display620on and the navigation module618launches the code module630. The reading module606is paused while the code module630executes or runs. When the user is finished with the code module630, the navigation module618terminates the code module630and returns to the reading module606. The user may indicate they are done with the code module630via an interaction with the code module630such as a hand gesture, a timeout, or another interaction such as with the user interface item629.

The sleep module615may use different times to wake up and request that the camera module614capture an image. The sleep module615is configured to wake up when the user interface item629is selected, in accordance with some examples. For example, a button on a side of the glasses1100may wake up the sleep module615. For example, a wake-up interval of fifteen seconds may be too long for some users. The sleep module615may adjust the time for waking up based on user data627. In some examples, the code detection module608determines a section of reading material, e.g., a page of the book, that is being read and stores this in the user data627.
If the page numbers are not sequential, for example, a first page is page 2 and a second page is page 4, then the sleep module615may reduce the wake-up time to 5 seconds to ensure that every page is captured by an image611to search for codes613. In some examples, the code613is associated with a page of the book so that the sleep module615can determine if sections of reading material, e.g., pages, have been skipped from being captured by the camera module614. The sleep module615reduces the sleep time if pages are being skipped.

The sleep module615enables the wearable electronic device602to sleep and be more energy-efficient since an image611is captured only once every 15 seconds or so, and the wearable electronic device602can go into a sleep mode when it is not capturing and analyzing images611. Having the code detection module608resident in the wearable electronic device602means that the wearable electronic device602does not have to send the image611over the communications622to be analyzed, which reduces network usage and may save time and battery631usage. In some examples, when the code module630is being executed other user interfaces are not presented, which may save battery631usage. The wearable electronic device602may enter a passive mode where the user would have to exit the code module630to access additional user interface options other than a physical user interface item629such as a button. The code module630may have a user interface item to exit or terminate the code module630, such as a press of a button when the user interface item629is a button, or a hand gesture. In some examples, while the code module630is executing, images611are captured, and the code module630is configured to detect a change in the section of the reading material, e.g., a turn of the page, which terminates the code module630and causes another image611to be captured and analyzed by the code detection module608for codes613.

In an offline mode of the reading module606, a target piece of reading material, e.g., a target book, may have a special code613that indicates that all of the code modules630of the target book should be downloaded to the wearable electronic device602so that the communications622does not have to be used during the reading of the target book. In some examples, the wearable electronic device602has an option that when a code613is read from a book, then the code module fetcher610retrieves all the code modules630for that book so that the communications622does not have to be used during the reading of the book. The communications622is powered off until a new book is detected or until the book mode is exited. A new book may be detected if the code613indicates a different book than a current book that is being read. The term book is used here, but it is understood that other reading materials may be used.

In some examples, the book mode ofFIG.12may be for specific reading materials, e.g., books. For example, the highlighted1210may be for a specific book with a “<Book title>”. The icon that is highlighted1210may be a special icon for a specific book. Selecting the highlighted1210would cause a reading module606to be executed that is specific to that book. The code modules630are loaded into the wearable electronic device602so that the communications622do not have to be used during the reading of the specific book. The backend604is a server in the messaging server system108or a client device102.
In some examples, the backend604is a client device102that passes the communications from the wearable electronic device602to the server in the messaging server system108and sends the code modules630to the wearable electronic device602. The client device102acts as a cache where the client device102recognizes that a new piece of reading material, e.g., book, is being read. The client device102requests all the code modules630from the server and acts as the backend604. In some examples, the client device102acting as a cache enables a lower energy communication protocol to be used.

FIG.7illustrates a flow diagram700of the operation of the wearable electronic device602, in accordance with some examples. The flow diagram700begins with module presentation702. The module presenter626presents different modules or applications from which the user may select. For example, module carousel1214illustrated inFIG.12is presented on the display620of a mobile device1202. The wearable electronic device602, e.g., AR glasses, may be paired with another device such as a client device (e.g., mobile device1202), and the controls can be set through use of the client device (e.g., as shown inFIG.12). In other embodiments the controls, e.g., the module carousel1214, are displayed on the wearable electronic device602, e.g., displayed on the screen of the AR glasses.

The flow diagram700continues at reading module in focus704. For example, the user moves the reading module606to the highlighted1210position of the module carousel1214. The flow diagram700continues with the reading module highlighted706. The flow diagram700continues with reading module selected708. For example, the finger1212selects the highlighted1210. The flow diagram700continues with books module714, which is now executing. For example, the reading module606may be executed by the module executor624, which may be called by the module presenter626in response to the reading module606being selected by the finger1212of a user.

The flow diagram700continues with process sections of reading material, e.g., pages, and look for codes716, which is performed by the code detection module608. For example, the sleep module615may have the camera module614capture an image611. The code detection module608looks for a code613. The reading module606displays724that a code was found using the display module616. The code module fetcher722fetches a code module630corresponding to the found code613. The navigation module618manages the execution of the code module630. The code module630uses the display module616to present an interactive augmented reality experience to the user. The AR module621is used to provide user interface items that are interactive. The module executor624is used by the navigation module618to execute the code module630. The navigation module618detects that there is an exit732from the code module630and moves to a state of module exit726where control is returned to the reading module606. In some examples, the reading module606continues to run during execution of the code module630. The reading module606captures images611periodically while the code module630executes and determines an identifier for a section of reading material, e.g., a page number. The exit732is triggered by a change in the identifier for the section of the reading material, e.g., a change in page number, being detected by the reading module606.

FIG.8illustrates a method800of enhanced reading with AR glasses, in accordance with some examples.
The method800is performed by the reading module606or, more specifically, the sleep module615, in accordance with some examples. The method800starts in a sleep state802where a reading material state, e.g., a book state, is already entered. For example, the highlighted1210book icon is selected as illustrated inFIG.12. The sleep state802periodically checks if it is a wake time804. For example, the sleep state802may wake every 1 to 30 seconds. If it is not yet wake time, then the sleep module615returns to the sleep state802. In some examples, an interrupt is used to wake the sleep module615based on a time set by the sleep module615. If it is a wake time804, then the sleep module615takes and processes an image806as described herein. In some examples, the sleep module615is awakened by a manual waken808. For example, a user may press a button, which may be a user interface item629.

After the take and process an image806state, the sleep module615determines whether to exit812. For example, the image captured in take and process an image806may indicate that the user is no longer reading a reading material, e.g., a book. If the sleep module615determines to exit, then the sleep module615returns to the reading module810. For example, the reading module606may be called to determine what action should be taken next. In some examples, if the reading module606determines that the identifier for the section of the reading material, e.g., a page number of a page, in the image611is the same as in a previous image, then the reading module606reenters the sleep state802. In some examples, adjust sleep time820is performed before reentering the sleep state802.

If the determination is not to exit, then the sleep module615determines whether to enter a continuous state814. For example, the image may indicate that the user is flipping through the book quickly, so the sleep module615determines to enter a continuous state816where images are captured relatively frequently, such as every one second. When the sleep module615determines to exit the continuous state816, the sleep module615determines whether to adjust sleep time818. For example, if the image indicates that sections of the reading material, e.g., pages, have been skipped, then the sleep module615may determine to adjust sleep time820. In another example, the sleep module615may determine that the image is of a same page as one or more previous images and determine to adjust sleep time820. If the sleep time is not to be adjusted, then the state returns to sleep state802. Otherwise, the state is to adjust sleep time820, which uses the user data627.

The sleep time may be increased or decreased based on the reading speed of the user as determined by keeping track of which page the user is on in the book based on the captured images. In some examples, the sleep time or sleep duration is adjusted to an estimated time for the user to read a section of reading material, e.g., a page of the book, based on the user data627including page numbers of the book and timestamps of when images of the pages with the page numbers were captured. In some examples, the sleep time is increased by a fraction of a second if an image is of the same page as a previous image. In some examples, the sleep time is decreased by a fraction of a second if an image is of a different page than a previous image. In this way the sleep duration is constantly being adjusted to accommodate the reading speed of the wearer of the wearable electronic device602.
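The sleep-duration adjustment described above may be sketched as follows. The function, its step size, and its bounds are illustrative assumptions and not the sleep module615itself; it simply nudges the interval toward the wearer's observed per-page reading time, shortens it when pages appear to be skipped, and lengthens it slightly when consecutive wakes see the same page.

```python
# A small, illustrative sketch (not the sleep module 615) of adjusting the
# sleep duration from observed page numbers and, when available, an estimated
# per-page reading time derived from user data (page numbers + timestamps).
def adjust_sleep_time(sleep_time, prev_page, curr_page, per_page_estimate=None,
                      step=0.5, min_sleep=1.0, max_sleep=120.0):
    if per_page_estimate is not None:
        sleep_time = per_page_estimate        # use the estimated reading time
    elif curr_page == prev_page:
        sleep_time += step                    # waking too often: same page seen
    elif curr_page - prev_page > 1:
        sleep_time = min(sleep_time, 5.0)     # pages skipped: wake sooner
    else:
        sleep_time -= step                    # page changed: waking slightly late
    return max(min_sleep, min(max_sleep, sleep_time))

# Example: the wake saw page 4 after page 2, so the interval drops to 5 seconds.
print(adjust_sleep_time(15.0, prev_page=2, curr_page=4))
```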
In some examples, the sleep duration is from one second to one hundred and twenty seconds.

FIG.9illustrates a book900with codes, in accordance with some examples. The book900includes a page number904of page 5, an illustration of a dragon906, and a code902. The code902is identified by the code detection module608from an image611captured by the camera module614. The code to code module map609determines a code module630corresponding to the identified code902. The code module fetcher610retrieves the appropriate code module630. The navigation module618and module executor624execute the code module630. In this case the code module630presents a three-dimensional dragon on the display620of the wearable electronic device602. The code module630may include a mixed reality experience where the wearer of the wearable electronic device602can interact with a virtual reality dragon. A book is illustrated inFIG.9but it is understood that other types of reading material may be used.

FIG.10illustrates a book1000with codes, in accordance with some examples. The book1000includes page number1006of page 5 and page number1008of page 6, code1002, and code1004. The code1002and code1004are identified by the code detection module608from an image611captured by the camera module614. The code to code module map609determines a code module630corresponding to the identified code1002and a code module630corresponding to the identified code1004. The code module fetcher610retrieves the appropriate code modules630. In some examples, the code module630corresponding to code1002is executed first and then the code module630corresponding to code1004. In some examples, a menu or option is presented to the wearer of the wearable electronic device602regarding which code module630the wearer would like to execute first, or whether to execute no code modules630. The navigation module618and module executor624execute the selected code module630. In this case the code module630presents further instruction to aid in teaching the properties of gases. The code module630may include a mixed reality experience where the wearer of the wearable electronic device602can interact with virtual reality gases. A book is illustrated inFIG.10but it is understood that other types of reading material may be used.

FIG.11illustrates examples of a wearable electronic device in the form of glasses1100, in accordance with some examples. The glasses1100are an article of eyewear constituted of electronics, which operate within a network system for communicating image and video content.FIG.11illustrates a front perspective view of the glasses1100. In some examples, the wearable electronic device is termed AR glasses. The glasses1100can include a frame1132made from any suitable material such as plastic or metal, including any suitable shape memory alloy. The frame1132can have a front piece1133that can include a first or left lens, display, or optical element holder1136and a second or right lens, display, or optical element holder1137connected by a bridge1138. The front piece1133additionally includes a left end portion1141and a right end portion1142. A first or left optical element1144and a second or right optical element1143can be provided within respective left and right optical element holders1136,1137. Each of the optical elements1143,1144can be a lens, a display, a display assembly, or a combination of the foregoing.
In some examples, the glasses1100are provided with an integrated near-eye display mechanism that enables, for example, display to the user of preview images for visual media captured by cameras1169of the glasses1100. The frame1132additionally includes a left arm or temple piece1146and a right arm or temple piece1147coupled to the respective left and right end portions1141,1142of the front piece1133by any suitable means such as a hinge (not shown), so as to be coupled to the front piece1133, or rigidly or fixedly secured to the front piece1133so as to be integral with the front piece1133. Each of the temple pieces1146and1147can include a first portion1151that is coupled to the respective end portion1141or1142of the front piece1133and any suitable second portion1152, such as a curved or arcuate piece, for coupling to the ear of the user. In one example, the front piece1133can be formed from a single piece of material, so as to have a unitary or integral construction. In one example, the entire frame1132can be formed from a single piece of material so as to have a unitary or integral construction.

The glasses1100include a computing device, such as a computer1161, which can be of any suitable type so as to be carried by the frame1132and, in one example, of a suitable size and shape, so as to be at least partially disposed in one or more of the temple pieces1146and1147. In one example, the computer1161has a size and shape similar to the size and shape of one of the temple pieces1146,1147and is thus disposed almost entirely if not entirely within the structure and confines of such temple pieces1146and1147. In one example, the computer1161can be disposed in both of the temple pieces1146,1147. The computer1161can include one or more processors with memory, wireless communication circuitry, and a power source. The computer1161comprises low-power circuitry, high-speed circuitry, location circuitry, and a display processor. Various other examples may include these elements in different configurations or integrated together in different ways. Additional details of aspects of the computer1161may be implemented as described with reference to the description that follows.

The computer1161additionally includes a battery1162or other suitable portable power supply. In one example, the battery1162is disposed in one of the temple pieces1146or1147. In the glasses1100shown inFIG.11, the battery1162is shown as being disposed in the left temple piece1146and electrically coupled using a connection1174to the remainder of the computer1161disposed in the right temple piece1147. One or more input and output devices can include a connector or port (not shown) suitable for charging a battery1162accessible from the outside of the frame1132, a wireless receiver, transmitter, or transceiver (not shown), or a combination of such devices.

The glasses1100include digital cameras1169. Although two cameras1169are depicted, other examples contemplate the use of a single or additional (i.e., more than two) cameras1169. For ease of description, various features relating to the cameras1169will further be described with reference to only a single camera1169, but it will be appreciated that these features can apply, in suitable examples, to both cameras1169. The digital cameras1169are the camera612ofFIG.6, which may include two or more cameras. In various examples, the glasses1100may include any number of input sensors or peripheral devices in addition to the cameras1169.
The front piece1133is provided with an outward-facing, forward-facing, front, or outer surface1166that faces forward or away from the user when the glasses1100are mounted on the face of the user, and an opposite inward-facing, rearward-facing, rear, or inner surface1167that faces the face of the user when the glasses1100are mounted on the face of the user. Such sensors can include inward-facing video sensors or digital imaging modules such as cameras1169that can be mounted on or provided within the inner surface1167of the front piece1133or elsewhere on the frame1132so as to be facing the user, and outward-facing video sensors or digital imaging modules such as the cameras1169that can be mounted on or provided with the outer surface1166of the front piece1133or elsewhere on the frame1132so as to be facing away from the user. Such sensors, peripheral devices, or peripherals can additionally include biometric sensors, location sensors, accelerometers, or any other such sensors. In some examples, projectors (not illustrated) are used to project images on the inner surface of the optical elements1143,1144(or lenses) to provide a mixed reality or augmented reality experience for the user of the glasses1100. The glasses1100further include an example of a camera control mechanism or user input mechanism comprising a camera control button mounted on the frame1132for haptic or manual engagement by the user. The camera control button provides a bi-modal or single-action mechanism in that it is disposable by the user between only two conditions, namely an engaged condition and a disengaged condition. In this example, the camera control button is a pushbutton that is by default in the disengaged condition, being depressible by the user to dispose it to the engaged condition. Upon release of the depressed camera control button, it automatically returns to the disengaged condition. In other examples, the single-action input mechanism can instead be provided by, for example, a touch-sensitive button comprising a capacitive sensor mounted on the frame1132adjacent to its surface for detecting the presence of a user's finger, to dispose the touch-sensitive button to the engaged condition when the user touches a finger to the corresponding spot on the outer surface1166of the frame1132. It will be appreciated that the above-described camera control button and capacitive touch button are but two examples of a haptic input mechanism for single-action control of the camera1169, and that other examples may employ different single-action haptic control arrangements. The computer1161is configured to perform the methods described herein. The computer1161is an example of a wearable electronic device602, in accordance with some examples. In some examples, the computer1161is coupled to one or more antennas for reception of signals from a GNSS and circuitry for processing the signals where the antennas and circuitry are housed in the glasses1100. In some examples, the computer1161is coupled to one or more wireless antennas and circuitry for transmitting and receiving wireless signals where the antennas and circuitry are housed in the glasses1100. In some examples, there are multiple sets of antennas and circuitry housed in the glasses1100. In some examples, the antennas and circuitry are configured to operate in accordance with a communication protocol such as Bluetooth™, Low-energy Bluetooth™, IEEE 802, IEEE 802.11az/be, and so forth. In some examples, PDR sensors are housed in glasses1100and coupled to the computer1161. 
In some examples, the glasses1100are VR headsets.

FIG.12illustrates a user interface1200for module selection, in accordance with some examples. Illustrated is a mobile device1202with a camera1204and screen1206. A module for the wearable electronic device602is being selected on the mobile device1202, which may be a client device102. The module carousel1214is a user interface item where modules that are available for the user to select may be scrolled through by swiping the finger1212. The module in the center is highlighted. Here the reading mode, e.g., a book mode, is in the center of the carousel and is highlighted1210. The highlighted1210module has an explanation of the module1208, which here is plain text. The explanation of module1208may be an animation, an image, text, an interactive mixed reality experience, or something else to explain the functions of the highlighted1210module. The mobile device1202accepts the selection of the book mode and sends it to the wearable electronic device602. In some examples, the wearable electronic device602presents the user interface1200. For example, the module carousel1214may be a mixed reality or virtual reality user interface item and the finger1212may be a rendered image of the user's finger. The back interface item1216exits the user interface1200.

FIG.13illustrates a method1300for enhanced reading with AR glasses, in accordance with some examples. The method1300begins at operation1302with entering a reading mode. For example, referring toFIG.7, when reading module is selected708, then the reading mode, e.g., book mode, is entered and the wearable electronic device602runs the reading module606. The method1300continues at operation1304with capturing a first image of a section of reading material. For example, the camera module614causes the camera612to capture an image611such as book1000. The method1300continues at operation1306with identifying a code within the first image. For example, the code detection module608detects code613within the image611, such as with the image611being book1000and the code613being code1002.

The method1300continues at operation1308with identifying a code module corresponding to the code, where the code module includes content corresponding to the section of the reading material. For example, the code to code module map609takes the code613and maps it to a corresponding code module630. The code module fetcher610fetches the corresponding code module630. For example, for code1002ofFIG.10, the code module630is a code module that provides a mixed reality experience for learning about the properties of gases. The method1300continues at operation1310with executing the code module, where the code module provides a virtual reality object related to an object depicted on the section of the reading material, e.g., the page of the book. For example, the module executor624with the navigation module618executes the corresponding code module630to provide the mixed reality experience of the gases.

The method1300continues at operation1312with entering a sleep mode for a sleep duration, where the sleep duration is based on an estimated reading time duration of a section of reading material, e.g., a page of the book. For example, a time duration may have passed where the user was inactive with the code module630, the user may have turned to a different section of the reading material, e.g., turned the page of the book, or the user may provide an explicit user input such as a hand wave to terminate the code module630.
The method1300continues at operation1314with reentering the reading mode after the sleep duration. The sleep module615places the wearable electronic device602into a sleep state802upon termination of the code module630. The method1300continues at operation1316with capturing a second image. For example, the sleep module615wakes up the wearable electronic device602and begins the process anew by capturing a new image611to determine if a change in the section of the reading material, e.g., a page change, has occurred and whether codes613are included in the image611.

The method1300may include one or more additional operations. Operations of method1300may be performed in a different order. One or more of the operations of method1300may be optional. The method1300may be performed by the client device102, the VR wearable electronic device602, and/or the wearable electronic device in the form of the VR glasses1100. Portions of the functionality may be performed on a server computer or host computer.

The term book may be used as an example, but it should be understood that other reading material may be used, such as newspapers, menus, magazines, pamphlets, instructions, mail, and so forth. The term page may be used as an example, but it should be understood that other sections of reading material may be used, such as a leaf of a menu, a portion of a billboard, and so forth. The term page is used to represent a section of reading material that a user may be attentive to.

Machine Architecture

FIG.14is a diagrammatic representation of the machine1400within which instructions1410(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1400to perform any one or more of the methodologies discussed herein may be executed. For example, the instructions1410may cause the machine1400to execute any one or more of the methods described herein. The instructions1410transform the general, non-programmed machine1400into a particular machine1400programmed to carry out the described and illustrated functions in the manner described. The machine1400may operate as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1400may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1400may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smartwatch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1410, sequentially or otherwise, that specify actions to be taken by the machine1400. Further, while only a single machine1400is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1410to perform any one or more of the methodologies discussed herein. The machine1400, for example, may comprise the client device102or any one of a number of server devices forming part of the messaging server system108.
In some examples, the machine1400may also comprise both client and server systems, with certain operations of a particular method or algorithm being performed on the server-side and with certain operations of the particular method or algorithm being performed on the client-side.

The machine1400may include processors1404, memory1406, and input/output (I/O) components1402, which may be configured to communicate with each other via a bus1440. In an example, the processors1404(e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) Processor, a Complex Instruction Set Computing (CISC) Processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor1408and a processor1412that execute the instructions1410. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. AlthoughFIG.14shows multiple processors1404, the machine1400may include a single processor with a single core, a single processor with multiple cores (e.g., a multi-core processor), multiple processors with a single core, multiple processors with multiple cores, or any combination thereof.

The memory1406includes a main memory1414, a static memory1416, and a storage unit1418, each accessible to the processors1404via the bus1440. The main memory1414, the static memory1416, and the storage unit1418store the instructions1410embodying any one or more of the methodologies or functions described herein. The instructions1410may also reside, completely or partially, within the main memory1414, within the static memory1416, within machine-readable medium1420within the storage unit1418, within at least one of the processors1404(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1400.

The I/O components1402may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1402that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones may include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1402may include many other components that are not shown inFIG.14. In various examples, the I/O components1402may include user output components1426and user input components1428. The user output components1426may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth.
The user input components1428may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or another pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location and force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.

In further examples, the I/O components1402may include biometric components1430, motion components1432, environmental components1434, or position components1436, among a wide array of other components. For example, the biometric components1430include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye-tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components1432include acceleration sensor components (e.g., accelerometer), gravitation sensor components, and rotation sensor components (e.g., gyroscope). The environmental components1434include, for example, one or more cameras (with still image/photograph and video capabilities), illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment.

With respect to cameras, the client device102may have a camera system comprising, for example, front cameras on a front surface of the client device102and rear cameras on a rear surface of the client device102. The front cameras may, for example, be used to capture still images and video of a user of the client device102(e.g., “selfies”), which may then be augmented with augmentation data (e.g., filters) described above. The rear cameras may, for example, be used to capture still images and videos in a more traditional camera mode, with these images similarly being augmented with augmentation data. In addition to front and rear cameras, the client device102may also include a 360° camera for capturing 360° photographs and videos. Further, the camera system of a client device102may include dual rear cameras (e.g., a primary camera as well as a depth-sensing camera), or even triple, quad, or penta rear camera configurations on the front and rear sides of the client device102. These multiple camera systems may include a wide camera, an ultra-wide camera, a telephoto camera, a macro camera, and a depth sensor, for example.
The position components1436include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1402further include communication components1438operable to couple the machine1400to a network1422or devices1424via respective coupling or connections. For example, the communication components1438may include a network interface component or another suitable device to interface with the network1422. In further examples, the communication components1438may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1424may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB). Moreover, the communication components1438may detect identifiers or include components operable to detect identifiers. For example, the communication components1438may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1438, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. The various memories (e.g., main memory1414, static memory1416, and memory of the processors1404) and storage unit1418may store one or more sets of instructions and data structures (e.g., software) embodying or used by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions1410), when executed by processors1404, cause various operations to implement the disclosed examples. The instructions1410may be transmitted or received over the network1422, using a transmission medium, via a network interface device (e.g., a network interface component included in the communication components1438) and using any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions1410may be transmitted or received using a transmission medium via a coupling (e.g., a peer-to-peer coupling) to the devices1424.
Software Architecture
FIG.15is a block diagram1500illustrating a software architecture1504, which can be installed on any one or more of the devices described herein. The software architecture1504is supported by hardware such as a machine1502that includes processors1520, memory1526, and I/O components1538. In this example, the software architecture1504can be conceptualized as a stack of layers, where each layer provides a particular functionality. 
The software architecture1504includes layers such as an operating system1512, libraries1510, frameworks1508, and applications1506. Operationally, the applications1506invoke API calls1550through the software stack and receive messages1552in response to the API calls1550. The operating system1512manages hardware resources and provides common services. The operating system1512includes, for example, a kernel1514, services1516, and drivers1522. The kernel1514acts as an abstraction layer between the hardware and the other software layers. For example, the kernel1514provides memory management, processor management (e.g., scheduling), component management, networking, and security settings, among other functionality. The services1516can provide other common services for the other software layers. The drivers1522are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1522can include display drivers, camera drivers, BLUETOOTH® or BLUETOOTH® Low Energy drivers, flash memory drivers, serial communication drivers (e.g., USB drivers), WI-FI® drivers, audio drivers, power management drivers, and so forth. The libraries1510provide a common low-level infrastructure used by the applications1506. The libraries1510can include system libraries1518(e.g., C standard library) that provide functions such as memory allocation functions, string manipulation functions, mathematic functions, and the like. In addition, the libraries1510can include API libraries1524such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as Moving Picture Experts Group-4 (MPEG4), Advanced Video Coding (H.264 or AVC), Moving Picture Experts Group Layer-3 (MP3), Advanced Audio Coding (AAC), Adaptive Multi-Rate (AMR) audio codec, Joint Photographic Experts Group (JPEG or JPG), or Portable Network Graphics (PNG)), graphics libraries (e.g., an OpenGL framework used to render graphic content in two dimensions (2D) and three dimensions (3D) on a display), database libraries (e.g., SQLite to provide various relational database functions), web libraries (e.g., WebKit to provide web browsing functionality), and the like. The libraries1510can also include a wide variety of other libraries1528to provide many other APIs to the applications1506. The frameworks1508provide a common high-level infrastructure that is used by the applications1506. For example, the frameworks1508provide various graphical user interface (GUI) functions, high-level resource management, and high-level location services. The frameworks1508can provide a broad spectrum of other APIs that can be used by the applications1506, some of which may be specific to a particular operating system or platform. In an example, the applications1506may include a home application1536, a contacts application1530, a browser application1532, a reader application1534, a location application1542, a media application1544, a messaging application1546, a game application1548, and a broad assortment of other applications such as a third-party application1540. The applications1506are programs that execute functions defined in the programs. Various programming languages can be employed to create one or more of the applications1506, structured in a variety of manners, such as object-oriented programming languages (e.g., Objective-C, Java, or C++) or procedural programming languages (e.g., C or assembly language). 
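To make the layering concrete, the following sketch (a hypothetical illustration only; the class and method names are assumptions and do not appear in the disclosure) traces a single application-level call down through a framework, a library, and an operating-system service to a driver, and shows a message returned back up the stack, in the manner described above.

```python
# Hypothetical sketch of the layered call flow described above; all names are illustrative.

class Driver:
    """Bottom layer: interfaces with the underlying hardware (stubbed here)."""
    def write_frame(self, pixels: bytes) -> None:
        print(f"driver: wrote {len(pixels)} bytes to the display")

class OperatingSystem:
    """Manages hardware resources and provides common services."""
    def __init__(self) -> None:
        self.display_driver = Driver()
    def draw(self, pixels: bytes) -> None:
        self.display_driver.write_frame(pixels)

class GraphicsLibrary:
    """Low-level infrastructure, e.g. turning text or shapes into pixels."""
    def __init__(self, os: OperatingSystem) -> None:
        self.os = os
    def render_text(self, text: str) -> None:
        pixels = text.encode("utf-8")  # stand-in for real rasterization
        self.os.draw(pixels)

class UiFramework:
    """High-level GUI infrastructure used by applications."""
    def __init__(self, lib: GraphicsLibrary) -> None:
        self.lib = lib
    def show_message(self, message: str) -> str:
        self.lib.render_text(message)
        return "ok"  # the message returned up the stack in response to the call

# An "application" invoking an API call through the stack and receiving a response.
framework = UiFramework(GraphicsLibrary(OperatingSystem()))
print("application received:", framework.show_message("hello"))
```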
In a specific example, the third-party application1540(e.g., an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform) may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or another mobile operating system. In this example, the third-party application1540can invoke the API calls1550provided by the operating system1512to facilitate functionality described herein.
Processing Components
Turning now toFIG.16, there is shown a diagrammatic representation of a processing environment1600, which includes a processor1602, a processor1606, and a processor1608(e.g., a GPU, CPU or combination thereof). The processor1602is shown to be coupled to a power source1604, and to include (either permanently configured or temporarily instantiated) modules, namely a code detection component1610, a reading material component1612, and a communication component1614. The code detection component1610detects codes in images611. Referring toFIG.6, the code detection component1610performs the functions associated with the code detection module608. The reading material component1612performs the functions associated with providing enhanced AR for reading. The reading material component1612also performs the functions associated with the reading module606ofFIG.6. The communication component1614is coupled to communications hardware and is configured to implement communication protocols. The communication component1614performs functions associated with communications622. As illustrated, the processor1602is communicatively coupled to both the processor1606and the processor1608.
Glossary
“Carrier signal” refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over a network using a transmission medium via a network interface device. “Client device” refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, AR glasses, VR glasses, an AR wearable device, a desktop computer, a laptop, a portable digital assistant (PDA), a smartphone, a tablet, an ultrabook, a netbook, a multi-processor system, microprocessor-based or programmable consumer electronics, a game console, a set-top box, or any other communication device that a user may use to access a network. “Communication network” refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or other types of cellular or wireless coupling. 
In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1xRTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, Third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “Component” refers to a device, physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various examples, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software), may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. 
Considering examples in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In examples in which multiple hardware components are configured or instantiated at different times, communications between such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. 
In some examples, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other examples, the processors or processor-implemented components may be distributed across a number of geographic locations. “Computer-readable storage medium” refers to both machine-storage media and transmission media. Thus, the terms include both storage devices/media and carrier waves/modulated data signals. The terms “machine-readable medium,” “computer-readable medium” and “device-readable medium” mean the same thing and may be used interchangeably in this disclosure. “Ephemeral message” refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video and the like. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory. “Machine storage medium” refers to a single or multiple storage devices and media (e.g., a centralized or distributed database, and associated caches and servers) that store executable instructions, routines and data. The term shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium.” “Non-transitory computer-readable storage medium” refers to a tangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine. “Signal medium” refers to any intangible medium that is capable of storing, encoding, or carrying the instructions for execution by a machine and includes digital or analog communications signals or other intangible media to facilitate communication of software or data. The term “signal medium” shall be taken to include any form of a modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure.
Unless otherwise specifically noted, articles depicted in the drawings are not necessarily drawn to scale. DETAILED DESCRIPTION For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the Figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiment or embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. It should be understood at the outset that, although exemplary embodiments are illustrated in the figures and described below, the principles of the present disclosure may be implemented using any number of techniques, whether currently known or not. The present disclosure should in no way be limited to the exemplary implementations and techniques illustrated in the drawings and described below. Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: “or” as used throughout is inclusive, as though written “and/or”; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; “exemplary” should be understood as “illustrative” or “exemplifying” and not necessarily as “preferred” over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description. Modifications, additions, or omissions may be made to the systems, apparatuses, and methods described herein without departing from the scope of the disclosure. For example, the components of the systems and apparatuses may be integrated or separated. Moreover, the operations of the systems and apparatuses disclosed herein may be performed by more, fewer, or other components and the methods described may include more, fewer, or other steps. Additionally, steps may be performed in any suitable order. As used in this document, “each” refers to each member of a set or each member of a subset of a set. Any module, unit, component, server, computer, terminal, engine or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. 
Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Further, unless the context clearly indicates otherwise, any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified. Any method, application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media and executed by the one or more processors. An augmented reality (“AR”) system20in accordance with an embodiment of the present disclosure is shown inFIG.1. The AR system20provides a more interactive and more realistic user experience than some augmented reality systems of the prior art. The AR system20includes a physical apparatus100and an AR application300that includes a set of computer-readable instructions and is stored in storage302and/or another computer readable medium of a server system304. A computing device306in the form of a smart phone is in communication with the server system304via the Internet308through a cellular base station310, or via any other suitable data communications system. The computing device306can execute the AR application300to show a virtual reality object inserted into one or more images captured by the computing device306. The server system304can be one or more computer systems that are co-located or topologically distributed to serve the AR application300. The AR application300can have a number of versions that are varied based on the type of computing device, the operating system and version thereof on which they are to be executed, the country, etc. Assets of the AR application may be hosted on different computer systems and cached. Further, the AR application300may rely on software and/or functionality that is already present or to be retrieved on the computing device306on which the AR application300is executed. For example, the AR application300may rely on an AR application programming interface (“API”) that forms part of an operating system. The AR application300includes apparatus data to assist in identifying the physical apparatus100in captured images. The apparatus data can include one or more colors of the physical apparatus100, an identification of fiducial indicia on the physical apparatus100, and/or model data representing the shape of the physical apparatus100. In an alternative embodiment, the apparatus data can be provisioned separately from the AR application300as, for example, a resource file to allow for new physical apparatuses without updating the AR application300. As shown, for example, inFIGS.11and12, the physical apparatus100is operable to change detectably by a human (i.e. by a user) between a first state (FIG.11) and a second state (FIG.12). 
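As a concrete illustration of the apparatus data described above, the sketch below (hypothetical; the disclosure does not prescribe any particular format, and the field names and JSON layout are assumptions) bundles the colors, fiducial-marker identifiers, and model data into a structure that an AR application could load from a separately provisioned resource file.

```python
# Hypothetical sketch of "apparatus data" used to help identify the physical apparatus
# in captured images; the field names and resource-file format are assumptions.
from dataclasses import dataclass, field
import json

@dataclass
class ApparatusData:
    apparatus_id: str                        # e.g. "cage-100"
    colors_rgb: list[tuple[int, int, int]]   # dominant colors of the apparatus
    fiducial_ids: list[int]                  # identifiers of fiducial indicia on the apparatus
    model_vertices: list[tuple[float, float, float]] = field(default_factory=list)

    @classmethod
    def from_resource_file(cls, path: str) -> "ApparatusData":
        # Loading the data from a resource file provisioned separately from the application.
        with open(path, "r", encoding="utf-8") as fh:
            raw = json.load(fh)
        return cls(
            apparatus_id=raw["apparatus_id"],
            colors_rgb=[tuple(c) for c in raw["colors_rgb"]],
            fiducial_ids=raw["fiducial_ids"],
            model_vertices=[tuple(v) for v in raw.get("model_vertices", [])],
        )

# Usage (assuming a "cage_100.json" resource file exists):
# data = ApparatusData.from_resource_file("cage_100.json")
```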
The physical apparatus100includes a signal receiver102(FIG.3) and at least one controllable element104that is operable to effect the change between the first state and the second state upon receiving the signal. In the embodiment shown inFIG.1, the at least one controllable element104includes a cage106, which is configured for holding the virtual reality object502therein, when viewed via the computing device306. The cage106includes a plurality of bars108and a floor110. Beneath the floor110, the cage106further includes a storage chamber112which can be seen inFIG.8. In the embodiment shown inFIG.8, the change between the first state and the second state is a change in position of the cage106. To that end, the storage chamber112houses a first motor114, a second motor116, a physical apparatus controller118, a first support member120and a second support member122. The first and second motors114and116may be any suitable type of motor, such as servomotors or stepper motors. The first and second motors114and116together make up an actuator124that is for actuating the cage106to move the cage106between the first and the second positions. Each motor114,116has an output shaft126on which a respective one of the first and second support members120,122is held. Each of the first and second support members120,122includes a first arm128and a second arm130, which are pivotally connected together at a pivot joint. A proximal end of the first arm128is mounted to the output shaft126. At a distal end of the second arm130is a pair of feet136which support the cage106on a support surface, such as a tabletop, shown at SS. As the motors114and116rotate to different positions they adjust the position and/or the tilt angle of the cage106. As can be seen inFIGS.11and12, the motors114and116are operable to drive the first arms128of the first and second support members120and122to first angular positions as shown inFIG.11and to second angular positions as shown inFIG.12. It will be understood that, when the angular positions of the first arms128are the same, then the cage106is level, and when the angular positions of the first arms128are different from one another, then the cage106is tilted at a non-zero tilt angle. It will be noted that the controllable element104shown inFIGS.11and12is an actuatable element, in the sense that it moves. Another example of at least one actuatable element may include a cage door148, which is shown inFIGS.4,7and19Ain a first position (an open position), and inFIGS.11,12and19Bin a second position (a closed position). As best shown inFIGS.19A and19B, the cage door148may be held in the open position via a latch member150. The latch member150may be moveable (e.g. pivotable) between a locking position shown inFIG.19A, in which the latch member150engages with a notch152on the cage door148to hold the cage door148in the open position, and a release position shown inFIG.19Bin which the latch member150is pivoted out of the notch152so as to permit the cage door148to close under the force of gravity. Optionally a biasing member (not shown) may be provided to assist in closing the cage door148more quickly than would occur under gravity alone. 
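Returning to the support-member geometry described above, the short sketch below shows how firmware might translate the two first-arm angular positions into a level or tilted cage. It is only a hypothetical illustration under an assumed linear relationship between arm angle and foot height; the disclosure states only that equal angles produce a level cage and unequal angles produce a non-zero tilt, and gives no dimensions.

```python
# Hypothetical sketch: deriving the cage tilt from the two first-arm angular positions.
# The lift-per-degree factor and the span between the feet are assumed values.
import math

MM_PER_DEGREE = 0.5   # assumed lift of a foot per degree of arm rotation
FOOT_SPAN_MM = 120.0  # assumed horizontal distance between the two pairs of feet

def cage_tilt_degrees(first_arm_angle_deg: float, second_arm_angle_deg: float) -> float:
    """Return the cage tilt angle; 0.0 means the cage is level."""
    height_difference = (first_arm_angle_deg - second_arm_angle_deg) * MM_PER_DEGREE
    return math.degrees(math.atan2(height_difference, FOOT_SPAN_MM))

print(cage_tilt_degrees(30.0, 30.0))  # equal angular positions -> 0.0, level as in FIG. 11
print(cage_tilt_degrees(45.0, 30.0))  # unequal angular positions -> non-zero tilt as in FIG. 12
```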
Alternatively, in embodiments in which gravity is not used to close the cage door148, the biasing member may itself be the sole driver of the cage door148to the closed position, for example, in embodiments where the cage door148swings upwards to close, similar to a drawbridge, or in embodiments in which the cage door148swings horizontally to close, similar to a typical door in a home. The cage door148can be manually opened by a user. Once it is opened sufficiently that the notch152presents itself to the latch member150, a latch member biasing member (e.g. a torsion spring, not shown) may urge the latch member150into engagement with the notch152so as to hold the cage door148in the open position. With the cage door148in the open position, the user can, via an application that is executed on the computing device306, capture a virtual reality object502in some embodiments, such as embodiments in which the virtual reality object502is a virtual reality character that wanders into the cage106. A solenoid154is shown as an example actuator that is operable to actuate the latch member150, and therefore actuates the cage door148to move from the open position to the closed position. The solenoid154may be connected to the latch member150by a cable156, or by any other suitable structure. With respect to the cage door148, the first state of the physical apparatus100may be the state in which the cage door148is open, and the second state of the physical apparatus100may be the state in which the cage door148is closed. Now referring toFIGS.2A and2B, the computing device306is shown having a touchscreen display312, a speaker314, a microphone316, a front-facing camera318, hardware controls in the form of a home button320, a power button322, a volume up button324, a volume down button326, a pair of rear-facing cameras328, and a flash330. The touchscreen display312can employ any suitable display for presenting images, such as an LCD display, an OLED display, etc. The touchscreen display312enables the registration of input via contact of a user with the touchscreen display312. The display may be a non-touchscreen display in other embodiments. The computing device306can have one or more speakers such as the speaker314, and one or more microphones, such as the microphone316. The home button320can be used to exit from an application, authenticate a user through the use of a touch sensor in the home button320, etc. The volume up and down buttons324,326can be provided with additional and/or alternative functionality within certain applications. The rear-facing cameras328can be used to capture images of objects, including people, behind the computing device306. The flash330can be used to provide additional illumination for capturing images, and can be provided with additional and/or alternative functionality within certain applications. FIG.2Cshows various additional components of the computing device306. As shown, the computing device306has a number of physical and logical components, including a processor332, random access memory (“RAM”)334, an input/output (“I/O”) interface336, a communications interface338, non-volatile storage340, and a local bus342enabling the processor332to communicate with the other components. The processor332executes at least an operating system, and any applications installed on the computing device306. While shown and described as having a single processor, the computing device306can have two or more processors that act to perform the functionality described herein. 
RAM334provides relatively responsive volatile storage to the processor332. The I/O interface336allows for input to be received from one or more devices, such as the home button320, the touchscreen display312, the power button322, the volume up and down buttons324,326, the microphone316, the front- and rear-facing cameras318,328, a mouse, etc., and outputs information to output devices, such as the touchscreen display312and/or the speaker314. The communications interface338permits communication with other computing devices over data communications networks such as the Internet308via wired or wireless communications. The wireless communications can be, for example, via cellular (such as LTE), Wi-Fi, Bluetooth, etc. The non-volatile storage340stores the operating system and programs, including computer-executable instructions for implementing the AR application300. During operation of computing device306, the operating system, the programs and the data may be retrieved from the non-volatile storage340and placed in RAM334to facilitate execution. The computer-readable mediums of RAM334and the non-volatile storage340can also include removable computer-readable media, such as flash cards, USB drives, etc. In order to use the AR system20, a user can cause the computing device306to download and retrieve the AR application300from the server system304. This may be done, for example, by visiting an “application store” and downloading the AR application300to the computing device306. In an alternative embodiment, the computing device may be pre-loaded with the AR application300. In another alternative embodiment, the AR application300can be made available to the computing device via removable media, such as a flash card, a USB drive, etc. While, herein, the computing device306will be shown and described with reference to a smart phone, other types of computing devices having one or more cameras, one or more displays, one or more communications interfaces, storage for storing the AR application, and one or more processors for executing the AR application as described hereinbelow will occur to those skilled in the art. The physical apparatus controller118is shown inFIG.8and is shown schematically inFIG.3. The physical apparatus controller118controls the operation of the actuator124(e.g. by controlling power from a power source such as a battery pack (not shown) to the motors114and116). The physical apparatus controller118includes a processor118a, RAM118b, an i/o interface118c, a communications interface118dand non-volatile storage118e, which are connected to one another via a bus118f. The signal receiver102may be connected to physical apparatus controller118(e.g. via the i/o interface118c), so that the physical apparatus controller118can receive signals from the signal receiver102. In such an embodiment the signal receiver102may be any suitable type of signal receiver, such as an optical sensor for receiving signals from a light emitting element on the computing device, or a microphone for receiving audio signals emitted by the computing device306. Alternatively, the signal receiver may be part of the communications interface118dand may include a Bluetooth chip for receiving signals from the computing device306over a Bluetooth network, or a Wi-Fi chip for receiving signals from the computing device306over a Wi-Fi network. The signal receiver102in the embodiment shown inFIG.3is a Bluetooth chip. 
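The disclosure does not specify how the physical apparatus controller interprets the bytes delivered by the signal receiver, so the following is only a hypothetical sketch of the receiving side: each incoming frame is assumed to carry a state change identifier, a relative delay, and parameters (a scheme discussed further below in connection with the commands sent by the AR application), and is dispatched to the appropriate element. The frame layout and the numeric identifiers are assumptions.

```python
# Hypothetical receive-and-dispatch routine for the physical apparatus controller.
# Assumed frame layout: identifier (1 byte), delay in tenths of a second (2 bytes,
# big-endian), parameter count (1 byte), then one byte per parameter.
import struct

VIBRATE = 0x01            # assumed state change identifiers
ROTATE_FIRST_ARM = 0x02
CLOSE_CAGE_DOOR = 0x03

def handle_command(identifier: int, delay_tenths: int, params: list[int]) -> None:
    if identifier == VIBRATE:
        strength, pattern, duration = params
        print(f"in {delay_tenths / 10}s: vibrate strength={strength} pattern={pattern} for {duration / 10}s")
    elif identifier == ROTATE_FIRST_ARM:
        print(f"in {delay_tenths / 10}s: rotate first support member to {params[0]} degrees")
    elif identifier == CLOSE_CAGE_DOOR:
        print(f"in {delay_tenths / 10}s: energize solenoid to release the latch member")

def dispatch(payload: bytes) -> None:
    offset = 0
    while offset < len(payload):
        identifier, delay_tenths, count = struct.unpack_from(">BHB", payload, offset)
        offset += 4
        params = list(payload[offset:offset + count])
        offset += count
        handle_command(identifier, delay_tenths, params)

dispatch(bytes([ROTATE_FIRST_ARM, 0x00, 0x0F, 1, 45]))  # rotate to 45 degrees in 1.5 s
```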
The physical apparatus controller118is also connected to an optionally provided speaker142, permitting the physical apparatus100to emit sound, so as to enhance the realism of the user experience. The speaker142can be used to emit sounds that give the user the impression that the virtual reality object502is in the cage106. The physical apparatus controller118can control the output from the speaker142based on commands provided via the signals received from the signal receiver102, and/or from direct interaction of the user with the physical apparatus100(e.g. tipping or knocking the physical apparatus100, or manually moving a movable element of the physical apparatus). The physical apparatus controller118may receive signals from an accelerometer144. The accelerometer144may be, for example, a three-axis accelerometer similar to those used in smartphones currently, and may be mounted directly onto the physical apparatus controller118, as shown inFIG.3. The accelerometer144may be used for one or more of several purposes. For example, the accelerometer144may be used to provide input to the physical apparatus controller118that can be transmitted back to the computing device306to assist the computing device306in determining the instantaneous position of the physical apparatus100in the event that the physical apparatus100is moved, tipped, or knocked. Another purpose for the accelerometer144may be to provide feedback for the operation of the actuator124, so as to provide closed-loop control for the actuator124. This closed-loop control can be used to ensure that the target position for the cage106is the actual position that is reached. Furthermore, in situations where the accelerometer144indicates that there is a problem and that the cage106is unable to reach its intended position (e.g. due to an obstruction), the physical apparatus controller118can communicate the position of the cage106to the computing device306to ensure that the virtual reality object502is properly rendered. The physical apparatus controller118may receive signals from an orientation sensor146. The orientation sensor146may be a three-axis orientation sensor (e.g. a three-axis gyro), and may be directly mounted to the physical apparatus controller118, as shown inFIG.3. The physical apparatus controller118may use signals from the orientation sensor146in a similar manner to the uses described above for signals from the accelerometer144, namely, for transmission back to the computing device306to assist the computing device306to determine the instantaneous orientation (instead of, or in addition to, the instantaneous position noted above) of the physical apparatus100in the event that the physical apparatus is moved, tipped or knocked, or to provide closed loop control for the actuator124, or alternatively, to communicate the orientation of the cage106to the computing device in the event that the cage106is unable to reach its intended orientation (e.g. due to an obstruction). Once the AR application300has been installed, or otherwise made available for execution, on the computing device306, the AR application300can be executed to commence use of the AR system20. FIG.10shows the computing device306being positioned in front of the physical apparatus100so that the physical apparatus100is in the field-of-view of the rear-facing cameras328. As is shown, the AR application300generates an AR image500of the physical apparatus100captured via the rear-facing cameras328in which a virtual reality (“VR”) object in the form of a VRFIG.502is inserted. 
The AR image500is presented on the display312. The AR application300analyzes the at least one image captured by the two rear-facing cameras328and determines the pose of the physical apparatus100. The pose of the physical apparatus100includes the location and orientation of the physical apparatus100. The AR application300employs images from one or both of the rear-facing cameras328together with the apparatus data to identify the physical apparatus100in the one or more images and determine its pose relative to the computing device306. The color, fiducial indicia, and model data for the physical apparatus100can each assist in identifying the physical apparatus100in the image(s). Where model data is available for the physical apparatus100, the AR application300can determine a transformation to apply to the model that best matches the identified physical apparatus100in the one or more images. Alternatively, two or more images from the rear-facing cameras328, either positionally or temporally displaced, can be used to identify depth of field. Where two images that are taken using the same rear-facing camera328are used, the change in pose of the computing device306between capturing the first and second images can be used to generate depth information for the imaged physical apparatus100and other objects. The AR application300either generates model data using the one or more images captured with the rear-facing camera328or uses the model data provided with the AR application300after transformation. The AR application300is configured to generate a VRFIG.502in a range of positions in or on the physical object100. For example, the AR application300may be configured to generate the VRFIG.502in an initial pose (in a default location) within the physical apparatus100(i.e., the cage), and allow the VRFIG.502to move within the confines of the cage. Using the model data for the physical apparatus100, the AR application300can generate the VRFIG.502so that it does not intersect the physical apparatus100represented by the model data. Further, the AR application can occlude portions of the VR character based on line-of-sight to the generated VRFIG.502and the position of elements of the physical apparatus100. As shown inFIG.10, the VRFIG.502is positioned centrally in the physical apparatus100and rests atop of the floor110thereof, thus not intersecting any portion of the physical apparatus100. Further, the bars108of the physical apparatus100occlude the VRFIG.502, as would naturally occur if a figure were positioned inside the physical apparatus100. The AR application300executing on the computing device306provides a control interface via the touchscreen display312. A user can tap, slide, or press on different parts of the touchscreen display312corresponding to different parts of the physical apparatus100and/or the VRFIG.502. In other embodiments, one or more of the hardware controls, such as the volume up and down buttons324,326can trigger certain commands, such as an interaction with the VR character or a state change in the physical apparatus100. FIG.11shows the AR image500presented on the touchscreen display312in isolation. As shown, the VR character502is positioned centrally within the physical apparatus100. As previously discussed, the physical apparatus100is supported atop of the support surface SS via the two support members120,122in a default state. One mode in which a user can interact with the VRFIG.502is to tap on a region TR on the touchscreen display312. 
FIG.12shows an AR image504after the user has tapped the touchscreen display312in the region TR. The VRFIG.502is animated to simulate walking atop of the floor110towards a side of the physical apparatus100adjacent to the region TR. Curiosity of the VRFIG.502is expressed by its movement towards the region TR as if the physical apparatus100was directly tapped. As the VRFIG.502is about to take each step, the AR application300sends a command in a signal to the physical apparatus100with a state change identifier and a parameter. The state change identifier corresponds to a particular state change, and the parameter(s) correspond to modifiers for the state change. The state change identifiers and parameters are pre-defined to simplify communications between the computing device306and the physical apparatus100. In this described scenario, the state change identifier can correspond to the action “vibrate”, and the parameters can indicate the strength of the vibration, the pattern of vibration, the time period during which to vibrate, etc. In addition, as the VRFIG.502travels to the left lateral side of the physical apparatus100, the AR application300directs the computing device306to send commands in signals with a state change identifier of “rotate first support member”; i.e., first support member120. The parameter passed with this state change identifier is the absolute rotation angle of the first support member120. Alternatively, the parameter can be the relative amount of rotation of the first support member120. These can be sent simultaneously with or interleaved with the vibrate signals. The VRFIG.502stops at a periphery of the range of positions through which the VRFIG.502can move. The last command transmitted by the computing device306via a signal at the direction of the AR application300instructed the physical apparatus100to rotate the first support member120to lean the cage106to a second state as is shown inFIG.12. This mimics an expected behavior of the cage106when a physical object simulated by the VRFIG.502travels to one side of the cage106. The listing of the cage106, together with the vibrations generated during the footsteps of the VRFIG.502, assists in bringing the VRFIG.502in the cage106to life in the mind of the user. The signals including the commands can be transmitted by the computing device306executing the AR application300in one of a number of ways. In the presently described embodiment illustrated inFIGS.1to12, the computing device306transmits the signals over wireless radio frequency communications systems. While, in this particular embodiment, Bluetooth communications are employed, other wireless radio frequency communications systems, such as Wi-Fi or LTE, can be employed. In another embodiment, the signals can be sent by the computing device306via audio. The AR application300can direct the computing device306to generate encoded audio signals that are received by a microphone of the physical apparatus100and decoded to extract the state change identifiers and parameters. In one particular embodiment, the signals are sent via audio that is ultrasonic. Where the signals are sent via audio, it is possible that loss may occur due to a noisy environment. It can therefore be desirable to retransmit signals. In such noisy environments, it can be desirable to transmit the signals in advance of a time when a state change is desired of the physical apparatus100. The parameters can be used to identify timing information for the status changes. 
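On the sending side, a minimal sketch of how the AR application might package a state change identifier, its parameters, and a relative-time offset before handing the bytes to whichever transport is in use (Bluetooth, ultrasonic audio, or light) is given below. The numeric identifiers and the frame layout are assumptions, chosen to match the hypothetical receiver-side sketch earlier in this description; the disclosure itself does not define a wire format.

```python
# Hypothetical sender-side encoding of the commands described above; identifier values,
# frame layout, and the relative-time field are assumptions, not taken from the disclosure.
import struct

STATE_CHANGE_IDS = {
    "vibrate": 0x01,                       # parameters: strength, pattern, duration (tenths of a second)
    "rotate first support member": 0x02,   # parameter: absolute rotation angle in degrees
}

def encode_command(state_change: str, parameters: list[int], effect_in_seconds: float) -> bytes:
    """Pack one command as: identifier (1 byte), delay in tenths of a second
    (2 bytes, big-endian), parameter count (1 byte), then one byte per parameter."""
    identifier = STATE_CHANGE_IDS[state_change]
    delay_tenths = int(effect_in_seconds * 10)
    return struct.pack(f">BHB{len(parameters)}B", identifier, delay_tenths, len(parameters), *parameters)

# A footstep: vibrate at strength 3, pattern 1, for 0.2 s, starting 1.5 s from now,
# interleaved with a command to rotate the first support member to 40 degrees.
frame = encode_command("vibrate", [3, 1, 2], 1.5) + encode_command("rotate first support member", [40], 1.5)
print(frame.hex())
```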
In order to avoid synchronizing clocks on the computing device306and the physical apparatus100, the timing information can indicate to effect the identified status change in x seconds. The parameters of commands in subsequent audio signals transmitted can be adjusted to reflect the reduced time period to the time at which the state change is desired to be effected. In other embodiments, the signals can be transmitted via light. The AR application300can control the flash330of the computing device306to transmit encoded signals via light to a light sensor on the physical apparatus100. It can be desirable in some embodiments to synchronize the clock of both the computing device306and the physical apparatus100in order to express timing information in absolute times. The signals including the commands can be sent via a combination of two or more of the above modes. Using these signals, more complex interactions are enabled. In one particular embodiment, the VRFIG.502is initially outside of the cage106of the physical apparatus100. The door of the cage106is in an open position, as is shown inFIG.13A. The VRFIG.502is programmed to wander in and out of the cage106. The user of the mobile device306can touch a region TR of the touchscreen display312to cause the cage door to close. Upon the user tapping in the region TR, the AR application300directs the computing device306to send a signal to the physical apparatus100. The signal includes a command with a state change identifier for closing the door of the cage106. If the VRFIG.502was in the cage106at the time that the region TR was pressed, the VRFIG.502is subsequently shown captured in the cage106, as is shown inFIG.13B. In other embodiments, differentiated buttons can be presented on the touchscreen display312to enable the user to interact with the VRFIG.502or the physical apparatus100. FIGS.14A to14Cshow images presented on the touchscreen display312of the computing device306, wherein user interaction with the control interface causes a physical change in the physical apparatus100and leading to an animation sequence of the VRFIG.502(in this embodiment illustrated as a humanoid). InFIG.14A, the floor110of the physical apparatus100is shown as being continuous. The VRFIG.502is shown standing atop of the continuous floor110. A region TR can be tapped to cause a set of trap doors506in the floor110of the physical apparatus100that are in a closed state to open. FIG.14Bshows an image presented on the touchscreen display312of the computing device306after tapping on the region TR. Upon tapping on the region TR, the AR application300directs the computing device306to send a signal including a command to the physical apparatus100to open the trap door in the floor110. Once the signal is received by the physical apparatus100, the trap doors506are opened and imaged by the rear-facing cameras328of the computing device306and presented on the touchscreen display312in an open state, exposing an opening OP. FIG.14Cshows a subsequent image presented on the touchscreen display312of the computing device306a short time after that ofFIG.14Bis shown. The VRFIG.502has walked over the opening OP and fallen through. Reference is made toFIGS.15and16which show another example of at least one actuatable element, which in this instance is the floor110of the cage106. The floor110may be made of a material that can be elastically deformed by a selected amount. 
As shown inFIG.16, on a lower surface160of the floor110are positioned a plurality of magnetically-responsive elements162, such as ferritic elements or such as magnets, for example. As can be seen inFIG.15, these magnetically-responsive elements162may be arranged in a uniform array about the floor110. In an alternative embodiment, the magnetically-responsive elements162may be positioned in a non-uniform arrangement about the floor110. Underneath the floor, a plurality of electromagnets164are provided, each electromagnet164positioned facing a corresponding one of the magnetically-responsive elements162. By energizing an electromagnet164(such as the electromagnet identified at164ainFIG.16) the corresponding magnetically-responsive element162is drawn towards (and optionally into engagement with) the electromagnet164. As a result, a depression166can be seen in the floor110from a person viewing the floor from above, which can appear to the user that the virtual reality object502is present in that location. By sequentially activating different electromagnets164along a selected path, the appearance of travel of the virtual reality object502about the cage106can be created. Alternatively, instead of generating a depression on the floor110that is intended to be seen by the user as being caused by the perceived weight of the virtual reality object502or by a footstep in that location taken by the virtual reality object502, it is possible for each electromagnet164to be energized and deenergized quickly, so as to cause a brief flutter locally in the floor110, which can convey to the user that the virtual reality object502has taken a footstep in that location. The electromagnets164are shown inFIG.16as being connected via electrical conduits to the physical apparatus controller118, and their operation is controlled by the physical apparatus controller118, optionally based on signals received by the physical apparatus controller118from the computing device306via the signal receiver102. In the embodiment shown inFIGS.15and16, the first state of the physical apparatus100may be the state in which the floor110is undisturbed by any of the electromagnets164(e.g. as shown inFIG.15), and the second state may be the state in which the floor110is depressed by one of the electromagnets164, as shown inFIG.16. Alternatively, it can be determined that the first state could be the state in which a first one of the electromagnets (e.g. electromagnet164a) causes a depression or a disturbance in the floor110in a first location (e.g. directly above the first electromagnet164a, as shown inFIG.16) and the second state of the physical apparatus100may be the state in which a second one of the electromagnets (shown inFIG.16at164b) causes a depression or a disturbance in the floor110in a second location (e.g. directly above the second electromagnet164b). In embodiments in which the floor110is depressed or is otherwise disturbed, it is possible to enhance the visual disturbance that is provided by energization of the electromagnets164by dispersing a loose material on the floor110such as sand or granules of some other suitable loose material. In an embodiment shown inFIG.9, a vibration module194is shown and may be operated by the physical apparatus controller118in brief spurts to simulate footsteps taken by the virtual reality object502. The vibration module194may be similar to the vibration modules found in smartphones, for example. 
The vibration module194may be particularly useful when combined with loose material on the floor110as described above. Reference is made toFIGS.17and18, which show an alternative embodiment in which there is another example of at least one controllable element. In this embodiment, the at least one controllable element does not move. Instead, the at least one controllable element includes a plurality of light-emitting elements170, which are positioned beneath the floor110as can be seen inFIG.18, and which point upwards to illuminate the floor110from underneath. The floor110in the embodiment shown inFIGS.17and18is sufficiently transmissive of the light emitted by the light-emitting elements170(i.e., translucent) that the user can see the illumination on a surface111of the floor110(i.e., from above the floor110). A spot of illumination is shown inFIGS.17and18at172. The light-emitting elements170may be mounted directly to a printed-circuit board as shown inFIG.18, or may be connected via electrical conduits in any suitable way to permit the physical apparatus controller118to control their operation individually. Each light-emitting element170may thus be energized so as to illuminate a selected location of the surface111of the floor110of the cage106so as to appear to the user that the virtual reality object502is present in that location. By energizing and deenergizing successive ones of the light-emitting elements170, the appearance that the virtual reality object502is travelling about the cage106can be created. In the embodiments shown inFIGS.19A and19B,15and16, and17and18, the change in state is at least visual in the sense that it is detectable by the user visually. Embodiments can be provided wherein the change in state is detectable by the user aurally either alternatively or additionally to visually detecting the change in state. For example, in the embodiment shown inFIGS.19A and19B, when the cage door148moves to the closed position with the virtual reality object502present inside the cage, the virtual reality object502(in embodiments in which it is a virtual reality character) may emit a surprised sound. In some embodiments, the physical apparatus100includes an electrical port190that either acts as a connection to a source of electrical power to operate the physical apparatus100or is a connection to a source of electrical power to charge an onboard battery pack that may be present in the physical apparatus, as noted above. In some embodiments the physical apparatus100includes an indicator light192that indicates the status of the physical apparatus such as whether it is on or off or charging in embodiments that permit charging. The operation of the indicator light192is controlled by the physical apparatus controller118. Computer-executable instructions for implementing the AR application can be provided in other manners, such as a web application. In an alternative embodiment, the AR application does not include apparatus data for the physical apparatus. The AR application may, instead, be used with an arbitrary apparatus or object. Although specific advantages have been enumerated above, various embodiments may include some, none, or all of the enumerated advantages. The terms ‘figure’ and ‘character’ are used interchangeably in the present specification. Persons skilled in the art will appreciate that there are yet more alternative implementations and modifications possible, and that the above examples are only illustrations of one or more implementations. 
The scope, therefore, is only to be limited by the claims appended hereto and any amendments made thereto. LIST OF REFERENCE NUMERALS: 20 AR system; 100 physical apparatus; 102 signal receiver; 104 controllable element; 106 cage; 108 bars; 110 floor; 111 surface; 112 storage chamber; 114 first motor; 116 second motor; 118 controller; 120 first support member; 122 second support member; 124 actuator; 126 output shaft; 128 first arm; 130 second arm; 136 feet; 118a processor; 118b RAM; 118c i/o interface; 118d communications interface; 118e non-volatile storage; 118f bus; 142 speaker; 144 accelerometer; 146 orientation sensor; 148 cage door; 150 latch member; 152 notch; 154 solenoid; 156 cable; 160 lower surface; 162 magnetically-responsive elements; 164 electromagnet; 166 depression; 170 light-emitting elements; 172 spot of illumination; 190 electrical port; 192 indicator light; 300 AR application; 302 storage; 304 server system; 306 computing device; 308 Internet; 310 cellular base station; 312 touchscreen display; 314 speaker; 316 microphone; 318 front-facing camera; 320 home button; 322 power button; 324 volume up button; 326 volume down button; 328 rear-facing camera; 330 flash; 332 processor; 334 RAM; 336 I/O interface; 338 communications interface; 340 non-volatile storage; 342 local bus; 500 AR image; 502 VR FIG.; 504 AR image; SS support surface; TR tap region
38,983
11861803
DETAILED DESCRIPTION In the following description of examples, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the disclosed examples. Mixed Reality Environment Like all people, a user of a mixed reality system exists in a real environment, that is, a three-dimensional portion of the “real world,” and all of its contents, that are perceptible by the user. For example, a user perceives a real environment using one's ordinary human senses (sight, sound, touch, taste, smell) and interacts with the real environment by moving one's own body in the real environment. Locations in a real environment can be described as coordinates in a coordinate space; for example, a coordinate can include latitude, longitude, and elevation with respect to sea level; distances in three orthogonal dimensions from a reference point; or other suitable values. Likewise, a vector can describe a quantity having a direction and a magnitude in the coordinate space. A computing device can maintain, for example in a memory associated with the device, a representation of a virtual environment. As used herein, a virtual environment is a computational representation of a three-dimensional space. A virtual environment can include representations of any object, action, signal, parameter, coordinate, vector, or other characteristic associated with that space. In some examples, circuitry (e.g., a processor) of a computing device can maintain and update a state of a virtual environment; that is, a processor can determine at a first time t0, based on data associated with the virtual environment and/or input provided by a user, a state of the virtual environment at a second time t1. For instance, if an object in the virtual environment is located at a first coordinate at time t0, and has certain programmed physical parameters (e.g., mass, coefficient of friction); and an input received from a user indicates that a force should be applied to the object along a direction vector; the processor can apply laws of kinematics to determine a location of the object at time t1 using basic mechanics. The processor can use any suitable information known about the virtual environment, and/or any suitable input, to determine a state of the virtual environment at a time t1. In maintaining and updating a state of a virtual environment, the processor can execute any suitable software, including software relating to the creation and deletion of virtual objects in the virtual environment; software (e.g., scripts) for defining behavior of virtual objects or characters in the virtual environment; software for defining the behavior of signals (e.g., audio signals) in the virtual environment; software for creating and updating parameters associated with the virtual environment; software for generating audio signals in the virtual environment; software for handling input and output; software for implementing network operations; software for applying asset data (e.g., animation data to move a virtual object over time); or many other possibilities. Output devices, such as a display or a speaker, can present any or all aspects of a virtual environment to a user. For example, a virtual environment may include virtual objects (which may include representations of inanimate objects; people; animals; lights; etc.)
that may be presented to a user. A processor can determine a view of the virtual environment (for example, corresponding to a “camera” with an origin coordinate, a view axis, and a frustum); and render, to a display, a viewable scene of the virtual environment corresponding to that view. Any suitable rendering technology may be used for this purpose. In some examples, the viewable scene may include only some virtual objects in the virtual environment, and exclude certain other virtual objects. Similarly, a virtual environment may include audio aspects that may be presented to a user as one or more audio signals. For instance, a virtual object in the virtual environment may generate a sound originating from a location coordinate of the object (e.g., a virtual character may speak or cause a sound effect); or the virtual environment may be associated with musical cues or ambient sounds that may or may not be associated with a particular location. A processor can determine an audio signal corresponding to a “listener” coordinate—for instance, an audio signal corresponding to a composite of sounds in the virtual environment, and mixed and processed to simulate an audio signal that would be heard by a listener at the listener coordinate—and present the audio signal to a user via one or more speakers. Because a virtual environment exists only as a computational structure, a user cannot directly perceive a virtual environment using one's ordinary senses. Instead, a user can perceive a virtual environment only indirectly, as presented to the user, for example by a display, speakers, haptic output devices, etc. Similarly, a user cannot directly touch, manipulate, or otherwise interact with a virtual environment; but can provide input data, via input devices or sensors, to a processor that can use the device or sensor data to update the virtual environment. For example, a camera sensor can provide optical data indicating that a user is trying to move an object in a virtual environment, and a processor can use that data to cause the object to respond accordingly in the virtual environment. A mixed reality system can present to the user, for example using a transmissive display and/or one or more speakers (which may, for example, be incorporated into a wearable head device), a mixed reality environment (“MRE”) that combines aspects of a real environment and a virtual environment. In some embodiments, the one or more speakers may be external to the head-mounted wearable unit. As used herein, an MRE is a simultaneous representation of a real environment and a corresponding virtual environment. In some examples, the corresponding real and virtual environments share a single coordinate space; in some examples, a real coordinate space and a corresponding virtual coordinate space are related to each other by a transformation matrix (or other suitable representation). Accordingly, a single coordinate (along with, in some examples, a transformation matrix) can define a first location in the real environment, and also a second, corresponding, location in the virtual environment; and vice versa. In an MRE, a virtual object (e.g., in a virtual environment associated with the MRE) can correspond to a real object (e.g., in a real environment associated with the MRE). For instance, if the real environment of an MRE includes a real lamp post (a real object) at a location coordinate, the virtual environment of the MRE may include a virtual lamp post (a virtual object) at a corresponding location coordinate. 
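As a concrete illustration of the transformation matrix mentioned above, the following sketch maps a point expressed in a real-environment coordinate space to the corresponding point in a virtual coordinate space using a 4x4 homogeneous transform, and maps it back with the inverse transform. The sketch is written in Python with NumPy; the particular rotation, offset and point values are assumptions chosen for illustration (when the two environments share a single coordinate space, the matrix is simply the identity).

import numpy as np

def make_transform(rotation_3x3, translation_3):
    """Build a 4x4 homogeneous transform from a rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = translation_3
    return T

# Assumed example: the virtual coordinate space is rotated 90 degrees about the
# vertical axis and offset 2 m along x relative to the real coordinate space.
angle = np.pi / 2
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
real_to_virtual = make_transform(Rz, [2.0, 0.0, 0.0])

# A real object (e.g., the real lamp post) at a real-world coordinate, expressed
# in homogeneous coordinates:
lamp_post_real = np.array([1.0, 3.0, 0.0, 1.0])

# The corresponding location of the virtual lamp post in the virtual space:
lamp_post_virtual = real_to_virtual @ lamp_post_real
# The inverse transform maps virtual coordinates back into the real space:
lamp_post_real_again = np.linalg.inv(real_to_virtual) @ lamp_post_virtual
print(lamp_post_virtual[:3], lamp_post_real_again[:3])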
As used herein, the real object in combination with its corresponding virtual object together constitute a “mixed reality object.” It is not necessary for a virtual object to perfectly match or align with a corresponding real object. In some examples, a virtual object can be a simplified version of a corresponding real object. For instance, if a real environment includes a real lamp post, a corresponding virtual object may include a cylinder of roughly the same height and radius as the real lamp post (reflecting that lamp posts may be roughly cylindrical in shape). Simplifying virtual objects in this manner can allow computational efficiencies, and can simplify calculations to be performed on such virtual objects. Further, in some examples of an MRE, not all real objects in a real environment may be associated with a corresponding virtual object. Likewise, in some examples of an MRE, not all virtual objects in a virtual environment may be associated with a corresponding real object. That is, some virtual objects may exist solely in a virtual environment of an MRE, without any real-world counterpart. In some examples, virtual objects may have characteristics that differ, sometimes drastically, from those of corresponding real objects. For instance, while a real environment in an MRE may include a green, two-armed cactus—a prickly inanimate object—a corresponding virtual object in the MRE may have the characteristics of a green, two-armed virtual character with human facial features and a surly demeanor. In this example, the virtual object resembles its corresponding real object in certain characteristics (color, number of arms); but differs from the real object in other characteristics (facial features, personality). In this way, virtual objects have the potential to represent real objects in a creative, abstract, exaggerated, or fanciful manner; or to impart behaviors (e.g., human personalities) to otherwise inanimate real objects. In some examples, virtual objects may be purely fanciful creations with no real-world counterpart (e.g., a virtual monster in a virtual environment, perhaps at a location corresponding to an empty space in a real environment). Compared to VR systems, which present the user with a virtual environment while obscuring the real environment, a mixed reality system presenting an MRE affords the advantage that the real environment remains perceptible while the virtual environment is presented. Accordingly, the user of the mixed reality system is able to use visual and audio cues associated with the real environment to experience and interact with the corresponding virtual environment. As an example, while a user of VR systems may struggle to perceive or interact with a virtual object displayed in a virtual environment—because, as noted above, a user cannot directly perceive or interact with a virtual environment—a user of an MR system may find it intuitive and natural to interact with a virtual object by seeing, hearing, and touching a corresponding real object in his or her own real environment. This level of interactivity can heighten a user's feelings of immersion, connection, and engagement with a virtual environment. Similarly, by simultaneously presenting a real environment and a virtual environment, mixed reality systems can reduce negative psychological feelings (e.g., cognitive dissonance) and negative physical feelings (e.g., motion sickness) associated with VR systems.
Mixed reality systems further offer many possibilities for applications that may augment or alter our experiences of the real world. FIG.1Aillustrates an example real environment100in which a user110uses a mixed reality system112. Mixed reality system112may include a display (e.g., a transmissive display), one or more speakers, and one or more sensors (e.g., a camera), for example as described below. The real environment100shown includes a rectangular room104A, in which user110is standing; and real objects122A (a lamp),124A (a table),126A (a sofa), and128A (a painting). Room104A further includes a location coordinate106, which may be considered an origin of the real environment100. As shown inFIG.1A, an environment/world coordinate system108(comprising an x-axis108X, a y-axis108Y, and a z-axis108Z) with its origin at point106(a world coordinate), can define a coordinate space for real environment100. In some embodiments, the origin point106of the environment/world coordinate system108may correspond to where the mixed reality system112was powered on. In some embodiments, the origin point106of the environment/world coordinate system108may be reset during operation. In some examples, user110may be considered a real object in real environment100; similarly, user110's body parts (e.g., hands, feet) may be considered real objects in real environment100. In some examples, a user/listener/head coordinate system114(comprising an x-axis114X, a y-axis114Y, and a z-axis114Z) with its origin at point115(e.g., user/listener/head coordinate) can define a coordinate space for the user/listener/head on which the mixed reality system112is located. The origin point115of the user/listener/head coordinate system114may be defined relative to one or more components of the mixed reality system112. For example, the origin point115of the user/listener/head coordinate system114may be defined relative to the display of the mixed reality system112such as during initial calibration of the mixed reality system112. A matrix (which may include a translation matrix and a Quaternion matrix or other rotation matrix), or other suitable representation can characterize a transformation between the user/listener/head coordinate system114space and the environment/world coordinate system108space. In some embodiments, a left ear coordinate116and a right ear coordinate117may be defined relative to the origin point115of the user/listener/head coordinate system114. A matrix (which may include a translation matrix and a Quaternion matrix or other rotation matrix), or other suitable representation can characterize a transformation between the left ear coordinate116and the right ear coordinate117, and user/listener/head coordinate system114space. The user/listener/head coordinate system114can simplify the representation of locations relative to the user's head, or to a head-mounted device, for example, relative to the environment/world coordinate system108. Using Simultaneous Localization and Mapping (SLAM), visual odometry, or other techniques, a transformation between user coordinate system114and environment coordinate system108can be determined and updated in real-time.
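To make the head-to-world transformation described above concrete, the following sketch composes a 4x4 transform from a translation and a quaternion rotation and uses it to express the left ear coordinate116and right ear coordinate117in the environment/world coordinate system108. The sketch is written in Python with NumPy; the particular pose values, ear offsets and helper names are assumptions chosen for illustration rather than values taken from the examples above.

import numpy as np

def quaternion_to_matrix(w, x, y, z):
    """Convert a unit quaternion into a 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def pose_to_transform(translation, quaternion):
    """Build a 4x4 head-to-world transform from a translation and a quaternion."""
    T = np.eye(4)
    T[:3, :3] = quaternion_to_matrix(*quaternion)
    T[:3, 3] = translation
    return T

# Assumed head pose reported by SLAM / visual odometry: 1.5 m above the world
# origin and rotated 30 degrees about one axis (taken here to be the z-axis).
half = np.radians(30) / 2
head_to_world = pose_to_transform([0.0, 0.0, 1.5],
                                  (np.cos(half), 0.0, 0.0, np.sin(half)))

# Ear coordinates defined relative to the user/listener/head coordinate system
# (the 9 cm offsets along x are illustrative assumptions):
left_ear_head = np.array([-0.09, 0.0, 0.0, 1.0])
right_ear_head = np.array([0.09, 0.0, 0.0, 1.0])

# The same points expressed in the environment/world coordinate system:
left_ear_world = head_to_world @ left_ear_head
right_ear_world = head_to_world @ right_ear_head
print(left_ear_world[:3], right_ear_world[:3])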
FIG.1Billustrates an example virtual environment130that corresponds to real environment100. The virtual environment130shown includes a virtual rectangular room104B corresponding to real rectangular room104A; a virtual object122B corresponding to real object122A; a virtual object124B corresponding to real object124A; and a virtual object126B corresponding to real object126A. Metadata associated with the virtual objects122B,124B,126B can include information derived from the corresponding real objects122A,124A,126A. Virtual environment130additionally includes a virtual monster132, which does not correspond to any real object in real environment100. Real object128A in real environment100does not correspond to any virtual object in virtual environment130. A persistent coordinate system133(comprising an x-axis133X, a y-axis133Y, and a z-axis133Z) with its origin at point134(persistent coordinate), can define a coordinate space for virtual content. The origin point134of the persistent coordinate system133may be defined relative to one or more real objects, such as the real object126A. A matrix (which may include a translation matrix and a Quaternion matrix or other rotation matrix), or other suitable representation can characterize a transformation between the persistent coordinate system133space and the environment/world coordinate system108space. In some embodiments, each of the virtual objects122B,124B,126B, and132may have its own persistent coordinate point relative to the origin point134of the persistent coordinate system133. In some embodiments, there may be multiple persistent coordinate systems and each of the virtual objects122B,124B,126B, and132may have its own persistent coordinate point relative to one or more persistent coordinate systems. Persistent coordinate data may be coordinate data that persists relative to a physical environment. Persistent coordinate data may be used by MR systems (e.g., MR system112,200) to place persistent virtual content, which may not be tied to movement of a display on which the virtual object is being displayed. For example, a two-dimensional screen may only display virtual objects relative to a position on the screen. As the two-dimensional screen moves, the virtual content may move with the screen. In some embodiments, persistent virtual content may be displayed in a corner of a room. An MR user may look at the corner, see the virtual content, look away from the corner (where the virtual content may no longer be visible because it may have moved outside the user's field of view due to motion of the user's head), and look back to see the virtual content in the corner (similar to how a real object may behave). In some embodiments, persistent coordinate data (e.g., a persistent coordinate system and/or a persistent coordinate frame) can include an origin point and three axes. For example, a persistent coordinate system may be assigned to a center of a room by an MR system. In some embodiments, a user may move around the room, out of the room, re-enter the room, etc., and the persistent coordinate system may remain at the center of the room (e.g., because it persists relative to the physical environment). In some embodiments, a virtual object may be displayed using a transform to persistent coordinate data, which may enable displaying persistent virtual content.
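As a simplified illustration of displaying a virtual object using a transform to persistent coordinate data as just described, the sketch below stores an object's pose relative to a persistent coordinate frame and resolves its world pose by composing the two transforms; because the frame persists relative to the physical environment, the object stays put in the room as the display moves. The sketch is written in Python with NumPy; the class names, field names and poses are assumptions for illustration only.

import numpy as np

class PersistentCoordinateFrame:
    """A persistent coordinate frame: a unique identifier plus a pose that the
    MR system keeps anchored to the physical environment (sketch only)."""
    def __init__(self, frame_id, frame_to_world):
        self.frame_id = frame_id
        self.frame_to_world = frame_to_world  # 4x4 transform maintained by the MR system

class AnchoredObject:
    """Virtual content stored as a transform relative to a persistent frame."""
    def __init__(self, name, frame, object_to_frame):
        self.name = name
        self.frame = frame
        self.object_to_frame = object_to_frame

    def world_pose(self):
        # Composing the frame's pose with the stored offset yields a world pose
        # that does not change as the user's head (and display) moves.
        return self.frame.frame_to_world @ self.object_to_frame

# Assumed example: a frame assigned near the center of a room, and virtual
# content placed one meter from that frame.
room_frame = PersistentCoordinateFrame("pcf-room-center", np.eye(4))
offset = np.eye(4)
offset[:3, 3] = [1.0, 0.0, 0.0]
corner_content = AnchoredObject("persistent virtual content", room_frame, offset)
print(corner_content.world_pose()[:3, 3])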
In some embodiments, an MR system may use simultaneous localization and mapping to generate persistent coordinate data (e.g., the MR system may assign a persistent coordinate system to a point in space). In some embodiments, an MR system may map an environment by generating persistent coordinate data at regular intervals (e.g., an MR system may assign persistent coordinate systems in a grid where persistent coordinate systems may be at least within five feet of another persistent coordinate system). In some embodiments, persistent coordinate data may be generated by an MR system and transmitted to a remote server. In some embodiments, a remote server may be configured to receive persistent coordinate data. In some embodiments, a remote server may be configured to synchronize persistent coordinate data from multiple observation instances. For example, multiple MR systems may map the same room with persistent coordinate data and transmit that data to a remote server. In some embodiments, the remote server may use this observation data to generate canonical persistent coordinate data, which may be based on the one or more observations. In some embodiments, canonical persistent coordinate data may be more accurate and/or reliable than a single observation of persistent coordinate data. In some embodiments, canonical persistent coordinate data may be transmitted to one or more MR systems. For example, an MR system may use image recognition and/or location data to recognize that it is located in a room that has corresponding canonical persistent coordinate data (e.g., because other MR systems have previously mapped the room). In some embodiments, the MR system may receive canonical persistent coordinate data corresponding to its location from a remote server. With respect toFIGS.1A and1B, environment/world coordinate system108defines a shared coordinate space for both real environment100and virtual environment130. In the example shown, the coordinate space has its origin at point106. Further, the coordinate space is defined by the same three orthogonal axes (108X,108Y,108Z). Accordingly, a first location in real environment100, and a second, corresponding location in virtual environment130, can be described with respect to the same coordinate space. This simplifies identifying and displaying corresponding locations in real and virtual environments, because the same coordinates can be used to identify both locations. However, in some examples, corresponding real and virtual environments need not use a shared coordinate space. For instance, in some examples (not shown), a matrix (which may include a translation matrix and a Quaternion matrix or other rotation matrix), or other suitable representation can characterize a transformation between a real environment coordinate space and a virtual environment coordinate space. FIG.1Cillustrates an example MRE150that simultaneously presents aspects of real environment100and virtual environment130to user110via mixed reality system112. In the example shown, MRE150simultaneously presents user110with real objects122A,124A,126A, and128A from real environment100(e.g., via a transmissive portion of a display of mixed reality system112); and virtual objects122B,124B,126B, and132from virtual environment130(e.g., via an active display portion of the display of mixed reality system112). As above, origin point106acts as an origin for a coordinate space corresponding to MRE150, and coordinate system108defines an x-axis, y-axis, and z-axis for the coordinate space. 
In the example shown, mixed reality objects include corresponding pairs of real objects and virtual objects (i.e.,122A/122B,124A/124B,126A/126B) that occupy corresponding locations in coordinate space108. In some examples, both the real objects and the virtual objects may be simultaneously visible to user110. This may be desirable in, for example, instances where the virtual object presents information designed to augment a view of the corresponding real object (such as in a museum application where a virtual object presents the missing pieces of an ancient damaged sculpture). In some examples, the virtual objects (122B,124B, and/or126B) may be displayed (e.g., via active pixelated occlusion using a pixelated occlusion shutter) so as to occlude the corresponding real objects (122A,124A, and/or126A). This may be desirable in, for example, instances where the virtual object acts as a visual replacement for the corresponding real object (such as in an interactive storytelling application where an inanimate real object becomes a “living” character). In some examples, real objects (e.g.,122A,124A,126A) may be associated with virtual content or helper data that may not necessarily constitute virtual objects. Virtual content or helper data can facilitate processing or handling of virtual objects in the mixed reality environment. For example, such virtual content could include two-dimensional representations of corresponding real objects; custom asset types associated with corresponding real objects; or statistical data associated with corresponding real objects. This information can enable or facilitate calculations involving a real object without incurring unnecessary computational overhead. In some examples, the presentation described above may also incorporate audio aspects. For instance, in MRE150, virtual monster132could be associated with one or more audio signals, such as a footstep sound effect that is generated as the monster walks around MRE150. As described further below, a processor of mixed reality system112can compute an audio signal corresponding to a mixed and processed composite of all such sounds in MRE150, and present the audio signal to user110via one or more speakers included in mixed reality system112and/or one or more external speakers. Example Mixed Reality System Example mixed reality system112can include a wearable head device (e.g., a wearable augmented reality or mixed reality head device) comprising a display (which may include left and right transmissive displays, which may be near-eye displays, and associated components for coupling light from the displays to the user's eyes); left and right speakers (e.g., positioned adjacent to the user's left and right ears, respectively); an inertial measurement unit (IMU)(e.g., mounted to a temple arm of the head device); an orthogonal coil electromagnetic receiver (e.g., mounted to the left temple piece); left and right cameras (e.g., depth (time-of-flight) cameras) oriented away from the user; and left and right eye cameras oriented toward the user (e.g., for detecting the user's eye movements). However, a mixed reality system112can incorporate any suitable display technology, and any suitable sensors (e.g., optical, infrared, acoustic, LIDAR, EOG, GPS, magnetic). In addition, mixed reality system112may incorporate networking features (e.g., Wi-Fi capability) to communicate with other devices and systems, including other mixed reality systems. 
Mixed reality system112may further include a battery (which may be mounted in an auxiliary unit, such as a belt pack designed to be worn around a user's waist), a processor, and a memory. The wearable head device of mixed reality system112may include tracking components, such as an IMU or other suitable sensors, configured to output a set of coordinates of the wearable head device relative to the user's environment. In some examples, tracking components may provide input to a processor performing a Simultaneous Localization and Mapping (SLAM) and/or visual odometry algorithm. In some examples, mixed reality system112may also include a handheld controller300, and/or an auxiliary unit320, which may be a wearable beltpack, as described further below. FIGS.2A-2Dillustrate components of an example mixed reality system200(which may correspond to mixed reality system112) that may be used to present an MRE (which may correspond to MRE150), or other virtual environment, to a user.FIG.2Aillustrates a perspective view of a wearable head device2102included in example mixed reality system200.FIG.2Billustrates a top view of wearable head device2102worn on a user's head2202.FIG.2Cillustrates a front view of wearable head device2102.FIG.2Dillustrates an edge view of example eyepiece2110of wearable head device2102. As shown inFIGS.2A-2C, the example wearable head device2102includes an example left eyepiece (e.g., a left transparent waveguide set eyepiece)2108and an example right eyepiece (e.g., a right transparent waveguide set eyepiece)2110. Each eyepiece2108and2110can include transmissive elements through which a real environment can be visible, as well as display elements for presenting a display (e.g., via imagewise modulated light) overlapping the real environment. In some examples, such display elements can include surface diffractive optical elements for controlling the flow of imagewise modulated light. For instance, the left eyepiece2108can include a left incoupling grating set2112, a left orthogonal pupil expansion (OPE) grating set2120, and a left exit (output) pupil expansion (EPE) grating set2122. Similarly, the right eyepiece2110can include a right incoupling grating set2118, a right OPE grating set2114and a right EPE grating set2116. Imagewise modulated light can be transferred to a user's eye via the incoupling gratings2112and2118, OPEs2114and2120, and EPE2116and2122. Each incoupling grating set2112,2118can be configured to deflect light toward its corresponding OPE grating set2120,2114. Each OPE grating set2120,2114can be designed to incrementally deflect light down toward its associated EPE2122,2116, thereby horizontally extending an exit pupil being formed. Each EPE2122,2116can be configured to incrementally redirect at least a portion of light received from its corresponding OPE grating set2120,2114outward to a user eyebox position (not shown) defined behind the eyepieces2108,2110, vertically extending the exit pupil that is formed at the eyebox. Alternatively, in lieu of the incoupling grating sets2112and2118, OPE grating sets2114and2120, and EPE grating sets2116and2122, the eyepieces2108and2110can include other arrangements of gratings and/or refractive and reflective features for controlling the coupling of imagewise modulated light to the user's eyes. In some examples, wearable head device2102can include a left temple arm2130and a right temple arm2132, where the left temple arm2130includes a left speaker2134and the right temple arm2132includes a right speaker2136. 
An orthogonal coil electromagnetic receiver2138can be located in the left temple piece, or in another suitable location in the wearable head unit2102. An Inertial Measurement Unit (IMU)2140can be located in the right temple arm2132, or in another suitable location in the wearable head device2102. The wearable head device2102can also include a left depth (e.g., time-of-flight) camera2142and a right depth camera2144. The depth cameras2142,2144can be suitably oriented in different directions so as to together cover a wider field of view. In the example shown inFIGS.2A-2D, a left source of imagewise modulated light2124can be optically coupled into the left eyepiece2108through the left incoupling grating set2112, and a right source of imagewise modulated light2126can be optically coupled into the right eyepiece2110through the right incoupling grating set2118. Sources of imagewise modulated light2124,2126can include, for example, optical fiber scanners; projectors including electronic light modulators such as Digital Light Processing (DLP) chips or Liquid Crystal on Silicon (LCoS) modulators; or emissive displays, such as micro Light Emitting Diode (μLED) or micro Organic Light Emitting Diode (μOLED) panels coupled into the incoupling grating sets2112,2118using one or more lenses per side. The input coupling grating sets2112,2118can deflect light from the sources of imagewise modulated light2124,2126to angles above the critical angle for Total Internal Reflection (TIR) for the eyepieces2108,2110. The OPE grating sets2114,2120incrementally deflect light propagating by TIR down toward the EPE grating sets2116,2122. The EPE grating sets2116,2122incrementally couple light toward the user's face, including the pupils of the user's eyes. In some examples, as shown inFIG.2D, each of the left eyepiece2108and the right eyepiece2110includes a plurality of waveguides2402. For example, each eyepiece2108,2110can include multiple individual waveguides, each dedicated to a respective color channel (e.g., red, blue and green). In some examples, each eyepiece2108,2110can include multiple sets of such waveguides, with each set configured to impart different wavefront curvature to emitted light. The wavefront curvature may be convex with respect to the user's eyes, for example to present a virtual object positioned a distance in front of the user (e.g., by a distance corresponding to the reciprocal of wavefront curvature). In some examples, EPE grating sets2116,2122can include curved grating grooves to effect convex wavefront curvature by altering the Poynting vector of exiting light across each EPE. In some examples, to create a perception that displayed content is three-dimensional, stereoscopically-adjusted left and right eye imagery can be presented to the user through the imagewise light modulators2124,2126and the eyepieces2108,2110. The perceived realism of a presentation of a three-dimensional virtual object can be enhanced by selecting waveguides (and thus the corresponding wavefront curvatures) such that the virtual object is displayed at a distance approximating a distance indicated by the stereoscopic left and right images. This technique may also reduce motion sickness experienced by some users, which may be caused by differences between the depth perception cues provided by stereoscopic left and right eye imagery, and the autonomic accommodation (e.g., object distance-dependent focus) of the human eye.
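As a simplified illustration of the waveguide selection just described, the sketch below picks whichever of two waveguide subsets (focal planes) has a focal distance closest, in diopters, to the depth implied by the stereoscopic left and right images. It is written in Python; the two focal distances, the dictionary and the function name are assumptions for illustration, not parameters of the eyepieces described above.

# Assumed focal distances for two waveguide subsets, in meters.
FOCAL_PLANES_M = {"near_waveguide_subset": 0.5, "far_waveguide_subset": 3.0}

def select_waveguide_subset(stereo_depth_m):
    """Pick the subset whose focal distance best matches the depth implied by
    the stereoscopic imagery, comparing in diopters (1/distance), which better
    reflects how accommodation mismatch is perceived."""
    target_diopters = 1.0 / stereo_depth_m
    return min(FOCAL_PLANES_M,
               key=lambda name: abs(1.0 / FOCAL_PLANES_M[name] - target_diopters))

print(select_waveguide_subset(0.8))   # nearer virtual object -> near subset
print(select_waveguide_subset(10.0))  # distant virtual object -> far subset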
FIG.2Dillustrates an edge-facing view from the top of the right eyepiece2110of example wearable head device2102. As shown inFIG.2D, the plurality of waveguides2402can include a first subset of three waveguides2404and a second subset of three waveguides2406. The two subsets of waveguides2404,2406can be differentiated by different EPE gratings featuring different grating line curvatures to impart different wavefront curvatures to exiting light. Within each of the subsets of waveguides2404,2406each waveguide can be used to couple a different spectral channel (e.g., one of red, green and blue spectral channels) to the user's right eye2206. (Although not shown inFIG.2D, the structure of the left eyepiece2108is analogous to the structure of the right eyepiece2110.) FIG.3Aillustrates an example handheld controller component300of a mixed reality system200. In some examples, handheld controller300includes a grip portion346and one or more buttons350disposed along a top surface348. In some examples, buttons350may be configured for use as an optical tracking target, e.g., for tracking six-degree-of-freedom (6DOF) motion of the handheld controller300, in conjunction with a camera or other optical sensor (which may be mounted in a head unit (e.g., wearable head device2102) of mixed reality system200). In some examples, handheld controller300includes tracking components (e.g., an IMU or other suitable sensors) for detecting position or orientation, such as position or orientation relative to wearable head device2102. In some examples, such tracking components may be positioned in a handle of handheld controller300, and/or may be mechanically coupled to the handheld controller. Handheld controller300can be configured to provide one or more output signals corresponding to one or more of a pressed state of the buttons; or a position, orientation, and/or motion of the handheld controller300(e.g., via an IMU). Such output signals may be used as input to a processor of mixed reality system200. Such input may correspond to a position, orientation, and/or movement of the handheld controller (and, by extension, to a position, orientation, and/or movement of a hand of a user holding the controller). Such input may also correspond to a user pressing buttons350. FIG.3Billustrates an example auxiliary unit320of a mixed reality system200. The auxiliary unit320can include a battery to provide energy to operate the system200, and can include a processor for executing programs to operate the system200. As shown, the example auxiliary unit320includes a clip2128, such as for attaching the auxiliary unit320to a user's belt. Other form factors are suitable for auxiliary unit320and will be apparent, including form factors that do not involve mounting the unit to a user's belt. In some examples, auxiliary unit320is coupled to the wearable head device2102through a multiconduit cable that can include, for example, electrical wires and fiber optics. Wireless connections between the auxiliary unit320and the wearable head device2102can also be used. In some examples, mixed reality system200can include one or more microphones to detect sound and provide corresponding signals to the mixed reality system. In some examples, a microphone may be attached to, or integrated with, wearable head device2102, and may be configured to detect a user's voice. In some examples, a microphone may be attached to, or integrated with, handheld controller300and/or auxiliary unit320. 
Such a microphone may be configured to detect environmental sounds, ambient noise, voices of a user or a third party, or other sounds. FIG.4shows an example functional block diagram that may correspond to an example mixed reality system, such as mixed reality system200described above (which may correspond to mixed reality system112with respect toFIG.1). As shown inFIG.4, example handheld controller400B (which may correspond to handheld controller300(a “totem”)) includes a totem-to-wearable head device six degree of freedom (6DOF) totem subsystem404A and example wearable head device400A (which may correspond to wearable head device2102) includes a totem-to-wearable head device 6DOF subsystem404B. In the example, the 6DOF totem subsystem404A and the 6DOF subsystem404B cooperate to determine six coordinates (e.g., offsets in three translation directions and rotation along three axes) of the handheld controller400B relative to the wearable head device400A. The six degrees of freedom may be expressed relative to a coordinate system of the wearable head device400A. The three translation offsets may be expressed as X, Y, and Z offsets in such a coordinate system, as a translation matrix, or as some other representation. The rotation degrees of freedom may be expressed as a sequence of yaw, pitch, and roll rotations, as a rotation matrix, as a quaternion, or as some other representation. In some examples, the wearable head device400A; one or more depth cameras444(and/or one or more non-depth cameras) included in the wearable head device400A; and/or one or more optical targets (e.g., buttons350of handheld controller400B as described above, or dedicated optical targets included in the handheld controller400B) can be used for 6DOF tracking. In some examples, the handheld controller400B can include a camera, as described above; and the wearable head device400A can include an optical target for optical tracking in conjunction with the camera. In some examples, the wearable head device400A and the handheld controller400B each include a set of three orthogonally oriented solenoids which are used to wirelessly send and receive three distinguishable signals. By measuring the relative magnitude of the three distinguishable signals received in each of the coils used for receiving, the 6DOF of the wearable head device400A relative to the handheld controller400B may be determined. Additionally, 6DOF totem subsystem404A can include an Inertial Measurement Unit (IMU) that is useful for providing improved accuracy and/or more timely information on rapid movements of the handheld controller400B. In some examples, it may become necessary to transform coordinates from a local coordinate space (e.g., a coordinate space fixed relative to the wearable head device400A) to an inertial coordinate space (e.g., a coordinate space fixed relative to the real environment), for example in order to compensate for the movement of the wearable head device400A relative to the coordinate system108.
For instance, such transformations may be necessary for a display of the wearable head device400A to present a virtual object at an expected position and orientation relative to the real environment (e.g., a virtual person sitting in a real chair, facing forward, regardless of the wearable head device's position and orientation), rather than at a fixed position and orientation on the display (e.g., at the same position in the right lower corner of the display), to preserve the illusion that the virtual object exists in the real environment (and does not, for example, appear positioned unnaturally in the real environment as the wearable head device400A shifts and rotates). In some examples, a compensatory transformation between coordinate spaces can be determined by processing imagery from the depth cameras444using a SLAM and/or visual odometry procedure in order to determine the transformation of the wearable head device400A relative to the coordinate system108. In the example shown inFIG.4, the depth cameras444are coupled to a SLAM/visual odometry block406and can provide imagery to block406. An implementation of the SLAM/visual odometry block406can include a processor configured to process this imagery and determine a position and orientation of the user's head, which can then be used to identify a transformation between a head coordinate space and another coordinate space (e.g., an inertial coordinate space). Similarly, in some examples, an additional source of information on the user's head pose and location is obtained from an IMU409. Information from the IMU409can be integrated with information from the SLAM/visual odometry block406to provide improved accuracy and/or more timely information on rapid adjustments of the user's head pose and position. In some examples, the depth cameras444can supply 3D imagery to a hand gesture tracker411, which may be implemented in a processor of the wearable head device400A. The hand gesture tracker411can identify a user's hand gestures, for example by matching 3D imagery received from the depth cameras444to stored patterns representing hand gestures. Other suitable techniques of identifying a user's hand gestures will be apparent. In some examples, one or more processors416may be configured to receive data from the wearable head device's 6DOF headgear subsystem404B, the IMU409, the SLAM/visual odometry block406, depth cameras444, and/or the hand gesture tracker411. The processor416can also send control signals to, and receive control signals from, the 6DOF totem system404A. The processor416may be coupled to the 6DOF totem system404A wirelessly, such as in examples where the handheld controller400B is untethered. Processor416may further communicate with additional components, such as an audio-visual content memory418, a Graphical Processing Unit (GPU)420, and/or a Digital Signal Processor (DSP) audio spatializer422. The DSP audio spatializer422may be coupled to a Head Related Transfer Function (HRTF) memory425. The GPU420can include a left channel output coupled to the left source of imagewise modulated light424and a right channel output coupled to the right source of imagewise modulated light426. GPU420can output stereoscopic image data to the sources of imagewise modulated light424,426, for example as described above with respect toFIGS.2A-2D. The DSP audio spatializer422can output audio to a left speaker412and/or a right speaker414.
The DSP audio spatializer422can receive input from processor419indicating a direction vector from a user to a virtual sound source (which may be moved by the user, e.g., via the handheld controller320). Based on the direction vector, the DSP audio spatializer422can determine a corresponding HRTF (e.g., by accessing a HRTF, or by interpolating multiple HRTFs). The DSP audio spatializer422can then apply the determined HRTF to an audio signal, such as an audio signal corresponding to a virtual sound generated by a virtual object. This can enhance the believability and realism of the virtual sound, by incorporating the relative position and orientation of the user relative to the virtual sound in the mixed reality environment—that is, by presenting a virtual sound that matches a user's expectations of what that virtual sound would sound like if it were a real sound in a real environment. In some examples, such as shown inFIG.4, one or more of processor416, GPU420, DSP audio spatializer422, HRTF memory425, and audio/visual content memory418may be included in an auxiliary unit400C (which may correspond to auxiliary unit320described above). The auxiliary unit400C may include a battery427to power its components and/or to supply power to the wearable head device400A or handheld controller400B. Including such components in an auxiliary unit, which can be mounted to a user's waist, can limit the size and weight of the wearable head device400A, which can in turn reduce fatigue of a user's head and neck. WhileFIG.4presents elements corresponding to various components of an example mixed reality system, various other suitable arrangements of these components will become apparent to those skilled in the art. For example, elements presented inFIG.4as being associated with auxiliary unit400C could instead be associated with the wearable head device400A or handheld controller400B. Furthermore, some mixed reality systems may forgo entirely a handheld controller400B or auxiliary unit400C. Such changes and modifications are to be understood as being included within the scope of the disclosed examples. Session Manager MR systems may be uniquely positioned to enable interactive virtual collaboration between users. Because MR systems may present virtual content three-dimensionally and in a user's physical environment, MR collaboration systems and methods may enable remote collaboration that can be at least as effective as local collaboration. In some embodiments, MR collaboration can allow users to see and/or manipulate virtual content in three-dimensional space. For example, a first user may launch an MR collaboration session and may see two virtual 3D models, a text document, and a messaging interface. A second user may join the session locally (e.g., the second user may walk into the same room as the first user), and the second user may see the same two virtual 3D models, text document, and messaging interface in the same location as the first user. In some embodiments, a third user may join the session remotely (e.g., the third user may not be in the same room as the first and the second users), and the third user may see the two virtual 3D models, text document, and messaging interface in the third user's environment. In some embodiments, the virtual content may share spatial relationships with each other (e.g., the virtual content may be arranged the same way) for all session users. 
In some embodiments, MR collaboration may allow users in the same physical space to leverage the shared physical context to enjoy more meaningful shared experiences involving virtual content. In some embodiments, displaying and/or synchronizing virtual content across multiple MR systems may pose challenges. For example, it can be beneficial to develop systems and methods for ensuring each MR system displays shared virtual content in a manner that is consistent with other MR systems in a session. It can also be beneficial to develop systems and methods that may enable cross-application collaboration (e.g., virtual content that may be generated using applications created by different developers). In some embodiments, it can be beneficial to develop systems and methods that may allow users that are local to each other (e.g., users that are in the same room) to collaborate with each other as well as with users that are remote (e.g., in a different room). In some embodiments, it can be beneficial to develop systems and methods that may enable collaboration sessions to persist over time such that session users may continue collaborating at a later time. In some embodiments, it can be beneficial to develop systems and methods that may enable content persistence such that a session user can continue working on virtual content even without collaborating live with other users. In some embodiments, a session may be broadly defined as a group of users (with identifiers) that can collaborate and share a series of experiences over time and space. In some embodiments, a session can include a communication and collaboration experience that provides network connectivity, common spatial references and a centralized user interface for chatting and sharing prisms with other MR users. Session participants can be remote, or local in the same physical location. In some embodiments, a session manager can include a centralized backend service that manages some or all activity within a session. In some embodiments, session manager can include one or more user-facing, front-end controls and/or expressions representing session manager and/or configured to receive user input (e.g., a menu and/or a session handle). In some embodiments, session manager can include a background service and/or daemon that orchestrates and manages various session events through various session states. Session manager may also drive the user experience by allowing users to be discovered and get connected with other users. In some embodiments, session manager may also manage various UI components such as a menu and/or session UI related states. In some embodiments, collaboration can be facilitated by configuring virtual content in a collaboration session to behave similarly to real objects in collaboration sessions. For example, in a “real” collaboration session, users may sit around a table with documents and/or objects. Users may refer to “this” document and/or “that” document by pointing at a particular document. In some embodiments, users in a real collaboration session may refer to objects using relational terms (e.g., that object to the right). This behavior may occur naturally to users as a result of years of conditioning and working physically with other people. It can therefore be desirable to develop systems and methods for MR collaboration to enable natural interactions between users and the content on which they are collaborating.
In some embodiments, MR collaboration sessions can enable users to refer to colocated virtual content (e.g., virtual content that may appear in the same position in a real environment to multiple users) as if it were real content present in the user's physical environment. In some embodiments, MR collaboration sessions can persist. For example, all users may exit a session, and a user may launch the same session several weeks later. In some embodiments, the user may see all virtual content in the state in which it existed (e.g., in the same relative positions and/or with the same edits) when the users previously exited the session. In some embodiments, a session can include a platform for presenting, synchronizing, managing, and/or storing virtual content used in a mixed reality collaboration session. For example, session users may have a recurring weekly meeting in which virtual content (e.g., word documents, 3D models, presentation slides, conversation history, etc.) is discussed and/or worked on. In some embodiments, users may leverage the platform of sessions to consolidate virtual content (which may be created by different developers) into a single virtual space that may persist over time. For example, loading a single session instance may present to a user a 3D model (generated using a first application created by a first developer), a text document describing goals and/or changes to the 3D model (generated using a second application created by a second developer), and a conversation history between session users related to this session. This virtual content may persist across time and across session users, such that the same user or a different session user may load the session and see the same session contents as any other session user. In some embodiments, a session may enable user presence flexibility (e.g., local users may share virtual content placement in their local space, but remote users may also see virtual content with the same spatial relationships in their remote space). In some embodiments, a session may enable capability flexibility. For example, capabilities (e.g., corresponding to third-party applications) can be interacted with/enabled/disabled without leaving a centralized session platform. In some embodiments, applications (e.g., third-party applications) may leverage the session platform to forgo building proprietary sharing platforms that may not be compatible with other apps. In some embodiments, a session may enable temporal flexibility. For example, users may access sessions at different times, and a live call with other users may not be necessary. In some embodiments, changes made by users can be synchronized such that the change may be reflected for other session users (whether they are currently in the session or enter the session at a later time). In some embodiments, a session may include virtual content shared with one or more users over time. A session may have one or more owners, and in some embodiments, a user who created the session may be considered a session owner. A session may have one or more participants who may have access to the session. In some embodiments, a session owner may control what participants may join the session. In some embodiments, a session may have a session identifier. In some embodiments, each user (e.g., owner or participant) may have a user identifier. In some embodiments, a session may include one or more user avatars, which may represent a remote user's positioning relative to other objects in a session.
In some embodiments, a session may include location data (e.g., location data corresponding to each user, location data corresponding to locations the session has been opened in, etc.). Location data may include persistent coordinate data. In some embodiments, location data may include one or more transforms (e.g., one or more transformation matrices), which may relate a position to persistent coordinate data. In some embodiments, a session can include one or more capabilities. A session capability may include one or more features that users can select and/or enable in a session. For example, virtual object sharing may be considered a session capability. In some embodiments, determining whether users are local to other users may be considered a session capability. In some embodiments, projecting a user avatar may be considered a session capability. In some embodiments, casting a user's screen to other users may be considered a session capability. In some embodiments, a capability can have one or more capability instances (e.g., a capability can have multiple instances running at the same time). For example, two virtual objects may be shared with users in a session, and each virtual object may be considered a separate capability instance. In some embodiments, a session may be persistent. For example, a session may continue to exist even after all users have exited a session. In some embodiments, a session may continue to store session information such as session capabilities used (e.g., sharing a virtual object, what position the virtual object was in, etc.), user locations, user identifications, etc. Persistent sessions may facilitate long-term collaboration between users. For example, users may continue where they left off without having to rearrange their virtual workspace to their preference. In some embodiments, session persistence may enable a different user to enter the session at a later time and see virtual content arranged as it was when a previous user exited the session. FIGS.5A-5Cillustrate an exemplary MR collaboration session, according to some embodiments.FIG.5Aillustrates an exemplary mixed reality collaboration session where users508a,508b, and508cmay be at a first location (e.g., a first room) together.FIG.5Billustrates an exemplary mixed reality collaboration session where users508dand508emay be at a second location (e.g., a second room) together.FIG.5Cillustrates an exemplary mixed reality collaboration session where a session handle has been moved. In some embodiments, users508a,508b,508c,508d, and508emay all be part of the same mixed reality collaboration session500. In some embodiments, a collaboration session can include a session handle502a(which may be a virtual object). Session handle502amay serve as a local anchor for a session. For example, all session users in the same location (e.g., users508a,508b, and508cmay be considered in the same location if they share common persistent coordinate data) may be presented virtual content positioned relative to session handle502a, which may give the virtual content the appearance of being located in a particular location and orientation in the real world, similar to a real/physical object. In some embodiments, session handle502amay be positioned relative to persistent coordinate data (e.g., using a transform). In some embodiments, users508a,508b, and508cmay be using canonical persistent coordinate data, which may enable consistent placement of session handle502ain each user's MR system. 
In some embodiments, users508a,508b, and508cmay all see session handle502aat the same location (e.g., the users may all see session handle502aon the floor at the same location). In some embodiments, whether users can be considered local to each other may be determined using persistent coordinate data. For example, an MR system for user508amay receive (e.g., from one or more remote servers) canonical persistent coordinate data based on an identified environment for user508a. An MR system for user508amay use location data (e.g., GPS, WiFi, and/or cellular data) and/or image recognition data (e.g., recognizing a known environment by comparing captured images with images of known environments) to identify an environment for user508a. In some embodiments, an MR system for user508amay transmit its received persistent coordinate data to other MR systems in a session (e.g., an MR system for user508b). In some embodiments, other MR systems in a session may receive canonical persistent coordinate data and compare the transmitted data received from other MR systems with canonical persistent coordinate data already in use (and/or canonical persistent coordinate data received from one or more remote servers). If it is determined (e.g., using unique identifiers) that one or more instances of canonical persistent coordinate data are shared between MR systems in a session, it can be determined that the MR systems are local to each other. In some embodiments, if MR systems do not share instances of canonical persistent coordinate data, it may be determined that the MR systems are remote from each other. In some embodiments, a session handle (e.g., session handle502a) may be displayed in relation to one or more shared instances of canonical persistent coordinate data, which may enable session handle502ato be presented in the same location to users508a,508b, and508c. In some embodiments, session500can include a shared virtual object504a. Shared virtual object504amay be considered a session capability instance. In some embodiments, users508a,508b, and508cmay all see virtual object504ain the same location (e.g., the users may all see virtual object504aat the end of a real table). In some embodiments, shared virtual object504amay be positioned relative to session handle502a(e.g., using a transform). In some embodiments, shared virtual object504amay be positioned relative to persistent coordinate data (e.g., canonical persistent coordinate data). In some embodiments, a user (e.g., user508c) may manipulate shared virtual object504a. For example, user508cmay move object504afrom the edge of the table to the center of the table. In some embodiments, users508aand508bmay also see object504amove from the edge of the table to the center of the table. In some embodiments, if a user (e.g., user508b) points to a portion of object504a(e.g., the helmet), other users (e.g.,508aand508c) may also see user508bas pointing at the same portion of object504a. In some embodiments, session handle502amay also be moved. For example, inFIG.5C, user508amay move session handle502ato the left. In some embodiments, any virtual content displayed as part of a session may also move, thereby maintaining the same relative positioning to session handle502a. For example, as session handle502ais moved to the left, object504amay also be moved to the left by the same amount. In some embodiments, moving a session handle at one location (e.g., session handle502a) may not move a session handle at a different location (e.g., session handle502b). It can be beneficial to allow each group of local users to manage their own session handle placement. For example, because virtual content may be positioned relative to a session handle, each local group may determine an optimal location for their virtual content for their respective local physical environments.
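As a simplified illustration of the locality determination described above, the sketch below treats each MR system's canonical persistent coordinate data as a set of unique frame identifiers and considers two systems local to each other when the sets intersect. It is written in Python; the identifier values and the function name are assumptions for illustration only.

def are_local(frame_ids_a, frame_ids_b):
    """Two MR systems are treated as local to each other if they share at least
    one instance of canonical persistent coordinate data, identified here by
    unique frame identifiers."""
    return bool(set(frame_ids_a) & set(frame_ids_b))

# Assumed identifiers received from a remote server for each user's environment.
frames_508a = {"pcf-17", "pcf-18", "pcf-19"}   # first room
frames_508b = {"pcf-18", "pcf-19", "pcf-20"}   # also the first room
frames_508d = {"pcf-41", "pcf-42"}             # second room

print(are_local(frames_508a, frames_508b))   # True: these users share session handle 502a
print(are_local(frames_508a, frames_508d))   # False: user 508d sees session handle 502b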
It can be beneficial to allow each group of local users to manage their own session handle placement. For example, because virtual content may be positioned relative to a session handle, each local group may determine an optimal location for their virtual content for their respective local physical environments. Session500can involve users that may not share the same location. For example, inFIG.5B, users508dand508emay also be part of session500. In some embodiments, users508dand508emay be considered remote to users508a,508b, and508c(e.g., because there may not be common persistent coordinate data between users508d/508eand508a/508b/508c). In some embodiments, users508dand508emay see a second session handle502b. In some embodiments, each user (or group of users) that does not have common persistent coordinate data with other users (or groups of users) may see their own session handle. Shared virtual content displayed to users508dand508emay be displayed relative to session handle502b. For example, shared virtual object504bmay correspond to object504a. In some embodiments, object504bmay be positioned in the same spot relative to session handle502bas object504ais positioned relative to session handle502a. In some embodiments, if object504ais moved relative to session handle502a, object504bmay also move relative to session handle502b(and vice versa). In some embodiments, session handle502bmay not move if session handle502ais moved. This may enable local users to manage how session contents are presented to the local group of users. In some embodiments, session500can include a user avatar506e. In some embodiments, user avatar506ecan represent a user in session500that may be remote to other users in the session. For example, users508a,508b, and508cmay be considered local to each other (e.g., because they may share persistent coordinate data), and user508emay be considered remote from users508a,508b, and508c(e.g., because user508emay not share persistent coordinate data with the other users). In some embodiments, user508e(inFIG.5B) may also be part of session500, and user avatar506emay correspond to user508e. In some embodiments, user avatar506emay enable user508eto collaborate with users508a,508b, and508c. In some embodiments, avatar506emay mirror one or more movement of user508e. For example, as user508eapproaches session handle502b, user avatar506emay approach session handle502a, thereby maintaining the same relative positioning between user508eand session handle502b. In some embodiments, user508emay point to object504b, and avatar506emay correspondingly point to object504aat the same location. Similarly, avatar506bmay represent user508b, and avatar506amay represent user508a. As user508aapproaches object504a, avatar506amay also approach object504baccordingly. In some embodiments, a remote user may not broadcast an avatar to other users. For example, user508dmay be remote to users508a,508b, and508c, but user508dmay not project a corresponding avatar for session handle502a. In some embodiments, session persistence may allow users to dynamically localize to different session locations. For example, users508a,508b, and508cmay be in a first room, and users508dand508emay be in a second room, which may be down the hall from the first room. In some embodiments, user508amay leave the first room, walk down the hall and enter the second room, and virtual content may be displayed to user508arelative to session handle502b. 
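The avatar behavior described above (a remote user's offset from their session handle is reproduced as the avatar's offset from the local session handle) can be expressed as re-basing one relative transform onto another frame. A sketch under the same 4x4-matrix assumptions as the earlier example; the helper names and values are illustrative:

```python
import numpy as np

def translation(x: float, y: float, z: float) -> np.ndarray:
    """4x4 homogeneous transform that only translates."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def relative_pose(world_from_handle: np.ndarray, world_from_user: np.ndarray) -> np.ndarray:
    """Pose of a user expressed in that user's own session-handle frame."""
    return np.linalg.inv(world_from_handle) @ world_from_user

def avatar_world_pose(local_world_from_handle: np.ndarray,
                      remote_handle_from_user: np.ndarray) -> np.ndarray:
    """Place the remote user's avatar at the same offset from the local handle."""
    return local_world_from_handle @ remote_handle_from_user

# Remote side (e.g., user 508e near session handle 502b): only the
# handle-relative pose needs to be transmitted, not the raw world pose.
handle_from_user = relative_pose(translation(5.0, 0.0, 2.0),   # handle 502b
                                 translation(5.5, 0.0, 2.0))   # user 508e

# Local side (e.g., near session handle 502a): apply the same relative pose to
# the local handle. Re-running this after the local handle is moved keeps the
# avatar (and any shared content) at the same offset from it.
avatar_pose = avatar_world_pose(translation(1.0, 0.0, -2.0), handle_from_user)
print(avatar_pose[:3, 3])   # [1.5, 0.0, -2.0]: half a meter to the handle's +x side
```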
In some embodiments, each MR system used by a user may periodically poll the user's location (e.g., using GPS data and/or image recognition). In some embodiments, an MR system may trigger a new location query (e.g., by using geofencing). FIG.6illustrates an exemplary session manager architecture, according to some embodiments. In some embodiments, session manager604may run on MR system602, which may include one or more computer systems and can correspond to MR systems112,200. In some embodiments, session manager604can include a process, sub-process, thread, and/or service. In some embodiments, session manager604can include one or more data structures configured to store information. In some embodiments, session manager604can include a service (e.g., a background operating system service). In some embodiments, a process, sub-process, thread, and/or service of session manager604can be configured to continually run (e.g., in the background) while an operating system of a host system is running. In some embodiments, session manager604can include an instantiation of a parent background service, which may serve as a host process to one or more background processes and/or sub-processes. In some embodiments, session manager604can include a sub-process of a parent process. In some embodiments, session manager604can include a thread of a parent process. Session manager604may include one or more session instances606aand/or606b. In some embodiments, a session instance can correspond to an MR collaboration session (e.g., session500). In some embodiments, a session instance may manage information used in an MR collaboration session. In some embodiments, a session instance may include one or more data structures configured to store information. In some embodiments, a session instance may include one or more processes, sub-processes, threads, and/or services. In some embodiments, one or more session instances may be stored at one or more remote servers. In some embodiments, session instances may be encrypted before it is stored (locally at an MR device or at one or more remote servers). In some embodiments, a session instance may be configured to communicate with one or more capability instances. For example, session instance606bmay be configured to communicate with capability instances608band608c. A capability instance may correspond to one or more session capabilities. For example, capability instance608bmay correspond to shared object504a. In some embodiments, a capability instance may include one or more data structures configured to store information. In some embodiments, a capability instance may include one or more processes, sub-processes, threads, and/or services. In some embodiments, a capability instance can be configured to communicate with one or more connectivity services, such as application connectivity platform610aand/or collaboration core610b. In some embodiments, application connectivity platform610aand/or collaboration core610bcan include a process, sub-process, thread, and/or service. In some embodiments, application connectivity platform610aand/or collaboration core610bcan include one or more data structures configured to store information. In some embodiments, application connectivity platform610aand/or collaboration core610bcan include a service (e.g., a background operating system service). 
In some embodiments, a process, sub-process, thread, and/or service of application connectivity platform610aand/or collaboration core610bcan be configured to continually run (e.g., in the background) while an operating system of a host system is running. In some embodiments, application connectivity platform610aand/or collaboration core610bcan include an instantiation of a parent background service, which may serve as a host process to one or more background processes and/or sub-processes. In some embodiments, application connectivity platform610aand/or collaboration core610bcan include a sub-process of a parent process. In some embodiments, application connectivity platform610aand/or collaboration core610bcan include a thread of a parent process. In some embodiments, application connectivity platform610acan provide a low-latency communication pathway between MR systems in a colocation session to enable real-time virtual object colocation. In some embodiments, application connectivity platform610acan include one or more implementations of Web Real-Time Communication (“WebRTC”). For example, in some embodiments, data may be transmitted via one or more Twilio tracks for low-latency communication. In some embodiments, capability instances may utilize application connectivity platform610ato send and/or receive low-latency data (e.g., relational transform data as a shared virtual object moves) from MR systems in a session. In some embodiments, application connectivity platform610acan be configured to communicate with other application connectivity platforms running on other MR systems. In some embodiments, collaboration core610bcan provide data synchronization services for simultaneous edits. In some embodiments, collaboration core610bcan be configured to receive edit data from one or more capability instances. In some embodiments, collaboration core610bcan be configured to communicate with external synchronization services (e.g., Firebase) to synchronize simultaneous edits to virtual content in a session. In some embodiments, application connectivity platform610aand/or collaboration core610bmay communicate with session manager604. In some embodiments, session manager604may provide privileged information directly to application connectivity platform610aand/or collaboration core610b(e.g., user identification data). It can be beneficial to shield privileged information from capability instances because a capability instance may be developed by an unknown developer, which may pose a security risk to the privileged data. Although application connectivity platform610aand collaboration core610bare depicted as separate services, it is also contemplated that functions provided by each could be provided as a single service or as two or more services. In some embodiments, session manager604may communicate with one or more remote servers and/or with one or more MR systems to synchronize session instances. For example, a second MR system may initiate a session and invite MR system602to participate in the session. In some embodiments, session manager604may create a new session instance corresponding to the newly joined session. In some embodiments, the new session instance may be a copy of a session instance on the second MR system. In some embodiments, a session instance may be received from one or more remote servers. 
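One way to picture the relationships described above (a session manager hosting session instances, each session instance talking to capability instances and to the connectivity services) is a small skeleton. This is an assumption-laden sketch, not the actual service interfaces; the class and method names are illustrative only:

```python
from dataclasses import dataclass, field
from typing import Protocol

class ConnectivityService(Protocol):
    """Stand-in for the application connectivity platform / collaboration core."""
    def send(self, topic: str, payload: dict) -> None: ...

@dataclass
class CapabilityInstance:
    """One running capability, e.g. a single shared virtual object."""
    capability: str                 # e.g. "shared_object", "avatar", "cast"
    state: dict = field(default_factory=dict)

    def update(self, changes: dict, link: ConnectivityService) -> None:
        # Apply a local edit, then let the connectivity layer fan it out to peers.
        self.state.update(changes)
        link.send(self.capability, changes)

@dataclass
class SessionInstance:
    """Per-session bookkeeping managed by the session manager."""
    session_id: str
    users: list[str] = field(default_factory=list)
    capabilities: list[CapabilityInstance] = field(default_factory=list)

@dataclass
class SessionManager:
    """Background service holding zero or more session instances."""
    sessions: dict[str, SessionInstance] = field(default_factory=dict)

    def join(self, session_id: str, user: str) -> SessionInstance:
        # Create the instance on first join (or copy one received from a peer
        # or remote server); afterwards just register the user.
        inst = self.sessions.setdefault(session_id, SessionInstance(session_id))
        if user not in inst.users:
            inst.users.append(user)
        return inst
```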
In some embodiments, session instance data may be transmitted to one or more remote servers (e.g., if a capability instance has been updated, it can be desirable to transmit the update to other session users). In some embodiments, session instance data can be transmitted to one or more remote servers at an end of a session (e.g., when the last user leaves a session), so that session data may be preserved and re-accessed at a later time. In some embodiments, session manager and/or a session instance may communicate with one or more services (e.g., one or more services provided by application connectivity platform610a) to synchronize session instance data with other session instances (that may be stored at another MR system or a remote server). In some embodiments, session manager and/or a session instance may communicate with one or more services to establish a real-time and/or low-latency communication link with one or more remote end points (e.g., other MR systems in a session). FIG.7illustrates an exemplary session instance architecture, according to some embodiments. In some embodiments, session instance702may correspond to session instance604aand/or604b. In some embodiments, session instance702can include one or more data structures, which can be configured to store one or more additional data structures (e.g., capabilities module704, participants module708, locations module712, and/or presence module716). Capabilities module704may manage data and/or data structures corresponding to one or more capability instances in a session. For example, instance706amay correspond to a virtual object. In some embodiments, instance706amay include transform data, which may relate the virtual object's position to persistent coordinate data and/or one or more session handle locations. In some embodiments, instance706amay include one or more references to a collaboration core service. In some embodiments, references to a collaboration core service may enable instance706ato be properly notified and/or updated if a change is made to instance706aby a user. In some embodiments, instance706amay include application connectivity platform data (e.g., where data should be sent to, what pipes should be used, etc.). In some embodiments, capabilities module704may be configured to communicate with one or more capability instances (e.g., capability instance608a). In some embodiments, session instance702may include participants module708. Participants module708may manage data and/or data structures corresponding to one or more users in a session. For example, user710amay include an identifier for an MR system used by a user. In some embodiments, user710amay include avatar data (e.g., appearance, size, color, etc.). In some embodiments, user710amay include location data. In some embodiments, location data can include GPS data, WiFi data, cellular data, persistent coordinate data, etc. In some embodiments, session instance702may include locations module712. Locations module712may manage data and/or data structures corresponding to one or more locations in a session. For example, location714amay include persistent coordinate data, transformation data, data corresponding to a floor plane, etc. In some embodiments, location714amay correspond to a user location. In some embodiments, location714amay correspond to a session handle location. In some embodiments, session instance702may include presence module716. 
Presence module716may manage data and/or data structures corresponding to local and/or remote status of one or more users. For example, instance718amay indicate that a first user is remote from a second user, and a third user is local to the second user. In some embodiments, instance718amay include data used for communication between users (e.g., using application connectivity platform610a). Systems, methods, and computer-readable media are disclosed. According to some examples, a system comprises: a wearable device comprising a transmissive display; and one or more processors configured to execute a method comprising: receiving persistent coordinate data; presenting a first virtual session handle to a first user at a first position via the transmissive display of a wearable device, wherein the first position is determined based on the persistent coordinate data; presenting a virtual object to the first user at a second position via the transmissive display, wherein the second position is determined based on the first position; receiving location data from a second user, wherein the location data relates a position of the second user to a position of a second virtual session handle; and presenting a virtual avatar to the first user at a third position via the transmissive display, wherein the virtual avatar corresponds to the second user, wherein the third position is determined based on the location data, and wherein the third position is further determined based on the first position. In some examples, the method further comprises: receiving an input from the second user; and in response to receiving the input from the second user, presenting the virtual object to the first user at a fourth position via the transmissive display. In some examples, the virtual object is presented to the first user at the second position in response to an input from the first user, and the method further comprises transmitting the input from the first user to the second user. In some examples, the method further comprises: receiving an input from the first user; in response to receiving the input from the first user, presenting the first session handle to the first user at a fourth position; and presenting the virtual object to the first user at a fifth position via the transmissive display, the fifth position determined based on the fourth position. In some examples, the method further comprises transmitting the input from the first user to the second user. In some examples, the method further comprises: storing in a memory the first position, the second position, and the third position associated with a session instance at a first time; and receiving from the memory the first position, the second position, and the third position at a second time later than the first time. 
According to some examples, a method comprises: receiving persistent coordinate data; presenting a first virtual session handle to a first user at a first position via a transmissive display of a wearable device, wherein the first position is determined based on the persistent coordinate data; presenting a virtual object to the first user at a second position via the transmissive display, wherein the second position is determined based on the first position; receiving location data from a second user, wherein the location data relates a position of the second user to a position of a second virtual session handle; and presenting a virtual avatar to the first user at a third position via the transmissive display, wherein the virtual avatar corresponds to the second user, wherein the third position is determined based on the location data, and wherein the third position is further determined based on the first position. In some examples, the method further comprises: receiving an input from the second user; and in response to receiving the input from the second user, presenting the virtual object to the first user at a fourth position via the transmissive display. In some examples, the virtual object is presented to the first user at the second position in response to an input from the first user, and the method further comprises transmitting the input from the first user to the second user. In some examples, the method further comprises: receiving an input from the first user; in response to receiving the input from the first user, presenting the first session handle to the first user at a fourth position; and presenting the virtual object to the first user at a fifth position via the transmissive display, the fifth position determined based on the fourth position. In some examples, the method further comprises transmitting the input from the first user to the second user. In some examples, the method further comprises: storing in a memory the first position, the second position, and the third position associated with a session instance at a first time; and receiving from the memory the first position, the second position, and the third position at a second time later than the first time. According to some examples, a non-transitory computer-readable medium stores instructions that, when executed by one or more processors, cause the one or more processors to execute a method comprising: receiving persistent coordinate data; presenting a first virtual session handle to a first user at a first position via a transmissive display of a wearable device, wherein the first position is determined based on the persistent coordinate data; presenting a virtual object to the first user at a second position via the transmissive display, wherein the second position is determined based on the first position; receiving location data from a second user, wherein the location data relates a position of the second user to a position of a second virtual session handle; and presenting a virtual avatar to the first user at a third position via the transmissive display, wherein the virtual avatar corresponds to the second user, wherein the third position is determined based on the location data, and wherein the third position is further determined based on the first position. 
In some examples, the method further comprises: receiving an input from the second user; and in response to receiving the input from the second user, presenting the virtual object to the first user at a fourth position via the transmissive display. In some examples, the virtual object is presented to the first user at the second position in response to an input from the first user, and the method further comprises transmitting the input from the first user to the second user. In some examples, the method further comprises: receiving an input from the first user; in response to receiving the input from the first user, presenting the first session handle to the first user at a fourth position; and presenting the virtual object to the first user at a fifth position via the transmissive display, the fifth position determined based on the fourth position. In some examples, the method further comprises transmitting the input from the first user to the second user. In some examples, the method further comprises: storing in a memory the first position, the second position, and the third position associated with a session instance at a first time; and receiving from the memory the first position, the second position, and the third position at a second time later than the first time. According to some examples, a system comprises: a first wearable device comprising a first transmissive display and one or more sensors; and one or more processors configured to execute a method comprising: receiving a first input from a first session user associated with the first wearable device; in response to receiving the first input, generating a session instance, wherein the session instance is configured to store data corresponding to one or more capability instances, and wherein the session instance is further configured to store data corresponding to one or more session users; presenting a virtual session handle to the first session user at a first session handle position via the first transmissive display of the first wearable device; receiving a second input from the first session user; and in response to receiving the second input: generating a first capability instance associated with a process, the process comprising projecting a virtual object; presenting the virtual object to the first session user at a first object position via the first transmissive display; presenting the virtual object to a second session user at a second object position via a second transmissive display of a second wearable device; and storing data corresponding to the first session handle position, the first object position, the second object position, a first session user position, and a second session user position in the session instance, wherein the first session user position is determined via the one or more sensors of the first wearable device and the second session user position is determined via one or more sensors of the second wearable device. In some examples, the first object position is related to the first session handle position using transform data. In some examples, the transform data is stored in the session instance. In some examples, the method further comprises storing data corresponding to a localization status, wherein the localization status is based on the first session user position and the second session user position. 
In some examples, the second input is received at a first time, and the method further comprises: receiving a third input from the first session user at a second time later than the first time; and in response to receiving the third input: receiving data corresponding to the first object position and the first session user position; and presenting the virtual object to the first session user at the first object position. In some examples, the method further comprises: generating a second capability instance, the second capability instance associated with a second process, the second process comprising projecting a virtual avatar of the second session user; presenting the virtual avatar to the first session user via the first transmissive display; and storing data corresponding to the virtual avatar in the session instance. In some examples, the method further comprises generating a second capability instance, the second capability instance associated with a second process, the second process comprising casting a view associated with the first session user to the second session user. According to some examples, a method comprises: receiving a first input from a first session user; in response to receiving the first input, generating a session instance, wherein the session instance is configured to store data corresponding to one or more capability instances, and wherein the session instance is further configured to store data corresponding to one or more session users; presenting a virtual session handle to the first session user at a first session handle position via a first transmissive display of a first wearable device; receiving a second input from the first session user; and in response to receiving the second input: generating a first capability instance associated with a process, the process comprising projecting a virtual object; presenting the virtual object to the first session user at a first object position via the first transmissive display; presenting the virtual object to a second session user at a second object position via a second transmissive display of a second wearable device; and storing data corresponding to the first session handle position, the first object position, the second object position, a first session user position, and a second session user position in the session instance, wherein the first session user position is determined via one or more sensors of the first wearable device and the second session user position is determined via one or more sensors of the second wearable device. In some examples, the first object position is related to the first session handle position using transform data. In some examples, the transform data is stored in the session instance. In some examples, the method further comprises storing data corresponding to a localization status, wherein the localization status is based on the first session user position and the second session user position. In some examples, the second input is received at a first time, and the method further comprises: receiving a third input from the first session user at a second time later than the first time; and in response to receiving the third input: receiving data corresponding to the first object position and the first session user position; and presenting the virtual object to the first session user at the first object position. 
In some examples, the method further comprises: generating a second capability instance, the second capability instance associated with a second process, the second process comprising projecting a virtual avatar of the second session user; presenting the virtual avatar to the first session user via the first transmissive display; and storing data corresponding to the virtual avatar in the session instance. In some examples, the method further comprises generating a second capability instance, the second capability instance associated with a second process, the second process comprising casting a view associated with the first session user to the second session user. According to some examples, a non-transitory computer-readable medium stores instructions that, when executed by one or more processors, cause the one or more processors to execute a method comprising: receiving a first input from a first session user; in response to receiving the first input, generating a session instance, wherein the session instance is configured to store data corresponding to one or more capability instances, and wherein the session instance is further configured to store data corresponding to one or more session users; presenting a virtual session handle to the first session user at a first session handle position via a first transmissive display of a first wearable device; receiving a second input from the first session user; in response to receiving the second input: generating a first capability instance associated with a process, the process comprising projecting a virtual object; presenting the virtual object to the first session user at a first object position via the first transmissive display; presenting the virtual object to a second session user at a second object position via a second transmissive display of a second wearable device; and storing data corresponding to the first session handle position, the first object position, the second object position, a first session user position, and a second session user position in the session instance, wherein the first session user position is determined via one or more sensors of the first wearable device and the second session user position is determined via one or more sensors of the second wearable device. In some examples, the first object position is related to the first session handle position using transform data. In some examples, the transform data is stored in the session instance. In some examples, the method further comprises storing data corresponding to a localization status, wherein the localization status is based on the first session user position and the second session user position. In some examples, the second input is received at a first time, and the method further comprises: receiving a third input from the first session user at a second time later than the first time; and in response to receiving the third input: receiving data corresponding to the first object position and the first session user position; and presenting the virtual object to the first session user at the first object position. In some examples, the method further comprises: generating a second capability instance, the second capability instance associated with a second process, the second process comprising projecting a virtual avatar of the second session user; presenting the virtual avatar to the first session user via the first transmissive display; and storing data corresponding to the virtual avatar in the session instance. 
In some examples, the method further comprises generating a second capability instance, the second capability instance associated with a second process, the second process comprising casting a view associated with the first session user to the second session user. Although the disclosed examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. For example, elements of one or more implementations may be combined, deleted, modified, or supplemented to form further implementations. Such changes and modifications are to be understood as being included within the scope of the disclosed examples as defined by the appended claims.
11861804
DETAILED DESCRIPTION Before turning to the figures, which illustrate the exemplary embodiments in detail, it should be understood that the application is not limited to the details or methodology set forth in the description or illustrated in the figures. It should also be understood that the terminology is for the purpose of description only and should not be regarded as limiting. Referring generally to the figures, systems and methods for selectively smoothing artifacts, such as staircase artifacts, for medical surface meshes are shown and described. The systems and methods described herein rely upon a surface mesh generated from computed tomography (CT) or magnetic resonance imaging (MRI) data. The target anatomy is isolated using either manual or computer-aided segmentation techniques. The image data is then transformed into a surface mesh by any known surface mesh method, such as, but not limited to, the Marching Cubes algorithm or marching tetrahedrons methods. As a result of these processes, undesired artifacts can arise that produce model error. The systems and methods described herein generally identify these features and assign weighted values to each vertex in the surface mesh based on its proximity to said features. The model and its weighted values are then used, according to several embodiments described herein, to generate a new surface mesh or modify the existing surface mesh data. The outputted smoothed surface mesh can be used for more accurate surgical visualization and planning, intraoperative navigation, and control of robotically-assisted surgical devices. The systems and methods described herein more accurately and more fully remove staircase artifacts from surface meshes while better maintaining the original mesh model volume. The disclosed embodiments thereby allow for more accurate surface models of anatomy, especially in high curvature anatomy applications (e.g., acetabular rim, ankle bones, vertebrae, etc.). For example, as described in detail below, the analysis of artifacts in relation to three or more reference directions (reference vectors) allows more complete identification of artifact vertices. This disclosure also describes more efficient and robust methods for running surface-smoothing computational processes not previously described in the prior art. These include improved methods for normalizing artifact characteristics and vertex weights for use across different-sized surface meshes or surface mesh features. A surface mesh can be described herein by the set of vertices and faces that define the surface mesh. In some embodiments, the surface mesh can also be described by sets of vertices, edges, and/or faces. A vertex of a surface mesh is defined by a location in a three dimensional space as defined by the data structure of the surface mesh. An edge is defined as the vector between two vertices. The faces of a surface mesh are geometric surfaces defined by a plurality of neighboring vertices. Alternatively, a face can be described in terms of its associated edges, or a combination of its associated edges and vertices. Any adjustment of a given vertex described herein refers to adjustments made to the location of said vertex. Additionally, adjustments to a vertex imply a change in geometric shape or orientation of the vertex's associated edges and faces. 
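A surface mesh as described above is commonly held as an array of vertex positions plus an array of faces that index into it, with edges and adjacency derived from the face list. The following sketch assumes triangular faces and a NumPy layout; the patent text does not prescribe a particular data structure:

```python
import numpy as np

class SurfaceMesh:
    """Triangle mesh: N x 3 vertex positions and M x 3 vertex indices per face."""

    def __init__(self, vertices: np.ndarray, faces: np.ndarray):
        self.vertices = np.asarray(vertices, dtype=float)   # (N, 3) positions
        self.faces = np.asarray(faces, dtype=int)            # (M, 3) vertex indices

    def face_normals(self) -> np.ndarray:
        """Unit normal of each triangular face."""
        v0, v1, v2 = (self.vertices[self.faces[:, i]] for i in range(3))
        n = np.cross(v1 - v0, v2 - v0)
        return n / np.linalg.norm(n, axis=1, keepdims=True)

    def vertex_to_faces(self) -> list[list[int]]:
        """For each vertex, the indices of its adjacent (neighboring) faces."""
        adj = [[] for _ in range(len(self.vertices))]
        for f, (a, b, c) in enumerate(self.faces):
            adj[a].append(f); adj[b].append(f); adj[c].append(f)
        return adj

    def edges(self) -> set[tuple[int, int]]:
        """Undirected edges, each stored as a sorted vertex-index pair."""
        e = set()
        for a, b, c in self.faces:
            e.update({tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))})
        return e
```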
In regards to the description of staircase artifact features herein, the staircase artifacts can be described by the associated vertices, edges, faces, or some combination thereof that compose the staircase feature. Descriptions of the size or magnitude of a staircase artifact are in reference to the magnitude of induced model error generated by the artifact. That is, a larger artifact is defined by a set of vertices that deviate in position from the ideal surface structure of the scanned anatomy by a greater magnitude. U.S. Pat. No. 8,010,180, titled “Haptic Guidance System and Method,” granted Aug. 30, 2011, which is hereby incorporated by reference herein in its entirety, describes an exemplary surgical system with which the presently described system and methods may be used. FIG.1depicts, according to one embodiment, an assistive surgical system100comprising a computing system105, a robotic device115, a display screen110, and a tracking device125. The system100, by way of robotic device115, provides feedback and control to a surgical tool120during a surgical procedure. In some embodiments, the robotic device115provides haptic feedback. In other embodiments, the robotic device115performs automated or autonomous movements. The surgical system100is also configured to incorporate pertinent anatomical data of target anatomy135for patient130to plan surgical procedures before operation, display on screen110, and set virtual control boundaries for robotic device115. In some embodiments, the assistive surgical system100is the haptic guidance system described in the aforementioned U.S. patent “Haptic Guidance System and Method”. Within the computing system105, there exist hardware and software for operation and control of surgical system100, which enable surgical system100to perform various functions related to surgical planning, navigation, image guidance, and/or tool guidance. The computing system105may also be configured to provide boundary conditions to the feedback device115. A user (such as a surgeon) may also use the computing system105for pre-operation planning and visualization for target anatomy135. Computing system105comprises a memory device that holds saved computer CT image files of the target anatomy135. In another embodiment, the computing system105accesses patient image data via an external database. Computing system105may store segmentation and surface-mesh software configured to convert said two-dimensional slices into three-dimensional representations. In another embodiment, the isolated surface mesh is pre-processed and available in memory of computing system105. In yet another embodiment, a surface mesh can be accessed via an external database. Computing system105may also contain software and hardware to selectively correct undesired model artifacts in said surface mesh. In one embodiment, the computing system105applies a feature-specific smoothing algorithm to identify and remove staircase artifacts. The surface mesh representation can then be displayed to a screen110for view. The smoothed surface mesh can also be used to establish virtual boundaries for providing feedback to the user via robotic device115, to control the robotic device115to perform automated or autonomous actions, to provide intraoperative surgical navigation (e.g., via display screen110or some other display, for example an augmented reality display), and/or provide other intraoperative functionality based on the smoothed surface mesh. 
In the example shown, the robotic device115is a surgical device configured to be manipulated by a user (such as a surgeon) to move a surgical tool120to perform a procedure on a patient, such as sculpting a surface of a bone to receive an implant. In one embodiment, the robotic device115provides haptic guidance to the surgeon to maintain the tool120within a predefined virtual boundary. The haptic object establishes a desired relationship between the anatomy and the tool120, such as desired position, orientation, velocity, and/or acceleration of the tool120relative to the anatomy135. In operation, when the surgeon moves the tool120in a manner that violates the desired relationship (such as when the tool120contacts a virtual boundary), the haptic device115provides haptic guidance in the form of tactile feedback (e.g., vibration) and/or force feedback (e.g., force and/or torque) to the surgeon. The haptic guidance may be experienced by the surgeon, for example, as resistance to further tool movement in the direction of the virtual boundary. As a result, the surgeon may feel as if the tool120has encountered a physical object, such as a wall. In this manner, the virtual boundary functions as a virtual cutting guide. Thus, the surgical system100limits the surgeon's ability to physically manipulate the robotic device115(e.g., by providing haptic guidance and/or a limit on user manipulation of the haptic device115) by implementing control parameters based on a relationship between the anatomy and a position, an orientation, a velocity, and/or an acceleration of a portion of the haptic device115, such as the tool120. In addition to haptic objects, the relationship may be based on predefined parameters, such as a predefined depth that limits total travel of the tool120. In other embodiments, the robotic device115is configured to perform automated and/or autonomous movements to facilitate performance of a surgical procedure. For example, the robotic device115may autonomously move the tool120relative to the anatomy135to modify (cut, drill, ream, etc.) the anatomy135with the tool120. In still other embodiments, the robotic device115is a handheld robotic device configured to control operation and movement of the tool120relative to the anatomy135. For example, the robotic device115may be configured to retract and/or stop operation of the tool120when the tool120reaches a virtual boundary, thereby preventing the tool120from operating on anatomical features outside the virtual boundary. These and other variations of robotically-assisted feedback, assistance, automation, etc. are possible in various embodiments. The tracking (or localizing) system125is a system generally configured to determine the pose (i.e., the position and orientation) of one or more objects during a surgical procedure to detect movement and capture poses of said objects. In some embodiments, the tracking system125determines the pose of surgical tool120in relation to the target anatomy135. The tracking system125can use any combination of hardware and software, such as, but not limited to, optical, electromagnetic, radio, or acoustic methods as are well known. FIG.2Aillustrates a surface mesh200of a patient anatomy. The anatomy can be of an organ or tissue of any nature, including bone, vascular, or muscular anatomy. Surface215inFIG.2Brepresents a smoothed surface mesh of a sample anatomy after being selectively smoothed by an embodiment of the present invention. 
The smoothed surface215ideally preserves small-detailed features of the scanned anatomy, such as small veins, protrusions, or any other anatomical feature, while removing staircase artifacts. This helps surface models more accurately represent the smooth nature of most biological anatomies. The surface mesh200is generated from CT image data for the target anatomy by any given surface mesh software. For example, 2-dimensional CT images may be arranged and aligned in a stack, and a 3-dimensional segmentation volume may be generated to intersect a surface of the target anatomy shown in each of the CT images in the stack. By nature of discretization and the layering of two-dimensional CT images into a 3D-format, the software construction may create three-dimensional artifacts, such as staircase artifacts205. For example, staircase artifacts205may be caused by jumps between adjacent CT image layers in the 3-D model. These artifacts are particularly problematic near areas of high curvature, such as portion210shown inFIG.2. It is noted that these staircase artifacts205create erroneous representations of the smooth-faced features of the scanned anatomy and introduce error in, for example, virtual boundary creation or surgical navigation. Artifacts205can be oriented in any of the three reference directions of the segmentation volume (such as the defined x, y, and z axes). Generically, the reference directions can be described in several ways, such as the defined two dimensions of the CT images plus the stack direction of the image slices. Referring now toFIGS.3-4, a staircase-aware surface smoothing system300is shown inFIG.3which may be configured to execute process400shown inFIG.4, according to exemplary embodiments. The smoothing system300may contain a processing circuit305and a communication interface350. The smoothing system300may be configured to selectively smooth staircase artifacts in surface meshes to generate smoothed surface meshes. In some embodiments, the smoothing system300may be implemented locally at the computing system105of the surgical system100. In some embodiments, the smoothing system300is provided at an imaging system, for example at a CT scanner and within image processing circuitry associated therewith. In some embodiments, the smoothing system300is provided remotely from an imaging system and from the surgical system100, for example on a cloud-based computing resource, at a processing center of a segmentation service provider, or on a personal computer of a health care provider (e.g., surgeon, technician, etc.). The smoothing system300may also be distributed across multiple computing resources in some embodiments. The staircase-aware surface smoothing system300, in some embodiments, communicates with an external system355via I/O communications interface350. Communication interface350can be or include wired or wireless communication interfaces (e.g., jacks, antennas, transmitters, receivers, transceivers, wire terminals, etc.) for conducting data communications between system300and external systems355. In various embodiments, communications via interface350may be direct (e.g., local wired or wireless communications) or via a communications network (e.g., a WAN, the internet, a cellular network, etc.). For example, interface350can include an Ethernet card and port for sending and receiving data via an Ethernet-based communications link or network. In another example, interface350can include a Wi-Fi transceiver for communicating via a wireless communications network. 
Multiple external systems355can be interfaced to I/O communication interface350based on various embodiments and can be implemented within the same or separate interfaces. In some embodiments, the external system355is an external storage device, and the communication interface350is configured to receive from the external storage device data for a given surface mesh or a command to smooth a given surface mesh. Communication interface350may also be configured to export the output smoothed surface mesh to the external storage device after processing. In some embodiments, external system355is a display, such as display110, or a processing circuit coupled to a display, in which the output smoothed surface mesh can be viewed on the display. In some embodiments, the external system355is a user input device, and communication interface350is a user interface. In some embodiments, the external device355is a display system. The display system may be configured to render a graphical representation of the output smoothed surface mesh on a display. The display system may be configured to view the smoothed surface mesh from several perspectives. The rendered graphical representation may also be integrated within a particular application configured to further manipulate or change the output surface mesh. In some embodiments, the external system355is a computing system for controlling the assistive surgical system100, such as computing system105. In some such embodiments, the communication interface350is configured receive data from computing system105to smooth a given surface mesh, and to export the output smoothed surface mesh back to the computing system105for use in the assistive surgical system100. Assistive surgical system100may use the smoothed surface mesh in a haptic feedback system. Assistive surgical system100may display the output smoothed surface mesh on display110. Assistive surgical system100may also export or store the smoothed surface mesh to an external server or storage device. In some embodiments, the smoothing system300may be provided within the assistive surgical system100or within computing system105. Communications interface350may be configured to use any interface included in computing system105for interfacing to devices connected to computing system105. For example, communications interface350may interface with a user input interface of the computing system105to receive and send information to a user. For haptic feedback control, smoothing system300may be configured to control a robotic device based on an output smoothed surface mesh by defining a smooth haptic boundary based on the smoothed surface mesh. Smoothing system300may be configured to provide or otherwise assist in smooth haptic feedback via the robotic device based on an interaction between the smooth haptic boundary and an interaction point associated with a surgical tool of the robotic device. The interaction point may be a location of a contact point on the surgical tool relative to an anatomy of the patient, the contact point being a part of the surgical tool that touches or affects the anatomy. The haptic feedback may define safe areas for the surgical tool and restricted areas of the surgical tool or contact point. The haptic feedback may restrict or impede movement of the surgical tool based on the safe and restricted areas. 
By using the smoothed surface mesh for haptic feedback, the haptic boundary can more accurately reflect the actual profile or desired boundary of the modeled anatomy, and may result in a smoother-feeling haptic feedback. Embodiments of haptic feedback control are further described in U.S. Pat. No. 8,010,180, which is incorporated by reference herein in its entirety. The processing circuit305comprises a processor310and a memory device315. Processor305may be or include one or more microprocessors, an application specific integrated circuit (ASIC), a circuit containing one or more processing components, a group of distributed processing components, circuitry for supporting a microprocessor, or other hardware configured for processing. According to an exemplary embodiment, processor305is configured to execute computer code stored in memory315to complete and facilitate the activities described herein. The memory device315(e.g., memory, memory unit, storage device) may include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present disclosure. The memory device315is communicably connected to the processor310to provide computer code or instructions to the processor310for executing at least some of the processes described herein. Moreover, the memory device315comprises or includes tangible, non-transient volatile memory or non-volatile memory. Accordingly, the memory device315may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described herein. For example, memory315is shown to include various modules which are computer code modules (e.g., executable code, object code, source code, script code, machine code, etc.) configured for execution by processor310. When executed by processor310, processing circuit305is configured to complete the activities described herein. Memory315is shown to include various modules for executing process400on processor310. Memory315includes an artifact identification module320, a weight assignment module325, and a smoothing algorithm module330. The artifact identification module320stores code that, when executed by the processor310, enables the processing circuit305to identify which vertices of a surface mesh are related to a staircase artifact. Weight assignment module325stores code that, when executed by the processor310, enables the processing circuit305to assign values to all vertices of a surface mesh based on their proximity to an artifact vertex. Smoothing algorithm module330stores code that, when executed by the processor310, enables the processing circuit305to apply a known smoothing algorithm to the vertices of a surface mesh in relation to their weighted values. The functions stored in modules320-330are described in greater detail below with respect to subsequent figures. Memory315also includes one or more buffers335-340for temporarily storing data during the smoothing process. Memory315includes an artifact buffer335for storing the identity of vertices declared artifact vertices. In some embodiments, artifact buffer335stores a master set of the identities all identified artifact vertices in a surface mesh. 
In some embodiments, artifact buffer335is partitioned to store identified artifact vertices based on the reference direction or directions by which they were identified, and may include data that associates each artifact vertex with the reference directions used to identify the vertex as an artifact vertex. Memory315also includes a weights buffer340for storing the assigned weighted values for all vertices of a surface mesh. The use of buffers335and340will be described in greater detail below with respect to subsequent figures. Memory315can also include database345to store and retrieve patient data, CT data, or surface mesh data. Database345may be any storage medium, server, or device. In some embodiments, database345contains temporary storage of surface mesh data while the surface mesh is being smoothed by system300. In some embodiments, database345is integrated into external systems355. Referring again toFIG.4, a flowchart of the staircase-aware smoothing process400performed by staircase smoothing system300is shown. Staircase smoothing system300can be configured to receive an input surface mesh (405), identify vertices associated with staircase artifacts (410), generate weighted values for each vertex in the input surface mesh based on the identified staircase artifacts (415), and selectively smooth the relevant surfaces using a smoothing algorithm (420). In some embodiments, staircase smoothing system300may also be configured to export the output surface mesh after completion of process400(step425), such as to the external system355via I/O interface350or to database345for storage or use in assistive surgical system100. Staircase smoothing system300receives an input surface mesh at step405. In some embodiments, system300receives the input surface mesh from computing system105of the assistive surgical system100. In some embodiments, system300receives the input surface mesh from a storage device, such as an external storage device or database345. In some embodiments, system300also receives a command to smooth the received input surface mesh. Staircase smoothing system300then implements an artifact identification process at step410. The system300generates a list of identified artifact vertices from the input surface mesh. In some embodiments, system300uses multiple reference vectors to calculate face orientations near a particular vertex. In some embodiments, the artifact identification process of step410is executed by the artifact identification module320and stores the resulting identities in artifact buffer335. The various functions performed at step410will be described in more detail in reference toFIG.5. After identifying artifact vertices at step410, staircase smoothing system300then generates a weighted value for each vertex at step415. The generated weighted values are indicative of the extent to which each vertex in a surface mesh should be smoothed based on its proximity to an artifact vertex identified by system300at step410. Proximity can be determined by a distance calculation, which can be of any known method, such as topological distance or Euclidean distance. In one embodiment, the weight value process415is executed by the weight assignment module325of system300and stores weight values in weights buffer340. In some embodiments, the weighted values are normalized to a scale between zero and one. Various implementations of the functions performed at415will be described in more detail in reference toFIG.8. Process400may include a smoothing algorithm process420. 
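A sketch of the proximity-based weighting of step415: starting from the artifact vertices identified at step410, walk outward over the mesh connectivity and give each vertex a weight that decays with topological distance, normalized to a zero-to-one scale. The linear falloff and the cutoff radius below are illustrative assumptions; as noted above, Euclidean distance or another falloff could be used instead:

```python
from collections import deque
import numpy as np

def vertex_adjacency(faces: np.ndarray, n_vertices: int) -> list[set[int]]:
    """Vertex-to-vertex adjacency derived from a triangular face list."""
    adj = [set() for _ in range(n_vertices)]
    for a, b, c in faces:
        adj[a].update((b, c)); adj[b].update((a, c)); adj[c].update((a, b))
    return adj

def assign_weights(faces: np.ndarray, n_vertices: int,
                   artifact_vertices: set[int], radius: int = 3) -> np.ndarray:
    """Weight in [0, 1] per vertex: 1 at an artifact vertex, decaying linearly
    to 0 at `radius` edges away (multi-source breadth-first search)."""
    adj = vertex_adjacency(faces, n_vertices)
    dist = np.full(n_vertices, np.inf)
    queue = deque()
    for v in artifact_vertices:
        dist[v] = 0
        queue.append(v)
    while queue:                       # BFS gives topological (hop) distance
        v = queue.popleft()
        if dist[v] >= radius:
            continue
        for u in adj[v]:
            if dist[u] > dist[v] + 1:
                dist[u] = dist[v] + 1
                queue.append(u)
    # Vertices at or beyond `radius` (including unreachable ones) get weight 0,
    # i.e. they are left untouched by the subsequent smoothing step.
    return np.clip(1.0 - dist / radius, 0.0, 1.0)
```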
In smoothing process420, the weight values of each vertex are used to selectively smooth a surface mesh. The smoothing process can be any known smoothing algorithm, such as, but not limited to, Laplacian, Vollmer, or Taubin smoothing algorithms. Uniform smoothing algorithms generally adjust the position of each vertex based at least on the positions of its neighboring vertices. Thus, vertices that exist as outliers to their neighboring vertices receive adjusted positions that are closer in proximity to their neighboring vertices. In step420, the extent to which vertices are adjusted is tuned based at least in part on the weighted value assigned to each vertex. The smoothing process420may be executed by smoothing algorithm module330using values stored in the weights buffer340. The smoothing process420, in some embodiments, generates a new smoothed surface mesh data set. In other embodiments, the smoothing process420modifies an existing surface mesh in its current storage location, including, but not limited to, an internal storage buffer, an internal database, or an external system. The smoothing process420, in some embodiments, can be implemented by adapting a known general smoothing algorithm to accept weighted values from the weights buffer340, for example. In this embodiment, any known general smoothing algorithm can be used, such as Laplacian, Vollmer, or Taubin smoothing algorithms. The weighted values can generally be used to determine the extent to which a given vertex will be corrected by the smoothing method. For example, a small weight assigned to a vertex implies little to no correction will be made to the vertex, whereas a large weighted value indicates the vertex will be subject to the normal correction of the smoothing algorithm. An example of an adapted smoothing algorithm is described in reference toFIG.8. Referring now toFIG.5, a flowchart of the artifact identification process410is shown, according to an exemplary embodiment. In executing the artifact identification process410, staircase smoothing system300generates a list of identified artifact vertices in a surface mesh based on the calculated change in face orientation for each vertex. The list of identified artifact vertices can then be used by system300in subsequent steps of process410to generate weighted values based on the identified artifact vertices. In some embodiments, the list is saved in an artifact buffer335when completed by module320. Process410includes step500in which the staircase smoothing system300identifies a set of reference vectors to use to identify artifacts in the input surface mesh. In some embodiments, the set of reference vectors is chosen by a user via a user input device connected to I/O interface350. In some embodiments, the set of reference vectors are stored in memory315. In some embodiments, the coordinate vectors associated with the input surface mesh or input surface mesh data structure are used as the set of reference vectors (which may be referred to as the x, y and z directions). In some embodiments, the set of reference vectors is the two dimensions of the CT image and the stack direction of the CT image collection used to generate the surface mesh. A set of reference vectors may include more or less reference vectors than in the examples described herein (e.g., two reference vectors, four reference vectors, five reference vectors, etc.), and may be orthogonal to one another or may have some other relative orientation in various embodiments. 
For example, a set of reference vectors may be selected based on the orientation of typical or expected artifacts to be smoothed by the staircase smoothing system300. At step502, system300selects one of the reference vectors from the set of reference vectors to calculate a face orientation for each face of the input surface mesh at505. System300calculates the face orientation for each face of the surface mesh relative to the selected reference vector. Referring toFIG.6A, a face orientation605can be calculated for each face610of a surface mesh600by comparing the normal vector615of the given face610in relation to a determined reference vector625of the set of reference vectors620. The face orientation605is then calculated as the angle between the two vectors. In some embodiments, the normal vector615and reference vector625are both unit vectors. In another embodiment, the same two vectors are identified, and the dot product is taken between the two vectors to generate the face orientation angle. The face orientation may be calculated in some embodiments according to the equation:

$$a_{f_i} = 1 - \left| N_{f_i} \cdot D_R \right|, \qquad 0 \le a_{f_i} \le 1$$

where $a_{f_i}$ is the normalized angle for the face orientation of face $f_i$ in the input surface mesh, $N_{f_i}$ is the normal unit vector of face $f_i$, $D_R$ is the chosen reference unit vector, and $N_{f_i} \cdot D_R$ is the dot product between $N_{f_i}$ and $D_R$. This implementation may allow for faster computation since dot product calculations are faster on a processing circuit than inverse trigonometric functions. By using unit vectors, the face orientation $a_{f_i}$ will always be between zero and one. As understood with dot product calculation, vectors that are perpendicular produce a dot product value of zero, and vectors that are parallel produce a dot product with a magnitude of one; thus, according to this implementation, a face with a normal vector parallel to the reference vector generates a face orientation value of zero, and a normal vector perpendicular to the reference vector generates a face orientation value of one. One advantage of comparing normal vectors615to a reference vector625is that only artifacts oriented in the reference direction are considered, since artifacts are often generated as a byproduct of digitization of otherwise continuous data. In this way, artifact-shaped features present in the surface mesh model, but which were not generated as a result of the surface mesh process, are preserved. At step510, system300calculates the change in face orientation for each vertex based on the orientation of the surface faces adjacent to said vertex. In one embodiment, the change in face orientation is calculated as the difference between the largest face orientation among neighboring faces adjacent to the vertex and the smallest face orientation among the same neighboring faces. In some embodiments, this change in face orientation is normalized to be between zero and one, where zero represents no change in face orientation, and one represents a 90-degree change in face orientation. In embodiments where a dot product is used in step505with unit vectors, the change in face orientation is already normalized to a scale between zero and one and may not require further modification in regard to normalization.
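A minimal Python sketch of the per-face orientation of step505and the per-vertex change in face orientation of step510is given below, assuming a triangle mesh stored as a vertex array and a face-index array; the function names and array layout are illustrative assumptions, not part of the disclosure.

    import numpy as np

    def face_unit_normals(vertices, faces):
        # vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
        v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
        n = np.cross(v1 - v0, v2 - v0)
        norms = np.linalg.norm(n, axis=1, keepdims=True)
        return n / np.maximum(norms, 1e-12)  # guard against degenerate faces

    def face_orientations(vertices, faces, ref_dir):
        # a_fi = 1 - |N_fi . D_R|: zero when parallel to the reference direction,
        # one when perpendicular to it.
        d = np.asarray(ref_dir, dtype=float)
        d = d / np.linalg.norm(d)
        return 1.0 - np.abs(face_unit_normals(vertices, faces) @ d)

    def change_in_face_orientation(vertices, faces, ref_dir):
        # Step 510: for each vertex, the largest minus the smallest orientation
        # among the faces adjacent to that vertex.
        a = face_orientations(vertices, faces, ref_dir)
        lo = np.full(len(vertices), np.inf)
        hi = np.full(len(vertices), -np.inf)
        for f, af in zip(faces, a):
            for v in f:
                lo[v] = min(lo[v], af)
                hi[v] = max(hi[v], af)
        change = hi - lo
        change[~np.isfinite(change)] = 0.0  # vertices with no adjacent faces
        return change

Because the orientation uses the absolute value of a unit-vector dot product, the per-vertex change returned here is already on the zero-to-one scale noted above.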
InFIG.6B, it is observed that for surface mesh vertices and faces characterizing a staircase feature630, the two sets of normal vectors635and640that make up a stair artifact are perpendicular to each other (or nearly perpendicular), whereas general face surfaces (such as600inFIG.6A) typically have normal vectors that are near-parallel to those of neighboring faces. In this embodiment of process410, this distinction is the basis for identifying a vertex as an artifact. At step515, system300defines a vertex as an artifact vertex if the change in face orientation of step510is greater than a pre-defined threshold and does so for each vertex in the input surface mesh. The threshold can be chosen to vary how strictly system300defines a vertex as an artifact. Threshold values may also be based on the system requirements for subsequent use, for example selected to facilitate creation of smooth and accurate haptic control boundaries. A more strict threshold (i.e., a threshold that identifies fewer artifact vertices on average than other thresholds) protects non-artifact vertices from being incorrectly identified as artifacts, but is more likely to fail to recognize otherwise-known artifact vertices as being artifacts. Conversely, a less strict threshold (i.e., a threshold that identifies more artifact vertices on average than other thresholds) ensures nearly all artifact vertices are identified but may overcorrect non-artifact surfaces. In some embodiments, for example, a strict threshold may have a value of approximately 0.9, and a less strict threshold may have a value of approximately 0.6. The threshold is thus generally left to the discretion of the application or user preference. At525, system300determines if the functions performed at steps502,505,510, and515should be repeated for a different reference vector in the set of reference vectors. Repeating the functions of502,505,510, and515for different reference vectors can identify additional vertices that represent artifact features. In small-sized mesh applications, such as small-bone scanning, the artifacts generated in all reference directions are of sufficient magnitude to affect the accuracy of guidance, navigation, and/or robotic control, which previous staircase-smoothing algorithms do not account for. By using multiple reference directions to identify artifact vertices, an improved system for smoothing surface meshes can be achieved for improved guidance, navigation, control, and, in some cases, surgical outcomes. At525, system300determines if there is a reference vector in the set of reference vectors that has not been used in identifying artifact vertices. For example, three reference vectors may be used to identify artifact vertices before proceeding to step520. If a determination is made at step525that another reference vector is available to identify additional artifact vertices, the process410proceeds to step530where the next reference vector is identified for selection at step502. The next reference vector is then used in step505to calculate a face orientation for each face, and those face orientations are used to define artifact vertices in step510and step515as described above. Additional artifact vertices may thereby be identified by each cycle through steps500-530.
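As a non-limiting sketch of the thresholding at step515, the looping over reference vectors at steps525-530, and the aggregation into a master set discussed in the following passage, the Python below applies a per-vertex change function (for example, the change_in_face_orientation sketch above, passed in as a callable) for each reference direction and records which directions flagged each vertex; the names and the dictionary-based bookkeeping are assumptions of this sketch.

    import numpy as np

    def identify_artifact_vertices(vertices, faces, reference_vectors, change_fn, threshold=0.6):
        # Returns {vertex index: [indices of reference directions that flagged it]},
        # mirroring the per-direction bookkeeping described for artifact buffer 335.
        master = {}
        for ref_index, ref_dir in enumerate(reference_vectors):
            change = np.asarray(change_fn(vertices, faces, ref_dir))  # steps 502-510
            for v in np.flatnonzero(change > threshold):              # step 515
                master.setdefault(int(v), []).append(ref_index)       # aggregation (step 520)
        return master

For example, identify_artifact_vertices(V, F, default_reference_vectors(), change_in_face_orientation, threshold=0.6) would approximate the three-direction identification illustrated inFIG.7, with the keys of the returned dictionary corresponding to the master set.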
If a determination is made at step525that all reference vectors have been used to identify artifact vertices (i.e., used in execution of steps502-515), the process410proceeds to step520where the artifact vertices associated with all reference vectors are aggregated into a master set of artifact vertices. The system300may thereby produce a master set of identified artifact vertices that includes artifacts identified relative to multiple reference vectors, for example three reference vectors (e.g., x-, y-, and z-directions). The list of identified artifact vertices may be stored in artifact buffer335or some other portion of memory315. Referring toFIG.7, a set of illustrations of an input surface mesh is shown with artifacts highlighted, according to an exemplary embodiment. In particular,FIG.7shows a first view702showing artifacts identified using the x-direction as the reference vector, a second view704showing artifacts identified using the y-direction as the reference vector, a third view706showing artifacts identified using the z-direction as the reference vector, and a fourth view708showing the master set of artifacts identified using all three reference vectors as described for process410. The dotted and solid lines ofFIG.7represent staircase artifacts in the input surface mesh, where the lines are understood to comprise a plurality of at least vertices, edges, or faces that define the staircase artifact. The input surface mesh can comprise staircase artifacts relative to each of the three reference directions (here, the three reference directions are the x, y, and z directions of the input surface mesh coordinate system). Staircase artifacts represented by solid lines indicate vertices that were identified as artifact vertices relative to the subfigure's specified reference direction; staircase artifacts represented by dotted lines indicate vertices that were not identified relative to the subfigure's specified reference direction. It should be noted that view702(x-direction reference vector), view704(y-direction reference vector), and view706(z-direction reference vector) ofFIG.7do not necessarily highlight the same sets of vertices. Thus, identifying artifact vertices relative to only one reference direction may not capture all artifact vertices in an input surface mesh, and multiple reference directions may be used to identify all artifact vertices. By identifying artifact vertices relative to multiple reference directions, a master set of vertices is generated, which is shown in view708ofFIG.7. In some embodiments, substantially all of the artifacts in the input mesh can be identified using the approach of process410. Referring now toFIG.8, a flowchart for the calculation of weighted values process415is shown. The weights assigned to each vertex determine the extent to which the system300corrects a vertex's position during smoothing process420. Various methods may be implemented by system300to generate weighted values for vertices based on identified artifact vertices. A preferred embodiment of generating weighted values is shown inFIG.8and described below. In one embodiment, system300begins the functions of process415at800by calculating the distance of each vertex to the nearest artifact vertex (sometimes referred to herein as the distance $D_i$ for a vertex i).
To determine the distance of a vertex to the nearest artifact vertex, system300may calculate the distance between that vertex and every identified artifact vertex in buffer335, and determine which distance is the minimum distance. The determination of distance across the surface mesh can use any known method, such as, but not limited to, Euclidean distances or geodesic distances. In some embodiments, the distances are scalar, and thus exclude negative numbers and are direction independent. In some embodiments, the distance may be computed using a distance function, such as a sigmoid or exponential function. In some embodiments, the system300may be configured to store an association between the distance $D_i$ and the identity of the nearest artifact vertex for each vertex i. At805, system300normalizes the distances determined at800into weighted values (used interchangeably with the term weights) such that the algorithm may be used for any scale or size of surface mesh. In some embodiments, the distances can be normalized to a simple linear scale. For example, after the distance to the nearest artifact is calculated for all vertices in step800, the maximum of these distances is identified and used as the upper bound of the normalization process and zero as the lower bound (since any artifact vertex will have a distance of zero to the nearest artifact vertex, i.e., itself). The normalization process, then, maps all identified distances from800to a weighted value within the range of zero and one, where the weighted value for a given vertex can be calculated as:

$$w_i = \frac{D_{max} - D_i}{D_{max} - D_{min}}$$

where $w_i$ is the weighted value for vertex i, $D_i$ is the distance between vertex i and the nearest artifact vertex, and $D_{max}$ and $D_{min}$ are the maximum and minimum distances $D_i$ of all vertices in the surface mesh, respectively. In some embodiments, $D_{min}$ is always set to zero. In this implementation, artifact vertices are assigned a weighted value of one (since artifact vertices have a distance to the nearest identified artifact vertex $D_i$ equal to zero), and vertices furthest from an artifact vertex are assigned a weighted value of zero (i.e., the vertex is identified as not needing to be smoothed as it is not close to an artifact). In some embodiments, the distances are normalized on a quadratic scale to more distinctly isolate vertices most proximate to artifact vertices from those of general, non-artifact surfaces. In such an embodiment, the normalization process of step805identifies the maximum value from all calculated distances determined in step800and uses said maximum distance as the upper bound for the normalization process. The assigned weighted value for a vertex is calculated by:

$$w_i = \left( \frac{D_{max} - D_i}{D_{max} - D_{min}} \right)^2$$

where $w_i$ is the weighted value for vertex i, $D_i$ is the distance between vertex i and the nearest artifact vertex, and $D_{max}$ and $D_{min}$ are the maximum and minimum distances $D_i$ of all vertices in the surface mesh, respectively. In some embodiments, $D_{min}$ is always set to zero. In these embodiments, artifact vertices are assigned a weighted value of one (since artifact vertices have a distance to the nearest identified artifact vertex $D_i$ equal to zero), and vertices furthest from an artifact vertex are assigned a weighted value of zero (i.e., the vertex is identified as not needing to be smoothed as it is not close to an artifact).
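The following Python sketch illustrates steps800-805under the simplifying assumptions that Euclidean distances are used (the geodesic and topological alternatives mentioned above are not implemented here) and that $D_{min}$ is taken as zero; the function name assign_weights and the use of scipy's k-d tree are choices of this sketch, not of the disclosure.

    import numpy as np
    from scipy.spatial import cKDTree

    def assign_weights(vertices, artifact_vertex_ids, quadratic=False):
        # Steps 800-805: Euclidean distance to the nearest artifact vertex, normalized
        # so that artifact vertices receive weight one and the farthest vertex receives
        # weight zero; quadratic=True applies the squared (quadratic-scale) form.
        vertices = np.asarray(vertices, dtype=float)
        # artifact_vertex_ids may be the keys of the master set from the previous sketch.
        artifact_pts = vertices[np.asarray(sorted(artifact_vertex_ids), dtype=int)]
        d, _ = cKDTree(artifact_pts).query(vertices)       # D_i for every vertex i
        d_max, d_min = d.max(), 0.0                         # D_min taken as zero
        w = (d_max - d) / (d_max - d_min) if d_max > d_min else np.ones_like(d)
        return w ** 2 if quadratic else w

Setting quadratic=True applies the squared form of the normalization, which concentrates non-zero weights near the identified artifact vertices.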
By squaring the quotient, quotient values between zero and one are mapped according to a quadratic curve, which shifts those values closer to zero than in the linear-scale embodiment described above, thus focusing the smoothing algorithm on only the vertices closest to artifact vertices. By using a quadratic scale, smaller surface mesh features can be preserved in the surface mesh model rather than being smoothed by overly general smoothing algorithms. Additionally, the total surface mesh volume can be more accurate to the ideal surface mesh volume since overly general smoothing algorithms diminish model volumes. At810, system300may compare the normalized weights to a chosen threshold to make additional adjustments. In one embodiment, the vertices with weights that do not meet the threshold are deemed too insignificant in relation to an artifact feature and thus have their weighted values reduced to diminish the extent of later smoothness correction. In some embodiments, the vertices with weights that do not meet the threshold have their weights reduced to zero. In some embodiments, the threshold value could be based on a vertex's distance to the nearest artifact vertex. The threshold can be varied to specify the extent to which the surface mesh should be smoothed. A larger threshold preserves more vertices in their original location, while a smaller threshold subjects more vertices to being smoothed in subsequent steps. In some embodiments, a large threshold distance may have a value of approximately 0.5, and a small threshold distance may have a value of approximately 0.2. In various embodiments, step805may be performed before or after step810. An advantage to performing the functions of step805prior to those of step810is that a universal threshold can be used rather than an application-specific threshold. Since the weighted values are normalized before being compared to the threshold value (step805), the same universal threshold can be used so as to smooth all artifacts. Additionally, when using multiple reference directions in artifact identification process410, artifacts relative to different reference directions may have different magnitudes and thus, absent normalization, should not be subjected to the same threshold. At step815, system300may make additional adjustments to the weight values for general smoothness correction. In some embodiments, a uniform weight value is added to each weight such that the smoothing algorithm creates a more generally-smooth final surface. This uniform adjustment can be made at the discretion of the user in a given application. In other embodiments, the adjusted weight values are calculated as the product of the weights from step810and the difference between one and the uniform offset, added to the uniform offset, e.g., weight = (1.0 − min_weight) * weight + min_weight. This ensures that the final weighted values remain within the normalized range of zero to one after the additional adjustment. In some embodiments of process415, the functions performed at step815may be omitted. In some embodiments, the final weighted values generated during step815are then stored in the weights buffer340. The final weights will later be used by system300in step420of process400for smoothing by the modified smoothing algorithm. A general smoothing algorithm can be modified such that a vertex with a weighted value of one is fully subject to being smoothed and thus adjusted by the smoothing algorithm, while a weighted value of zero will receive no adjustment from the smoothing algorithm.
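A minimal, non-limiting Python sketch of the weight adjustments of steps810-815and of a weight-modulated Laplacian update of the same form as the equation given below is shown here; the cutoff and min_weight defaults, the function names, and the per-iteration neighbor averaging are assumptions of this sketch rather than requirements of the disclosure.

    import numpy as np

    def adjust_weights(weights, cutoff=0.2, min_weight=0.0):
        # Steps 810-815: zero out weights below the cutoff, then blend in a uniform
        # offset so the result stays in [0, 1]: w = (1 - min_weight) * w + min_weight.
        w = np.where(np.asarray(weights, dtype=float) >= cutoff, weights, 0.0)
        return (1.0 - min_weight) * w + min_weight

    def weighted_laplacian_smooth(vertices, faces, weights, k=0.5, iterations=10):
        # Step 420: Laplacian smoothing scaled per vertex by its weighted value,
        # v_i_new = v_i + (k * w_i) * mean_j(u_j - v_i).
        v = np.asarray(vertices, dtype=float).copy()
        w = np.asarray(weights, dtype=float)
        # Build vertex adjacency from the triangle faces.
        neighbors = [set() for _ in range(len(v))]
        for f in np.asarray(faces, dtype=int):
            for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
                neighbors[a].add(b)
                neighbors[b].add(a)
        for _ in range(iterations):
            new_v = v.copy()
            for i, nbrs in enumerate(neighbors):
                if nbrs and w[i] > 0.0:
                    mean_offset = np.mean([v[j] - v[i] for j in nbrs], axis=0)
                    new_v[i] = v[i] + k * w[i] * mean_offset
            v = new_v
        return v

Other base algorithms, such as Vollmer or Taubin smoothing, could be adapted in the same way by scaling their per-vertex correction with the weighted value.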
All values between zero and one can be proportionally adjusted based at least on their weighted value. For example, given a Laplacian smoothing filter, the Laplacian smoothing algorithm may be modified and applied to the surface mesh as follows:

$$v_{i,new} = v_i + (k \cdot w_i) \sum_{j=1}^{M} \frac{u_j - v_i}{M}$$

where $v_i$ is the position of vertex i in the surface mesh, $v_{i,new}$ is the adjusted position of vertex i, $k$ is a uniform smoothing constant, $w_i$ is the weighted value assigned to vertex i, $u_j$ is the position of a neighboring vertex j to vertex i (a vertex j is neighboring to vertex i if there exists an edge or face that connects that vertex to vertex i), and $M$ is the number of neighboring vertices to vertex i. The construction and arrangement of the systems and methods as shown in the various exemplary embodiments are illustrative only. Although only a few embodiments have been described in detail in this disclosure, many modifications are possible (e.g., variations in sizes, dimensions, structures, shapes and proportions of the various elements, values of parameters, mounting arrangements, use of materials, colors, orientations, etc.). For example, the position of elements can be reversed or otherwise varied and the nature or number of discrete elements or positions can be altered or varied. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. The order or sequence of any process or method steps can be varied or re-sequenced according to alternative embodiments. Other substitutions, modifications, changes, and omissions can be made in the design, operating conditions and arrangement of the exemplary embodiments without departing from the scope of the present disclosure. The present disclosure contemplates methods, systems and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure can be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions. Although the figures show a specific order of method steps, the order of the steps may differ from what is depicted. Also, two or more steps can be performed concurrently or with partial concurrence. Such variation will depend on the software and hardware systems chosen and on designer choice.
All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques, using rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps.
48,212
11861805
DETAILED DESCRIPTION The following described implementations may be found in the disclosed electronic device and method of eyeball positioning for three-dimensional (3D) head modeling. Exemplary aspects of the disclosure may include an electronic device and a method to acquire a set of images, which include an eye of an object. For example, the electronic device may be associated with a set of image sensors, which may be controlled to capture a set of images of the object from a corresponding set of viewpoints. The object may be, for example, an animate object (such as a human or an animal) or an inanimate object (such as a 3D figure of a person or a toy with human-like features). A 3D mesh of a head portion of the object may be acquired. For example, the acquisition of the 3D mesh may be based on an extraction of the 3D mesh from a server or a database communicatively coupled to the electronic device. Prior to the acquisition of the 3D mesh, the 3D mesh may be estimated based on a plurality of images of the object. The plurality of images of the object may include at least the set of images which include the eye of the object. Additionally, a 3D template mesh of an eyeball may be acquired. For example, the acquisition of the 3D template mesh may be based on an extraction of the 3D template mesh from the server or a database communicatively coupled to the electronic device. The acquired set of images may be processed to extract 3D feature points associated with one or more regions of the eye. Examples of the one or more regions of the eye may include, but are not limited to, eyelids, a limbus, a sclera, a pupil, and an iris. Thereafter, a sphere may be fit to the extracted 3D feature points. Further, an initial pose transformation between the 3D template mesh and the fitted sphere may be estimated. Moreover, one or more operations may be executed by using the 3D template mesh, to interpolate a first set of points that correspond to the one or more regions of the eye. Thereafter, a second set of points, which corresponds to the one or more regions of the eye, may be determined based on sampling parameters associated with the interpolated first set of points. A final pose transformation may be determined based on a minimization of a difference between the first set of points and the second set of points. Further, the 3D template mesh may be fit into an eyeball socket of the 3D mesh, based on the determined final pose transformation. Typically, a 3D mesh of a head portion of an object may not have separate structures for an eyeball in the head portion of the object. Further, the quality of the 3D mesh for a region of the eyeball may be low due to a high specular reflection of the surface of the eyeballs and an occlusion caused by eyelashes. To impart realism to the 3D model of the object, the 3D mesh corresponding to the 3D model may have to be refined. In conventional methods, a 3D head mesh (which represents a 3D shape/geometry of the head portion) may be manually refined to accurately represent and position the eyeball in the 3D mesh. A computer graphics artist, designer, modeler, or an expert (hereinafter referred to as a human modeler) may refine the 3D mesh by a manual selection of vertices of the 3D mesh and an update of locations of the selected vertices in the 3D mesh to position the eyeball in the 3D mesh. However, manual refinement of the 3D mesh may require a significant amount of time and effort and may be prone to errors.
In contrast, the present disclosure may provide a new method for automated eyeball positioning in the 3D mesh of the head portion of the object. In the present disclosure, the 3D template mesh, which may be an eyeball mesh, may be used for determination of a final pose transformation of the eyeball. The 3D template mesh of the eyeball may be scaled to fit into the eyeball socket of the 3D mesh, and thus may be realistically sized for the 3D mesh. This may result in a higher accuracy eyeball positioning with improved quality of the eyeball region from the 3D template mesh as compared with that from the conventional methods. As the eyeball may be positioned automatically, manual effort and time may be saved, as compared to conventional methods. FIG.1is a block diagram that illustrates an exemplary network environment for eyeball positioning for three-dimensional (3D) head modeling, in accordance with an embodiment of the disclosure. With reference toFIG.1, there is shown a network environment100. The network environment100may include an electronic device102, a server104, a set of image sensors106, and a communication network108. For example, the set of image sensors106may include a first image sensor106A and a second image sensor106B. InFIG.1, there is further shown an object110that may be scanned by the set of image sensors106. The electronic device102may be communicatively coupled to the server104and the set of image sensors106, via the communication network108. InFIG.1, the server104and the set of image sensors106are shown as two entities which may be separate from the electronic device102. In some embodiments, some or all of the functionalities of the server104and/or the set of image sensors106may be incorporated in the electronic device102, without a deviation from the scope of the present disclosure. The electronic device102may include suitable logic, circuitry, interfaces, and/or code that may be configured to position an eyeball of the object110in a 3D mesh of a head portion of the object110. The 3D mesh may represent a 3D shape of the head portion of the object110. The object110may be an animate object (such as a human subject or an animal) or an inanimate object (such as a statue or a portrait of a human subject). Examples of the electronic device102may include, but are not limited to, a computing device, a video-conferencing system, a virtual reality-based device, an augmented reality-based device, a gaming device, a mainframe machine, a server, a computer work-station, and/or a consumer electronic (CE) device. The server104may include suitable circuitry, interfaces, and/or code that may be configured to store a 3D template mesh of an object, such as the object110. The 3D template mesh may be an eyeball mesh that resembles the shape and other visual attributes of a real-life eyeball. The eyeball mesh may include an anterior (front) segment and a posterior (back) segment. The anterior segment may be made up of cornea, iris, and lens. The server104may be configured to receive a request for the stored 3D template mesh from the electronic device102. In response to such a request from the electronic device102, the server104may transmit the stored 3D template mesh to the electronic device102. Examples of the server104may include, but are not limited to, an application server, a cloud server, a web server, a database server, a file server, a gaming server, a mainframe server, or a combination thereof. 
The set of image sensors106may include suitable logic, circuitry, interfaces, and/or code that may be configured to capture a set of images of the object110from a set of viewpoints. For example, the set of image sensors106may include a first image sensor that may capture one or more first images of the object110(e.g., a human subject) from one or more first viewpoints. The set of image sensors106may further include a second image sensor that may capture one or more second images of the object110from one or more second viewpoints. The set of images captured by the set of image sensors106may include the one or more first images and the one or more second images. For example, the captured set of images may include a first image112A, a second image112B, and a third image112C. The set of image sensors106may be configured to transmit the captured set of images to the electronic device102, via the communication network108. In an embodiment, each image sensor of the set of image sensors106may be pre-calibrated and operations of the set of image sensors106may be synchronized such that the set of images is captured concurrently. Examples of an image sensor may include, but are not limited to, a charge-coupled device (CCD) sensor, a complementary metal-oxide semiconductor (CMOS) sensor, a wide-angle camera, an action camera, a camcorder, a digital still camera, a camera phone, a time-of-flight camera (ToF camera), and a night-vision camera. In one embodiment, the set of image sensors106may be integrated or embedded into the electronic device102. The communication network108may include a communication medium through which the electronic device102may communicate with the server104and the set of image sensors106. Examples of the communication network108may include, but are not limited to, the Internet, a cloud network, a Wireless Fidelity (Wi-Fi) network, a Personal Area Network (PAN), a Local Area Network (LAN), or a Metropolitan Area Network (MAN), a mobile wireless network, such as a Long-Term Evolution (LTE) network (for example, 4thGeneration or 5thGeneration (5G) mobile network (i.e. 5G New Radio)). Various devices of the network environment100may be configured to connect to the communication network108, in accordance with various wired or wireless communication protocols. Examples of such wired and wireless communication protocols may include, but are not limited to, at least one of a Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Zig Bee, EDGE, IEEE 802.11, light fidelity (Li-Fi), 802.16, IEEE 802.11s, IEEE 802.11g, multi-hop communication, wireless access point (AP), device to device communication, cellular communication protocols, Bluetooth (BT) communication protocols, or a combination thereof. In operation, the set of image sensors106may be configured to capture a set of images from a set of viewpoints. Each image may include an eye of the object110from a respective viewpoint, which may be associated with a position of an image-sensor (of the set of image sensors106). As shown, for example, the captured set of images may include the first image112A, the second image112B, and the third image112C. The electronic device102may acquire the set of images from the set of image sensors106, through an input/output (1/O) interface or through a network interface associated with the communication network108. 
The electronic device102may be configured to further acquire a 3D mesh of a head portion of the object110from the server104. In an embodiment, the server104may be configured to estimate the 3D mesh of the head portion of the object110based on a plurality of images of the object110. In an embodiment, the plurality of images of the object110may include at least the set of images comprising the eye of the object110. The server104may be configured to transmit the estimated 3D mesh of the head portion to the electronic device102. Thus, the electronic device102may acquire the 3D mesh from the server104. The electronic device102may be further configured to process the acquired set of images (e.g., the first image112A, the second image112B, and the third image112C) to extract 3D feature points associated with one or more regions of the eye. Examples of the one or more regions of the eye may include, but are not limited to, eyelids, a limbus, a sclera, a pupil, and an iris. The electronic device102may be further configured to fit a sphere to the extracted 3D feature points and may, thereafter, estimate an initial pose transformation between the 3D template mesh and the fitted sphere. The initial pose transformation may be estimated to initialize a pose of the eyeball in the 3D template mesh for further refinement of the pose. The electronic device102may be further configured to execute one or more operations by using the 3D template mesh to interpolate a first set of points. The first set of points may correspond to the one or more regions of the eye. Thereafter, the electronic device102may determine a second set of points based on sampling parameters associated with the interpolated first set of points. The second set of points may also correspond to the one or more regions of the eye. The electronic device102may be further configured to determine a final pose transformation based on a minimization of a difference between the first set of points and the second set of points. The final pose transformation may be determined to accurately position the eyeball in the 3D template mesh based on refinements of the initial pose transformation. The electronic device102may fit the 3D template mesh into an eyeball socket of the 3D mesh, based on the determined final pose transformation. By fitting the 3D template mesh, a final 3D mesh of the head portion of the object110may be generated. Since the process to obtain the final pose transformation is mostly automated, it may be possible to position and fit the 3D template mesh of the eye into the eyeball socket of the 3D mesh of the head portion, without significant human inputs. Various operations of the electronic device102for eyeball positioning for 3D head modeling are described further, for example, inFIGS.3A,3B,6,7,8,9A,9B,9C,9D,10,11A,11B,12, and13. FIG.2is a block diagram that illustrates an exemplary electronic device, in accordance with an embodiment of the disclosure.FIG.2is explained in conjunction with elements fromFIG.1. With reference toFIG.2, there is shown the electronic device102. The electronic device102may include circuitry202, a memory204, an input/output (I/O) device206, and a network interface208. The I/O device206may include a display screen206A. The circuitry202may be communicatively coupled to the memory204, the I/O device206, and the network interface208. The circuitry202may be configured to communicate with the server104and the set of image sensors106, by use of the network interface208, via the communication network108.
The circuitry202may include suitable logic, circuitry, and interfaces that may be configured to execute program instructions associated with different operations to be executed by the electronic device102. The circuitry202may be implemented based on a number of processor technologies known in the art. Examples of the processor technologies may include, but are not limited to, a Central Processing Unit (CPU), an x86-based processor, a Reduced Instruction Set Computing (RISC) processor, an Application-Specific Integrated Circuit (ASIC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphical Processing Unit (GPU), a co-processor, or a combination thereof. The memory204may include suitable logic, circuitry, and/or interfaces that may be configured to store a set of instructions executable by the circuitry202. The memory204may be configured to store an operating system and associated applications. In accordance with an embodiment, the memory204may be also configured to store the acquired set of images of the object110. The memory204may also store the acquired three-dimensional (3D) mesh, the acquired 3D template mesh, information associated with the initial pose transformation, and information associated with the final pose transformation. Examples implementations of the memory204may include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Hard Disk Drive (HDD), a Solid-State Drive (SSD), a CPU cache, and/or a Secure Digital (SD) card. The I/O device206may include suitable logic, circuitry, interfaces, and/or code that may be configured to receive an input from a user. For example, the I/O device206may be configured to receive instructions to capture the set of images as a user input. Also, the I/O device206may receive one or more user inputs required for the automated eyeball positioning in the 3D template mesh. The I/O device206may be also configured to provide an output to the user. For example, as part of the I/O device206, the display screen206A may render a final 3D mesh of the head portion of the object110, based on the automated eyeball positioning in the 3D template mesh of the eye and the fitting of the 3D template mesh into the eyeball socket of the 3D mesh. The I/O device206may include various input and output devices, which may be configured to communicate with the circuitry202. Examples of the input devices may include, but are not limited to, a touch screen, a keyboard, a mouse, a joystick, and/or a microphone. Examples of the output devices may include, but are not limited to, the display screen206A and/or a speaker. The display screen206A may include suitable logic, circuitry, interfaces, and/or code that may be configured to render an application interface to display the final 3D mesh of the head portion of the object110. In accordance with an embodiment, the display screen206A may be a touch screen, where input from the user may be received via the application interface. The display screen206A may capture the input based on an input received from the user. The user may be able to provide inputs by activating and/or interacting with one or more of a plurality of buttons or UI elements displayed on the touch screen. In accordance with an embodiment, the display screen206A may receive the input through a virtual keypad, a stylus, a gesture-based input, and/or a touch-based input. 
The display screen206A may be realized through several known technologies such as, but not limited to, at least one of a Liquid Crystal Display (LCD) display, a Light Emitting Diode (LED) display, a plasma display, and/or an Organic LED (OLED) display technology, and/or other display. In accordance with an embodiment, the display screen206A may refer to a display screen of a smart-glass device, a see-through display, a projection-based display, an electro-chromic display, and/or a transparent display. The network interface208may include suitable logic, circuitry, code, and/or interfaces that may be configured to facilitate communication among the circuitry202, the server104, and the set of image sensors106, via the communication network108. The network interface208may be implemented by use of various known technologies to support wired or wireless communication of the electronic device102with the communication network108. The network interface208may include, but is not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, or a local buffer circuitry. The network interface208may be configured to communicate via wireless communication with networks, such as the Internet, an Intranet or a wireless network, such as a cellular telephone network, a wireless local area network (LAN), and a metropolitan area network (MAN). The wireless communication may be configured to use one or more of a plurality of communication standards, protocols and technologies, such as Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), wideband code division multiple access (W-CDMA), Long Term Evolution (LTE), code division multiple access (CDMA), a 5th generation network such as a 5G new radio (NR) network, a 5G smart antenna, time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (such as IEEE 802.11a, IEEE 802.11b, IEEE 802.11g or IEEE 802.11n), voice over Internet Protocol (VoIP), light fidelity (Li-Fi), Worldwide Interoperability for Microwave Access (Wi-MAX), a protocol for email, instant messaging, and a Short Message Service (SMS). The network interface208may be capable of communicating with a 5G communication network and will include appropriate 5G support functionality such as, but not limited to, a 5G NR, a V2X Infrastructure, and a 5G Smart Antenna. Various operations of the circuitry202for eyeball positioning for 3D head modeling are described further, for example, inFIGS.3A,3B,4,5,6,7,8,9A,9B,9C,9D,10,11A,11B,12, and13. FIGS.3A and3B, collectively, depict a diagram that illustrates an exemplary processing pipeline for positioning an eyeball 3D mesh for three-dimensional (3D) head modeling, in accordance with an embodiment of the disclosure.FIGS.3A and3Bare explained in conjunction with elements fromFIG.1andFIG.2. With reference toFIG.3AandFIG.3B, there is shown a processing pipeline of operations from302to322for eyeball positioning for 3D head modeling. The circuitry202may execute the operations from302to322, as described herein. At302, an eye image acquisition operation may be executed. As part of the eye image acquisition operation, the set of image sensors106may capture a set of images of the object110from a set of viewpoints. The set of images may include at least an eye of the object110.
Each of the set of image sensors106may be pre-calibrated and synchronized with one another before the set of images is captured. For example, the set of image sensors106may include a first image sensor that may capture one or more first images of the object110(e.g., a human subject) from one or more first viewpoints. The set of image sensors106may further include a second image sensor that may capture one or more second images of the object110from one or more second viewpoints. The set of images captured by the set of image sensors106may include the one or more first images and the one or more second images. As shown, for example, the captured set of images may include a first image324A, a second image324B, and a third image324C. The first image324A may be captured from a first viewpoint that may correspond to a non-frontal pose of the head of the object110at +30 degrees yaw axis. The second image324B may be captured from a second viewpoint that may correspond to a frontal pose of the head of the object110at a 0-degree yaw axis. Similarly, the third image324C may be captured from a third viewpoint that may correspond to another non-frontal pose of the head of the object110at a −30 degrees yaw axis. The set of image sensors106may be configured to transmit the set of images (e.g., the first image324A, the second image324B, and the third image324C) of the object110to the electronic device102, via the communication network108. Alternatively, the circuitry202may acquire the set of images (e.g., the first image324A, the second image324B, and the third image324C) from the set of image sensors106, through an I/O interface. For example, in a scenario where the set of image sensors106is integrated or embedded into the electronic device102, the circuitry202may acquire the set of images (e.g., the first image324A, the second image324B, and the third image324C) from the set of image sensors106, via the I/O interface. At304, a three-dimensional (3D) mesh may be acquired. In an embodiment, the circuitry202may be configured to acquire a 3D mesh of a head portion of the object110. The 3D mesh may be acquired from the server104. Prior to the acquisition of the 3D mesh, the server104may be configured to estimate the 3D mesh of the head portion of the object110based on a plurality of images of the object110captured by the set of image sensors106. The plurality of images of the object110may include at least a set of images, which includes the eye of object110. The server104may be configured to transmit the estimated 3D mesh to the electronic device102, via the communication network108. In an embodiment, prior to the acquisition of the 3D mesh, the circuitry202may be configured to estimate the 3D mesh and store the estimated 3D mesh in the memory204. The estimated and pre-stored 3D mesh may be acquired from the memory204at304. The method of estimation of the 3D mesh may include, for example, a photogrammetry-based method (such as structure from motion (SfM)), a method which requires stereoscopic images, or a method which requires monocular cues (such as shape from shading (SfS), photometric stereo, or shape from texture (SfT)). Such techniques may be known to one ordinarily skilled in the art; therefore, details of such techniques have been omitted from the disclosure for the sake of brevity. In an embodiment, a photogrammetric reconstruction method may be used to estimate the 3D mesh of the head portion of the object110based on the plurality of images of the object110. 
The photogrammetric reconstruction method may include operations, such as, but not limited to, a feature detection and matching operation, a sparse reconstruction operation, a multi-view stereo operation, and a fusion and meshing operation. By way of an example, and not limitation, the photogrammetric reconstruction may be a Structure-from-motion based reconstruction, as described in Schönberger, Johannes L., and Jan-Michael Frahm, "Structure-from-motion revisited", Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. By way of another example, and not limitation, the photogrammetric reconstruction may be based on a pixelwise view selection, as described in Schönberger, Johannes L., et al., "Pixelwise view selection for unstructured multi-view stereo", European Conference on Computer Vision (ECCV), 2016. As shown, for example, a 3D mesh326may be acquired. In an embodiment, prior to a use of the acquired 3D mesh326for eyeball positioning for 3D head modeling, as described further, for example, inFIGS.3A and3B, the acquired 3D mesh326may be processed through a mesh clean-up pipeline. The mesh clean-up pipeline may include a group of operations that may be executed by the electronic device102. In another embodiment, the group of operations may be executed by the server104, and a cleaned-up 3D mesh, obtained based on the group of operations, may be acquired from the server104by the electronic device102. In an embodiment, the group of operations may include a first set of operations that may be executed on the acquired 3D mesh to generate a second 3D mesh. The first set of operations may include a removal of one or more regions which may be unneeded for object-shape estimation and/or a removal of one or more mesh artifacts associated with a 3D shape or a topology of the acquired 3D mesh. The group of operations may further include a processing of a 3D template head mesh to determine a set of filling patches which may correspond to a set of holes in the second 3D mesh. The group of operations may further include execution of a hole filling operation, based on the second 3D mesh and the set of filling patches, to generate a cleaned-up 3D mesh. The cleaned-up 3D mesh associated with the head-portion of the object110may be used further for eyeball positioning. Hereinafter, the cleaned-up 3D mesh may be referred to as the 3D mesh326. In another embodiment, the acquired 3D mesh326may not be processed through the mesh clean-up pipeline. In such a case, the acquired 3D mesh326may be considered as a raw 3D scan of the head portion of the object110. An example of the 3D mesh is provided, for example, inFIG.4. At306, a 3D template mesh of an eyeball may be acquired. In an embodiment, the circuitry202may be configured to acquire the 3D template mesh (e.g., a 3D template mesh328) of the eyeball of an object, such as the object110. The 3D template mesh328may be stored on the server104. The server104may be configured to transmit the 3D template mesh328to the electronic device102, via the communication network108. In an embodiment, the 3D template mesh328may be pre-stored in the memory204of the electronic device102. In such a case, the circuitry202may acquire the 3D template mesh328from the memory204. An example of the 3D template mesh and an eyeball socket of the 3D mesh326is provided, for example, inFIG.5. At308, 3D feature points may be extracted.
In an embodiment, the circuitry202may be configured to process the acquired set of images to extract the 3D feature points. The 3D feature points may be associated with one or more regions of the eye. Examples of the one or more regions of the eye may include, but are not limited to, eyelids, a limbus, a sclera, a pupil, and an iris. In an embodiment, the circuitry202may be configured to identify a set of two-dimensional (2D) feature points of the eye in each of the acquired set of images (e.g., the first image324A, the second image324B, and the third image324C). Further, the circuitry202may determine a 3D position of each of the set of 2D feature points, based on a set of camera parameters associated with one or more image-capture devices (e.g., the set of image sensors106) that captured the set of images. The 3D feature points may be extracted based on the determined 3D position. In an embodiment, the identification of the set of 2D feature points may be based on one or more of, but not limited to, a user input, an eyelid detection technique, or an eye part segmentation technique. The set of 2D feature points may include contour points along eyelids of the eye and a point at a center of a pupil of the eye. For example, a first set of 3D feature points330A associated with the contours of the eyelids and a second 3D feature point330B associated with the center of the pupil may be extracted based on the processing of the acquired set of images. The first set of 3D feature points330A and the second 3D feature point330B are shown in an eye portion330of the 3D mesh (e.g., the 3D mesh326). In an embodiment, the circuitry202may be configured to process a raw 3D scan (not shown inFIG.3A) of the head portion of the object110to extract 3D points corresponding to a sclera of the one or more regions of the eye. For example, inFIG.3A, there are shown 3D points332A corresponding to the sclera of an eye portion332of the raw 3D scan. The extraction of the 3D feature points and the 3D points is described further, for example, inFIG.6. At310, a sphere fitting operation may be executed. In an embodiment, the circuitry202may be configured to execute the sphere fitting operation. The sphere fitting operation may include fitting of a sphere (e.g., a sphere334) to the extracted 3D feature points (for example, a set of 3D feature points334A).FIG.3Aalso depicts a point334B, which may be the center of the fitted sphere334. In an embodiment, the fitting of the sphere334to the extracted 3D feature points (for example, the set of 3D feature points334A) may be based on an expression (1), which is given as follows:

$$\min_{c,r} \sum_{i=1}^{n} \left[ (x_i - c.x)^2 + (y_i - c.y)^2 + (z_i - c.z)^2 - r^2 \right]^2 \quad \text{such that} \quad r_{min} \le r \le r_{max} \tag{1}$$

where $[x_i, y_i, z_i]^T$ may represent coordinates of an extracted 3D feature point; $n$ may represent a number of the extracted 3D feature points; $c.x$, $c.y$, and $c.z$ may represent coordinates of the center of the fitted sphere334; $r$ may represent a fitted radius of the fitted sphere334; and $r_{min}$ and $r_{max}$ may represent a minimum value and a maximum value for the fitted radius (i.e., $r$) based on a real-life size of the human eye. In an embodiment, the circuitry202may be configured to process the raw 3D scan of the head portion of the object110to extract the 3D points (e.g., the 3D points332A) corresponding to the sclera of the one or more regions of the eye, as described at308. The circuitry202may also fit the sphere334to the extracted 3D points (e.g., the 3D points332A), based on the expression (1).
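Under the assumption that a generic bounded least-squares solver is acceptable, the following Python sketch fits a sphere to extracted 3D points in the spirit of expression (1); the fit_sphere name, the scipy-based solver, and the radius bounds shown (in meters, roughly bracketing a human eyeball radius) are illustrative assumptions, not values taken from the disclosure.

    import numpy as np
    from scipy.optimize import least_squares

    def fit_sphere(points, r_min=0.010, r_max=0.014):
        # Minimize sum of (||p - c||^2 - r^2)^2 over center c and radius r,
        # with r constrained to a plausible eyeball size (placeholder bounds).
        p = np.asarray(points, dtype=float)
        c0 = p.mean(axis=0)                                    # initial center guess
        r0 = np.clip(np.linalg.norm(p - c0, axis=1).mean(), r_min, r_max)

        def residuals(x):
            c, r = x[:3], x[3]
            return np.sum((p - c) ** 2, axis=1) - r ** 2

        lb = [-np.inf, -np.inf, -np.inf, r_min]
        ub = [np.inf, np.inf, np.inf, r_max]
        sol = least_squares(residuals, x0=np.append(c0, r0), bounds=(lb, ub))
        return sol.x[:3], sol.x[3]  # center, radius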
In an embodiment, the circuitry202may be further configured to estimate a scale factor (which may be denoted by "s") that may correspond to a ratio of a radius (i.e., "r") of the fitted sphere334to a radius (which may be denoted by "$r_{eye}$") of the 3D template mesh328. The scale factor may be estimated based on an equation (2), which is given as follows:

$$s = \frac{r}{r_{eye}} \tag{2}$$

The estimation of the scale factor may be done to correctly scale the template 3D mesh328so that the template 3D mesh328matches a scale/size of the eyeball socket of the 3D mesh326. The 3D template mesh328may be fitted into an eyeball socket of the 3D mesh326based on the estimated scale factor (i.e., "s"). The scale factor may be referred to as a scale parameter of a pose transformation. At312, an initial pose transformation may be estimated. In an embodiment, the circuitry202may be configured to estimate the initial pose transformation. The initial pose transformation may be between the 3D template mesh328and the fitted sphere334. In addition to the scale factor, a rotation parameter and a translation parameter of the initial pose transformation may be estimated. The estimation of the scale factor is described further, for example, at310. The circuitry202may be configured to estimate the rotation parameter of the initial pose transformation between a first vector along an axis of rotation of the 3D template mesh328and a second vector that may span from a center (e.g., the point334B) of the fitted sphere334to a 3D point that may correspond to a center of a pupil of the eye. Similarly, the circuitry202may be configured to estimate the translation parameter of the initial pose transformation based on an offset between the center (e.g., the point334B) of the fitted sphere334and the center of the 3D template mesh328. The estimation of the initial pose transformation based on the estimation of the rotation parameter and the translation parameter of the initial pose transformation is described further, for example, inFIG.7. At314, one or more operations may be executed to interpolate a first set of points. In an embodiment, the circuitry202may be configured to execute one or more operations by using the 3D template mesh328to interpolate a first set of points. The first set of points may correspond to the one or more regions of the eye. Examples of the one or more regions of the eye may include, but are not limited to, eyelids, a limbus, a sclera, a pupil, and an iris. In an embodiment, to interpolate the first set of points, the circuitry202may be configured to label contours of the one or more regions, including eyelids, a limbus, and a pupil in the acquired set of images (for example, the first image324A, the second image324B, and the third image324C). The circuitry202may be further configured to project one or more contours of the labeled contours to a 3D coordinate space, based on defined camera parameters. Further, the circuitry202may determine a set of contour points as intersecting points of the projection on the 3D template mesh328. The determination of the set of contour points is described further, for example, inFIG.8. In an embodiment, the executed one or more operations may include, but are not limited to, a first operation to unwrap the 3D template mesh328to a UV coordinate space and a second operation to apply one or more interpolation methods. The unwrapped 3D template mesh may include the determined set of contour points in the UV coordinate space.
Further, the one or more interpolation methods may be applied to fit spline curves into eyelid points of the set of contour points and fit a circle into limbus points of the set of contour points. The fitting of the spline curves and the circle may be based on the initial pose transformation and a parameter for sampling points used in the one or more interpolation methods. In an embodiment, the first set of points may correspond to points included in each of the fitted spline curves and the fitted circle. The first operation is described further, for example, inFIG.8. The second operation is described further, for example, inFIG.10. Examples of interpolated eyelid points and limbus points are described further, for example, inFIG.11A. In another embodiment, to interpolate the first set of points, the circuitry202may be configured to label one or more points on an iris mesh component of the template 3D mesh328. The labeled one or more points may correspond to a location of a pupil in the iris mesh component. The circuitry202may be configured to update positions of the labeled one or more points, based on a refractive index of a cornea of the eye and an intersection of a plane formed by the labeled one or more points with rays cast from a reference position outside the template 3D mesh328. The first set of points may include the updated positions of the labeled one or more points. The interpolation of pupil points is described further, for example, inFIGS.9A,9B,9C, and9D. Examples of the interpolated pupil points are described further, for example, inFIG.11B. In an embodiment, the circuitry202may be configured to process a raw 3D scan of the head portion of the object110to extract 3D points (e.g., the 3D points332A) corresponding to the sclera of the one or more regions of the eye. The circuitry202may be further configured to determine vertex positions corresponding to the sclera on the 3D template mesh328based on the extracted 3D points332A. Further, the circuitry202may determine reference 3D points on the 3D template mesh328based on the determined vertex positions. The determination of the reference 3D points on the 3D template mesh328is described further, for example, inFIG.10. At316, a second set of points may be determined. In an embodiment, the circuitry202may be configured to determine a second set of points, based on sampling parameters associated with the interpolated first set of points. Similar to the first set of points, the determined second set of points may correspond to the one or more regions of the eye. The determination of the second set of points is described further, for example, inFIG.12. At318, a final pose transformation may be determined. In an embodiment, the circuitry202may be configured to determine the final pose transformation, based on a minimization of a difference between the first set of points and the second set of points. In an embodiment, the determination of the final pose transformation may be further based on a minimization of a distance between the reference 3D points and the extracted 3D points. The determination of the final pose transformation is described further, for example, inFIG.12. At320, the 3D template mesh328may be fitted into the eyeball socket of the 3D mesh326. The 3D mesh326may include an empty eyeball socket with a space to include an eyeball mesh. 
The circuitry202may be configured to fit the 3D template mesh328into the eyeball socket of the 3D mesh326, based on the determined final pose transformation and the estimated scale factor (i.e., "s", as described further at310). Based on the estimated scale factor, the 3D template mesh328may be scaled to a size that may be suitable for the space provided in the empty eyeball socket of the 3D mesh326. The scaled 3D template mesh328may then be fitted into the eyeball socket of the 3D mesh326. The scaled 3D template mesh328of the eyeball may be accurately positioned in the eyeball socket of the 3D mesh326, based on the determined final pose transformation. The final pose transformation may specify an amount of rotation (or orientation) and translation required to accurately position the scaled 3D template mesh328into the eyeball socket. After fitting, the scaled 3D template mesh328may impart photorealism to an eye portion of the 3D mesh326. At322, the 3D mesh326may be refined. In an embodiment, the circuitry202may be configured to apply, around an eyelid contour of the 3D mesh326, an as-rigid-as-possible (ARAP) deformation over the 3D mesh326, to obtain a refined 3D mesh. The ARAP deformation may be applied based on a position of the eyelid contour and the final pose transformation. In an embodiment, the 3D template mesh328may be fitted into the eyeball socket of the refined 3D mesh. The refinement of the 3D mesh326is described further, for example, inFIG.13. In conventional methods, the 3D mesh326that may represent the 3D shape of the head portion of the object110may be manually refined to accurately represent and position the eyeball in the 3D mesh326. A human modeler may refine the 3D mesh326by manual selection of vertices of the 3D mesh326and may update locations of the selected vertices in the 3D mesh326to position the eyeball in the 3D mesh326. However, manual refinement of the 3D mesh326may require a significant amount of time and effort and may be prone to errors. In contrast, the present disclosure provides a method for automated eyeball positioning in the 3D mesh326of the head portion of the object110. The present disclosure makes use of 3D/2D key points corresponding to an eye region to calculate a scale factor for a template eyeball mesh and to iteratively determine a pose transformation. The determination of the pose transformation may be modeled as an optimization problem (such as a minimization of an objective function). The pose transformation which results in the minimization of the objective function may be considered as the final pose transformation. Since the pose transformation is determined automatically, the disclosed method may not only save time but may also result in more accurate eyeball positioning. FIG.4is a diagram that illustrates an example of a 3D mesh of a head portion of an object including an eye portion, in accordance with an embodiment of the disclosure.FIG.4is described in conjunction with elements fromFIGS.1,2,3A, and3B. With reference toFIG.4, there is shown a diagram400. The diagram400may include an exemplary 3D mesh402of a head portion of an object (e.g., the object110). The diagram400may further include an exemplary eye portion404A of the 3D mesh402and an expanded view404B of the eye portion404A of the 3D mesh402. In an embodiment, the circuitry202may be configured to acquire the 3D mesh402, as described, for example, inFIG.3A. The 3D mesh402may be a 3D model of the head portion of the object110.
In an embodiment, the 3D mesh402may be a raw 3D scan of the head portion of the object110. In another embodiment, the 3D mesh402may be obtained by processing the raw 3D scan through a mesh clean-up pipeline. In such a case, the 3D mesh402may be a cleaned-up 3D mesh, as described, for example, inFIG.3A. The eye portion404A of the 3D mesh402may include a region of the 3D mesh402that includes eyes of the object110. The eyes of the object110may be open in the eye portion404A. Though the eye portion404A may be the region of the 3D mesh402that includes the eyes, the eye portion404A may not include a separate structure for each eyeball. In other words, the eye portion404A may include a structure for the entire eye; however, it may not include a specific eyeball structure. The quality of the 3D mesh402in the eye portion404A may be low due to high specular reflection on the surface of the eyeballs and occlusion that may be caused by the eyelashes. The 3D mesh402and the eye portion404A ofFIG.4are merely shown as examples of a head mesh reconstructed from images and an eye portion of the head mesh. Such examples should not be construed as limiting the scope of the disclosure. FIG.5is a diagram that illustrates an example of a 3D template mesh of an eyeball and an example of an eyeball socket of a 3D mesh of a head portion of an object, in accordance with an embodiment of the disclosure.FIG.5is described in conjunction with elements fromFIGS.1,2,3A,3B, and4. With reference toFIG.5, there is shown a diagram500. The diagram500may include an exemplary 3D template mesh502of an eyeball. The diagram500may further include an exemplary eyeball socket504of a 3D mesh (e.g., the 3D mesh326) of a head portion of the object110. The circuitry202may be configured to acquire the 3D template mesh502, as described, for example, inFIG.3A. After the acquisition, the circuitry202may be configured to fit the 3D template mesh502into the eyeball socket504of the 3D mesh326, based on the determined final pose transformation. The determination of the final pose transformation is described further, for example, inFIGS.3A-3B and12. The 3D template mesh502may be fitted into the eyeball socket504further based on the estimated scale factor (i.e., "s"), as described further, for example, inFIG.3A. The 3D template mesh502and the eyeball socket504ofFIG.5are for exemplary purposes and should not be construed as limiting the scope of the disclosure. FIG.6is a diagram that illustrates an exemplary scenario for extraction of 3D feature points associated with one or more regions of an eye and extraction of 3D points corresponding to a sclera of the eye, in accordance with an embodiment of the disclosure.FIG.6is described in conjunction with elements fromFIGS.1,2,3A,3B,4, and5. With reference toFIG.6, there is shown an exemplary scenario600. The scenario600may include a set of images602, an eye portion608of the 3D mesh326, and an eye portion610of the raw 3D scan. The set of images602may include images of an eye of the object110. As shown, for example, the set of images602includes a first image602A, a second image602B, and a third image602C. The circuitry202may be configured to acquire the set of images602from the set of image sensors106, as described, for example, inFIG.3A. Thereafter, the circuitry202may be configured to process the acquired set of images602to extract the 3D feature points. The 3D feature points may be associated with one or more regions of the eye.
Examples of the one or more regions of the eye may include, but are not limited to, eyelids, a limbus, a sclera, a pupil, and an iris. In an embodiment, the circuitry202may be configured to identify the set of 2D feature points of the eye in each of the acquired set of images602. The identification of the set of 2D feature points may be based on a user input, an eyelid detection technique, or an eye part segmentation technique. The set of 2D feature points may include contour points along eyelids of the eye and a point at a center of a pupil of the eye. For example, as shown inFIG.6, first contour points604A, second contour points604B, and third contour points604C may be identified in the first image602A, the second image602B, and the third image602C, respectively, as the contour points along eyelids of the eye. Additionally, a first point606A, a second point606B, and a third point606C may be identified as points at the center of the pupil of the eye in the first image602A, the second image602B, and the third image602C, respectively. The set of 2D feature points identified from the set of images602may include the first contour points604A, the second contour points604B, the third contour points604C, the first point606A, the second point606B, and the third point606C. The circuitry202may be further configured to determine a 3D position of each of the identified set of 2D feature points, based on a set of camera parameters associated with one or more image-capture devices (e.g., the set of image sensors106) that captured the set of images602. Such camera parameters may be intrinsic and extrinsic camera parameters. In an embodiment, the 3D position of each 3D feature point may be determined based on a triangulation of the identified set of 2D feature points. The 3D features points may be extracted based on the determined 3D position. For example, as shown inFIG.6, a first set of 3D feature points608A associated with the contours of the eyelids and a second 3D feature point608B associated with the center of the pupil may be extracted from the eye portion608of the 3D mesh326based on the processing of the acquired set of images602. In an embodiment, the circuitry202may be configured to process a raw 3D scan (not shown inFIG.6) of the head portion of the object110to extract 3D points corresponding to a sclera of the one or more regions of the eye. For example, as shown inFIG.6, 3D points610A corresponding to the sclera of the eye may be extracted from the eye portion610of the raw 3D scan based on the processing of the raw 3D scan. The circuitry202may execute an eye part segmentation operation on each of the acquired set of images602. Based on the executed eye part segmentation operation, the circuitry202may determine a set of regions in each of the acquired set of images602that may correspond to the sclera of the eye. Thereafter, the circuitry202may project the determined set of regions in each of the acquired set of images602to the raw 3D scan, to extract the 3D points corresponding to the sclera from the raw 3D scan. The projection of the determined set of regions may be based on the one or more camera parameters (associated with the set of image sensors106). The extracted 3D points corresponding to the sclera may be associated with vertices of the raw 3D scan. The 3D feature points (or the 3D points) may not be labeled from the 3D mesh326(or the raw 3D scan) as an eye region of the 3D mesh326(or the raw 3D scan) may not be accurate. 
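As an illustrative aside, the triangulation of the labeled 2D feature points described above with reference toFIG.6may be sketched as a minimal two-view linear (DLT) triangulation. This sketch assumes that 3x4 projection matrices for two of the image sensors are available from the intrinsic and extrinsic camera parameters; the matrices and pixel coordinates below are placeholders, and the function name is illustrative only.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 2D correspondence.

    P1, P2: 3x4 camera projection matrices (intrinsics * extrinsics).
    x1, x2: (u, v) pixel coordinates of the same eye feature in two images.
    Returns the 3D point in the common world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Placeholder projection matrices and pixel coordinates; real use would take
# the calibrated parameters of the set of image sensors.
P_a = np.hstack([np.eye(3), np.zeros((3, 1))])
P_b = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
pupil_3d = triangulate_point(P_a, P_b, (320.0, 240.0), (310.0, 240.0))
```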
As noted above, the eye region may have a reflection on the surface of the 3D mesh326(or the raw 3D scan). In addition, in the 3D mesh326(or the raw 3D scan), the eye region may be occluded due to the eyelids. The scenario600ofFIG.6for the extraction of the 3D feature points and the extraction of the 3D points is for exemplary purposes and may not be used for limiting the scope of the disclosure. FIG.7is a diagram that illustrates an exemplary scenario for estimation of an initial pose transformation between a 3D template mesh of an eyeball and a sphere fitted to 3D feature points, in accordance with an embodiment of the disclosure.FIG.7is described in conjunction with elements fromFIGS.1,2,3A,3B,4,5, and6. With reference toFIG.7, there is shown an exemplary scenario700. The scenario700may include a sphere702, an eye portion706of a 3D mesh (e.g., the 3D mesh326), and a 3D template mesh712. The scenario700may further include a first point704A on the sphere702, a second point704B in the eye portion706, a third point708in the eye portion706, and a fourth point718on the 3D template mesh712. The scenario700may further include a first vector716, an axis of rotation714of the 3D template mesh712, and a second vector710. The circuitry202may be configured to fit the sphere702to the extracted 3D feature points (for example, the first set of 3D feature points608A associated with the contours of the eyelids). The sphere702may be fitted further to the extracted 3D points (for example, the 3D points610A). The fitting of the sphere702is described further, for example, inFIG.3A. The first point704A may be the center of the sphere702. The second point704B may be a 3D location in the eye portion706of the 3D mesh326and may correspond to the center of the sphere702. The first point704A and the second point704B are denoted by "C". The third point708(which may be denoted by "CP") may be the center of the pupil in the eye portion706of the 3D mesh326. The third point708may be determined based on the extracted 3D feature points (for example, the second 3D feature point608B associated with the center of the pupil). The fourth point718(which may be denoted by "C0") may be the center of the 3D template mesh712. The first vector716(denoted by "a0") may be a vector along the axis of rotation714of the 3D template mesh712. Further, the second vector710(denoted by "a1") may be a vector that spans from the center (e.g., the second point704B, denoted by "C") of the fitted sphere702to a 3D point (e.g., the third point708, denoted by "CP") that corresponds to a center of a pupil of the eye. The circuitry202may be configured to estimate a rotation parameter (denoted by "R") of an initial pose transformation, between the first vector716(denoted by "a0") and the second vector710(denoted by "a1").
By way of example, and not limitation, the rotation parameter may be estimated by use of equations (3), (4), (5), (6), and (7), which are given as follows:

v = a0 × a1   (3)

ss = ‖v‖   (4)

c = a0 · a1   (5)

[v]x = [ 0  −v3  v2 ; v3  0  −v1 ; −v2  v1  0 ]   (6)

R = I + [v]x + [v]x^2 · (1 − c) / ss^2   (7)

where, v may represent a cross-product of the first vector716(denoted by "a0") and the second vector710(denoted by "a1"); ss may represent the sine of an angle between the first vector716(denoted by "a0") and the second vector710(denoted by "a1"); c may represent the cosine of an angle between the first vector716(denoted by "a0") and the second vector710(denoted by "a1"); [v]x may represent a skew-symmetric cross-product matrix of "v"; I may represent an identity matrix; (1 − c)/ss^2 may be simplified as 1/(1 + c); and R may represent a rotation matrix between the first vector716(denoted by "a0") and the second vector710(denoted by "a1"). In an embodiment, the circuitry202may be configured to estimate a translation parameter (denoted by "t") of the initial pose transformation based on an offset between the center (e.g., the second point704B, denoted by "C") of the fitted sphere702and the center (e.g., the fourth point718, denoted by "C0") of the 3D template mesh712. The translation parameter (denoted by "t") may be estimated by use of the following equation (8):

t = C − C0   (8)

The initial pose transformation between the 3D template mesh712and the fitted sphere702may be estimated based on the estimated rotation parameter (denoted by "R") and the estimated translation parameter (denoted by "t") of the initial pose transformation. The scenario700ofFIG.7is for exemplary purposes and should not be construed as limiting the scope of the disclosure. FIG.8is a diagram that illustrates an interpolation of a first set of points that correspond to one or more regions of an eye, in accordance with an embodiment of the disclosure.FIG.8is described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6, and7. With reference toFIG.8, there is shown a diagram800that illustrates a first set of operations for interpolation of a first set of points that correspond to one or more regions of an eye. In the diagram800, there is shown a set of images802, which may include, for example, a first image802A, a second image802B, and a third image802C in an image space. The diagram800depicts a first set of operations804and a 3D template mesh806. The first set of operations804may be executed on an image (e.g., the first image802A of the set of images802) in the image space and the 3D template mesh806. In the diagram800, there is further shown a 3D space808corresponding to the image space associated with the set of images802and a UV space810corresponding to both the 3D space808and the image space. The circuitry202may be configured to acquire the set of images802from the set of image sensors106, as described, for example, inFIG.3A. Thereafter, the circuitry202may be configured to label contours of one or more regions of the eye, including eyelids, a limbus, and a pupil in the acquired set of images802. The contours of the one or more regions may be labeled based on a user input, an eye segmentation technique, or a combination of the eye segmentation technique and a user input. In an embodiment, to execute the first set of operations804, the circuitry202may be configured to project one or more contours of the labeled contours to a 3D coordinate space (e.g., the 3D space808), based on defined camera parameters of each of the set of image sensors106.
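Returning briefly to the initial pose transformation ofFIG.7, the construction in equations (3) to (8) may be sketched as follows. This is a minimal illustration, assuming unit-length axis and direction vectors; the function and variable names, as well as the placeholder values for C, C0, and CP, are not from the disclosure.

```python
import numpy as np

def rotation_between(a0, a1):
    """Rotation matrix aligning unit vector a0 with unit vector a1,
    following the construction of equations (3)-(7)."""
    a0 = a0 / np.linalg.norm(a0)
    a1 = a1 / np.linalg.norm(a1)
    v = np.cross(a0, a1)                      # equation (3)
    c = np.dot(a0, a1)                        # equation (5)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])         # equation (6), skew-symmetric matrix
    # (1 - c) / ss^2 simplifies to 1 / (1 + c); degenerate when a0 = -a1.
    return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))   # equation (7)

# Initial pose: rotation between the template axis and the sphere-to-pupil
# vector, translation as the offset of the two centers (equation (8)).
a0 = np.array([0.0, 0.0, 1.0])                       # axis of rotation of the template mesh
C, C0 = np.array([1.0, 2.0, 3.0]), np.zeros(3)       # sphere center, template center (placeholders)
CP = np.array([1.0, 2.0, 4.0])                       # pupil center (placeholder)
R = rotation_between(a0, CP - C)
t = C - C0
```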
Returning toFIG.8, the camera parameters may include a first set of intrinsic camera parameters and a second set of extrinsic camera parameters. To execute the first set of operations804, the circuitry202may be configured to determine a set of contour points as intersecting points of the projection on the 3D template mesh806. For example, as shown inFIG.8, for the first image802A, rays from a center of an image plane (associated with the image sensor) may be cast onto the 3D template mesh806, such that the rays may pass through the one or more contours of the labeled contours in the first image802A. The circuitry202may determine the set of contour points on the 3D template mesh806as points of intersection between the rays of the projection on the 3D template mesh806. For example, the intersections of the projected rays and the pre-positioned eyeball (in the 3D template mesh806) may be determined as the eyelid contour and the limbus contour on the 3D template mesh806. The projection of one or more contours or points for the pupil is described further, for example, inFIGS.9A,9B,9C, and9D. The execution of one or more operations (to interpolate the first set of points), including a first operation to unwrap the 3D template mesh806to a UV coordinate space (e.g., the UV space810) and a second operation to apply one or more interpolation methods on the set of contour points, is described further, for example, inFIG.10. The diagram800ofFIG.8is for exemplary purposes and should not be construed as limiting the scope of the disclosure. FIG.9Ais a diagram that illustrates an exemplary image of an eye including a pupil, in accordance with an embodiment of the disclosure.FIG.9Ais described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6,7, and8. With reference toFIG.9A, there is shown a diagram900A of an exemplary image902of an eye and a pupil902A in the image902. The pupil902A is an anatomical structure that remains inside an outer eyeball structure of the eye. The size of the pupil902A in the eye may vary based on ambient light conditions. FIGS.9B and9Care diagrams that illustrate interpolation of a first set of points corresponding to the iris of an eye, in accordance with an embodiment of the disclosure.FIG.9Bis described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6,7,8, and9A. With reference toFIG.9B, there is shown a diagram900B. The diagram900B includes a 3D template mesh904of an eyeball, an iris mesh component906of the 3D template mesh904, a horizontal plane908, and a vertical plane910perpendicular to the horizontal plane908. The vertical plane910(which may also be referred to as an imaging plane910) may include an imaging slit910A. As shown in the diagram900B, a set of rays, including a first ray912, may be cast from the imaging slit910A to the 3D template mesh904. The angle between the first ray912and a normal914at a point of an intersection of the first ray912and an outer surface of the 3D template mesh904may be referred to as a first angle916A (denoted by θ1). The first ray912may be refracted inside the 3D template mesh904due to a difference of refractive indices of air and the cornea of the eye. The angle between the refracted ray and the normal914at the point of the intersection of the first ray912and the outer surface of the 3D template mesh904may be referred to as a second angle916B (denoted by θ2). The circuitry202may be configured to label one or more points906A on the iris mesh component906of the 3D template mesh904.
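The ray casting used to project labeled contours onto the eyeball inFIG.8, and to cast rays from the imaging slit910A inFIG.9B, may be sketched as follows. For brevity, this sketch approximates the eyeball by the fitted sphere; the disclosure intersects the rays with the triangles of the 3D template mesh, and a production version would do the same. Names and values are illustrative only.

```python
import numpy as np

def ray_sphere_intersection(origin, direction, center, radius):
    """First intersection of a ray with a sphere, or None if the ray misses.

    A sphere stands in here for the eyeball template; intersecting the
    template mesh triangles instead yields the contour points exactly.
    """
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    if t < 0:
        t = (-b + np.sqrt(disc)) / 2.0
    return origin + t * d if t >= 0 else None

# Each labeled contour pixel is back-projected into a ray (camera center plus
# direction) and intersected with the eyeball to obtain a 3D contour point.
cam_center = np.array([0.0, 0.0, -5.0])
ray_dir = np.array([0.01, 0.02, 1.0])        # from a labeled eyelid/limbus pixel (placeholder)
contour_point = ray_sphere_intersection(cam_center, ray_dir, np.zeros(3), 1.0)
```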
As noted above, the labeled one or more points906A may correspond to a location of a pupil in the iris mesh component906. In an embodiment, the one or more points906A corresponding to the location of the pupil in the iris mesh component906may be labeled based on a user input. The circuitry202may be configured to update positions of the labeled one or more points906A, based on a refractive index of the cornea of the eye. The refraction at the cornea of the eye may be modeled based on equation (9) (Snell's law), which is given as follows:

n1 sin θ1 = n2 sin θ2   (9)

where, n1 may represent the refractive index on the incident side (e.g., a refractive index of air); n2 may represent the refractive index on the refracted side (e.g., a refractive index of the cornea of the eye); θ1 may represent an incident angle (e.g., the first angle916A); and θ2 may represent a refracted angle (e.g., the second angle916B). The update of the positions of the labeled one or more points906A may be further based on an intersection of a plane formed by the labeled one or more points906A with rays cast from a reference position outside the 3D template mesh904, as described further, for example, inFIG.9C. With reference toFIG.9C, there is shown a diagram900C. The diagram900C may include the labeled one or more points906A that may correspond to the location of the pupil in the iris mesh component906of the 3D template mesh904. The diagram900C may further include a plane918that may extend from the labeled one or more points906A. The diagram900C may include an expanded view918A of the plane918. The expanded view918A may include intersection points920of rays (e.g., the first ray912) that may be cast from a reference position922outside the 3D template mesh904. The circuitry202may be configured to update the positions of the labeled one or more points906A, further based on an intersection (at the intersection points920) of the plane918formed by the labeled one or more points906A with rays (e.g., the first ray912) cast from the reference position922outside the 3D template mesh904. Thus, the positions of the labeled one or more points906A corresponding to the pupil (in the iris mesh component906of the 3D template mesh904) may be updated based on the intersection points920. In other words, the positions of the labeled one or more points906A may be updated to the positions of the intersection points920. An example of the updated positions of the labeled one or more points906A is provided, for example, inFIG.9D. FIG.9Dis a diagram that illustrates update of positions of labeled one or more points corresponding to the pupil of the eye in an iris mesh component of a 3D template mesh, in accordance with an embodiment of the disclosure.FIG.9Dis described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6,7,8,9A,9B, and9C. With reference toFIG.9D, there is shown a diagram900D. The diagram900D may include a first 3D template mesh924A and a second 3D template mesh924B. The diagram900D may further include first positions926A of the labeled one or more points corresponding to the pupil of the eye and second positions926B of the labeled one or more points corresponding to the pupil of the eye. The first positions926A may lie in the first 3D template mesh924A and the second positions926B may lie in the second 3D template mesh924B. The first 3D template mesh924A may be the same as the 3D template mesh904and the first positions926A may be the same as the positions of the labeled one or more points906A corresponding to the pupil in the iris mesh component906of the 3D template mesh904.
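The refraction of equation (9), which bends each cast ray at the cornea before the ray is intersected with the plane918, may be sketched as follows. The refractive indices are assumptions: air is taken as 1.0 and the cornea as 1.376, a commonly cited value that the disclosure does not itself specify; the function name is illustrative only.

```python
import numpy as np

def refract(direction, normal, n1=1.0, n2=1.376):
    """Refract a unit ray direction at a surface with unit normal, per
    Snell's law n1*sin(theta1) = n2*sin(theta2) (equation (9)).

    Returns the refracted unit direction, or None on total internal reflection.
    """
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    if cos_i < 0:                 # normal points the wrong way; flip it
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None               # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# A ray cast from the imaging slit is bent at the cornea before it is
# intersected with the plane of the labeled pupil points.
refracted = refract(np.array([0.0, 0.1, 1.0]), np.array([0.0, 0.0, -1.0]))
```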
The second 3D template mesh924B may be an eye mesh that may be obtained from the first 3D template mesh924A based on the update of the first positions926A. The first positions926A may be updated to the second positions926B based on the positions of the intersection points920, as described further, for example, inFIG.9C. The diagrams900A,900B,900C, and900D ofFIGS.9A,9B,9C, and9D, respectively, are for exemplary purposes and should not be construed as limiting the scope of the disclosure. FIG.10is a diagram that illustrates an exemplary scenario for interpolation of a first set of points that correspond to one or more regions of an eye, in accordance with an embodiment of the disclosure.FIG.10is described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6,7,8,9A,9B,9C, and9D. With reference toFIG.10, there is shown an exemplary scenario1000for interpolation of a first set of points that correspond to one or more regions of an eye. In the scenario1000, there is shown the set of images802, which includes, for example, the first image802A, the second image802B, and the third image802C in the image space. The scenario1000may include the 3D space808corresponding to the image space associated with the set of images802and the UV space810corresponding to both the 3D space808and the image space. The scenario1000may further include an operation1002for determination of reference 3D points corresponding to the sclera on a 3D template mesh (e.g., the 3D template mesh806). The scenario1000may further include an operation1004to fit spline functions in the UV space810. The circuitry202may be further configured to process the raw 3D scan of the head portion of the object110to extract the 3D points corresponding to the sclera of the one or more regions of the eye, as described, for example, inFIG.6. To execute the operation1002, the circuitry202may be configured to determine vertex positions corresponding to the sclera on the 3D template mesh806based on the extracted 3D points (e.g., the 3D points610A). The circuitry202may be further configured to determine reference 3D points on the 3D template mesh806based on the determined vertex positions corresponding to the sclera on the 3D template mesh806. For example, the reference 3D point corresponding to the sclera may be determined based on a Barycentric coordinate of a corresponding vertex position on a line that may connect the vertex position to the center of the eyeball. For each 3D reference point Vr (that corresponds to the 3D point in the sclera region extracted from the raw 3D scan), the circuitry202may connect the extracted 3D point with the center of the eyeball template such that the connected line may intersect with a triangle of the eyeball template (e.g., a triangle with 3D coordinates Va, Vb, Vc). The intersection may be represented as a Barycentric coordinate Vi, where Vi = a*Va + b*Vb + c*Vc, and a, b, and c may be Barycentric coefficients. The reference 3D points may correspond to points which lie on the 3D template mesh806and are closest to the extracted 3D points. In an embodiment, the execution of one or more operations to interpolate the first set of points may include a first operation to unwrap the 3D template mesh806to a UV coordinate space (e.g., the UV space810) and a second operation to apply one or more interpolation methods on the set of contour points. The circuitry202may be configured to execute the first operation to unwrap the 3D template mesh806from the 3D space808to the UV space810.
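Before describing the unwrapped mesh further, the Barycentric construction of operation1002(Vi = a*Va + b*Vb + c*Vc) may be sketched with a standard ray/triangle intersection. This is an illustrative sketch under the assumption that the eyeball center, an extracted sclera point, and one candidate triangle of the template are known; names and values are placeholders.

```python
import numpy as np

def barycentric_intersection(origin, target, va, vb, vc):
    """Intersect the segment from `origin` (eyeball center) toward `target`
    (an extracted sclera point) with triangle (va, vb, vc) and return the
    Barycentric coefficients (a, b, c), or None if there is no hit.
    Uses the Moeller-Trumbore ray/triangle test."""
    d = target - origin
    e1, e2 = vb - va, vc - va
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < 1e-12:
        return None
    inv = 1.0 / det
    s = origin - va
    u = np.dot(s, p) * inv
    q = np.cross(s, e1)
    v = np.dot(d, q) * inv
    if u < 0 or v < 0 or u + v > 1:
        return None
    return (1.0 - u - v, u, v)     # coefficients for va, vb, vc

# Reference point on the template: V_i = a*V_a + b*V_b + c*V_c.
coeffs = barycentric_intersection(np.zeros(3), np.array([0.0, 0.0, 1.2]),
                                  np.array([-0.1, -0.1, 1.0]),
                                  np.array([0.1, -0.1, 1.0]),
                                  np.array([0.0, 0.15, 1.0]))
```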
The unwrapped 3D template mesh may include the determined set of contour points in the UV coordinate space, i.e., the UV space810. The extracted 3D points (i.e., labeled points) for the eyelids and the limbus on the 3D template mesh806may be projected from the 3D space808to the UV space810. The circuitry202may be further configured to execute the second operation to apply the one or more interpolation methods on the set of contour points. To execute the second operation, the circuitry202may execute the operation1004to fit spline functions in the UV space810. As part of the operation1004, the circuitry202may be configured to fit spline curves into eyelid points of the set of contour points and fit a circle into limbus points of the set of contour points. In an embodiment, the fitting of the spline curves and the circle may be based on the initial pose transformation and a parameter for sampling points used in the one or more interpolation methods. The first set of points may correspond to points included in each of the fitted spline curves and the fitted circle. For example, based on the extracted 3D feature points (e.g., the first set of 3D feature points608A) of the eyelid, the circuitry202may fit a spline function (e.g., a function denoted by "Eyelid(.)") in the UV space810. Parameters that may be used for the fitting of the spline function may include a pose parameter (denoted by ρ) of the eyeball and parameter values for sampling points (denoted by c) of the spline function. The pose parameter (denoted by ρ) may be a part of the initial pose transformation and may be known. Alternatively, in case of later iterations, the pose parameter may be a pose estimated from a previous iteration. The parameter values for the sampling points may be initialized as equidistantly positioned control points of the spline curve. By way of example, and not limitation, the circuitry202may fit two fourth-order spline curves with six control points each (i.e., a first spline curve to the upper eyelid contour and a second spline curve to the lower eyelid contour) in the UV space810. The circuitry202may use equation (10), as follows, to fit the spline curves:

a_i^lid = Camera(Eyeball(Eyelid(c_i^lid, ρ), ρ))   (10)

where, a_i^lid may represent 2D coordinates of the labeled contour points of the eyelid on the set of images802; c_i^lid may represent the parameter values for sampling points for the eyelid; ρ may represent the pose parameter (e.g., the initial pose transformation); Eyelid(.) may represent a spline function for the eyelid contours; Eyeball(.) may represent a function that may project the labeled points from the UV space810to the 3D space808; and Camera(.) may represent a function that may project points from the 3D space808to the image space (i.e., the 2D space). The circuitry202may fit a first circle into the limbus points of the set of contour points by use of equation (11), which may be given as follows:

a_i^lim = Camera(Eyeball(Limbus(c_i^lim), ρ))   (11)

where, a_i^lim may represent 2D coordinates of the labeled contour points of the limbus on the set of images802; c_i^lim may represent the parameter values for sampling points for the limbus; ρ may represent the pose parameter (e.g., the initial pose transformation including the scale, the rotation, and the transformation parameters); Limbus(.) may represent a function that may be a circle function for the limbus contour; Eyeball(.)
may represent a function that may project the labeled points from UV space810to the 3D space808; andCamera(.) may represent a function that may project points from the 3D space808to the image space (i.e., the 2D space). Based on the fitted first circle for the limbus, the circuitry202may estimate a radius (denoted by rlimbus) of the limbus and an angle (θlimbus) for each limbus point (corresponding to a labeled limbus point in the 3D space808) in the UV space810. Further, the circuitry202may fit a second circle for the pupil on the extended 3D plane (e.g., the plane918). As described inFIG.9C, the plane918may be formed by the labeled one or more points906A with rays (e.g., the first ray912) cast from the reference position922outside the 3D template mesh904. Based on the fitted second circle for the pupil, the circuitry202may estimate a radius (rpupil) of the pupil and an angle (θpupil) for each labeled point of the pupil on an extended 3D plane (e.g., the plane918) for the pupil. The first set of points may include at least points on the fitted two spline curves for the eyelid points, the fitted first circle for the limbus points, and the fitted second circle for the pupil points. Examples of labeled points of the set of contour points and the first set of points are provided, for example, inFIGS.11A and11B. The scenario1000ofFIG.10is for exemplary purpose and should not be construed as limiting the scope of the disclosure. FIGS.11A and11Bare diagrams that illustrate exemplary labeled points of a set of contour points of an eye and exemplary interpolated first set of points, in accordance with an embodiment of the disclosure.FIGS.11A and11Bare described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6,7,8,9A,9B,9C,9D, and10. With reference toFIG.11A, there is shown a diagram1100A which includes a first UV space1102A and a second UV space1102B. The first UV space1102A may include a first set of contour points1104A associated with upper eyelids of an eye and a second set of contour points1106A associated with lower eyelids of the eye. The first UV space1102A may further include a third set of contour points1108A associated with a limbus of the eye. The first set of contour points1104A, the second set of contour points1106A, and the third set of contour points1108A may be contour points, selected based on the labeled contours of the one or more regions of the eye. The contour points may lie on the 3D template mesh806that may be unwrapped from the 3D space808to the UV space810. The second UV space1102B may include a first set of contour points1104B associated with upper eyelids of the eye and a second set of contour points1106B associated with lower eyelids of the eye. The second UV space1102B may further include a third set of contour points1108B associated with a limbus of the eye. The first set of contour points1104B may correspond to the first spline curve that may be fitted to the contour points of the upper eyelids of the eye, and the second set of contour points1106B may correspond to the second spline curve that may be fitted to the contour points of the lower eyelids of the eye. The third set of contour points1108B may correspond to the first circle fitted to the contour points of the limbus of the eye. The interpolated first set of points in the second UV space1102B (for example, the UV space810) may include the first set of contour points1104B, the second set of contour points1106B, and the third set of contour points1108B. 
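The spline and circle fitting ofFIG.10(equations (10) and (11)) may be sketched as follows in the UV space. This is a minimal sketch, not the implementation of the disclosure: the UV point values are placeholders, SciPy is assumed to be available, and "fourth-order" is mapped to a degree-4 spline here as an assumption. The composition with Eyeball(.) and Camera(.) is omitted for brevity.

```python
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.optimize import least_squares

# Eyelid contour points in UV space (placeholders for the labeled points).
uv_eyelid = np.array([[0.10, 0.52], [0.25, 0.60], [0.50, 0.63],
                      [0.75, 0.58], [0.90, 0.50]])

# Fit a degree-4 spline to the upper-eyelid contour and resample it.
tck, _ = splprep([uv_eyelid[:, 0], uv_eyelid[:, 1]], k=4, s=0.0)
u_samples = np.linspace(0.0, 1.0, 20)            # sampling parameter values
eyelid_curve = np.stack(splev(u_samples, tck), axis=1)

# Fit a circle to limbus points by least squares over (cx, cy, r).
uv_limbus = np.array([[0.5 + 0.1 * np.cos(a), 0.5 + 0.1 * np.sin(a)]
                      for a in np.linspace(0, 2 * np.pi, 12, endpoint=False)])

def circle_residuals(params, pts):
    cx, cy, r = params
    return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

fit = least_squares(circle_residuals, x0=[0.5, 0.5, 0.05], args=(uv_limbus,))
cx, cy, r_limbus = fit.x
```

Sampling the fitted curve and circle at chosen parameter values yields the interpolated first set of points for the eyelids and the limbus, respectively.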
With reference toFIG.11B, there is shown a diagram1100B that includes a first 3D space1110A and a second 3D space1110B. The first 3D space1110A may include a first set of contour points1112A associated with a pupil of the eye and the second 3D space1110B may include a second set of contour points1112B associated with the pupil of the eye. The first set of contour points1112A may correspond to the labeled one or more points corresponding to a location of the pupil in the iris mesh component906. The second set of contour points1112B may correspond to the second circle fitted to the contour points of the pupil of the eye. The interpolated first set of points in the second 3D space1110B (for example, the 3D space808) may include the second set of contour points1112B. The diagrams1100A and1100B ofFIGS.11A and11Bare for exemplary purpose and may not be used for limiting the scope of the disclosure. FIG.12is a diagram that illustrates an exemplary scenario for determination of final pose transformation, in accordance with an embodiment of the disclosure.FIG.12is described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6,7,8,9A,9B,9C,9D,10,11A, and11B. With reference toFIG.12, there is shown an exemplary scenario1200for one or more operations for determination of a final pose transformation for a 3D template mesh (for example, the 3D template mesh806). In the scenario1200, there is shown the set of images802, which includes, for example, the first image802A, the second image802B, and the third image802C in the image space. The scenario1200may include the 3D space808corresponding to the image space associated with the set of images802and the UV space810corresponding to both the 3D space808and the image space. The scenario1200may further include an operation1202for pose optimization based on distance minimization. In an embodiment, the circuitry202may be configured to determine a second set of points, which may correspond to the one or more regions of the eye, based on sampling parameters associated with the interpolated first set of points. For example, sampling parameters, such as, the various control points (i.e., “c”) of the two curves fitted for the eyelids, may be varied to determine the second set of points corresponding to the eyelids. Further, the fitted first circle for the limbus may be shifted towards a center of the UV space810to determine the second set of points corresponding to the limbus. In addition, the fitted second circle for the pupil may be shifted towards a center of the extended 3D plane (e.g., the plane918) to determine the second set of points corresponding to the pupil. To execute the operation1202for pose optimization based on distance minimization, the circuitry202may be configured to determine a final pose transformation based on a minimization of a difference between the first set of points and the second set of points. The difference may be specified in terms of a distance measure in the 3D space808to be estimated between the reference 3D points and the extracted 3D points associated with the sclera, and a distance measure in the 3D space808between the first set of points and the second set of points associated with the pupil. Also, the difference may be specified in terms of a distance measure in the UV space810between the first set of points and the second set of points associated with the eyelids and a distance measure in the UV space810between the first set of points and the second set of points associated with the limbus. 
In an embodiment, the determination of the final pose transformation may be an iterative process in which the initial pose transformation (such as the pose parameter, ρ) may be iteratively updated until the distance measure is minimized. The determination of the reference 3D points is described further, for example, inFIG.10and the extraction of the 3D points is described further, for example, inFIG.6. In an embodiment, the circuitry202may determine the final pose transformation by use of equations (12), (13), (14), (15), (16), (17), (18), and (19), which may be given as follows:

E_annotation(P) = E_limbus + E_eyelid + E_scan + E_pupil   (12)

x_i^lim = Camera(Eyeball(Limbus(c_i^lim), ρ))   (13)

E_limbus = w_limbus · (1/n_lim) · Σ_{i=1..n_lim} ‖x_i^lim − a_i^lim‖^2   (14)

x_i^lid = Camera(Eyeball(Eyelid(c_i^lid, ρ), ρ))   (15)

E_eyelid = w_eyelid · (1/n_lid) · Σ_{i=1..n_lid} ‖x_i^lid − a_i^lid‖^2   (16)

E_scan = w_scan · (1/n_ref) · Σ_{i=1..n_ref} ‖x_i^scl − p_i^scl‖^2   (17)

r_i^pup = Refract(cam^−1(a_i^pup), Eyeball(ρ))   (18)

E_pupil = w_pupil · (1/n_pup) · Σ_{i=1..n_pup} ‖X_i^pup(ρ, rpupil, θpupil) − r_i^pup‖^2   (19)

where, E_annotation(P) may represent an objective function associated with a pose of an eye that may be optimized by minimization for the determination of the final pose transformation; E_limbus may represent an energy term for the limbus of the eye; E_eyelid may represent an energy term for the eyelids of the eye; E_scan may represent an energy term for the sclera of the eye; E_pupil may represent an energy term for the pupil of the eye; w_limbus may represent a weight associated with the energy term (i.e., E_limbus) for the limbus of the eye; w_eyelid may represent a weight associated with the energy term (i.e., E_eyelid) for the eyelids of the eye; w_scan may represent a weight associated with the energy term (i.e., E_scan) for the sclera of the eye; w_pupil may represent a weight associated with the energy term (i.e., E_pupil) for the pupil of the eye; n_lim may represent a number of points (i.e., the first set of points or the second set of points) associated with the limbus in the UV space810; n_lid may represent a number of contour points (i.e., the first set of points or the second set of points) associated with the eyelid in the UV space810; n_ref may represent a number of the reference 3D points (or the extracted 3D points) corresponding to the sclera in the 3D space808; n_pup may represent a number of points (i.e., the first set of points or the second set of points) associated with the pupil in the 3D space808; x_i^lim may represent the second set of points interpolated for the limbus; a_i^lim may represent 2D coordinates of the labeled contour points of the limbus on the set of images802; c_i^lim may represent the parameter values for the sampling points for the limbus; ρ may represent the pose parameter (e.g., the initial pose transformation including the scale, the rotation, and the transformation parameters); Limbus(.) may represent a function that may be a circle function for the limbus contour; Eyeball(.) may represent a function that may project the labeled points from the UV space810to the 3D space808; Camera(.) may represent a function that may project points from the 3D space808to the image space (i.e., the 2D space); x_i^lid may represent the second set of points interpolated for the eyelid; a_i^lid may represent 2D coordinates of the labeled contour points of the eyelid on the set of images802; c_i^lid may represent the parameter values for sampling points for the eyelid; Eyelid(.)
may represent a spline function for the eyelid contours; x_i^scl may represent coordinates of the extracted 3D points corresponding to the sclera in the 3D space808; p_i^scl may represent coordinates of the reference 3D points corresponding to the sclera in the 3D space808; r_i^pup may represent 2D coordinates of a radius of the pupil; a_i^pup may represent 2D coordinates of the labeled one or more points of the pupil; cam^−1(.) may represent an inverse Camera(.) function that may project points from the image space (i.e., the 2D space) to the 3D space808; Refract(.) may represent a function that may model refraction of incident rays of the pupil at the cornea of the eye; and X_i^pup(ρ, rpupil, θpupil) may represent the triangulated 3D position for the pupil (represented by polar coordinates (rpupil, θpupil)), which may be moved towards the center of the extended 3D plane (e.g., the plane918). In an embodiment, the circuitry202may optimize (i.e., minimize) the energy term E_limbus (for the limbus of the eye) and the energy term E_eyelid (for the eyelids of the eye) in the 2D space (for example, in the UV space810). Further, the circuitry202may optimize (i.e., minimize) the energy term E_scan (for the sclera of the eye) and the energy term E_pupil (for the pupil of the eye) in the 3D space808. The optimization may be executed iteratively such that the interpolated second set of points (e.g., x_i^lim, x_i^lid, p_i^scl, and X_i^pup(ρ, rpupil, θpupil)) associated with the one or more regions of the eye of a previous iteration may be used to initialize the first set of points for the next iteration and interpolate the second set of points for the next iteration. For example, in each iteration, the pose ρ may be known from initialization or the previous iteration. The first set of points may be the labeled 2D points (for example, for eyelids and limbus) or 3D points (for example, for sclera and pupil) and may be fixed. Spline and circle fitting may be used to interpolate the labeled 2D points in the UV space810. Once the second set of points is determined, the objective function may be minimized to estimate the pose ρ. The process may be repeated with the next iteration. The optimization may continue until a target value for the objective function (i.e., E_annotation(P)) is achieved or until the objective function cannot be minimized further. The final value of the pose determined at the end of the optimization may correspond to the final pose transformation. In an embodiment, the circuitry202may be configured to fit the 3D template mesh806into an eyeball socket of the 3D mesh326, based on the determined final pose transformation, as described, for example, inFIG.3B. In an embodiment, the circuitry202may be further configured to apply, around an eyelid contour of the 3D mesh326, an as-rigid-as-possible (ARAP) deformation over the 3D mesh326, to obtain a refined 3D mesh, as described further, for example, inFIG.13. The scenario1200ofFIG.12is for exemplary purposes and should not be construed as limiting the scope of the disclosure. FIG.13is a diagram that illustrates an exemplary scenario to obtain a refined 3D mesh, in accordance with an embodiment of the disclosure.FIG.13is described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6,7,8,9A,9B,9C,9D,10,11A,11B, and12. With reference toFIG.13, there is shown an exemplary scenario1300.
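Before turning to the refinement ofFIG.13, the iterative minimization of E_annotation (equations (12) to (19)) may be sketched in a highly simplified form. The sketch below is a generic, hedged stand-in rather than the disclosed energy: it assumes that per-region callables are available which re-interpolate the second set of points for a candidate pose, and it parameterizes the pose as a plain 6-vector. Names, weights, and placeholder data are illustrative only, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import minimize

def pose_objective(pose, first_pts, second_pts_fn, weights):
    """Simplified stand-in for the weighted sum of squared distances
    in equations (12)-(19).

    pose:           6-vector (3 rotation angles, 3 translations).
    first_pts:      dict of fixed point sets, keyed by eye region.
    second_pts_fn:  dict of callables that re-interpolate each region's
                    second set of points for the given pose.
    weights:        per-region weights (w_limbus, w_eyelid, w_scan, w_pupil).
    """
    total = 0.0
    for region, pts in first_pts.items():
        resampled = second_pts_fn[region](pose)          # second set of points
        diff = resampled - pts
        total += weights[region] * np.mean(np.sum(diff ** 2, axis=1))
    return total

# Placeholder data: a single region whose optimum is a 1-unit translation in x.
first = {"limbus": np.random.rand(20, 3)}
second_fn = {"limbus": lambda pose, base=first["limbus"]:
             base + pose[3:6] - np.array([1.0, 0.0, 0.0])}
w = {"limbus": 1.0}

result = minimize(pose_objective, x0=np.zeros(6), args=(first, second_fn, w),
                  method="L-BFGS-B")
final_pose = result.x          # translation component converges near (1, 0, 0)
```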
The scenario1300may include an eye portion1302of the 3D mesh326, an eye portion1304of a refined 3D mesh, a set of vertices1306corresponding to eyelid contours in the eye portion1302, a set of target positions1308corresponding to the set of vertices1306, a set of deformation vectors1310, and a deformation region1312in the eye portion1302. After the final pose transformation (as described inFIG.12) for the 3D template mesh806is determined, the 3D mesh326may have to be refined to remove gaps or holes between the boundary vertices of the 3D template mesh806of the eyeball and surrounding mesh vertices of the eyeball socket. Thus, the circuitry202may refine vertex positions where the eyelids touch the eyeball in the 3D mesh326for a smooth and seamless fitting of the 3D template mesh806in the eyeball socket. Such refinement of the 3D mesh326may be based on the estimated final pose of the eyeball and the fitted 3D curve of the eyelid contours. The circuitry202may be configured to apply an as-rigid-as-possible (ARAP) deformation over the 3D mesh326to obtain the refined 3D mesh. The ARAP deformation may be applied around an eyelid contour (including the set of vertices1306) of the 3D mesh326to obtain the refined 3D mesh (as shown in the eye portion1304, inFIG.13). The ARAP deformation may be applied based on a position of the eyelid contour (including the set of vertices1306) and the final pose transformation (which may be associated with the set of deformation vectors1310). For example, based on the set of deformation vectors1310, the set of vertices1306corresponding to the eyelid contours in the eye portion1302may be updated to the set of target positions1308corresponding to the set of vertices1306. Further, based on the update of the set of vertices1306, the deformation region1312may be deformed. In an embodiment, the circuitry202may be configured to fit the 3D template mesh328into the eyeball socket of the refined 3D mesh. The scenario1300ofFIG.13is for exemplary purpose and may not be used for limiting the scope of the disclosure. FIG.14is a flowchart that illustrates exemplary operations for eyeball positioning for 3D head modeling, in accordance with an embodiment of the disclosure.FIG.14is described in conjunction with elements fromFIGS.1,2,3A,3B,4,5,6,7,8,9A,9B,9C,9D,10,11A,11B,12, and13. With reference toFIG.14, there is shown a flowchart1400. The flowchart1400may include operations from1404to1422and may be implemented on the electronic device102. The flowchart1400may start at1402and proceed to1404. At1404, the set of images comprising the eye of the object110may be acquired. In an embodiment, the circuitry202may be configured to acquire the set of images (e.g., the first image324A, the second image324B, and the third image324C). The set of images may include the eye of the object110. The set of image sensors106may capture the set of images and transmit the captured set of images to the electronic device102. The circuitry202may acquire the captured set of images from the set of image sensors106. The acquisition of the set of images is described further, for example, inFIG.3A. At1406, the 3D mesh326of the head portion of the object110may be acquired. In an embodiment, the circuitry202may be configured to acquire the 3D mesh326of the head portion of the object110. In an embodiment, the 3D mesh326may be acquired from the server104. The acquisition of the 3D mesh is described further, for example, inFIG.3A. At1408, the 3D template mesh of the eyeball may be acquired. 
In an embodiment, the circuitry202may be configured to acquire the 3D template mesh (e.g., a 3D template mesh328) of the eyeball of an object, such as, the object110(for example, a human subject, or an animal, or a statue/portrait of a human subject or an animal). The acquisition of the 3D template mesh is described further, for example, inFIG.3A. At1410, the acquired set of images may be processed to extract the 3D feature points associated with the one or more regions of the eye. In an embodiment, the circuitry202may be configured to process the acquired set of images to extract the 3D feature points. The 3D feature points may be associated with one or more regions of the eye. Examples of the one or more regions of the eye may include, but are not limited to, eyelids, a limbus, a sclera, a pupil, and an iris. In an embodiment, the circuitry202may be configured to identify the set of 2D feature points of the eye in each of the acquired set of images (e.g., the first image324A, the second image324B, and the third image324C). Further, the circuitry202may determine a 3D position of each of the set of 2D feature points, based on a set of camera parameters associated with one or more image-capture devices (e.g., the set of image sensors106) that captured the set of images. Herein, the 3D features points may be extracted based on the determined 3D position. In an embodiment, the identification of the set of 2D feature points may be based on one or more of, but not limited to, a user input, an eyelid detection technique, or an eye part segmentation technique. Further, the set of 2D feature points may include contour points along eyelids of the eye and a point at a center of a pupil of the eye. For example, a first set of 3D feature points330A associated with the contours of the eyelids and a second 3D feature point330B associated with the center of the pupil may be extracted based on the processing of the acquired set of images. The first set of 3D feature points330A and the second 3D feature point330B are shown in an eye portion330of the 3D mesh (e.g., the 3D mesh326). In an embodiment, the circuitry202may be configured to process a raw 3D scan (not shown inFIG.3A) of the head portion of the object110to extract 3D points corresponding to a sclera of the one or more regions of the eye. For example, inFIG.3A, there are shown, 3D points332A corresponding to the sclera in an eye portion332of the raw 3D scan. The extraction of the 3D feature points and the 3D points are described further, for example, inFIG.6. At1412, the sphere334may be fit to the extracted 3D feature points. In an embodiment, the circuitry202may be configured to fit the sphere334to the extracted 3D feature points (for example, a set of 3D feature points334A, as shown inFIG.3A). The fitting of the sphere to the extracted 3D feature points is described further, for example, inFIG.3A. At1414, the initial pose transformation between the 3D template mesh328and the fitted sphere334may be estimated. In an embodiment, the circuitry202may be configured to estimate the initial pose transformation between the 3D template mesh328and the fitted sphere334. To estimate the initial pose transformation, the scale factor, the rotation parameter, and the translation parameter of the initial pose transformation may be estimated. The estimation of the initial pose transformation is described further, for example, inFIG.3B. 
At1416, the one or more operations may be executed by using the 3D template mesh328, to interpolate the first set of points that may correspond to the one or more regions of the eye. In an embodiment, the circuitry202may be configured to execute the one or more operations by using the 3D template mesh328, to interpolate the first set of points that may correspond to the one or more regions of the eye. Examples of the one or more regions of the eye may include, but are not limited to, eyelids, a limbus, a sclera, a pupil, and an iris. The execution of the one or more operations is described further, for example, inFIG.3B. At1418, the second set of points may be determined, based on the sampling parameters associated with the interpolated first set of points. The determined second set of points may correspond to the one or more regions of the eye. In an embodiment, the circuitry202may be configured to determine the second set of points, based on sampling parameters associated with the interpolated first set of points. The determination of the second set of points is described further, for example, inFIG.12. At1420, the final pose transformation may be determined based on the minimization of the difference between the first set of points and the second set of points. In an embodiment, the circuitry202may be configured to determine the final pose transformation, based on the minimization of the difference between the first set of points and the second set of points. In an embodiment, the determination of the final pose transformation may be further based on the minimization of the distance between the reference 3D points and the extracted 3D points. The difference may be specified in terms of a distance measure in the 3D space808to be estimated between the reference 3D points and the extracted 3D points associated with the sclera, and also a distance measure in the 3D space808between the first set of points and the second set of points associated with the pupil. The difference may also be in terms of a distance measure in the UV space810between the first set of points and the second set of points associated with the eyelids, and also a distance measure in the UV space810between the first set of points and the second set of points associated with the limbus. The determination of the final pose transformation is described further, for example, inFIG.12. At1422, the 3D template mesh328may be fit into the eyeball socket of the 3D mesh326based on the determined final pose transformation. In an embodiment, the 3D mesh326may include an empty eyeball socket to represent an eyeball in the head portion of the object110. The circuitry202may be configured to fit the 3D template mesh328into the eyeball socket of the 3D mesh326, based on the determined final pose transformation and the estimated scale factor (i.e., "s"), as described further at310. In other words, based on the estimated scale factor, the 3D template mesh328may be scaled to a size that may represent a life-size human eye. The scaled 3D template mesh328may then be fitted into the eyeball socket of the 3D mesh326. The eyeball (i.e., the 3D template mesh328) may be accurately positioned in the eyeball socket of the 3D mesh326, based on the determined final pose transformation, as described, for example, in the operations at308to318. When the 3D template mesh328is properly scaled (based on the scale factor), accurately positioned, and fitted into the eyeball socket of the 3D mesh326, the eyeball may impart photorealism to the 3D mesh326. Control may pass to the end.
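As an illustrative summary of the fitting at1422, applying the estimated scale factor and the final pose transformation to the template vertices may be sketched as follows. The function name and the placeholder scale, rotation, and translation values are illustrative only and are not prescribed by the disclosure.

```python
import numpy as np

def place_eyeball(template_vertices, s, R, t):
    """Scale the template eyeball and apply the final pose transformation
    so that its vertices land in the eyeball socket of the head mesh.

    template_vertices: (N, 3) vertices of the 3D template mesh.
    s: scale factor from equation (2); R: 3x3 rotation; t: translation.
    """
    return (s * template_vertices) @ R.T + t

# Placeholder usage: identity rotation, slight up-scaling, small offset.
verts = np.random.rand(100, 3) - 0.5
fitted = place_eyeball(verts, s=1.1, R=np.eye(3), t=np.array([0.03, 0.0, 0.05]))
```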
Although the flowchart1400is illustrated as discrete operations, such as1404,1406,1408,1410,1412,1414,1416,1418,1420, and1422, the disclosure is not so limited. Accordingly, in certain embodiments, such discrete operations may be further divided into additional operations, combined into fewer operations, or eliminated, depending on the particular implementation without detracting from the essence of the disclosed embodiments. Various embodiments of the disclosure may provide a non-transitory computer readable medium and/or storage medium having stored thereon, instructions executable by a machine and/or a computer to operate an electronic device (for example, the electronic device102). The instructions may cause the electronic device102to perform operations that include acquiring a set of images (e.g., the images324A,324B, and324C) comprising an eye of an object (e.g., the object110). The operations may further include acquiring a three-dimensional (3D) mesh (e.g., the 3D mesh326) of a head portion of the object110. The operations may further include acquiring a 3D template mesh (e.g., the 3D template mesh328) of an eyeball. The operations may further include processing the acquired set of images to extract 3D feature points (e.g., the first set of 3D feature points330A associated with the contours of the eyelids and the second 3D feature point330B associated with the center of the pupil) associated with one or more regions of the eye. The operations may further include fitting a sphere (e.g., the sphere334) to the extracted 3D feature points. The operations may further include estimating an initial pose transformation between the 3D template mesh328and the fitted sphere334. The operations may further include executing one or more operations by using the 3D template mesh328, to interpolate a first set of points that correspond to the one or more regions of the eye. The operations may further include determining a second set of points which may correspond to the one or more regions of the eye based on sampling parameters associated with the interpolated first set of points. The operations may further include determining a final pose transformation based on a minimization of a difference between the first set of points and the second set of points. The operations may further include fitting the 3D template mesh328into an eyeball socket of the 3D mesh326, based on the determined final pose transformation. Exemplary aspects of the disclosure may provide an electronic device (such as, the electronic device102ofFIG.1) that includes circuitry (such as, the circuitry202). The circuitry202may be configured to acquire a set of images (e.g., the images324A,324B, and324C) of an eye of an object (e.g., the object110). The circuitry202may be further configured to acquire a three-dimensional (3D) mesh (e.g., the 3D mesh326) of a head portion of the object110. The circuitry202may be further configured to acquire a 3D template mesh (e.g., the 3D template mesh328) of an eyeball. The circuitry202may be further configured to process the acquired set of images to extract 3D feature points (e.g., the first set of 3D feature points330A associated with the contours of the eyelids and the second 3D feature point330B associated with the center of the pupil) associated with one or more regions of the eye. The circuitry202may be further configured to fit a sphere (e.g., the sphere334) to the extracted 3D feature points. 
The circuitry202may be further configured to estimate an initial pose transformation between the 3D template mesh328and the fitted sphere334. The circuitry202may be further configured to execute one or more operations by using the 3D template mesh328, to interpolate a first set of points that correspond to the one or more regions of the eye. The circuitry202may be further configured to determine a second set of points which may correspond to the one or more regions of the eye based on sampling parameters associated with the interpolated first set of points. The circuitry202may be further configured to determine a final pose transformation based on a minimization of a difference between the first set of points and the second set of points. The circuitry202may be further configured to fit the 3D template mesh328into an eyeball socket of the 3D mesh326, based on the determined final pose transformation. In an embodiment, the one or more regions of the eye comprise of eyelids, a limbus, a sclera, a pupil, and an iris. In an embodiment, the circuitry202may be further configured to identify a set of two-dimensional (2D) feature points of the eye in each of the acquired set of images. The circuitry202may be further configured to determine a 3D position of each of the set of 2D feature points, based on a set of camera parameters associated with one or more image-capture devices that captured the set of images. The 3D features points may be extracted based on the determined 3D position. The identification of the set of 2D feature points may be based on one or more of a user input, an eyelid detection technique, or an eye part segmentation technique, and the set of 2D feature points include contour points along eyelids of the eye and a point at a center of a pupil of the eye. In an embodiment, the circuitry202may be further configured to process a raw 3D scan of the head portion of the object to extract 3D points corresponding to a sclera of the one or more regions of the eye. The circuitry202may be further configured to fit the sphere334further to the extracted 3D points. In an embodiment, the circuitry202may be further configured to estimate a scale factor that may correspond to a ratio of a radius of the fitted sphere334to a radius of the 3D template mesh328. The 3D template mesh328may be fitted into the eyeball socket further based on the estimated scale factor. In an embodiment, the circuitry202may be further configured to estimate a rotation parameter of the initial pose transformation between a first vector along an axis of rotation of the 3D template mesh328and a second vector that spans from a center of the fitted sphere334to a 3D point that corresponds to a center of a pupil of the eye. The circuitry202may be further configured to estimate a translation parameter of the initial pose transformation based on an offset between the center of the fitted sphere334and the center of the 3D template mesh328. In an embodiment, the circuitry202may be further configured to label contours of the one or more regions including eyelids, a limbus, and a pupil in the acquired set of images. The circuitry202may be further configured to project one or more contours of the labelled contours to a 3D coordinate space, based on defined camera parameters. The circuitry202may be further configured to determine a set of contour points as intersecting points of the projection on the 3D template mesh328. 
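As an illustration of the initial pose transformation described above (a rotation aligning the template axis with the vector from the sphere center to the pupil center, a translation equal to the center offset, and a scale equal to the radius ratio), the following Python sketch may be considered. The vectors, centers, and radii are hypothetical, and the axis convention is an assumption made only for this example.

import numpy as np

def rotation_between(a, b):
    """Rotation matrix that rotates unit vector a onto unit vector b (Rodrigues formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    s = np.linalg.norm(v)          # sine of the angle
    c = np.dot(a, b)               # cosine of the angle
    if s < 1e-12:                  # parallel or anti-parallel vectors
        if c > 0:
            return np.eye(3)
        axis = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        axis = axis - a * np.dot(axis, a)
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)   # 180-degree rotation about a perpendicular axis
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * ((1 - c) / s ** 2)

# Hypothetical inputs: template axis, sphere center/radius, pupil point, template center/radius.
template_axis = np.array([0.0, 0.0, 1.0])           # first vector (template axis of rotation)
sphere_center = np.array([0.03, 0.11, 0.42])
pupil_point = np.array([0.05, 0.12, 0.45])
template_center = np.zeros(3)
sphere_radius, template_radius = 0.012, 1.0

gaze = pupil_point - sphere_center                  # second vector (sphere center to pupil)
R_init = rotation_between(template_axis, gaze)      # rotation parameter
t_init = sphere_center - template_center            # translation parameter (center offset)
s_init = sphere_radius / template_radius            # scale factor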
The execution of the one or more operations may comprise of a first operation to unwrap the 3D template mesh328to a UV coordinate space. The unwrapped 3D template mesh may include the determined set of contour points in the UV coordinate space. The execution of the one or more operations may further comprise of a second operation to apply one or more interpolation methods to fit spline curves into eyelid points of the set of contour points, and fit a circle into limbus points of the set of contour points. In an embodiment, the fitting of the spline curves and the circle may be based on the initial pose transformation and a parameter for sampling points used in the one or more interpolation methods. In an embodiment, the first set of points corresponds to points included in each of the fitted spline curves and the fitted circle. In an embodiment, the circuitry202may be further configured to label one or more points on an iris mesh component of the 3D template mesh328. The labeled one or more points may correspond to a location of a pupil in the iris mesh component. In an embodiment, the circuitry202may be further configured to update positions of the labelled one or more points, based on a refractive index of a cornea of the eye and an intersection of a plane formed by the labelled one or more points with rays cast from a reference position outside the 3D template mesh328. The first set of points may include the updated positions of the labelled one or more points. In an embodiment, the circuitry202may be further configured to process a raw 3D scan of the head portion of the object to extract 3D points corresponding to a sclera of the one or more regions of the eye. The circuitry202may be further configured to determine vertex positions corresponding to the sclera on the 3D template mesh328based on the extracted 3D points. The circuitry202may be further configured to determine reference 3D points on the 3D template mesh328based on the determined vertex positions corresponding to the sclera on the 3D template mesh328. The final pose transformation may be determined further based on a minimization of a distance between the reference 3D points and the extracted 3D points. In an embodiment, the circuitry202may be further configured to apply, around an eyelid contour of the 3D mesh326, an as-rigid-as-possible (ARAP) deformation over the 3D mesh326, to obtain a refined 3D mesh. The ARAP deformation may be applied based on a position of the eyelid contour and the final pose transformation. Further, the 3D template mesh is fitted into the eyeball socket of the refined 3D mesh. The present disclosure may be realized in hardware, or a combination of hardware and software. The present disclosure may be realized in a centralized fashion, in at least one computer system, or in a distributed fashion, where different elements may be spread across several interconnected computer systems. A computer system or other apparatus adapted to carry out the methods described herein may be suited. A combination of hardware and software may be a general-purpose computer system with a computer program that, when loaded and executed, may control the computer system such that it carries out the methods described herein. The present disclosure may be realized in hardware that comprises a portion of an integrated circuit that also performs other functions. 
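As an illustration of the interpolation operation described above, in which a circle is fitted to the limbus contour points in the UV coordinate space and then resampled according to a sampling parameter, the following Python sketch may be considered. The UV points and the sample count are hypothetical; eyelid contours could be treated analogously with a parametric spline (for example, scipy.interpolate.splprep), which is not shown.

import numpy as np

def fit_circle_2d(uv_points):
    """Least-squares circle fit (center, radius) to an (N, 2) array of UV points."""
    uv = np.asarray(uv_points, dtype=float)
    A = np.hstack([2.0 * uv, np.ones((uv.shape[0], 1))])
    b = np.sum(uv ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:2]
    radius = np.sqrt(sol[2] + center @ center)
    return center, radius

def sample_circle(center, radius, num_samples=64):
    """Uniformly resample the fitted circle; num_samples plays the role of a sampling parameter."""
    angles = np.linspace(0.0, 2.0 * np.pi, num_samples, endpoint=False)
    return np.stack([center[0] + radius * np.cos(angles),
                     center[1] + radius * np.sin(angles)], axis=1)

# Hypothetical limbus contour points in UV space:
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
limbus_uv = np.stack([0.5 + 0.1 * np.cos(theta), 0.5 + 0.1 * np.sin(theta)], axis=1)
limbus_uv += 0.002 * np.random.default_rng(1).normal(size=limbus_uv.shape)
center, radius = fit_circle_2d(limbus_uv)
first_set_limbus = sample_circle(center, radius, num_samples=64)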
The present disclosure may also be embedded in a computer program product, which comprises all the features that enable the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program, in the present context, means any expression, in any language, code or notation, of a set of instructions intended to cause a system with information processing capability to perform a particular function either directly, or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form. While the present disclosure is described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departure from the scope of the present disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present disclosure without departure from its scope. Therefore, it is intended that the present disclosure is not limited to the particular embodiment disclosed, but that the present disclosure will include all embodiments that fall within the scope of the appended claims.
11861806
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. DETAILED DESCRIPTION Camera calibration is an important task for computer vision applications, such as tracking systems, simultaneous localization and mapping (SLAM), and augmented reality (AR). Recently, many professional sports leagues have deployed some version of a vision-based tracking system. Additionally, AR applications (e.g., Virtual 3 in NBA®, First Down Line in NFL®) used during video broadcasts to enhance audience's engagement have become commonplace. All of these applications require high-quality camera calibration systems. Presently, most of these applications rely on multiple pre-calibrated fixed cameras or the real-time feed of pan-tilt-zoom (PTZ) parameters directly from the camera. However, as the most widely available data source in the sports domain is broadcast videos, the ability to calibrate from a single, moving camera with unknown and changing camera parameters would greatly expand the reach of player tracking data and fan-engagement solutions. Calibration of a single moving camera remains a challenging task as the approach should be accurate, fast, and generalizable to a variety of views and appearances. The one or more techniques described herein allows for a computing system to determine camera homography of a single moving camera given the frame and the sport. Current approaches to camera calibration mainly follow a framework based on field registration, template matching (i.e., camera pose initialization), and homography refinement. Most of these approaches focus on sports where semantic information (e.g., key court markings) is easy to extract, the field appearance is consistent across stadiums (e.g., green grass and white lines), and motion of the camera is relatively slow and smooth. These assumptions, however, do not hold in more dynamic sports, such as basketball, where players occlude field markings, the field appearance varies wildly from venue to venue, and the camera moves quickly. Furthermore, most existing works consist of multiple standalone models that are trained or tuned separately. As a result, they cannot achieve a global optimal for such an optimization task. This issue further limits the performance of those methods in more challenging scenarios as error propagates through the system, module to module. The one or more techniques described herein relate to a brand new end-to-end neural network used for camera calibration. Through use of the end-to-end neural network, the present system is able to handle more challenging scenarios involving motion blur, occlusion and large transformations—scenarios existing systems are simply unable to account for or address. In some embodiments, the present system implements area-based semantics rather than lines for camera calibration, thus providing a more robust approach for dynamic environments and those environments with highly variable appearance features. In some embodiments, the present system incorporates a spatial transformation network for large transform learning, which aids in reducing the number of required templates for calibration purposes. 
In some embodiments, the present system implements an end-to-end architecture for camera calibration, which allow for joint training and inference homography much more efficiently. FIG.1is a block diagram illustrating a computing environment100, according to example embodiments. Computing environment100may include camera system102, organization computing system104, and one or more client devices108communicating via network105. Network105may be of any suitable type, including individual connections via the Internet, such as cellular or Wi-Fi networks. In some embodiments, network105may connect terminals, services, and mobile devices using direct connections, such as radio frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), Wi-Fi™ ZigBee™, ambient backscatter communication (ABC) protocols, USB, WAN, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connection be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore, the network connections may be selected for convenience over security. Network105may include any type of computer networking arrangement used to exchange data or information. For example, network105may be the Internet, a private data network, virtual private network using a public network and/or other suitable connection(s) that enables components in computing environment100to send and receive information between the components of environment100. Camera system102may be positioned in a venue106. For example, venue106may be configured to host a sporting event that includes one or more agents112. Camera system102may be configured to capture the motions of all agents (i.e., players) on the playing surface, as well as one or more other objects of relevance (e.g., ball, referees, etc.). In some embodiments, camera system102may be an optically-based system using, for example, a plurality of fixed cameras. For example, a system of six stationary, calibrated cameras, which project the three-dimensional locations of players and the ball onto a two-dimensional overhead view of the court may be used. In another example, a mix of stationary and non-stationary cameras may be used to capture motions of all agents on the playing surface as well as one or more objects of relevance. As those skilled in the art recognize, utilization of such camera system (e.g., camera system102) may result in many different camera views of the court (e.g., high sideline view, free-throw line view, huddle view, face-off view, end zone view, etc.). Generally, camera system102may be utilized for the broadcast feed of a given match. Each frame of the broadcast feed may be stored in a game file110. Camera system102may be configured to communicate with organization computing system104via network105. Organization computing system104may be configured to manage and analyze the broadcast feed captured by camera system102. Organization computing system104may include at least a web client application server114, pre-processing engine116, a data store118, and a camera calibrator120. Pre-processing engine116and camera calibrator120may include one or more software modules. 
The one or more software modules may be collections of code or instructions stored on a medium (e.g., memory of organization computing system104) that represent a series of machine instructions (e.g., program code) that implements one or more algorithmic steps. Such machine instructions may be the actual computer code the processor of organization computing system104interprets to implement the instructions or, alternatively, may be a higher level of coding of the instructions that is interpreted to obtain the actual computer code. The one or more software modules may also include one or more hardware components. One or more aspects of an example algorithm may be performed by the hardware components (e.g., circuitry) itself, rather than as a result of the instructions. Data store118may be configured to store one or more game files124. Each game file124may include the broadcast data of a given match. For example, the broadcast data may be a plurality of video frames captured by camera system102. Camera calibrator120may be configured to calibrate the cameras of camera system102. For example, camera calibrator120may be configured to project players detected in the trackable frames to real world coordinates for further analysis. Because cameras in camera system102are constantly moving in order to focus on the ball or key plays, such cameras are unable to be pre-calibrated. Camera calibrator120may be configured to generate a homography matrix that can register a target ground-plane surface of any frame from the broadcast video with a top view field model. For example, camera calibrator120may implement a single neural network to find a homography matrix H that can register the target ground-plane surface of any frame I from a broadcast video with a top view field model M. In some embodiments, the standard objective function for computing a homography with point correspondences may be:

H = \arg\min_{H} \frac{1}{|\chi|} \sum_{(x_i', x_i) \in \chi} \left| H x_i' - x_i \right|^2

where x_i represents the (x, y) location of pixel i in the broadcast image I, x_i' is the corresponding pixel location on the model "image" M, and χ represents a set of point correspondences between the two images I and M. Client device108may be in communication with organization computing system104via network105. Client device108may be operated by a user. For example, client device108may be a mobile device, a tablet, a desktop computer, or any computing system having the capabilities described herein. Users may include, but are not limited to, individuals such as, for example, subscribers, clients, prospective clients, or customers of an entity associated with organization computing system104, such as individuals who have obtained, will obtain, or may obtain a product, service, or consultation from an entity associated with organization computing system104. Client device108may include at least application132. Application132may be representative of a web browser that allows access to a website or a stand-alone application. Client device108may access application132to access one or more functionalities of organization computing system104. Client device108may communicate over network105to request a webpage, for example, from web client application server114of organization computing system104. For example, client device108may be configured to execute application132to access content managed by web client application server114.
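Returning to the point-correspondence objective stated above for camera calibrator120, the following Python sketch shows one conventional way to obtain a homography from a handful of correspondences, the direct linear transform, which minimizes an algebraic rather than the geometric error. It is provided for illustration only; the correspondences are hypothetical and point normalization is omitted for brevity.

import numpy as np

def homography_dlt(model_pts, image_pts):
    """Direct linear transform: H such that image_pts ~ H @ model_pts in homogeneous coordinates.

    model_pts, image_pts: (N, 2) arrays of corresponding points, N >= 4.
    """
    rows = []
    for (x, y), (u, v) in zip(model_pts, image_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)        # null-space vector gives the homography entries
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical correspondences between the overhead model M and a broadcast frame I
# (e.g., court corners in feet versus clicked pixel locations):
model_pts = np.array([[0, 0], [94, 0], [94, 50], [0, 50]], dtype=float)
image_pts = np.array([[120, 420], [1180, 460], [980, 660], [260, 610]], dtype=float)
H = homography_dlt(model_pts, image_pts)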
The content that is displayed to client device108may be transmitted from web client application server114to client device108, and subsequently processed by application132for display through a graphical user interface (GUI) of client device108. FIGS.2A-2Bare block diagrams illustrating neural network architecture200of camera calibrator120, according to example embodiments. As discussed, camera calibrator120may utilize a single neural network, which takes in a video frame as input and outputs a homography matrix of that frame. For example, camera calibrator120may utilize a single neural network for single moving camera calibration given unknown camera internal parameters across a variety of sports (e.g., basketball, soccer, football, hockey, etc.). Neural network architecture200may include three modules: semantic segmentation module202, camera pose initialization module204, and homography refinement module206. Each of the three modules202-206is integrated into a single neural network architecture, such as that shown by neural network architecture200. Because all three modules202-206are connected, neural network architecture200is capable of end-to-end training. Semantic segmentation module202may be configured to identify features of the playing surface (e.g., basketball court, soccer field, etc.). For example, semantic segmentation module202may be configured to extract key features and remove irrelevant information from an input image, I (reference numeral220). Such output may result in a venue-agnostic appearance Ȳ (reference numeral222) that may be used to determine the point correspondences. Thus, the objective function for H from above may be rewritten as:

\theta_H = \arg\min_{\theta_H} L\left(\bar{Y}, W(M; \theta_H)\right)

where θH represents a vector of the eight homography parameters, W(·; θ) represents the warping function with transform parameters θ, and L(·) represents any loss function that measures the difference between two images, in this case the predicted semantic map Ȳ and the warped overhead model M. Semantic segmentation module202may conduct area-based segmentation on a playing surface by dividing the playing surface into one or more regions. By dividing the playing surface into one or more regions, semantic segmentation module202may transform the overhead field model M into a multi-channel image. Given the multi-channel image, semantic segmentation module202may classify each pixel in I into one region of the one or more regions. To generate area-based semantic labels of each image, semantic segmentation module202may warp the overhead model with the associated ground truth homography, thus providing ground truth semantic labels for training. FIG.3is a block diagram illustrating one or more images302-306of a basketball playing surface, according to example embodiments. As shown, image302may correspond to a top-down view field model for a basketball playing surface. Semantic segmentation module202may divide the basketball playing surface into four regions, resulting in a 4-channel image. For example, region308may correspond to a first channel, region310may correspond to a second channel, region312may correspond to a third channel, and region314may correspond to a fourth channel. In operation, semantic segmentation module202may utilize image302to classify each pixel in an input image (e.g., in I) into one of regions308-314. Image304may illustrate semantic labels applied to an incoming image. Semantic segmentation module202may generate image304by warping the field model M (e.g., image302) using the ground truth homography.
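As an illustration of generating area-based semantic labels by warping the overhead field model M with the ground truth homography, the following Python sketch may be considered. It assumes OpenCV is available for the perspective warp; the model layout, the homography values, and the frame size are hypothetical stand-ins, not data from the disclosure.

import numpy as np
import cv2  # OpenCV, assumed available for the warp

def make_semantic_labels(overhead_model, H_gt, frame_size):
    """Warp the multi-channel overhead field model into the camera view.

    overhead_model: (H_m, W_m, C) array, one channel per court region.
    H_gt: 3x3 ground-truth homography mapping model coordinates to image coordinates.
    frame_size: (width, height) of the broadcast frame.
    Returns a (height, width) map of region indices (argmax over channels).
    """
    warped = cv2.warpPerspective(overhead_model.astype(np.float32), H_gt, frame_size,
                                 flags=cv2.INTER_NEAREST)
    return np.argmax(warped, axis=2)

# Crude 4-region basketball model stand-in (only two regions populated here) and a made-up homography:
model = np.zeros((500, 940, 4), dtype=np.float32)
model[:, :470, 0] = 1.0      # region mapped to the first channel
model[:, 470:, 1] = 1.0      # region mapped to the second channel
H_gt = np.array([[1.2, 0.1, 100.0], [0.05, 1.1, 50.0], [0.0001, 0.0002, 1.0]])
labels = make_semantic_labels(model, H_gt, frame_size=(1280, 720))
# Note: in this simplified sketch, background (all-zero) pixels also receive index 0.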
These images (e.g., image302and image304) may then be used to train semantic segmentation module202. Image306may illustrate a polygonal area of image304from a top-down perspective, illustrating the fraction of the field model in the camera view. Referring back toFIGS.2A and2B, for the segmentation task, semantic segmentation module202may implement a Unet-style auto-encoder214(hereinafter "Unet214"). Unet214may take, as input, image I220and output a semantic map Ȳ222as needed by θH. In some embodiments, a cross-entropy loss may be used to train Unet214. For example:

\mathcal{L}_{ce} = -\frac{1}{|\bar{Y}|\,|C|} \sum_{\bar{y}_i^c \in \bar{Y}} \sum_{c \in C} y_i^c \log \bar{y}_i^c

where C may represent the set of classes, y_i^c may represent the ground truth label, and \bar{y}_i^c may represent the predicted likelihood of pixel i belonging to class c. Camera pose initialization module204may be configured to select an appropriate template from a set of templates using a semantic map. Camera pose initialization module204may use a Siamese network to determine the best template for each input semantic image. The Siamese network may be a convolutional encoder that computes a hidden representation for a semantic image, which may be the output of Unet214or any semantic template image. In some embodiments, the similarity between two images may be the L2 norm between their hidden representations. In some embodiments, each image may be encoded to a 128-length vector for similarity calculation. For a PTZ camera, the projective matrix P may be expressed as:

P = KR[I | −C] = KQS[I | −C]

where Q and S are decomposed from the rotation matrix R, K represents the intrinsic parameters of a camera in camera system102, I is a 3×3 identity matrix, and C is the camera translation. The matrix S may describe the rotation from the world coordinate to the PTZ camera base, and Q represents the camera rotation due to pan and tilt. For example, S may be defined to rotate around the world x-axis by about −90° so that the camera looks along the y-axis in the world plane. In other words, the camera is level and its projection is parallel to the ground. In some embodiments, for each image, camera calibrator120may assume a centered principal point, square pixels, and no lens distortion. In some embodiments, six parameters may be identified. For example, the six parameters may be the focal length, the three-dimensional camera location, and the pan and tilt angles. In some embodiments, pre-processing engine116may initialize the intrinsic camera matrix K, camera location C, and rotation matrix R. With this initialization, pre-processing engine116may identify the optimal focal length, three-dimensional camera location, and rotation angles. For example, pre-processing engine116may use the Levenberg-Marquardt algorithm to find the optimal focal length, three-dimensional camera location, and rotation angles. Once pre-processing engine116determines K, C, R, and S, pre-processing engine116may generate Q. In some embodiments, pre-processing engine116may generate the pan and tilt angles given Q. For example, pre-processing engine116may generate the pan and tilt angles by applying the Rodrigues formula to Q. Thus, from the above, camera pose initialization module204may generate the 6-dimensional camera configuration (pan, tilt, zoom, and three-dimensional camera location), λ. After pre-processing engine116estimates the camera configuration λ for each training image, pre-processing engine116may generate a dictionary of possible camera poses Λ.
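As an illustration of the PTZ projective matrix P = KQS[I | −C] described above, the following Python sketch assembles P from a hypothetical camera configuration. The pan/tilt composition order and axis convention used here are assumptions made only for this example and are not taken from the disclosure.

import numpy as np

def rot_x(deg):
    """Rotation about the x-axis by deg degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(deg):
    """Rotation about the y-axis by deg degrees."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def ptz_projection(focal, principal, cam_location, pan_deg, tilt_deg):
    """Assemble P = K Q S [I | -C] for a pan-tilt-zoom camera.

    S rotates the world frame by about -90 degrees around x (as described above);
    Q applies pan and tilt under an assumed convention.
    """
    fx = fy = focal
    cx, cy = principal
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])   # square pixels, no distortion
    S = rot_x(-90.0)
    Q = rot_x(tilt_deg) @ rot_y(pan_deg)                    # assumed pan/tilt composition
    C = np.asarray(cam_location, dtype=float).reshape(3, 1)
    ext = np.hstack([np.eye(3), -C])                        # [I | -C]
    return K @ Q @ S @ ext

# Hypothetical camera configuration (focal length in pixels, location in field units):
P = ptz_projection(focal=3000.0, principal=(640.0, 360.0),
                   cam_location=[47.0, -60.0, 25.0], pan_deg=10.0, tilt_deg=12.0)
world_point = np.array([47.0, 25.0, 0.0, 1.0])              # homogeneous world point on the field
image_point = P @ world_point
image_point = image_point[:2] / image_point[2]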
In some embodiments, pre-processing engine116may generate the dictionary of possible camera poses Λ by uniformly sampling from the range of possible camera poses. For example, pre-processing engine116may determine the ranges of pan, tilt, focal length, and camera location from training data and uniformly sample the poses from a 6-dimensional grid. Such a method is able to cover all camera poses, even if the training set is small. Further, using a smaller grid may simplify the homography refinement, since the maximum scale of the transformation needed is on the scale of the grid size. In some embodiments, pre-processing engine116can learn the possible camera poses Λ directly from the training data using clustering. Such a process may be beneficial, for example, when the training set has sufficient diversity. For example, pre-processing engine116may treat Λ as a multivariate normal distribution and apply a Gaussian Mixture Model (GMM) to build the camera pose set. In some embodiments, the mixing weights π may be fixed as equal for each component. In some embodiments, the covariance matrix Σ may be fixed for each distribution. In such embodiments, the characteristic scale of Σ may set the scale of the transformations that are handled by homography refinement module206. In contrast with traditional GMMs, instead of setting the number of components K, the GMM learning algorithm implemented by pre-processing engine116may find the number of components K and the mean μk of each distribution given the mixing weights π and covariance matrix Σ. Identical Σ and π for each component may ensure that the GMM components are sampled uniformly from the manifold of the training data. In some embodiments, the GMM learning algorithm may be:

Pre-define the covariance Σ
for K = [100, 110, 120, . . . , N] do
    Initialize μk for the K GMM components
    while μk not converged do
        Compute γk(λn) = πk N(λn; μk, Σ) / Σj πj N(λn; μj, Σ)
        Update μk = Σn γk(λn) λn / Σn γk(λn)
    end while
    if (1/N) Σn maxk [N(λn; μk, Σ) / N(μk; μk, Σ)] > threshold then
        break
    end if
end for
Return the GMM

Because pre-processing engine116may fix Σ, camera pose initialization module204may only update μ during the maximization step. Pre-processing engine116may gradually increase K until the stopping criteria are satisfied. The stopping criteria may aim to generate enough components so that every training example is close to the mean of one component in the mixture. Pre-processing engine116may generate the camera pose dictionary Λ utilizing all components [μ1, . . . , μK]. Given the dictionary of camera poses Λ, camera pose initialization module204may compute the homography for each pose and use Λ to warp the overhead field model M. Accordingly, a set of image templates [T1, . . . , TK] and their corresponding homography matrices [H1*, . . . , HK*] may be determined and used by camera pose initialization module204. Given the semantic segmentation image Ȳ and the set of template images, camera pose initialization module204may use a Siamese network to compute the distance between each input and template pair (Ȳ, Tk). In some embodiments, the target/label for each pair may be similar or dissimilar. For example, for a grid-sampled camera pose dictionary, a template Tk may be similar to the image if its pose parameters are the nearest neighbor in the grid. For the GMM-based camera pose dictionary, a template Tk may be labeled as similar to an image if the corresponding distribution of the template, N(·; μk, Σ), gives the highest likelihood to the pose parameters λ of the input image.
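As an illustration of the fixed-covariance GMM learning algorithm reproduced above, the following Python sketch builds a camera pose dictionary from hypothetical 6-dimensional pose samples. Function and variable names are illustrative; the responsibilities use equal mixing weights and a shared covariance, and only the means are updated.

import numpy as np
from scipy.stats import multivariate_normal

def build_pose_dictionary(poses, cov, threshold=0.6, k_values=range(100, 501, 10), iters=50, seed=0):
    """Fixed-covariance GMM clustering of 6-D camera poses (pan, tilt, zoom, x, y, z)."""
    rng = np.random.default_rng(seed)
    poses = np.asarray(poses, dtype=float)
    n, d = poses.shape
    peak = multivariate_normal(mean=np.zeros(d), cov=cov).pdf(np.zeros(d))   # N(mu; mu, cov)
    means = poses[:1].copy()
    for k in k_values:
        means = poses[rng.choice(n, size=k, replace=k > n)]    # initialize component means
        for _ in range(iters):
            dens = np.stack([multivariate_normal(mean=m, cov=cov).pdf(poses) for m in means], axis=1)
            gamma = dens / np.clip(dens.sum(axis=1, keepdims=True), 1e-12, None)   # E-step
            new_means = (gamma.T @ poses) / np.clip(gamma.sum(axis=0)[:, None], 1e-12, None)  # M-step
            if np.allclose(new_means, means, atol=1e-6):
                break
            means = new_means
        # Stopping criterion: average normalized likelihood of the closest component
        dens = np.stack([multivariate_normal(mean=m, cov=cov).pdf(poses) for m in means], axis=1)
        if (dens.max(axis=1) / peak).mean() > threshold:
            return means
    return means

# Hypothetical usage: pose samples drawn around two camera setups, standard deviations as in the text
rng = np.random.default_rng(1)
poses = np.vstack([rng.normal([0, 10, 3000, 47, -60, 25], [3, 2, 300, 5, 5, 2], size=(300, 6)),
                   rng.normal([20, 12, 3500, 47, -60, 25], [3, 2, 300, 5, 5, 2], size=(300, 6))])
cov = np.diag([5.0, 5.0, 1000.0, 15.0, 15.0, 15.0]) ** 2
pose_dictionary = build_pose_dictionary(poses, cov, threshold=0.6, k_values=range(2, 41, 2))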
The labeling procedure described above may generate a template similarity label for every image in the training set. Once the input semantic image Ȳ and the template images are encoded (after FC1), camera pose initialization module204may use the latent representations to compute the L2 distance between the input image and each template. A selection module210may find the target camera pose index k̄ and may retrieve its template image Tk̄ and homography Hk̄* as output according to:

\bar{k} = \arg\min_{k} \left\| f(\bar{Y}) - f(T_k) \right\|^2

where f(·) may represent the encoding function of the Siamese network. In some embodiments, camera pose initialization module204may use a contrastive loss to train the Siamese network. For example:

\mathcal{L}_{con} = a \left\| f(\bar{Y}) - f(T_k) \right\|_2^2 + (1 - a)\max\left(0,\; m - \left\| f(\bar{Y}) - f(T_k) \right\|_2^2\right)

where a may represent the binary similarity label for the image pair (Ȳ, Tk) and m may represent the margin for the contrastive loss. Homography refinement module206may be configured to refine the homography by identifying the relative transform between the selected template and the input image. For example, homography refinement module206may implement a spatial transformer network (STN) that allows for the handling of large non-affine transformations and the use of a smaller camera pose dictionary. For example, given the input image and a selected template, the two images may be stacked and provided as input to the STN. The STN may be used to regress the geometric transformation parameters. In some embodiments, residual blocks may be used in the convolutional encoder to preserve the salient features for deformation prediction. In some embodiments, ReLU may be used for all hidden layers, while the output layer of the STN may use a linear activation. To compute the relative transform between the input semantic image Ȳ and the selected template image Tk̄, homography refinement module206may stack the images into an n-channel image (e.g., an 8-channel image), forming the input to the localization layers of the STN. In some embodiments, the output of the localization layers may be the parameters (e.g., 8 parameters) of the relative homography H̄ that maps the semantic image Ȳ to the template Tk̄. In some embodiments, homography refinement module206may initialize the last of the localization layers (e.g., FC3), such that all elements in the kernel are zero and the bias is set to the first n values (e.g., 8 values) of a flattened identity matrix. Therefore, at the start of the training, the input may be assumed to be identical to the template, providing an initialization for the STN optimization. Therefore, the final homography may be H = Hk̄*H̄. Once H is computed, transformer212of homography refinement module206may warp the overhead model M to the camera perspective or vice versa, which allows camera calibrator120to compute the loss function. For example, homography refinement module206may use a Dice coefficient loss:

\mathrm{Dice}(U, V) = \frac{1}{|C|} \sum_{c \in C} \frac{2\,\| U_c \circ V_c \|}{\| U_c \| + \| V_c \|}

where U and V may represent semantic images, C may represent the set of channels, ∘ may represent element-wise multiplication, and ∥⋅∥ may represent the sum of pixel intensity in an image. Here, for example, the intensity of each channel may be the likelihood that the pixel belongs to channel c. One of the major advantages of using area-based segmentation, as opposed to line-based segmentation, is that it is robust to occlusions and makes better use (i.e., more efficient use) of the network capacity, because a larger fraction of image pixels may belong to a meaningful class.
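As an illustration of the Dice coefficient defined above, the following Python sketch computes the channel-averaged Dice score between two multi-channel semantic images. The semantic maps are synthetic stand-ins; in training, one would typically minimize one minus this score.

import numpy as np

def dice_coefficient(U, V, eps=1e-7):
    """Channel-averaged Dice coefficient between two (H, W, C) semantic images.

    ||.|| is the sum of pixel intensities per channel, matching the definition above.
    """
    inter = np.sum(U * V, axis=(0, 1))                       # ||U_c o V_c|| per channel
    sums = np.sum(U, axis=(0, 1)) + np.sum(V, axis=(0, 1))   # ||U_c|| + ||V_c|| per channel
    return np.mean(2.0 * inter / (sums + eps))

# Hypothetical predicted and ground-truth semantic maps with 4 region channels:
rng = np.random.default_rng(2)
pred = rng.uniform(size=(72, 128, 4))
pred /= pred.sum(axis=2, keepdims=True)       # per-pixel likelihoods
labels = rng.integers(0, 4, size=(72, 128))
gt = np.eye(4)[labels]                        # one-hot ground truth
score = dice_coefficient(pred, gt)            # 1 - score would serve as a loss to minimize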
A limitation, however, of intersection-over-union (IoU) based losses is that, as the fraction of the field of view in the image decreases, the IoU loss may become sensitive to segmentation errors. For example, if the playing surface occupied a tiny portion of the image, a small transform could reduce the IoU dramatically. Therefore, homography refinement module206uses the Dice loss on the warped playing surface in both perspectives: a high-occupancy perspective can achieve coarse registration, while a low-occupancy perspective can provide strong constraints on fine-tuning. Thus, the loss function may be defined as:

\mathcal{L}_{warp} = \delta\,\mathrm{Dice}\left(Y, W(M; \theta_H)\right) + (1 - \delta)\,\mathrm{Dice}\left(M', W(\bar{Y}; \theta_H^{-1})\right)

where Y may represent the ground truth semantic image and M′ may represent the masked overhead field model, so that the loss is only computed for the area shown in the image. Losses from the two perspectives may be weighted by δ, where the weight for the lower-occupancy-fraction perspective is always higher. Because each module202-206may use the output of other modules as input, the three modules202-206may be connected into a single neural network (i.e., neural network architecture200). As such, the total loss of the network may become:

\mathcal{L} = \alpha\,\mathcal{L}_{ce} + \beta\,\mathcal{L}_{con} + (1 - \alpha - \beta)\,\mathcal{L}_{warp}

where α, β ∈ [0, 1). Camera calibrator120may train the entire neural network architecture200incrementally, module-by-module, so that the Siamese network and the STN may start training with reasonable inputs. For example, training may start with a 20-epoch warm-up for the Unet; the Siamese network training may then be turned on with α=0.1 and β=0.9. After another 10 epochs, for example, the STN may be turned on with α=0.05 and β=0.05. Neural network architecture200may continue to undergo joint training until convergence. FIG.4is a flow diagram illustrating a method400of generating a fully trained calibration model, according to example embodiments. Method400may begin at step402. At step402, organization computing system104may retrieve one or more data sets for training. Each data set may include a plurality of images captured by camera system102during the course of a game. In some embodiments, the data set may be created from thirteen basketball games. Those skilled in the art recognize that more than thirteen games or fewer than thirteen games may be used for training purposes. For example, ten games may be used for training and the remaining three games may be used for testing. Those skilled in the art recognize that more than ten games or fewer than ten games may be used for training, and more than three games or fewer than three games may be used for testing. The aforementioned number of games for training purposes is exemplary only and is not meant to limit the foregoing discussion. Different games may have different camera locations, with each game being played in a unique venue. As such, the playing surface appearance for each game may be very different from game to game. For each game, 30-60 frames may be selected for annotation with a high camera pose diversity. Professional annotators may have clicked four to six point correspondences in each image to compute the ground truth homography. These annotations may have produced 526 images for training and 114 images for testing. In some embodiments, the training data may be further enriched by flipping the images horizontally, which may generate 1052 training examples in total. In some embodiments, the data set may be created from twenty soccer games.
For example, the twenty soccer games were held in nine different stadiums during day and night, and the images may consist of different perspectives and lighting conditions. Accordingly, the data set may include 209 training images collected from 10 games and 186 testing images collected from the other 10 games. At step404, organization computing system104may generate a plurality of camera pose templates from the one or more data sets. For example, based on the retrieved one or more data sets for training, camera calibrator120may generate camera pose templates for training. In some embodiments, camera calibrator120may generate the camera pose templates using the GMM-based method discussed above, provided that the one or more data sets are adequately large and diverse. In some embodiments, one or more data sets may be considered adequately large and diverse when a complete and relatively clean overhead playing surface image is achieved. In such embodiments, camera calibrator120may set the standard deviations for pan, tilt, focal length, and camera location (x, y, z). In some embodiments, camera calibrator120may further set the threshold for the stopping criteria and the warping loss weight δ. Continuing with the first example referenced above, using the basketball data set, camera calibrator120may use the GMM-based method to generate camera pose templates from the 1052 training images. In such an example, camera calibrator120may set the standard deviations for pan, tilt, focal length, and camera location (x, y, z) to 5°, 5°, 1000 pixels, and 15 feet, respectively. The non-diagonal elements may be set to zero, as camera calibrator120assumes that those camera configurations are independent of each other. The threshold for the stopping criteria may be set to 0.6, and the clustering algorithm may generate 210 components. For the warping loss, δ may be set to 0.8 because the camera perspective may have a lower field occupancy rate than the top view perspective. In some embodiments, camera calibrator120may generate the camera pose templates using a high grid resolution if, for example, the one or more data sets has an insufficient number of examples. In such embodiments, camera calibrator120may set the resolution of pan, tilt, and focal length. Continuing with the second example referenced above, using the soccer data set, camera calibrator120may use a high grid resolution approach to generate the camera pose templates. In such examples, camera calibrator120may set the resolution of pan, tilt, and focal length to 5°, 2.5°, and 500 pixels, respectively. In some embodiments, the camera locations may be fixed at, for example, 560, 1150, and 186 yards relative to the top left corner of the field. Because the soccer data set has an insufficient number of examples to use the GMM-based camera pose estimation, camera calibrator120may use a uniform sampling for this data set with estimated pan, tilt, and focal length ranges ([−35°, 35°], [5°, 15°], and [1500, 4500] pixels, respectively), which generates 450 templates for camera pose initialization. As those skilled in the art recognize, although basketball and soccer are discussed in the current examples, such methodologies may be extended to the video broadcast of any sport. At step406, organization computing system104may learn, based on the one or more training data sets, how to calibrate a single moving camera. For example, the neural network of camera calibrator120may learn how to calibrate a single moving camera based on the one or more training data sets.
In some embodiments, each module of neural network architecture200may be trained simultaneously. For example, because each module202-206of neural network architecture200uses the output of other modules as input, the three modules202-206may be connected into a single neural network. As such, the total loss of the network may become:

\mathcal{L} = \alpha\,\mathcal{L}_{ce} + \beta\,\mathcal{L}_{con} + (1 - \alpha - \beta)\,\mathcal{L}_{warp}

where α, β ∈ [0, 1). Camera calibrator120may train the entire neural network architecture200incrementally, module-by-module, so that the Siamese network and the STN may start training with reasonable inputs. For example, training may start with a 20-epoch warm-up for the Unet; the Siamese network training may then be turned on with α=0.1 and β=0.9. After another 10 epochs, for example, the STN may be turned on with α=0.05 and β=0.05. Neural network architecture200may continue to undergo joint training until convergence. In some embodiments, one or more modules of modules202-206may be "warmed up" with synthesized data. For example, due to the small number of training examples in the above-referenced soccer data sets, camera calibrator120may use synthesized data to warm up camera pose initialization module204and homography refinement module206. Apart from the Unet in semantic segmentation module202, the rest of neural network architecture200uses the semantic images as input, so camera calibrator120can synthesize an arbitrary number of semantic images to pre-train parts of the network. Using a specific example, 2000 semantic images may be generated by uniformly sampling the pan, tilt, and focal length parameters. For each synthesized image, the ground truth homography is known, and the template assignment can be easily found by down-sampling the grid. Thus, camera pose initialization module204and the STN may be pre-trained individually. Once camera pose initialization module204and homography refinement module206are warmed up, camera calibrator120may train the neural network with real data. At step408, organization computing system104may output a fully trained prediction model. For example, at the end of the training and testing processes, camera calibrator120may have a fully trained neural network architecture200. FIG.5is a flow diagram illustrating a method500of calibrating a broadcast camera, according to example embodiments. Method500may begin at step502. At step502, organization computing system104may receive (or retrieve) a broadcast feed for an event. In some embodiments, the broadcast feed may be a live feed received in real-time (or near real-time) from camera system102. In some embodiments, the broadcast feed may be a broadcast feed of a game that has concluded. Generally, the broadcast feed may include a plurality of frames of video data. Each frame may capture a different camera perspective. At step504, organization computing system104may input each frame into neural network architecture200. For example, camera calibrator120may identify a first frame in a received broadcast feed and provide that frame to neural network architecture200. At step506, organization computing system104may generate a homography matrix H for each frame. For example, semantic segmentation module202may identify the court features Y in each frame. The output from semantic segmentation module202may be the semantic map Ȳ generated by the Unet. The semantic map Ȳ may be provided as input to camera pose initialization module204. Camera pose initialization module204may select the appropriate template Tk̄ from the set of templates using the semantic map Ȳ.
Camera pose initialization module204may further identify the target camera pose index k̄ and retrieve its template image Tk̄ and homography Hk̄* using selection module210. Camera calibrator120may pass, as input to homography refinement module206, both Tk̄ and Ȳ concatenated, along with Hk̄*. Homography refinement module206may then predict the relative homography H̄ between the template and the semantic map by passing the concatenated item Tk̄ and Ȳ to the STN. Homography refinement module206may then generate the homography matrix H based on the relative homography H̄ and Hk̄* using matrix multiplication, i.e., H = H̄Hk̄*. At step508, organization computing system104may warp each frame by its respective homography matrix H. FIG.6Aillustrates a system bus computing system architecture600, according to example embodiments. System600may be representative of at least a portion of organization computing system104. One or more components of system600may be in electrical communication with each other using a bus605. System600may include a processing unit (CPU or processor)610and a system bus605that couples various system components including the system memory615, such as read only memory (ROM)620and random access memory (RAM)625, to processor610. System600may include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of processor610. System600may copy data from memory615and/or storage device630to cache612for quick access by processor610. In this way, cache612may provide a performance boost that avoids processor610delays while waiting for data. These and other modules may control or be configured to control processor610to perform various actions. Other system memory615may be available for use as well. Memory615may include multiple different types of memory with different performance characteristics. Processor610may include any general purpose processor and a hardware module or software module, such as service 1632, service 2634, and service 3636stored in storage device630, configured to control processor610as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor610may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing device600, an input device645may represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, a keyboard, a mouse, motion input, speech and so forth. An output device635may also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems may enable a user to provide multiple types of input to communicate with computing device600. Communications interface640may generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement, and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device630may be a non-volatile memory and may be a hard disk or other types of computer readable media which may store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs)625, read only memory (ROM)620, and hybrids thereof.
Storage device630may include services632,634, and636for controlling the processor610. Other hardware or software modules are contemplated. Storage device630may be connected to system bus605. In one aspect, a hardware module that performs a particular function may include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor610, bus605, display635, and so forth, to carry out the function. FIG.6Billustrates a computer system650having a chipset architecture that may represent at least a portion of organization computing system104. Computer system650may be an example of computer hardware, software, and firmware that may be used to implement the disclosed technology. System650may include a processor655, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. Processor655may communicate with a chipset660that may control input to and output from processor655. In this example, chipset660outputs information to output665, such as a display, and may read and write information to storage device670, which may include magnetic media, and solid state media, for example. Chipset660may also read data from and write data to RAM675. A bridge680for interfacing with a variety of user interface components685may be provided for interfacing with chipset660. Such user interface components685may include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on. In general, inputs to system650may come from any of a variety of sources, machine generated and/or human generated. Chipset660may also interface with one or more communication interfaces690that may have different physical interfaces. Such communication interfaces may include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein may include receiving ordered datasets over the physical interface or be generated by the machine itself by processor655analyzing data stored in storage670or675. Further, the machine may receive inputs from a user through user interface components685and execute appropriate functions, such as browsing functions by interpreting these inputs using processor655. It may be appreciated that example systems600and650may have more than one processor610or be part of a group or cluster of computing devices networked together to provide greater processing capability. While the foregoing is directed to embodiments described herein, other and further embodiments may be devised without departing from the basic scope thereof. For example, aspects of the present disclosure may be implemented in hardware or software or a combination of hardware and software. One embodiment described herein may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. 
Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory (ROM) devices within a computer, such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips, or any type of solid-state non-volatile memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access memory) on which alterable information is stored. Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the disclosed embodiments, are embodiments of the present disclosure. It will be appreciated by those skilled in the art that the preceding examples are exemplary and not limiting. It is intended that all permutations, enhancements, equivalents, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings be included within the true spirit and scope of the present disclosure. It is therefore intended that the following appended claims include all such modifications, permutations, and equivalents as fall within the true spirit and scope of these teachings.
11861807
DETAILED DESCRIPTION FIG.1is a flow chart illustrating a color decomposition method according to example embodiments. Referring toFIG.1, inter-color images indicating similarity between color sensitivities may be generated based on color images (S100). The similarity between color sensitivities may be varied according to a wavelength of a light incident on an image sensor. The similarity between color sensitivities may increase as a difference between color pixel values decreases in a condition of the same brightness, as will be described in further detail below with reference toFIG.6. Conversion coefficients of the color images and the inter-color images with respect to a white image may be determined (S200). In some example embodiments, as will be described in further detail below with reference toFIG.5, the conversion coefficients may be determined using a least square method based on a matrix that includes color pixel values of the color images and inter-color pixel values of the inter-color images as components of the matrix, and real white pixel values obtained, for example, by an RGBW image sensor. In some example embodiments, as will be described in further detail below with reference toFIG.5, the inter-color pixel value of the inter-color image may be a square root of a multiplication of color pixel values of different color images corresponding to the inter-color image. A pseudo-white image corresponding to the color images and the inter-color images may be generated using the conversion coefficients (S300). The pseudo-white image is differentiated from the real white image that is obtained using a real image sensor including white pixels. In some example embodiments, the pseudo-white image may be provided as additional information for deep learning of an artificial neural network associated with image processing. In some example embodiments, the pseudo-white image may be used as display data for a display device including white pixels. As such, the color decomposition method according to example embodiments may generate the pseudo-white image similar to a real white image using the inter-color images that indicate similarity between color sensitivities. As will be described in further detail below, the pseudo-white image may be used in a method of demosaicing images according to example embodiments based on deep learning. The deep learning of the artificial neural network may be performed efficiently using the color images and the pseudo-white image, and demosaiced images of high quality may be generated using the trained artificial neural network. FIG.2is a block diagram illustrating a color decomposition device according to example embodiments. Referring toFIG.2, a color decomposition device100may include a first inter-color image generator GBIG110, a second inter-color image generator GRIG120, and an image processing unit130. The first inter-color image generator110and the second inter-color image generator120may generate inter-color images that indicate similarity between color sensitivities based on color images. In some example embodiments, the color images may include a red image Ir, a green image Ig, and a blue image Ib. Herein, some example embodiments are described based on a non-limiting example in which the color images include the red image Ir, the green image Ig, and the blue image Ib. However, according to example embodiments, the color images may include an arbitrary combination of various colors.
When the color images include the red image Ir, the green image Ig, and the blue image Ib, the inter-color images may include a green-blue image Igb, which indicates similarity between green sensitivity and blue sensitivity, and a green-red image Igr, which indicates similarity between green sensitivity and red sensitivity. The first inter-color image generator110may generate the green-blue image Igb indicating the similarity between the green sensitivity and the blue sensitivity based on the green image Ig and the blue image Ib. The second inter-color image generator120may generate the green-red image Igr indicating the similarity between the green sensitivity and red sensitivity based on the green image Ig and the red image Ir. The image processing unit130may include a conversion coefficient generator CCG131and a pseudo-white image generator PWIG132. The conversion coefficient generator131may determine conversion coefficients of the color images and the inter-color images with respect to a white image. The pseudo-white image generator132may generate a pseudo-white image corresponding to the color images and the inter-color images using the conversion coefficients. When the color images include the red image Ir, the green image Ig, and the blue image Ib, the conversion coefficient generator131may determine a red conversion coefficient Cr of the red image Ir with respect to the white image, a green conversion coefficient Cg of the green image Ig with respect to the white image, a blue conversion coefficient Cb of the blue image Ib with respect to the white image, a green-blue conversion coefficient Cgb of the green-blue image Igb with respect to the white image, and a green-red conversion coefficient Cgr of the green-red image Igr with respect to the white image. Here the green-blue image Igb and the green-red image Igr correspond to the above-described inter-color images. In this case, the pseudo-white image generator132may generate a pseudo-white image Iw based on the red conversion coefficient Cr, the green conversion coefficient Cg, the blue conversion coefficient Cb, the green-blue conversion coefficient Cgb, and the green-red conversion coefficient Cgr. As will be described in further detail below, the red image Ir, the green image Ig, the blue image Ib, and the pseudo-white image Iw may be provided as learning data set or training data set IStr of an artificial neural network for image processing such as demosaicing. When the artificial neural network is designed to generate a demosaiced red image, a demosaiced green image, and a demosaiced blue image based on a mosaic image, the red image Ir, the green image Ig, and the blue image Ib may be used as a ground truth image set ISgt for verifying a learning result of the artificial neural network. As such, the color decomposition method according to example embodiments may generate the pseudo-white image similar to a real white image using the inter-color images indicating similarity between color sensitivities. In addition, the demosaicing method based on deep learning according to example embodiments may efficiently perform deep learning of the artificial neural network using the color images and the pseudo-white image, and generate the demosaiced images of high quality using the trained artificial neural network that is trained. FIGS.3and4are diagrams illustrating images corresponding to a color decomposition method according to example embodiments. 
FIGS.3and4illustrate a red image Ir, a green image Ig, a blue image Ib, a green-blue image Igb, and a green-red image Igr, each composed of n pixel rows and m pixel columns. Referring toFIGS.3and4, the red image Ir includes m*n red pixel values R11˜Rnm, the green image Ig includes m*n green pixel values G11˜Gnm, and the blue image Ib includes m*n blue pixel values B11˜Bnm. The red image Ir, the green image Ig, and the blue image Ib correspond to demosaiced images or full color images that have pixel values of all pixels. The green-blue image Igb includes m*n green-blue pixel values GB11˜GBnm, the green-red image Igr includes m*n green-red pixel values GR11˜GRnm, and the pseudo-white image Iw includes m*n white pixel values W11˜Wnm. Referring toFIGS.3and4, each image includes n*m pixel values, and the pixel values in the different images correspond to each other according to pixel positions. FIG.5is a diagram for describing an example embodiment of determining conversion coefficients included in a color decomposition method according to example embodiments. A pixel value of a pseudo-white image may be generated by Equation 1. W=Cr*R+Cg*G+Cb*B+Cgb*GB+Cgr*GR  [Equation 1] In Equation 1, W indicates the white pixel value of the pseudo-white image, R indicates a red pixel value of the red image, G indicates a green pixel value of the green image, B indicates a blue pixel value of the blue image, GB indicates a green-blue pixel value of the green-blue image, and GR indicates a green-red pixel value of the green-red image. Cr indicates a red conversion coefficient of the red image with respect to the white image, Cg indicates a green conversion coefficient of the green image with respect to the white image, Cb indicates a blue conversion coefficient of the blue image with respect to the white image, Cgb indicates a green-blue conversion coefficient of the green-blue image with respect to the white image, and Cgr indicates a green-red conversion coefficient of the green-red image with respect to the white image. Referring toFIG.5, a plurality of equations with respect to a plurality of corresponding pixels may be arranged as a matrix equation as represented by Equation 2. W=AC  [Equation 2] FIG.5illustrates an example embodiment of the matrix equation as represented by Equation 2. InFIG.5, W indicates a column vector including a plurality of white pixel values W1˜Wn of the white image. InFIG.5, C indicates a column vector including the red conversion coefficient Cr, the green conversion coefficient Cg, the blue conversion coefficient Cb, the green-blue conversion coefficient Cgb, and the green-red conversion coefficient Cgr. InFIG.5, A indicates a matrix including the red pixel values R1˜Rn of the red image, the green pixel values G1˜Gn of the green image, the blue pixel values B1˜Bn of the blue image, the green-blue pixel values √{square root over (G1B1)}˜√{square root over (GnBn)} of the green-blue image, and the green-red pixel values √{square root over (G1R1)}˜√{square root over (GnRn)} of the green-red image as components of the matrix. Referring to the above, the green-blue pixel value √{square root over (GiBi)} (i is an integer) may be set to be a square root of a multiplication of the green pixel value Gi and the blue pixel value Bi, and the green-red pixel value √{square root over (GiRi)} may be set to be a square root of a multiplication of the green pixel value Gi and the red pixel value Ri.
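For illustration only, the arrangement of the matrix A of Equation 2 may be sketched in NumPy as follows, assuming that the three color planes are floating-point arrays flattened in the same pixel order as the measured white pixel values; the function name is illustrative and not part of the specification.

import numpy as np

def build_design_matrix(Ir, Ig, Ib):
    # Each row is [R, G, B, sqrt(G*B), sqrt(G*R)] for one pixel position,
    # matching the components of the matrix A described with reference to FIG.5.
    R = Ir.ravel().astype(np.float64)
    G = Ig.ravel().astype(np.float64)
    B = Ib.ravel().astype(np.float64)
    return np.stack([R, G, B, np.sqrt(G * B), np.sqrt(G * R)], axis=1)  # (n, 5)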
The column vector C may be determined using white pixel values of a real white image that is obtained by an image sensor such as an RGBW image sensor including white pixels. The matrix equation in Equation 2 andFIG.5may be solved as represented by Equation 3 using a least square method to obtain the column vector C including the red conversion coefficient Cr, the green conversion coefficient Cg, the blue conversion coefficient Cb, the green-blue conversion coefficient Cgb, and the green-red conversion coefficient Cgr. C=(A^T A)^−1 A^T W  [Equation 3] In Equation 3, A^T indicates a transposed matrix of the matrix A, and the superscript −1 indicates a matrix inverse, such that (A^T A)^−1 is an inverse matrix of the matrix A^T A. An example of the conversion coefficients obtained as such is shown in Equation 4. The white pixel values of the pseudo-white image may be calculated by applying the obtained conversion coefficients to Equation 1 or Equation 2. Cr=0.71 Cg=1.4 Cb=1.51 Cgb=−1.34 Cgr=−0.62  [Equation 4] As shown in Equation 4, the conversion coefficients of the color images with respect to the white image (that is, the red conversion coefficient Cr, the green conversion coefficient Cg, and the blue conversion coefficient Cb) may have positive values. In contrast, the conversion coefficients of the inter-color images with respect to the white image (that is, the green-blue conversion coefficient Cgb and the green-red conversion coefficient Cgr) may have negative values. The green-blue pixel value √{square root over (GiBi)} of the green-blue image Igb may increase as the similarity between the green sensitivity and the blue sensitivity increases. In addition, the green-red pixel value √{square root over (GiRi)} of the green-red image Igr may increase as the similarity between the green sensitivity and the red sensitivity increases. When the green-blue conversion coefficient Cgb has a negative value, the white pixel value Wi of the pseudo-white image Iw decreases as the green-blue pixel value √{square root over (GiBi)} of the green-blue image Igb corresponding to the white pixel value Wi increases. In addition, when the green-red conversion coefficient Cgr has a negative value, the white pixel value Wi of the pseudo-white image Iw decreases as the green-red pixel value √{square root over (GiRi)} of the green-red image Igr corresponding to the white pixel value Wi increases. Even though an example embodiment has been described in which the pseudo-white image is generated using color images including the red image Ir, the green image Ig, and the blue image Ib, the color decomposition method according to example embodiments may be applied to arbitrary color images in which similarity between color sensitivities may be considered. With respect to arbitrary color images, the inter-color pixel value of the inter-color image may increase as the similarity between the color sensitivities increases. The conversion coefficients of the color images with respect to the white image may have positive values, and the conversion coefficients of the inter-color images with respect to the white image may have negative values. In this case, the white pixel value of the pseudo-white image may decrease as the inter-color pixel value of the inter-color image corresponding to the white pixel value increases. FIG.6is a diagram illustrating an example of color sensitivities of an image sensor according to wavelength, andFIG.7is a diagram illustrating an example of sensitivities of white images according to wavelength.
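Before turning toFIGS.6and7, the least-squares fit of Equation 3 may be sketched as follows, assuming the design matrix A is built as above and that W_real holds the white pixel values measured by an RGBW image sensor, flattened in the same pixel order. np.linalg.lstsq is used instead of forming (A^T A)^−1 explicitly, which is mathematically equivalent but better conditioned; the values of Equation 4 are one example outcome for one data set, not a result this sketch is guaranteed to reproduce.

import numpy as np

def fit_conversion_coefficients(A, W_real):
    # Least-squares solution of W = A C (Equation 3).
    # Returns C = [Cr, Cg, Cb, Cgb, Cgr].
    C, *_ = np.linalg.lstsq(A, W_real.ravel(), rcond=None)
    return C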
InFIGS.6and7, the horizontal axis represents wavelength of light in nano-meter (nm) and the vertical axis represents color sensitivity. FIG.6illustrates unique spectrum color sensitivities of a red image, a green image, and a blue image that are obtained by a real RGB image sensor. R indicates the red sensitivity, G indicates the green sensitivity, and B indicates the blue sensitivity. As illustrated inFIG.6, the similarity between the blue sensitivity and the green sensitivity may be increased in a first overlapping region REG1, and the similarity between the green sensitivity and the red sensitivity may be increased in a second overlapping region REG2. Here, the increase of the similarity between the color sensitivities indicates the decrease of the difference between the color sensitivities. FIG.7illustrates white sensitivities of various white images. W indicates the white sensitivity of a real white image that is obtained by an RGBW image sensor including white pixels, LINEAR indicates the white sensitivity that is obtained by a general method of a linear scheme based on luminance of RGB images, and NON-LINEAR indicates the white sensitivity of the pseudo-white image that is obtained by a color decomposition method of a non-linear scheme according to example embodiments. As illustrated inFIG.7, the linear scheme of the general method may cause over-amplification of the white sensitivity of the pseudo-white image in the first and second overlapping regions REG1and REG2. To repress such over-amplification in the overlapping regions, an inter-color pixel value of each inter-color image may be set to be a square root of a multiplication of color pixel values of different color images corresponding to each inter-color image, as described above. If two corresponding pixel values of the different color images are represented by A and B, a relation between an arithmetic average (A+B)/2 and a geometric average (A*B)^(1/2) of the two pixel values A and B may be represented by Equation 5. (A+B)/2≥(A*B)^(1/2)  [Equation 5] The arithmetic average (A+B)/2 corresponds to an averaged pixel value, that is, an averaged brightness of the two pixel values A and B. The geometric average (A*B)^(1/2) corresponds to the similarity between the color sensitivities, that is, an inter-color pixel value of the two pixel values A and B. When the brightness corresponding to the two pixel values A and B is fixed, that is, when the arithmetic average (A+B)/2 is fixed, the geometric average (A*B)^(1/2) corresponding to the similarity may be increased as the difference between the two pixel values A and B is decreased. Thus, the similarity may be maximum when the two pixel values A and B are equal to each other. As such, the inter-color pixel value may efficiently represent the similarity between color sensitivities by setting the inter-color pixel value to be a square root of a multiplication of the corresponding color pixel values of the different color images. The inter-color pixel value may be reflected to the white pixel value of the pseudo-white image as in Equation 1, and the white sensitivity (NON-LINEAR) of the pseudo-white image may be generated to be similar to the white sensitivity (W) of the real white image as illustrated inFIG.7. FIG.8is a block diagram illustrating a color decomposition device according to example embodiments. The descriptions repeated with respect toFIG.2may be omitted.
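The relation of Equation 5 above may be checked numerically with a small, purely illustrative script (the numbers below are not taken from the specification): for a fixed arithmetic average, that is, fixed brightness, the geometric average grows as the two pixel values approach each other, which is why the square root of the product is used as the inter-color similarity measure.

import numpy as np

brightness = 100.0                      # fixed (A + B) / 2
for diff in (0, 20, 60, 100):
    A = brightness + diff / 2
    B = brightness - diff / 2
    print(diff, np.sqrt(A * B))         # prints 100.0, ~99.5, ~95.4, ~86.6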
Referring toFIG.8, a color decomposition device200may include a first inter-color image generator GBIG210, a second inter-color image generator GRIG220, a conversion coefficient generator CCG230, a third inter-color image generator GBIG240, a fourth inter-color image generator GRIG250, and a pseudo-white image generator PWIG260. The first inter-color image generator210and the second inter-color image generator220may generate first inter-color images Igb′ and Igr′ indicating similarity between color sensitivities based on first color images Ir′, Ig′, and Ib′. Thus, the first inter-color image generator210may generate the first green-blue image Igb′ indicating the similarity between the green sensitivity and the blue sensitivity based on the first green image Ig′ and the first blue image Ib′. The second inter-color image generator220may generate the first green-red image Igr′ indicating the similarity between the green sensitivity and red sensitivity based on the first green image Ig′ and the first red image Ir′. The conversion coefficient generator230may determine conversion coefficients Cr, Cg, Cb, Cgb, and Cgr based on the first color images Ir′, Ig′, and Ib′ and the inter-color images Igb′ and Igr′. In some example embodiments, the conversion coefficient generator230may determine the conversion coefficients Cr, Cg, Cb, Cgb, and Cgr using the least square method described above with reference toFIG.5. The third inter-color image generator240and the fourth inter-color image generator250may generate second inter-color images Igb and Igr indicating similarity between color sensitivities based on second color images Ir, Ig, and Ib. Thus, the third inter-color image generator240may generate the second green-blue image Igb indicating the similarity between the green sensitivity and the blue sensitivity based on the second green image Ig and the second blue image Ib. The fourth inter-color image generator250may generate the second green-red image Igr indicating the similarity between the green sensitivity and red sensitivity based on the second green image Ig and the second red image Ir. The pseudo-white image generator260may generate a pseudo-white image Iw corresponding to the second color images Ir, Ig, and Ib and the second inter-color images Igb and Igr using the conversion coefficients Cr, Cg, Cb, Cgb, and Cgr. In some example embodiments, each white pixel value of the pseudo-white image Iw may be determined based on Equation 1. The color decomposition device100ofFIG.2may generate the conversion coefficients and the pseudo-white image based on the same color images. In contrast, the color decomposition device200ofFIG.8may determine the conversion coefficients Cr, Cg, Cb, Cgb, and Cgr based on the first color images Ir′, Ig′, and Ib′, and generate the pseudo-white image Iw based on the second color images Ir, Ig, and Ib different from the first color images Ir′, Ig′, and Ib′. As such, the first color images Ir′, Ig′, and Ib′ used in determining the conversion coefficients and the second color images Ir, Ig, and Ib used in generating the pseudo-white image may be provided independently. The second red image Ir, the second green image Ig, the second blue image Ib, and the pseudo-white image Iw may be provided as a learning data set or training data set IStr of an artificial neural network for image processing such as demosaicing.
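The separation between fitting and applying the conversion coefficients inFIG.8may be sketched, for illustration only, by the following two-stage helper; the class name, the floating-point inputs, and the use of np.linalg.lstsq are assumptions rather than part of the specification.

import numpy as np

class ColorDecomposer:
    # Stage 1 (generators 210, 220, 230): fit Cr, Cg, Cb, Cgb, Cgr on first
    # color images Ir1, Ig1, Ib1 and a measured real white plane W_real.
    def fit(self, Ir1, Ig1, Ib1, W_real):
        r, g, b = Ir1.ravel(), Ig1.ravel(), Ib1.ravel()
        A = np.stack([r, g, b, np.sqrt(g * b), np.sqrt(g * r)], axis=1)
        self.C, *_ = np.linalg.lstsq(A, W_real.ravel(), rcond=None)
        return self

    # Stage 2 (generators 240, 250, 260): reuse the coefficients to build a
    # pseudo-white image for independent second color images Ir2, Ig2, Ib2.
    def transform(self, Ir2, Ig2, Ib2):
        Cr, Cg, Cb, Cgb, Cgr = self.C
        return (Cr * Ir2 + Cg * Ig2 + Cb * Ib2
                + Cgb * np.sqrt(Ig2 * Ib2) + Cgr * np.sqrt(Ig2 * Ir2))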
The second red image Ir, the second green image Ig, and the second blue image Ib may be used as a ground truth image set ISgt for verifying a learning result of the artificial neural network. FIGS.9and10are diagrams for describing examples of a deep learning neural network structure that is driven by a machine learning device according to example embodiments. Referring toFIG.9, a general neural network may include an input layer IL, a plurality of hidden layers HL1, HL2, . . . , HLn, and an output layer OL. The input layer IL may include i input nodes x1, x2, . . . , xi, where i is a natural number. Input data (e.g., vector input data) IDAT whose length is i may be input to the input nodes x1, x2, . . . , xi such that each element of the input data IDAT is input to a respective one of the input nodes x1, x2, . . . , xi. The plurality of hidden layers HL1, HL2, HLn may include n hidden layers, where n is a natural number, and may include a plurality of hidden nodes h11, h12, h13, . . . , h1m, h21, h22, h23, h2m, hn1, hn2, hn3, . . . , hnm. For example, the hidden layer HL1may include m hidden nodes h11, h12, h13, . . . , h1m, the hidden layer HL2may include m hidden nodes h21, h22, h23, . . . , h2m, and the hidden layer HLn may include m hidden nodes hn1, hn2, hn3, . . . , hnm, where m is a natural number. The output layer OL may include j output nodes y1, y2, . . . , yj, where j is a natural number. Each of the output nodes y1, y2, . . . , yjmay correspond to a respective one of classes to be categorized. The output layer OL may output the output values (e.g., class scores or simply scores) associated with the input data IDAT for each of the classes. The output layer OL may be referred to as a fully-connected layer and may indicate, for example, a probability that the input data IDAT corresponds to a car. A structure of the neural network illustrated inFIG.9may be represented by information on branches (or connections) between nodes illustrated as lines, and a weighted value assigned to each branch. Nodes within one layer may not be connected to one another, but nodes of different layers may be fully or partially connected to one another. Each node (e.g., the node h11) may receive an output of a previous node (e.g., the node x1), may perform a computing operation, computation, or calculation on the received output, and may output a result of the computing operation, computation, or calculation as an output to a next node (e.g., the node h21). Each node may calculate a value to be output by applying the input to a specific function, e.g., a nonlinear function. Generally, the structure of the neural network may be set in advance, and the weighted values for the connections between the nodes are set appropriately using data having an already known answer of which class the data belongs to. The data with the already known answer is referred to as “training data,” and a process of determining the weighted value is referred to as “training.” The neural network “learns” during the training process. A group of an independently trainable structure and the weighted value is referred to as a “model,” and a process of predicting, by the model with the determined weighted value, which class the input data belongs to, and then outputting the predicted value, is referred to as a “testing” process. 
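For illustration only, the per-node computation described above (a weighted sum of the previous layer's outputs passed through a nonlinear function) may be sketched as follows; the tanh nonlinearity and the specific numbers are arbitrary choices, not part of the specification.

import numpy as np

def node_output(inputs, weights, offset, sigma=np.tanh):
    # Weighted sum of the inputs plus an offset, passed through a
    # nonlinear function sigma (anticipating the node ND of FIG.11).
    return sigma(np.dot(inputs, weights) + offset)

print(node_output(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, 0.3]), 0.2))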
The general neural network illustrated inFIG.9may not be suitable for handling input image data (or input sound data) because each node (e.g., the node h11) is connected to all nodes of a previous layer (e.g., the nodes x1, x2, . . . , xi included in the layer IL) and then the number of weighted values drastically increases as the size of the input image data increases. Thus, a convolutional neural network (CNN), which is implemented by combining the filtering technique with the general neural network, has been researched such that a two-dimensional image (e.g., the input image data) is efficiently processed by the convolutional neural network. Referring toFIG.10, a convolutional neural network may include a plurality of layers CONV1, RELU1, CONV2, RELU2, POOL1, CONV3, RELU3, CONV4, RELU4, POOL2, CONV5, RELU5, CONV6, RELU6, POOL3, and FC. Unlike the general neural network, each layer of the convolutional neural network may have three dimensions of width, height, and depth. Thus, data that is input to each layer may be volume data having three dimensions of width, height, and depth. For example, if an input image inFIG.10has a size of 32 width units (e.g., 32 pixels) and 32 height units (e.g., 32 pixels) and three color channels R, G, and B, then input data IDAT corresponding to the input image may have a size of 32×32×3. The input data IDAT inFIG.10may be referred to as input volume data or input activation volume. Each of convolutional layers CONV1, CONV2, CONV3, CONV4, CONV5, and CONV6may perform a convolutional operation on input volume data. In image processing, the convolutional operation represents an operation in which image data is processed based on a mask with weighted values, and an output value is obtained by multiplying input values by the weighted values and adding up the total multiplied values. The mask may be referred to as a filter, window, or kernel. In further detail, parameters of each convolutional layer may consist of or include a set of learnable filters. Every filter may be spatially small (along width and height), but may extend through the full depth of an input volume. For example, during the forward pass, each filter may be slid (more precisely, convolved) across the width and height of the input volume, and dot products may be computed between the entries of the filter and the input at any position. As the filter is slid over the width and height of the input volume, a two-dimensional activation map that gives the responses of that filter at every spatial position may be generated. As a result, an output volume may be generated by stacking these activation maps along the depth dimension. For example, if input volume data having a size of 32×32×3 passes through the convolutional layer CONV1having twelve filters with zero-padding, then output volume data of the convolutional layer CONV1may have a size of 32×32×12 (e.g., a depth of volume data increases). Each of RELU layers RELU1, RELU2, RELU3, RELU4, RELU5, and RELU6may perform a rectified linear unit operation that corresponds to an activation function defined by, e.g., a function f(x)=max(0, x) (e.g., an output is zero for all negative input x). For example, if input volume data having a size of 32×32×12 passes through the RELU layer RELU1to perform the rectified linear unit operation, then output volume data of the RELU layer RELU1may have a size of 32×32×12 (e.g., a size of volume data is maintained).
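The shape bookkeeping of one CONV/RELU pair may be checked with the short PyTorch sketch below; PyTorch is not named in the specification, and the sketch assumes that the depth of the output volume equals the number of filters (twelve filters are used here to reproduce the 32×32×12 output of the example above).

import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)        # 32x32 input volume with R, G, B channels
conv1 = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=3, padding=1)
relu1 = nn.ReLU()
y = relu1(conv1(x))
print(y.shape)                        # torch.Size([1, 12, 32, 32])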
Each of pooling layers POOL1, POOL2, and POOL3may perform a down-sampling operation on input volume data along spatial dimensions of width and height. For example, four input values arranged in a 2×2 matrix formation may be converted into one output value based on a 2×2 filter. For example, a maximum value of four input values arranged in a 2×2 matrix formation may be selected based on 2×2 maximum pooling, or an average value of four input values arranged in a 2×2 matrix formation may be obtained based on 2×2 average pooling. For example, if input volume data having a size of 32×32×12 passes through the pooling layer POOL1having a 2×2 filter, then output volume data of the pooling layer POOL1may have a size of 16×16×12 (e.g., width and height of volume data decreases, and a depth of volume data is maintained). Typically, one convolutional layer (e.g., CONV1) and one RELU layer (e.g., RELU1) may form a pair of CONV/RELU layers in the convolutional neural network, pairs of the CONV/RELU layers may be repeatedly arranged in the convolutional neural network, and the pooling layer may be periodically inserted in the convolutional neural network, thereby reducing an image spatial size and extracting an image characteristic. It is understood that the types and number of layers included in the convolutional neural network may not be limited to the example described above with reference toFIG.10and may be changed or vary according to one or more other example embodiments. In addition, it is understood that the convolutional neural network may further include other layers such as a softmax layer for converting score values corresponding to predicted results into probability values, a bias adding layer for adding at least one bias, or the like. FIG.11is a diagram illustrating an example of a node included in a neural network. FIG.11illustrates an example node operation performed by a node ND in a neural network. When n inputs a1˜an are provided to the node ND, the node ND may multiply the n inputs a1˜an and corresponding n weights w1˜wn, respectively, may sum n values obtained by the multiplication, may add an offset “b” to a summed value, and may generate one output value by applying a value to which the offset “b” is added to a specific function “σ”. The learning operation may be performed based on the training data to update all nodes in the neural network. As described above, image processing based on deep learning calls for a sufficient amount of training data or learning data for training of a deep learning module. For example, from tens to millions of training data of various kinds may be used to prevent over-fitting during training and enhance performance of the deep learning module. It may not be easy to secure sufficient data for training using a real image sensor. Through the color decomposition method according to example embodiments, the pseudo-white image similar to the real white image may be generated from the color images. The image data similar to the real RGBW image that is obtained by an RGBW image sensor may be generated using the pseudo-white image. Demosaicing is digital image processing to generate full color images or demosaiced images from color images or mosaic images. The color images may be imperfect data obtained by an image sensor including a color filter array CFA. The full color images may be obtained using a plurality of image sensors corresponding to different colors, but such methods use spectral band-pass filters and increase cost.
In a more efficient method, one color component may be obtained per pixel by applying a color filter array to an image sensor, and the missing color components may be obtained by an interpolation scheme. Such an interpolation method may cause block noise such as a zipper effect, random color dots, etc. In addition, there exists a trade-off between noise filtering and blurring of sharp edges. The interpolation using neighboring pixel values may result in degradation of image quality, which is caused by averaging of pixels crossing an edge in an image. General methods of demosaicing CFA images or mosaic images may tend to reduce image detail and cause artifacts such as false color, jagging, etc. When demosaicing is divided into stages and a color channel of a previous stage is used to restore in a next stage, errors may be accumulated. As will be described in further detail below with reference toFIGS.12through15, a demosaicing method based on deep learning according to example embodiments may reduce image artifacts by restoring demosaiced color images using an artificial neural network having enhanced nonlinearity. FIG.12is a flow chart illustrating a demosaicing method based on deep learning according to example embodiments. Referring toFIG.12, inter-color images indicating similarity between color sensitivities may be generated based on color images (S100). The similarity between color sensitivities may be varied according to a wavelength of a light incident on an image sensor. As described above with reference toFIG.6, the similarity between color sensitivities may increase as a difference between color pixel values decreases in a condition of the same brightness. Conversion coefficients of the color images and the inter-color images with respect to a white image may be determined (S200). In some example embodiments, as described above with reference toFIG.5, the conversion coefficients may be determined using a least square method based on a matrix including color pixel values of the color images and inter-color pixel values of the inter-color images as components of the matrix, and real white pixel values obtained, for example, by an RGBW image sensor. In some example embodiments, as described above with reference toFIG.5, the inter-color pixel value of the inter-color image may be a square root of a multiplication of color pixel values of different color images corresponding to the inter-color image. A pseudo-white image corresponding to the color images and the inter-color images may be generated using the conversion coefficients (S300). The pseudo-white image is differentiated from the real white image that is obtained using a real image sensor including white pixels. In some example embodiments, the pseudo-white image may be provided as additional information for deep learning of an artificial neural network associated with image processing. Training mosaic images may be generated based on the color images and the pseudo-white image (S400). The generation of the training mosaic images will be described with reference toFIG.13. An artificial neural network may be trained based on the training mosaic images (S500). Demosaiced color images corresponding to input mosaic images may be generated based on the artificial neural network that is trained (S600). Training of the artificial neural network and the generation of the demosaiced color images will be described with reference toFIGS.14and15.
FIG.13is a diagram illustrating images corresponding to a demosaicing method based on deep learning according to example embodiments. FIG.13illustrates a red image Ir, a green image Ig, a blue image Ib, a pseudo-white image Iw, a training mosaic image Itm, a first channel image Icw, a second channel image Icr, a third channel image Icg, and a fourth channel image Icb, wherein each image is composed of four pixel rows and four pixel columns.FIG.13illustrates a non-limiting example that each image includes sixteen pixels, but example embodiments are not limited thereto. The red image Ir includes 4*4 red pixel values R11˜Rnm, the green image Ig includes 4*4 green pixel values G11˜Gnm, the blue image Ib includes 4*4 blue pixel values B11˜Bnm, and the white image Iw includes 4*4 white pixel values W11˜W44. The training mosaic image Itm includes 4*4 pixel values W11, G12, W13, R14, G21, W22, R23, W24, W31, B32, W33, G34, B41, W42, G43, and W44. As illustrated inFIG.13, the pixel values in the different images correspond to each other according to pixel positions. Using the color decomposition method described above, the pseudo-white image Iw may be generated from the red image Ir, the green image Ig, and the blue image Ib. As illustrated inFIG.13, the respective pixel values of the red image Ir, the green image Ig, and the blue image Ib may be extracted to generate the training mosaic image Itm corresponding to an RGBW image. The pixel values of the training mosaic image Itm may be split per color to generate the first channel image Icw, the second channel image Icr, the third channel image Icg, and the fourth channel image Icb. Each of the first channel image Icw, the second channel image Icr, the third channel image Icg, and the fourth channel image Icb may include 4*4 pixel values such that empty blocks of the first channel image Icw, the second channel image Icr, the third channel image Icg, and the fourth channel image Icb indicate the pixel values of zero. As will be described in further detail below with reference toFIG.14, the first channel image Icw, the second channel image Icr, the third channel image Icg, and the fourth channel image Icb may be input to an artificial neural network to generate demosaiced color images Idr, Idg, and Idb and supervised learning may be performed based on a result of comparing the demosaiced color images Idr, Idg, and Idb with the color images Ir, Ig, and Ib corresponding to ground truth images. FIG.14is a block diagram illustrating a structure of an artificial neural network for a demosaicing method based on deep learning according to example embodiments, andFIG.15is a block diagram illustrating an example embodiment of an artificial neural network for a demosaicing method based on deep learning according to example embodiments. Referring toFIG.14, an artificial neural network performing a demosaicing method according to example embodiments may be implemented as a convolutional neural network (CNN)300having an encoder-decoder structure. The CNN300may include a plurality of encoders ENC1˜ENCk and a plurality of decoders DEC1˜DECk, which are cascade-connected. The CNN300may be trained or learned to generate demosaiced color images based on mosaic images or mosaiced images. 
In some example embodiments, as illustrated inFIG.15, the CNN300may include three encoders ENC1, ENC2, and ENC3(which are configured to sequentially perform down-sampling based on an input mosaic image) and three decoders DEC1, DEC2, and DEC3(which are configured to sequentially perform up-sampling). The encoders ENC1, ENC2, and ENC3may include at least one convolution layer CONV having a predetermined kernel size (e.g., 3*3 size) and stride sizes STRIDE1and STRIDE2. The decoders DEC1, DEC2, and DEC3may include a de-convolution layer CONVT and a convolution layer CONV. At least one of the encoders ENC1, ENC2, and ENC3and the decoders DEC1, DEC2, and DEC3may include a summing layer EWS to perform an element-wise sum. The de-convolution layer CONVT and the convolution layer CONV may include a rectified linear layer RELU. The encoders ENC1, ENC2, and ENC3may sequentially perform down-sampling and training of residual components based on the input mosaic image or the training mosaic image Itm to generate encoded image maps. The decoders DEC1, DEC2, and DEC3may sequentially perform up-sampling and restoring of resolution based on the encoded image maps to generate the demosaiced color images Idr, Idg, and Idb. The artificial neural network ofFIG.15is based on deep learning to restore RGB demosaiced images from an RGBW mosaic image that is obtained by the color decomposition method as described above. In the RGBW pattern of the input image, the pixel values are split per white, red, green, and blue to generate different channel signals Icw, Icr, Icg, and Icb that are input to the artificial neural network. The residual components may be trained through the encoders and the resolution may be restored through the decoders. The high-frequency components of the input signal may be preserved through the skip-connections corresponding to the curved arrows inFIG.15. As such, the artificial neural network may generate the three-channel demosaiced images Idr, Idg, and Idb that are finally restored. FIG.16is a diagram illustrating parameters of the artificial neural network ofFIG.15, andFIG.17is a diagram illustrating effects of a demosaicing method based on deep learning according to example embodiments. InFIG.16, INL and MPL indicate an input layer and a maximum pooling layer disposed in a front portion of the artificial neural network, respectively. WIDTH and HEIGHT indicate a row size and a column size of an image or an image map input to each layer, respectively. NOF indicates a number of filters or channels and NOP indicates a number of parameters of each layer. FIG.17illustrates a peak signal-to-noise ratio (PSNR) in units of decibel (dB), and complexity or operation amounts TOPs in units of tera with respect to general first through fourth cases CS1˜CS4and a case CSp according to example embodiments. As illustrated inFIGS.16and17, the PSNR may be enhanced through the color decomposition method and the demosaicing method based on deep learning according to example embodiments in comparison with the general methods. Further, the operation amounts TOPs may be significantly reduced and thus the methods according to example embodiments may be easily applied to mobile devices. The demosaicing method according to example embodiments may enhance quality of the demosaiced images and simultaneously reduce the operation amounts to 1/30˜1/100 of the general operation amounts. FIG.18is a flow chart illustrating a demosaicing method based on deep learning according to example embodiments.
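Before turning to the flow chart ofFIG.18, the encoder-decoder structure ofFIGS.14and15may be sketched in PyTorch as follows; the channel widths, kernel sizes, and the use of PyTorch are assumptions for illustration and do not reproduce the parameters ofFIG.16.

import torch
import torch.nn as nn

class DemosaicNet(nn.Module):
    # Stride-2 convolutions down-sample, transposed convolutions up-sample,
    # and element-wise sums serve as the skip connections of FIG.15.
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(4, 32, 3, 1, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 2, 2), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 2, 2), nn.ReLU())
        self.dec3 = nn.Conv2d(32, 3, 3, 1, 1)

    def forward(self, x):            # x: (N, 4, H, W) channel images Icw..Icb
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d1 = self.dec1(e3) + e2      # element-wise sum (skip connection)
        d2 = self.dec2(d1) + e1      # element-wise sum (skip connection)
        return self.dec3(d2)         # (N, 3, H, W) demosaiced R, G, B planes

print(DemosaicNet()(torch.randn(1, 4, 64, 64)).shape)   # [1, 3, 64, 64]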
Referring toFIG.18, a green-blue image indicating similarity between green sensitivity and blue sensitivity, and a green-red image indicating similarity between green sensitivity and red sensitivity may be generated based on a red image, a green image, and a blue image (S10). The similarity between the red sensitivity, the green sensitivity, and the blue sensitivity may be varied according to a wavelength of a light incident on an image sensor as described above with reference toFIG.6. As described above, the similarity between the red sensitivity and green sensitivity may increase as a difference between the red pixel value and the green pixel value decreases in a condition of the same brightness. In addition, the similarity between the blue sensitivity and green sensitivity may increase as a difference between the blue pixel value and the green pixel value decreases in a condition of the same brightness. As described above, the green-blue pixel value may be a square root of a multiplication of the green pixel value and the blue pixel value, and the green-red pixel value may be a square root of a multiplication of the green pixel value and the red pixel value. Conversion coefficients of the red image, the green image, the blue image, the green-blue image, and the green-red image with respect to a white image may be determined (S20). In some example embodiments, as described above with reference toFIG.5, the conversion coefficients may be determined using a least square method based on a matrix including the red pixel values of the red image, the green pixel values of the green image, the blue pixel values of the blue image, the white pixel values of the real white image (which are obtained using an RGBW image sensor), the green-blue pixel values of the green-blue image, and the green-red pixel values of the green-red image as components of the matrix. A pseudo-white image corresponding to the red image, the green image, the blue image, the green-blue image, and the green-red image may be generated using the conversion coefficients (S30). Training mosaic images may be generated based on the red image, the green image, the blue image, and the pseudo-white image (S40). The generation of the training mosaic images is the same as described with reference toFIG.13. An artificial neural network may be trained based on the training mosaic images (S50). Demosaiced color images corresponding to an input mosaic image may be generated based on the artificial neural network that is trained (S60). The input mosaic image may have an RGBW pattern and the demosaiced color images may include a demosaiced red image, a demosaiced green image, and a demosaiced blue image. Training of the artificial neural network and the generation of the demosaiced color images are the same as described with reference toFIGS.14and15. FIG.19is a block diagram illustrating a system performing a demosaicing method based on deep learning according to example embodiments. Referring toFIG.19, a system1000may include camera module CAM1114, a transceiver TRX1140, a control unit1160, and a user interface1150. The camera module1114may include a camera and/or an image sensor to capture and provide images. In some example embodiments, the camera module1114may include a plurality of cameras to capture a plurality of input images to be merged. In some example embodiments, the camera module1114may provide a plurality of input images to be merged where the plurality of input images are captured by a single camera. 
The transceiver1140may provide connectivity through wired or wireless links to other networks such as the Internet, a cellular network, etc. The user interface1150may include input devices KPD1152, such as a keyboard, a keypad, etc., and a display device DSP1112to display images. In some examples, a virtual keypad or keyboard may be integrated into the display device1112with a touch screen/sensor or the like. The control unit1160may include a general purpose processor PRC1161, a hardware device HW1162, a firmware device FW1163, a memory MEM1164, a digital signal processor DSP1166, a graphics engine GENG1167, and a bus1177. The control unit1160may perform the color decomposition method and the demosaicing method based on deep learning according to example embodiments. Thus, the control unit1160may be configured to perform functions of the color decomposition device and the artificial neural network as described above. Example embodiments may be implemented as hardware, software, firmware, or a combination thereof. In some example embodiments, the color decomposition method and the demosaicing method based on deep learning according to example embodiments may be performed by the digital signal processor1166. For example, the color decomposition device and the artificial neural network as described may be included in the digital signal processor1166. In some example embodiments, at least a portion of the methods according to example embodiments may be performed by program instructions that are executed by a processing device. The program instructions may be stored in the memory1164as software SW1165, and the program instructions may be performed by the general purpose processor1161and/or the digital signal processor1166. In some example embodiments, to execute the program instructions, the general purpose processor1161may retrieve or fetch the program instructions from an internal register, an internal cache or the memory1164and decode and execute the instructions. During or after execution of the program instructions, the general purpose processor1161may write one or more results (which may be intermediate or final results) of the program instructions to the internal register, internal cache, or the memory1164. The system1000may be a computer system taking any suitable physical form. For example, the system1000may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, a tablet computer system, or a combination of two or more of these. The program instructions for implementing methods according to example embodiments may be stored in a computer-readable non-transitory storage medium or media.
The computer-readable non-transitory storage medium may include one or more semiconductor-based or other integrated circuits (ICs) (for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL (SD) cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate. FIG.20is a block diagram illustrating an example embodiment of an interface employable in the system ofFIG.19according to example embodiments. Referring toFIG.20, a computing system2100may be implemented by a data processing device that uses or supports a mobile industry processor interface (MIPI) interface. The computing system2100may include an application processor2110, a three-dimensional image sensor2140, a display device2150, etc. A CSI host2112of the application processor2110may perform a serial communication with a CSI device2141of the three-dimensional image sensor2140via a camera serial interface (CSI). In some example embodiments, the CSI host2112may include a deserializer (DES), and the CSI device2141may include a serializer (SER). A DSI host2111of the application processor2110may perform a serial communication with a DSI device2151of the display device2150via a display serial interface (DSI). In some example embodiments, the DSI host2111may include a serializer (SER), and the DSI device2151may include a deserializer (DES). The computing system2100may further include a radio frequency (RF) chip2160performing a communication with the application processor2110. A physical layer (PHY)2113of the computing system2100and a physical layer (PHY)2161of the RF chip2160may perform data communications based on a MIPI DigRF. The application processor2110may further include a DigRF MASTER2114that controls the data communications of the PHY2161. The computing system2100may further include a global positioning system (GPS)2120, a storage2170, a microphone MIC2180, a DRAM device2185, and a speaker2190. In addition, the computing system2100may perform communications using an ultra-wideband (UWB) network2210, a wireless local area network (WLAN)2220, a worldwide interoperability for microwave access (WIMAX) network2230, etc. However, the structure and the interface of the computing system2100are not limited thereto. As described above, the color decomposition method according to example embodiments may generate the pseudo-white image similar to a real white image using the inter-color images indicating similarity between color sensitivities. In addition, the demosaicing method based on deep learning according to example embodiments may efficiently perform deep learning of the artificial neural network using the color images and the pseudo-white image and generate the demosaiced images of high quality using the trained artificial neural network. As will be appreciated by one skilled in the art, embodiments may be embodied as a system, method, computer program product, or a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
The computer readable program code may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be any suitable tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Example embodiments may be applied to any electronic devices and systems performing image processing. For example, example embodiments may be applied to systems such as a computer, a mobile phone, a smart phone, a personal digital assistant (PDA), a portable multimedia player (PMP), a digital camera, a camcorder, a personal computer (PC), a server computer, a workstation, a laptop computer, a digital TV, a set-top box, a portable game console, a navigation system, a wearable device, an internet of things (IoT) device, an internet of everything (IoE) device, an e-book, a virtual reality (VR) device, an augmented reality (AR) device, etc. As described above, example embodiments may provide a color decomposition method capable of efficiently generating a pseudo-white image from color images. Some example embodiments may provide a demosaicing method capable of efficiently generating demosaiced images using the color decomposition method. The color decomposition method according to example embodiments may generate the pseudo-white image similar to a real white image using the inter-color images indicating similarity between color sensitivities. In addition, the demosaicing method based on deep learning according to example embodiments may efficiently perform deep learning of the artificial neural network using the color images and the pseudo-white image and generate the demosaiced images of high quality using the trained artificial neural network that is trained. Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments unless otherwise specifically indicated. Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present invention as set forth in the following claims.
54,174
11861808
BEST MODES Terms used in the description of the various example embodiments of the disclosure are briefly described and then the various example embodiments of the disclosure will be described in greater detail. The terms used in the example embodiments of the disclosure are general terms which are widely used now and selected considering the functions of the disclosure. However, the terms may vary depending on the intention of a person skilled in the art, a precedent, or the advent of new technology. In addition, in a specified case, the term may be arbitrarily selected. In this case, the meaning of the term will be explained in the corresponding description. Therefore, terms used in the disclosure may be defined based on a meaning of the terms and contents described in the disclosure, not simply based on names of the terms. Various embodiments of the disclosure are described with reference to the accompanying drawings. However, it should be appreciated that the disclosure is not limited to a specific embodiment and all modifications, equivalents and/or alternatives thereof also belong to the scope of the disclosure. Descriptions of well-known functions and structures incorporated herein may be omitted to avoid obscuring the subject matter of the disclosure. In the disclosure, terms including an ordinal number such as ‘first’, ‘second’, etc. may be used to describe various components, but the components are not to be construed as being limited to the terms. The terms are only used to differentiate one component from other components. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be understood that the terms “comprising”, “including”, “having” and variants thereof specify the presence of stated features, numbers, steps, operations, elements, components, and/or groups thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof. In the description, the word “module” or “unit” refers to a software component a hardware component or a combination thereof, which is capable of carrying out at least one function or operation. A plurality of modules or units may be integrated into at least one module and realized using at least one processor except for those modules or units that need to be realized in specific hardware. Hereinafter, embodiments will be described in detail with reference to the accompanying drawings such that they can be easily practiced by those skilled in the art to which the disclosure pertains. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the disclosure. In the accompanying drawings, a portion irrelevant to description of the disclosure will be omitted for clarity. Like reference numerals refer to like elements throughout. Hereinafter, the disclosure will be described in more detail with reference to the drawings. FIG.1is a view schematically illustrating an image processing process of an electronic device according to an embodiment of the disclosure. Referring toFIG.1, when an input image10is input to the electronic device100, the electronic device100may sequentially perform a series of image processing processes and output an enlarged image20. In this case, the input image10being input may be a low-resolution image acquired by processing an original image. 
Here, the electronic device100may be a device capable of performing artificial intelligence learning. For example, the electronic device100may be a desktop PC, a notebook computer, a smartphone, a tablet PC, a server, or the like. Alternatively, the electronic device100may refer to a system in which a cloud computing environment is built. However, the disclosure is not limited thereto, and the electronic device100may be any device capable of performing artificial intelligence learning. Specifically, the electronic device100may include a plurality of layers101extracting features of the input image10and an upscaling module103upscaling the input image10using the extracted features. Here, the plurality of layers101may extract features of the input image10using a plurality of filters trained by a neural network. That is, the plurality of layers101may perform pre-processing before upscaling. Here, the filters are masks having weights, each defined as a matrix of weights. The filters are also referred to as windows or kernels. The weights configuring the matrix in the filters may include a zero element (a value of 0 or a value that may be approximated to 0) and a non-zero element having a certain value between 0 and 1, and may have various patterns according to functions thereof. For example, when the neural network is realized as a convolution neural network (CNN) for recognizing an image, the electronic device100may put the filters having weights on the input image10and determine the sum (convolution operation) of values acquired by multiplying the image by each of the weights of the filters, as a pixel value of the output image, to extract a feature map. The input image may be extracted as a plurality of input images through multiple filters to extract robust features, and a plurality of feature maps may be extracted according to the number of filters. Such a convolution may be repeated by multiple layers. Here, the filters to be trained vary depending on a learning target of the CNN, and patterns of selected filters vary accordingly. In other words, the trained filters and the selected filters vary depending on whether the learning target of the CNN is a cat, a dog, a pig, a cow, or the like. In this manner, the electronic device100may determine what type of features the input original data has by combining the plurality of layers101from which different features may be extracted and applying a combination of the plurality of layers to the CNN. The electronic device100may output an enlarged image by inputting a feature map of the input image10extracted from the plurality of layers101to the upscaling module103. Meanwhile, the upscaling module103may optionally further include convolutional layers102-1and102-2disposed ahead of and behind it. In this case, the upscaling module103may be referred to as including the convolutional layers102-1and102-2. In this case, the convolutional layers102-1and102-2may be a convolutional layer or a combination of a convolutional layer and a ReLu layer. Further, the electronic device100may learn parameters of the plurality of layers101or the convolutional layers102-1and102-2by comparing the output enlarged image20and the original image. Meanwhile, the upscaling module103may increase resolution of an image using a filter in the form of a function that is bilaterally symmetrical and decreases nonlinearly. For example, the upscaling module103may be in the form of a Gaussian function.
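The convolution (weighted-mask) operation described above may be sketched, for illustration only, with the following NumPy helper; the Sobel-like weights are an arbitrary example of a filter and are not taken from the disclosure.

import numpy as np

def feature_map(image, kernel):
    # Slide the weight mask over the image and take the weighted sum at each
    # position (a 'valid' convolution), producing one feature map.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_filter = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]])   # example weights
print(feature_map(np.random.rand(8, 8), edge_filter).shape)    # (6, 6)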
For convenience of description below, the upscaling module in the disclosure is described as a Gaussian function, but is not limited thereto. Details of the upscaling module103will be described with reference to the accompanying drawings. FIG.2is a block diagram illustrating a simplified configuration of an electronic device according to an embodiment of the disclosure. Referring toFIG.2, the electronic device100includes a memory110and a processor120. The memory110may be realized as a memory of various formats such as a hard disk drive (HDD), a solid state drive (SSD), a DRAM memory, an SRAM memory, an FRAM memory, or a flash memory. Specifically, an artificial intelligence model may be stored in the memory110. Here, the artificial intelligence model may be learned. In addition, the artificial intelligence model may include an upscaling module for increasing the resolution of the input image. Specifically, the upscaling module is a module for acquiring a pixel value of an original pixel corresponding to a pixel of the input image in the enlarged image and a pixel value of an interpolated pixel near the original pixel. Here, the upscaling module may acquire the pixel value of the interpolated pixel near the original pixel according to a function in a form which is bilaterally symmetrical and decreases nonlinearly with respect to the original pixel under the control of the processor120. For example, the upscaling module may be in the form of a Gaussian function based on the original pixel. The processor120generally controls the operation of the electronic device100. According to an embodiment, the processor120may be realized as a digital signal processor (DSP), a microprocessor, or a time controller (TCON). However, without being limited thereto, the processor120may include at least one of a central processing unit (CPU), a microcontroller unit (MCU), a micro processing unit (MPU), a controller, an application processor (AP), a communication processor (CP), or an ARM processor or may be defined by the corresponding term. In addition, the processor120may be realized as a system on chip (SoC) or large scale integration (LSI) with a built-in processing algorithm or may be realized in the form of a field programmable gate array (FPGA). The processor120may output the enlarged image of the input image using the upscaling module included in the artificial intelligence model stored in the memory110. Specifically, the processor120may acquire the pixel value of the interpolated pixel near the original pixel according to the function which is bilaterally symmetrical and nonlinearly decreases with respect to the original pixel corresponding to the pixel of the input image using the upscaling module stored in the memory110, and output the enlarged image based on the acquired pixel values. For example, the upscaling module may be in the form of a Gaussian function based on the original pixel. Details of the original pixel and the interpolated pixel will be described with reference toFIG.4below. Specifically, the upscaling module used by the processor120may acquire the pixel value of interpolated pixel near a plurality of original pixels based on each ratio of the plurality of original pixel values. That is, the processor120acquires a pixel value of one interpolated pixel using the upscaling module, and to this end, the processor120may use pixel values of the plurality of original pixels around the interpolated pixel. 
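For illustration only, the idea of weighting nearby original pixels with a function that is bilaterally symmetrical and decreases nonlinearly with distance may be sketched in one dimension as follows; the sigma value here is chosen arbitrarily, whereas the disclosure acquires the variance of the Gaussian function from the upscaling factor as described below, and the actual module operates on two-dimensional images.

import numpy as np

def gaussian_upscale_1d(row, scale, sigma):
    # Each interpolated sample is a normalized, Gaussian-weighted combination
    # of the original pixel values around it.
    n_out = len(row) * scale
    x_out = (np.arange(n_out) + 0.5) / scale - 0.5   # positions in input units
    x_in = np.arange(len(row))
    out = np.empty(n_out)
    for k, x in enumerate(x_out):
        w = np.exp(-((x_in - x) ** 2) / (2.0 * sigma ** 2))  # reflection ratios
        out[k] = np.sum(w * row) / np.sum(w)
    return out

print(gaussian_upscale_1d(np.array([0.0, 10.0, 20.0, 10.0]), scale=2, sigma=0.6))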
Meanwhile, the processor120may acquire the pixel value of the interpolated pixel using the plurality of pixel values respectively corresponding to the plurality of original pixels in the input image. Specifically, the processor120may identify a reflection ratio of the pixel value of the original pixel according to distances between the interpolated pixel and the plurality of original pixels around the interpolated pixel. In this case, the plurality of original pixels may be pixels corresponding to a first pixel of the input image, a second pixel which is at least one of a plurality of pixels adjacent to the first pixel based on the first pixel, and a third pixel which is at least one of a plurality of pixels which are spaced apart from the first pixel but adjacent to the second pixel, in the enlarged image. Specifically, the processor120may identify a ratio reflecting the pixel value of the original pixel on the Gaussian function with respect to the original pixel according to the distance of the interpolated pixel to the original pixel. Here, variance of the Gaussian function may be acquired based on an upscaling factor. Specifically, the variance of the Gaussian function may be acquired based on a slope of a linear function for bilinear interpolation of the upscaling factor. A process of acquiring the variance of the Gaussian function will be described in detail below with reference toFIGS.6and7. Meanwhile, a reflection ratio of a pixel value of an original pixel other than the original pixel adjacent to the interpolated pixel may be identified on the Gaussian function with respect to the other original pixel according to a distance between the interpolated pixel and the other original pixel. The pixel value reflection ratio as described above may also be applied between the original pixels. Specifically, a pixel value of a first original pixel in the enlarged image will affect a pixel value of an adjacent second original pixel, and thus, when the pixel value of the second original pixel is acquired, the processor120may identify the ratio at which the pixel value of the first original pixel is reflected on the Gaussian function based on the first original pixel according to a distance between the first original pixel and the second original pixel and acquire the pixel value of the second original pixel using the identified ratio. The method of acquiring the pixel value of the enlarged image as described above will be described in detail below with reference toFIGS.6and9. FIG.3is a block diagram illustrating a specific configuration of the electronic device disclosed inFIG.2. Referring toFIG.3, the electronic device100may include a memory110, a processor120, a communication unit130, a display140, a button150, a video processor160, an audio processor170, a microphone180, an imaging unit185, and an audio output unit190. Here, the memory110and the processor120are the same as those shown inFIG.1, and redundant descriptions are omitted. The memory110may store various programs and data necessary for the operation of the electronic device100. Specifically, a parameter for processing the input image may be stored in the memory110. Here, the stored parameter may be machine-learned based on a previously input low-quality image and a high-quality image corresponding thereto. In addition, the memory110may store a reduction ratio for use in reducing the input image. 
Here, the stored reduction ratio, which is calculated by a manufacturer through machine learning, may be previously stored at the factory or may be updated through periodic firmware upgrading. Meanwhile, the memory110may store an algorithm for deriving the reduction ratio. In addition, the memory110may store a plurality of low-quality images to be upscaled to high-quality images. The processor120may generate a high-quality image for a low-quality image selected by a user from among the plurality of stored low-quality images. In addition, the memory110may store information on a reduction ratio corresponding to the degree of deterioration of an image. Here, the reduction ratio based on the degree of deterioration may be stored in the form of a lookup table. In addition, the memory110may store programs and data for upscaling a low-quality image. Accordingly, the processor120may generate a high-quality image from the input low-quality image using the programs and data stored in the memory110, and in some cases, the processor120may determine a reduction ratio used in a parameter updating process or an upscaling process. The communication unit130is a component for performing communication with various types of external devices according to various types of communication methods. Specifically, the communication unit130may receive a low-quality image from an external device and transmit a high-quality image generated by the processor120to an external device such as a separate display device. In addition, the communication unit130may also receive an original image which is a high-quality image corresponding to the low-quality image. Specifically, the communication unit130may receive an image from an external device through a wired method such as an antenna, a cable, or a port, or may receive an image through a wireless method such as Wi-Fi or Bluetooth. Meanwhile, in actual realization, the electronic device100may receive an image selected by the user from among a plurality of images stored in a storage unit (not shown) provided in the electronic device100and process the image. When the electronic device100is capable of performing wireless communication, the communication unit130may include a Wi-Fi chip, a Bluetooth chip, a wireless communication chip, and an NFC chip. Specifically, the Wi-Fi chip and the Bluetooth chip perform communication in a Wi-Fi method and a Bluetooth method, respectively. In case of using the Wi-Fi chip or the Bluetooth chip, various connection information such as an SSID and a session key may first be transmitted and received, a connection may be established using the connection information, and various types of information may then be transmitted and received. The wireless communication chip refers to a chip that performs communication according to various communication standards such as IEEE, Zigbee, 3rd generation (3G), 3rd generation partnership project (3GPP), and long term evolution (LTE). The NFC chip refers to a chip that operates in a near field communication (NFC) method using a 13.56 MHz band among various RF-ID frequency bands such as 135 kHz, 13.56 MHz, 433 MHz, 860 to 960 MHz, and 2.45 GHz. The display140may display an image acquired by processing the input image using an adjusted parameter. Here, the processed image displayed by the display140may be an image generated by improving image quality of the input image with the adjusted parameter. 
The display140may be realized as various types of displays such as a liquid crystal display (LCD), an organic light emitting diodes (OLED) display, and a plasma display panel (PDP). The display140may include a driving circuit, a backlight unit, and the like, which may be realized in the form of an a-si TFT, a low temperature polysilicon (LTPS) TFT, an organic TFT (OTFT), or the like. Also, the display140may be realized as a flexible display. In addition, the display140may include a touch sensor for detecting a user's touch gesture. The touch sensor may be realized as various types of sensors such as capacitive, resistive, and piezoelectric sensors. The capacitive type is a method of calculating touch coordinates by sensing micro-electricity excited to the user's body when a part of the user's body touches a surface of the display140using a dielectric coated on the surface of the display. The resistive type, which includes two electrode plates embedded in the display140, is a method of calculating touch coordinates by sensing a current flowing as upper and lower plates at a touched point are in contact with each other when the user touches a screen. In addition, if the electronic device100supports a pen input function, the display140may detect a user's gesture using an input unit such as a pen in addition to the user's finger. If the input unit is a stylus pen including a coil therein, the electronic device100may include a magnetic field detection sensor capable of detecting a magnetic field changed by a coil inside the stylus pen. Accordingly, the display140may detect even a proximity gesture, i.e., hovering, as well as a touch gesture. Meanwhile, it has been described that the display function and the gesture detection function are performed in the same component, but the display function and the gesture detection function may be performed in different components. In addition, according to various embodiments, the display140may not be provided in the electronic device100. The processor120may include a RAM121, a ROM122, a CPU123, a graphic processing unit (GPU)124, and a bus125. The RAM121, the ROM122, the CPU123, the GPU124, and the like may be connected to each other through the bus125. The CPU123accesses the memory110and performs booting using an operating system (O/S) stored in the memory110. In addition, the CPU123performs various operations using various programs, contents, data, and the like stored in the memory110. The ROM122stores an instruction set for system booting. When a turn-on command is input and power is supplied, the CPU123copies the O/S stored in the storage unit140to the RAM121according to a command stored in the ROM122and executes the O/S to boot the system. When booting is completed, the CPU123copies various programs stored in the storage unit140to the RAM121and executes the programs copied to the RAM121to perform various operations. When the booting of the electronic device100is completed, the GPU124displays a user interface (UI) on the display140. Specifically, the GPU124may generate a screen including various objects such as an icon, an image, text, and the like using a calculation unit (not shown) and a rendering unit (not shown). The calculation unit calculates attribute values such as coordinate values where each object is to be displayed, shapes, sizes, colors, and the like of each object according to a layout of the screen. 
The rendering unit generates screens of various layouts including objects based on the attribute values calculated by the calculation unit. The screen (or a UI window) generated by the rendering unit is provided to the display140and displayed in each of a main display area and a sub-display area. The button150may be various types of buttons such as a mechanical button, a touch pad, a wheel, and the like formed in a certain area such as a front portion, a side portion, or a rear portion of the exterior of a main body of the electronic device100. The video processor160is a component for processing content received through the communication unit130or video data included in the content stored in the memory110. The video processor160may perform various image processing such as decoding, scaling, noise filtering, frame rate conversion, resolution conversion, and the like on video data. The audio processor170is a component for processing the content received through the communication unit130or audio data included in the content stored in the memory110. The audio processor170may perform various processing such as decoding, amplification, noise filtering, or the like on audio data. When a playback application for multimedia content is executed, the processor120may drive the video processor160and the audio processor170to play the corresponding content. Here, the display140may display an image frame generated by the video processor160on at least one of the main display area or the sub-display area. The audio output unit190outputs audio data generated by the audio processor170. The microphone180is a component for receiving a user's voice or other sound and converting the received user's voice or the sound into audio data. The processor120may use the user's voice input through the microphone180during a call process or convert the user's voice into audio data and store the converted audio data in the memory110. Meanwhile, the microphone180may be configured as a stereo microphone that receives sound input at a plurality of locations. The imaging unit185is a component for capturing a still image or a video according to the user's control. The imaging unit185may be provided in plurality such as a front camera and a rear camera. As described above, the imaging unit185may be used as a unit for acquiring an image of the user in an embodiment for tracking a user's gaze. When the imaging unit185and the microphone180are provided, the processor120may perform a control operation according to the user's voice input through the microphone180or the user's motion recognized by the imaging unit185. That is, the electronic device100may operate in a motion control mode or a voice control mode. When operating in the motion control mode, the processor120activates the imaging unit185to capture an image of the user, tracks a change in the user's motion, and performs a corresponding control operation. When operating in the voice control mode, the processor120may operate in a voice recognition mode to analyze the user's voice input through the microphone180and perform a control operation according to the analyzed user's voice. In the electronic device100supporting the motion control mode or the voice control mode, a voice recognition technology or a motion recognition technology may be used in various embodiments described above. 
For example, when the user takes a motion as if selecting an object displayed on a home screen or utters a voice command corresponding to the object, it is determined that the object is selected and a control operation matched to the object may be performed. In addition, although not shown inFIG.3, according to an embodiment, the electronic device100may further include a USB port to which a USB connector may be connected, external input port to be connected to various external terminals such as a headset, a mouse, a LAN, and the like, a digital multimedia broadcasting (DMB) chip that receives and processes a DMB signal, various sensors, and the like. FIG.4is a view illustrating an image processing method of increasing resolution of an image. Specifically, inFIG.4, as an example, it is assumed that an input image410of 3 by 3 is input to an upscaling module having a scaling factor of 3 and an enlarged image420of 9 by 9 is acquired. Referring toFIG.4, a pixel of (1,1) of the input image410is referred to as a first pixel411-1and a pixel of (1,2) is referred to as a second pixel411-2. Here, if a pixel corresponding to a pixel of the input image410among a plurality of pixels of the enlarged image420is referred to as an original pixel, a pixel of (2,2) in the enlarged image420may be a first original pixel421-1to which the first pixel411-1corresponds. Also, a second original pixel421-2of the enlarged image420corresponding to the second pixel411-2adjacent to the first pixel411-1of the input image410may be a pixel of (2,5). Meanwhile, in addition to the original pixel corresponding to the pixels of the input image410among the plurality of pixels included in the enlarged image420, a pixel near the original pixel may be referred to as an interpolated pixel. Specifically, among the pixels between the first original pixel421-1and the second original pixel421-2corresponding to the pixels of the input image410in the enlarged image420, an interpolated pixel adjacent to the first original pixel421-1may be referred to as a first interpolated pixel422-1and an interpolated pixel adjacent to the first interpolated pixel422-1may be referred to as a second interpolated pixel422-2, and here, the second interpolated pixel422-2may be adjacent to the second original pixel421-2. InFIG.4, a center pixel among the plurality of pixels included in the region in which one pixel of the input image410is upscaled is illustrated as the original pixel, but another pixel other than the center pixel in the upscaled region may also be set as the original pixel. Meanwhile, the electronic device100may acquire pixel values of the original pixel and interpolated pixels of the enlarged image420by using the pixel values of the pixels of the input image410. Specifically, the electronic device100may acquire pixel values of the original pixel, the interpolated pixel near the original pixel, and another original pixel according to a Gaussian function based on the original pixel in the enlarged image420. Specifically, the electronic device100may acquire a pixel value of each pixel by identifying a reflection ratio of the pixel value of the original pixel according to a distance to the original pixel on a Gaussian function. A specific method of acquiring pixel values of a plurality of pixels configuring the enlarged image420will be described in detail below with reference toFIGS.6and9. FIG.5is a view illustrating an interpolation method in an image processing method of the related art. 
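Before turning to FIG.5, the correspondence of FIG.4 between input pixels and original pixels in the enlarged image can be written down directly. The sketch below assumes the center-pixel convention and the 1-based coordinates used above; the function name is chosen for illustration.

```python
def original_pixel_position(i: int, j: int, scale: int = 3):
    """Map an input-image pixel (i, j) to its original pixel in the enlarged
    image, assuming the center-pixel convention of FIG.4 (1-based coordinates)."""
    offset = (scale + 1) // 2            # 2 when the scaling factor is 3
    return ((i - 1) * scale + offset, (j - 1) * scale + offset)

print(original_pixel_position(1, 1))     # (2, 2): the first original pixel 421-1
print(original_pixel_position(1, 2))     # (2, 5): the second original pixel 421-2
```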
Specifically,FIG.5Aillustrates a filter of a nearest neighbor method andFIG.5Billustrates a filter of a bilinear interpolation method. In both methods, it is assumed that a scaling factor is 3. Referring toFIG.5A, the nearest neighbor method is a method of acquiring a pixel value of an interpolated pixel adjacent to an original pixel to be equal to a pixel value of the original pixel. Specifically, according to the nearest neighbor method, a pixel value of the first interpolated pixel422-1adjacent to the first original pixel421-1may be acquired as a pixel value of the first original pixel421-1. Also, a pixel value of the second interpolated pixel422-2adjacent to the second original pixel421-2may be acquired as a pixel value of the second original pixel421-2. In other words, according to the nearest neighbor method, the pixel value of the second original pixel421-2which is not adjacent is not considered in acquiring the pixel value of the first interpolated pixel422-1. In addition, there is a problem in that mosaic-shaped checkerboard artifacts may be formed in the enlarged image because a boundary appears between the first interpolated pixel422-1and the second interpolated pixel422-2due to a difference between their pixel values. Also, referring toFIG.5B, the bilinear interpolation method is a method of acquiring a pixel value of an interpolated pixel using values of a plurality of original pixels around the interpolated pixel and determining a reflection ratio of the pixel values of the plurality of original pixels according to a linear function. Here, a y-intercept of the linear function may be 1 (which means that all pixel values of the original pixels are reflected) and an absolute value of a slope may be a reciprocal of the scaling factor. Accordingly, a slope of the linear function of the right area may be −⅓ and a slope of the linear function of the left area may be ⅓ based on the first original pixel. Because the slope of the linear function is identified by the scaling factor, the slope of the linear function may also vary if the scaling factor is different. Specifically, according to the bilinear interpolation method, the pixel value of the first interpolated pixel422-1adjacent to the first original pixel421-1may be acquired using the pixel value of the first original pixel421-1and the pixel value of the second original pixel421-2. Specifically, the pixel value of the first interpolated pixel422-1may be acquired based on reflection ratios of the pixel value of the first original pixel421-1and the pixel value of the second original pixel421-2, which are identified according to the distances from the first interpolated pixel422-1to the first original pixel421-1and to the second original pixel421-2. For example, referring toFIG.5B, a distance between the first interpolated pixel422-1and the first original pixel421-1is one pixel. Accordingly, the electronic device may acquire the pixel value of the first interpolated pixel422-1by reflecting ⅔ of the pixel value of the first original pixel421-1according to a linear function based on the first original pixel421-1shown inFIG.5B. Although not shown, the electronic device may acquire a ratio in which the pixel value of the second original pixel421-2is reflected on the pixel value of the first interpolated pixel422-1in the same manner. Specifically, a distance between the first interpolated pixel422-1and the second original pixel421-2is two pixels. 
Accordingly, the electronic device may acquire the pixel value of the first interpolated pixel422-1by reflecting ⅓ of the pixel value of the second original pixel421-2according to a linear function based on the second original pixel421-2. In conclusion, the electronic device may acquire the pixel value of the first interpolated pixel422-1using ⅔ of the pixel value of the first original pixel421-1and ⅓ of the pixel value of the second original pixel421-2. The electronic device may acquire the pixel value of the second interpolated pixel422-2using ⅓ of the pixel value of the first original pixel421-1and ⅔ of the pixel value of the second original pixel421-2in the same manner. In other words, according to the bilinear interpolation method, in acquiring the pixel value of the first interpolated pixel422-1, the pixel values of the two closest original pixels421-1and421-2are considered, but the pixel value of an original pixel which is farther away is not considered. In addition, in case of the bilinear interpolation method, there is a problem in that ringing artifacts occur in a region with a high frequency, and an edge region in which a pixel value changes rapidly is not clear. FIG.6is a view illustrating an interpolation method in an image processing method according to an embodiment of the disclosure. Specifically,FIG.6shows a Gaussian function based on the first original pixel421-1in the enlarged image among a plurality of Gaussian functions included in the upscaling module. Here, it is assumed that the upscaling factor is 3. Referring toFIG.6, a Gaussian function620based on the first original pixel421-1defines a ratio in which the pixel value of the first original pixel421-1is reflected according to a distance to the first original pixel421-1. In other words, the x axis of the Gaussian function620refers to a distance between one pixel in the enlarged image and the first original pixel421-1, which is a reference pixel, and the y-axis represents a ratio in which the pixel value of the first original pixel421-1is reflected according to distances. Here, because the Gaussian function620is two-dimensional, the pixels expressed in the Gaussian function620may be in the same row or the same column. Here, the Gaussian function620is bilaterally symmetrical with respect to the first original pixel421-1and may have a shape that decreases nonlinearly away from the first original pixel421-1. Meanwhile, variance of the Gaussian function620according to the disclosure may be acquired based on the upscaling factor. Specifically, the variance of the Gaussian function620according to the disclosure may be acquired based on a linear function610for bilinear interpolation of the same upscaling factor. Specifically, the variance of the Gaussian function620according to the disclosure may be acquired to form a point of contact with the linear function610for bilinear interpolation. Here, an absolute value of a slope of the linear function610may be a reciprocal (⅓ or −⅓) of the upscaling factor. Accordingly, the variance of the Gaussian function620according to the disclosure may be acquired based on Equation 1 as follows. σd(s) = sqrt( −d^2 / ( 2 ln( −4d / (t(s) − 1) + 1 ) ) )   [Equation 1] Here, σd(s) is the variance of the Gaussian function620, s is the upscaling factor, d is the x-coordinate of the point of contact of the linear function610and the Gaussian function620, and t(s) may be a value acquired by adding 1 to the distance between x intercepts of the Gaussian function. 
Here, t(s) may refer to a size of the Gaussian function620and may be acquired based on Equation 2 below. t(s) = s*4 + 1  [Equation 2] Here, s denotes the upscaling factor, and Equation 2 may be based on a user's setting. In other words, the size of the Gaussian function620is not limited to Equation 2 described above and may be adjusted according to a user's setting. As described above, according to the disclosure, by identifying the pixel value reflection ratio of the reference pixel according to the Gaussian function, the pixel value of the reference pixel may be reflected even when acquiring the pixel value of a pixel at a distance greater than that of the related art. That is, compared with the related art in which an enlarged image is generated by reflecting only pixel values of adjacent pixels, pixel values of pixels in a wider range including separated pixels are reflected, thereby generating a more improved enlarged image. FIG.7is a view illustrating a range of variance of a Gaussian function. Specifically,FIG.7is a view illustrating a variable range of the variance of the Gaussian function620acquired based on the linear function610for bilinear interpolation as shown inFIG.6. Referring toFIG.7, the Gaussian function620may be acquired to have a point of contact with the linear function610for bilinear interpolation. Hereinafter, for convenience of description, the Gaussian function having a point of contact with the linear function610will be referred to as a first Gaussian function620. Here, the variance σd of the first Gaussian function620may be acquired based on Equation 1 described above. Meanwhile, the electronic device may change the variance of the Gaussian function based on the variance σd of the first Gaussian function620. Specifically, the electronic device may set a range of variance so that a full width at half maximum (FWHM) of the Gaussian function does not deviate significantly compared to a FWHM of the linear function610for bilinear interpolation. Here, the FWHM, a term representing a width of a function, may refer to a difference between the two values of the variable at which the function value becomes half of the maximum value of the function. Specifically, the Gaussian function may be acquired by Equation 3 below. f(x; s) = exp( −x^2 / ( 2σ(s)^2 ) )  [Equation 3] Here, the range of the variance may be σd(s) − s*0.1 ≤ σ(s) ≤ σd(s) + s*0.1. Based on the range of the variance described above, the Gaussian function having a minimum variance value may be in the form of a second Gaussian function630, and the Gaussian function having a maximum variance value may be in the form of a third Gaussian function640. Meanwhile, as shown inFIG.9below, the range of the variance described above may be a range set so that artifacts do not occur in a high frequency region of a frequency domain. FIGS.8and9are views illustrating a difference between the related art and the disclosure. Specifically,FIG.8illustrates a difference between the related art and the disclosure in an image domain, andFIG.9illustrates a difference between the related art and the disclosure in a frequency domain. Here, it is assumed that the scaling factor is 3 in both the related art and the disclosure. In addition, for convenience of explanation, only the right area of the reference pixel in which x is 0 will be described. Because the filters are bilaterally symmetrical, a description of the left area of the reference pixel is the same as the description of the right area of the reference pixel. Before describingFIG.8in detail, a numerical sketch of Equations 1 to 3 is provided below. 
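This sketch is illustrative only: the value of d and the function names are assumptions made here rather than values given in the disclosure. It computes the variance of Equation 1 for an upscaling factor of 3, evaluates the Gaussian weights of Equation 3, and compares them with the bilinear ramp whose slope magnitude is 1/s.

```python
import math

def t(s: int) -> int:
    """Filter size per Equation 2: t(s) = 4*s + 1 (13 when s = 3)."""
    return 4 * s + 1

def sigma_d(s: int, d: float) -> float:
    """Equation 1: the variance chosen so that the Gaussian passes through the
    bilinear ramp at x = d (the point of contact described in the text)."""
    return math.sqrt(-d * d / (2.0 * math.log(-4.0 * d / (t(s) - 1) + 1.0)))

def gaussian_weight(x: float, sigma: float) -> float:
    """Equation 3: reflection ratio of the reference pixel at distance x."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

def bilinear_weight(x: float, s: int) -> float:
    """Linear ramp used by bilinear interpolation: 1 - |x|/s, clipped at 0."""
    return max(0.0, 1.0 - abs(x) / s)

s = 3
d = 1.0                    # assumed x-coordinate of the point of contact
sigma = sigma_d(s, d)      # about 1.11 for s = 3, d = 1

# At x = d the two weights agree (both 2/3 here). The filter size per
# Equation 2 is t(s) = 13 for s = 3, versus 2*s + 1 = 7 for bilinear and 3 for
# nearest neighbor; the Gaussian tail stays nonzero out to x = 6.
for x in range(0, 7):
    print(x, round(gaussian_weight(x, sigma), 3), round(bilinear_weight(x, s), 3))
```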
In the image domain ofFIG.8, a filter810of a nearest neighbor method, a filter820of a bilinear interpolation method, and a filter830in the form of a Gaussian function of the disclosure are illustrated. In the nearest neighbor method, a value of an interpolated pixel is acquired as a pixel value of an adjacent original pixel. Referring toFIG.8, the filter810of the nearest neighbor method may be a filter based on an original pixel whose x is 0 and may acquire a pixel value of an interpolated pixel whose x is 1 as a pixel value of an original pixel whose x is 0 which is a pixel adjacent to the pixel whose x is 1. In other words, the filter810of the nearest neighbor method uses the pixel value of the reference pixel whose x is 0 only to obtain the pixel value of the adjacent pixel whose x is 1. Based on this, a size of the filter810of the nearest neighbor method may be 3 (including a reference pixel whose x is 0, an interpolated pixel whose x is 1, and an interpolated pixel whose x is −1). Meanwhile, the filter of the bilinear interpolation method acquires a reflection ratio of a pixel value of an original pixel in accordance with a linear function based on a distance between an interpolated pixel and the original pixel and obtain a pixel value of the interpolated pixel based on the acquired reflection ratio. Referring toFIG.8, the filter820of the bilinear interpolation method is a filter based on the original pixel whose x is 0, and the ratio of reflecting the pixel value of the original pixel linearly decreases according to a distance between the interpolated pixel and the original pixel whose x is 0. Accordingly, the pixel value of the interpolated pixel whose x is 1 may be acquired using ⅔ of the pixel value of the original pixel whose x is 0, and the pixel value of the interpolated pixel whose x is 2 may be acquired using ⅓ of the pixel value of the original pixel whose x is 0. Meanwhile, this is based on measurement of a distance based on a middle point of the pixel, and precisely, because the distance varies within the pixel, an average of reflection ratios that vary within one pixel may be used. In case of the pixel whose x is 1 and the pixel whose x is 2, the ratio based on the distance linearly decreases, and thus there is no problem even if the reflection ratio corresponding to the middle point of the pixel is used. However, in case of a pixel whose x is 3, the reflection ratio linearly decreases from a starting point to a middle point but is 0 from the middle point, and thus an average of reflection ratios in accordance with the distance from the starting point to the middle point of the pixel whose x is 3 with respect to the pixel whose x is 0 may be used as a reflection ratio of the pixel whose x is 3. That is, a size of the filter820of the bilinear interpolation method may be 7 (including a reference pixel whose x is 0, interpolated pixels whose x is 1, 2, and 3, and interpolated pixels whose x is −1, −2, and −3). Meanwhile, the filter830in the form of the Gaussian function of the disclosure may be acquired based on a scaling factor. Specifically, the Gaussian function type filter830may have a variance acquired based on the bilinear interpolation type filter820having the same scaling factor. Here, the Gaussian function type filter830may use the pixel value of the reference pixel whose x is 0 up to a pixel whose x is 6 due to the shape characteristic of the filter. 
Accordingly, the size of the Gaussian function type filter830may be 13 (including the reference pixel whose x is 0, the interpolated pixels whose x is 1, 2, 3, 4, 5, and 6, and the interpolated pixels whose x is −1, −2, −3, −4, −5, and −6). As described above, when the Gaussian function type filter of the disclosure is used, the pixel values of the interpolated pixels may be acquired using the pixel values of the original pixels in a wider range. In addition, the reflection ratio of the pixel value of the reference pixel gradually decreases according to distances to the neighboring pixels, and compared with the bilinear interpolation type filter, the pixel value of the pixel which is closer is used more frequently and the pixel value of a pixel which is away is used less frequently. Therefore, according to the disclosure, a more improved high-quality image may be generated. Meanwhile,FIG.9shows a result of analyzing an enlarged image acquired using the nearest neighbor type filter910, the bilinear interpolation type filter920, and the Gaussian function type filter930of the disclosure in the frequency domain. Referring toFIG.9, it can be seen that boosting occurs in a high frequency region when the nearest neighbor type filter910and the bilinear interpolation type filter920are used. As a result, artifacts occur in the enlarged image. In contrast, in case of using the Gaussian function type filter930, boosting does not occur even in the high frequency region, and thus it can be seen that artifacts in the enlarged image are reduced. FIG.10is a view illustrating an interpolation method using a plurality of pixels of an input image according to an embodiment of the disclosure. Referring toFIG.10, pixel values of a plurality of pixels in an enlarged image may be acquired using pixel values of a plurality of original pixels421-1,421-2, and421-3. Specifically, reflection ratios of the pixel values of the original pixels may be identified according to distances to the respective original pixels on a first Gaussian function1010-1based on the first original pixel421-1, a second Gaussian function1010-2based on the second original pixel421-2, and a third Gaussian function1010-3based on the third original pixel421-3. That is, the original pixels or interpolated pixels configuring the enlarged image may be acquired by overlapping the reflection ratio of the pixel value of the first original pixel421-1identified according to the distance to the first original pixel421-1on the first Gaussian function1010-1, the reflection ratio of the pixel value of the second original pixel421-2identified according to the distance to the second original pixel421-2on the second Gaussian function1010-2, and the reflection ratio of the pixel value of the third original pixel421-3identified according to the distance to the third original pixel421-3on the third Gaussian function1010-3. Meanwhile, inFIG.10, for convenience of explanation, Gaussian functions corresponding to original pixels other than the first to third original pixels421-1to421-3are not shown, but in actual realization, pixel values of the original pixels and the interpolated pixels of the enlarged image may be acquired based on Gaussian functions respectively corresponding to all the original pixels of an input image. FIG.11is a view illustrating the interpolation method ofFIG.6in a 3D domain. 
Specifically,FIG.6shows that the pixel value of the reference original pixel421-1is used only for the pixel values of the left and right interpolated pixels and the original pixel, but in actuality, as shown inFIG.11, the pixel value of the reference original pixel421-1may also be used for pixel values of upper and lower interpolated pixels and the original pixel and pixel values of diagonal interpolated pixels and the original pixel. Referring toFIG.11, a Gaussian function1110-1may be in the form of a 3D Gaussian function based on the reference original pixel421-1. Pixel values of interpolated pixels around the reference original pixel421-1and other original pixels may be acquired by reflecting the pixel value of the reference original pixel421-1by a ratio identified according to the distance to the reference original pixel421-1. Meanwhile, inFIG.11, for convenience of explanation, the pixels of the enlarged image are shown as 7 by 7, but a range of pixels in which the pixel value of the reference original pixel421-1is used may be 13 by 13 with respect to the reference original pixel421-1as shown inFIG.6. FIG.12is a view illustrating the interpolation method ofFIG.10in a 3D domain. Specifically, inFIG.10, it is illustrated that pixel values of a plurality of original pixels included in the same row or column are used to acquire a pixel value of an original pixel or an interpolated pixel of an enlarged image, but in actuality, pixel values of a plurality of original pixels included in different rows or columns may also be used. Here, the Gaussian function based on each original pixel may have a 3D form. Specifically, the original pixels or interpolated pixels configuring the enlarged image may be acquired by overlapping a reflection ratio of a pixel value of a first original pixel421-1identified according to a distance to the first original pixel421-1on the first Gaussian function1110-1, a reflection ratio of a pixel value of a second original pixel421-2identified according to a distance to the second original pixel421-2on the second Gaussian function1110-2, a reflection ratio of a pixel value of a fourth original pixel421-4identified according to a distance to the fourth original pixel421-4on the fourth Gaussian function1110-4, and a reflection ratio of a pixel value of a fifth original pixel421-5identified according to a distance to the fifth original pixel421-5on the fifth Gaussian function1110-5. FIG.13is a flowchart schematically illustrating an image processing method according to an embodiment of the disclosure. First, an electronic device may receive an image (S1310). Specifically, the electronic device may receive an image from an external device or may receive an image stored in a memory of the electronic device. Next, the electronic device may input the input image to a learned artificial intelligence model and output an enlarged image having increased resolution (S1320). Specifically, the artificial intelligence model includes an upscaling module, and the upscaling module may acquire a pixel value of an interpolated pixel near an original pixel according to a Gaussian function based on the original pixel corresponding to a pixel of the input image. Here, the upscaling module may identify a reflection ratio of the pixel value of the original pixel according to a distance between the original pixel and the interpolated pixel on the Gaussian function based on the original pixel. 
In addition, the upscaling module may acquire the pixel value of the interpolated pixel using the identified reflection ratio. FIG.14is a view comparing enlarged images acquired according to the related art and an image processing method according to an embodiment of the disclosure. FIG.14shows results of analyzing frequencies of the images in the x direction from the boundary of the enlarged image to the black line, which is the center of the images, as shown inFIG.14A. Specifically,FIGS.14A to14Cshow the enlarged image acquired according to the related art and the analysis results andFIG.14dshows the enlarged image acquired according to the disclosure and the analysis results. Here,FIG.14Ashows an enlarged image acquired by a deconvolution method,FIG.14Bshows an enlarged image acquired by a nearest neighbor method, andFIG.14Cshows an enlarged image acquired by a bilinear interpolation method. Referring toFIG.14A, it can be seen that the frequency of the enlarged image forms a wave shape at a certain period and mosaic-shaped artifacts occurred. Referring toFIGS.14B and14C, it can be seen that the frequency fluctuates in an edge region between a gray surface and a black line, and thus ringing artifacts occurred in the edge region. Meanwhile, referring toFIG.14D, it can be seen that the frequency is even in the gray area and there is no fluctuation of the frequency in the edge area between the gray surface and the black line, and thus the edge in the enlarged image is clear. In other words, it can be seen that an improved high-quality image is acquired, compared to the related art. According to the various embodiments described above, when the Gaussian function type filter is used, pixel values of interpolated pixels may be acquired using a pixel value of an original pixel of a wider range. In addition, the reflection ratio of the pixel value of the reference pixel gradually decreases according to distances to the neighboring pixels, and compared with the bilinear interpolation type filter, the pixel values of the pixels which are closer are used more frequently and the pixel values of the pixels which are away are used less frequently. Therefore, according to the disclosure, an improved high-quality image may be generated. Meanwhile, various embodiments described above may be realized in a computer or similar device-readable recording medium using software, hardware, or a combination thereof. In case of implementation by hardware, embodiments described in the disclosure may be realized using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or electronic units performing other functions. In some cases, embodiments described in the disclosure may be realized by the processor120itself. In case of software implementation, embodiments such as procedures and functions described in the disclosure may be realized as separate software modules. Each of the software modules may perform one or more functions and operations described in the disclosure. Meanwhile, the image processing method according to various embodiments described above may be stored in a non-transitory readable medium. Such a non-transitory readable medium may be installed and used in a variety of devices. 
Such a non-transitory readable medium is not a medium for storing data for a short time such as a register, cache or memory, but refers to a medium that semi-permanently stores data and may be read by a device. Specifically, programs for performing the various methods described above may be stored in and provided through a non-transitory readable medium such as a CD, DVD, hard disk, Blu-ray disc, USB, memory card, ROM, and the like. According to embodiments, the methods according to various embodiments disclosed in this document may be included in a computer program product and provided. The computer program product may be traded as goods between a seller and a purchaser. The computer program product may be distributed as a device-readable storage medium (e.g., compact disk read only memory (CD-ROM)) or online through an application store (e.g., Play Store™). In case of online distribution, at least part of the computer program product may be temporarily stored or temporarily created in a storage medium such as a server of a manufacturer, a server of an application store, or a memory of a relay server. Hereinabove, embodiments of the disclosure have been described, but the disclosure is not limited to the specific embodiments and may be variously modified by a person skilled in the art to which the disclosure pertains without departing from the scope of the disclosure as claimed in the appended claims, and such modifications should not be understood individually from the technical concepts or prospects of the disclosure.
53,439
11861809
DETAILED DESCRIPTION One or more specific embodiments of the disclosure are illustrated in the drawings and are described in detail in the detailed description. However, it is to be understood that the disclosure is not limited to the one or more specific embodiments, but includes all modifications, equivalents, and substitutions without departing from the scope and spirit of the present disclosure. Also, well-known functions or constructions are not described in detail since they would obscure the disclosure with unnecessary detail. General terms that are currently widely used were selected as terms used in embodiments of the disclosure in consideration of functions in the disclosure, but may be changed depending on the intention of those skilled in the art or a judicial precedent, the emergence of a new technique, and the like. In addition, in a specific case, terms arbitrarily chosen by an applicant may exist. In this case, the meaning of such terms will be mentioned in detail in a corresponding description portion of the disclosure. Therefore, the terms used in embodiments of the disclosure should be defined on the basis of the meaning of the terms and the contents throughout the disclosure rather than simple names of the terms. In this specification, the expressions “have,” “may have,” “include,” or “may include” or the like represent presence of a corresponding feature (for example: components such as numbers, functions, operations, or parts) and does not exclude the presence of additional feature. The expression “At least one of A or/and B” should be understood to represent “A” or “B” or any one of “A and B”. As used herein, the terms “first,” “second,” or the like may denote various components, regardless of order and/or importance, and may be used to distinguish one component from another, and does not limit the components. In addition, the description in the disclosure that one element (e.g., a first element) is “(operatively or communicatively) coupled with/to” or “connected to” another element (e.g., a second element) should be interpreted to include both the case that the one element is directly coupled to the another element, and the case that the one element is coupled to the another element through still another element (e.g., a third element). A singular expression includes a plural expression, unless otherwise specified. It is to be understood that the terms such as “comprise” or “consist of” are used herein to designate a presence of a characteristic, number, step, operation, element, component, or a combination thereof, and not to preclude a presence or a possibility of adding one or more of other characteristics, numbers, steps, operations, elements, components or a combination thereof. The term such as “module,” “unit,” “part”, and so on is used to refer to an element that performs at least one function or operation, and such element may be implemented as hardware or software, or a combination of hardware and software. Further, except for when each of a plurality of “modules”, “units”, “parts”, and the like needs to be realized in an individual hardware, the components may be integrated in at least one module or chip and be realized in at least one processor (not shown). In this disclosure, a term user may refer to a person using an electronic apparatus or an apparatus (for example: AI electronic apparatus) which uses an electronic apparatus. Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings. 
FIG.1is a view illustrating an example of an electronic apparatus100according to an embodiment. The electronic apparatus100, as illustrated inFIG.1, may be implemented as a device having a display such as a TV, a monitor, a smart phone, a tablet PC, a notebook PC, a head mounted display (HMD), a near-eye display (NED), a large format display (LFD), a digital signage, a digital information display (DID), a video wall, a projector display, or the like, but is not limited thereto. The electronic apparatus100may be an apparatus which provides an image to a device including an external display, such as a server, a Blu-ray disc (BD) player, a disc player, a streaming box, or the like. The disclosure is not limited thereto, and according to another embodiment, the electronic apparatus100may be any device capable of image-processing an image. The electronic apparatus100may receive various types of images. For example, the electronic apparatus100may receive at least one image among standard definition (SD), high definition (HD), full HD, and ultra HD images. Alternatively, the electronic apparatus100may receive an image in a compressed form such as a moving picture experts group (MPEG) (for example, MP2, MP4, MP7, etc.), advanced video coding (AVC), H.264, a high efficiency video codec (HEVC), or the like. According to an embodiment, the electronic apparatus100, as illustrated inFIG.1, may receive an image10-1which has the same resolution as the display of the electronic apparatus100and is not compressed. In this case, the electronic apparatus100may display the image10-1without performing an image processing operation on the received image10-1. However, an image of the same quality as the image10-1may not always be received by the apparatus100. According to an embodiment, the electronic apparatus100may receive an image10-2having the same resolution as the display of the electronic apparatus100but having the image quality degraded due to compression. In this case, the electronic apparatus100needs to improve the quality of the degraded image10-2. According to an embodiment, the electronic apparatus100may receive an image10-3having a resolution lower than the resolution of the display of the electronic apparatus100. In this case, an upscaling operation on the image10-3may have to be performed, but the quality may be lowered in the process. Accordingly, the electronic apparatus100needs to improve the quality of the image10-3before upscaling and then upscale the image with improved quality, or improve the quality of the upscaled image. According to another embodiment, the electronic apparatus100may receive various types of images, and needs to perform image improvement in consideration of a characteristic of each image. Hereinafter, a method of image quality improvement of the electronic apparatus100and various embodiments will be described. FIG.2is a block diagram illustrating a configuration of the electronic apparatus100according to an embodiment. As illustrated inFIG.2, the electronic apparatus100includes a memory110and a processor120. The memory110is electrically connected to the processor120and may store data necessary for various embodiments. In this case, the memory110may be implemented as an internal memory such as read only memory (for example, electrically erasable programmable read-only memory (EEPROM)), random-access memory (RAM), or the like, included in the processor120, or a memory separate from the processor120. 
In this case, the memory110may be implemented as a memory embedded with the electronic apparatus100, or may be implemented as a detachable memory in the electronic apparatus100, according to the data usage purpose. For example, data for driving the electronic apparatus100may be stored in a memory embedded in the electronic apparatus100, and data for an expanded function of the electronic apparatus100may be stored in the memory detachable to the electronic apparatus100. A memory embedded in the electronic apparatus100may be a volatile memory such as a dynamic random access memory (DRAM), a static random access memory (SRAM), a synchronous dynamic random access memory (SDRAM), or a nonvolatile memory (for example, one time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, flash ROM, a flash memory (for example, NAND flash or NOR flash), a hard disk drive or a solid state drive (SSD). In the case of a memory detachably mounted to the electronic apparatus100, the memory may be implemented as a memory card (for example, a compact flash (CF), secure digital (SD), micro secure digital (micro-SD), mini secure digital (mini-SD), extreme digital (xD), multimedia card (MMC), etc.), an external memory (for example, a USB memory) connectable to the USB port, or the like. The memory110may store a learning network model used to improve the quality of the input image. Here, the learning network model may be a model of machine learning based on a plurality of sample images, a noise map for each sample image, and an original image corresponding to each sample image. For example, the learning network model may be a model that is learned using convolution neural network (CNN) for a plurality of sample images, a noise map for each sample image, and an original image corresponding to each sample image. Here, the CNN is a multi-layered neural network having a special concatenation structure that is designed for voice processing, image processing, or the like. However, this is merely exemplary, and the learning network model may be a learning network model based on various neural networks such as recurrent neural network (RNN), deep neural network (DNN), or the like. In the meantime, the noise map may represent the quality of the input image. For example, the noise map includes information indicative of the quality of each pixel included in the input image. In this case, the size of the noise map may be equal to the size of the input image. For example, if the size of the input image is 4×4, the size of the noise map may also be 4×4. However, the embodiment is not limited thereto, and if the noise map represents the quality of the input image, the type, the display method of the information, or the like may use a number of ways. For example, the unit information of the noise map may correspond to an average value for each region of a predetermined size of the input image, rather than corresponding to each pixel value of the input image. The memory110may further store a noise map generation model to obtain the noise map of the input image. Here, the noise map generation model may be a machine-learned model based on a plurality of sample images and a noise map for each sample image. The learning network model and noise map generation model used to improve quality of the input image will be specified through the drawings. 
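As a concrete illustration of such a noise map, the sketch below builds a per-pixel quality map of the same size as the input image. The absolute-difference indicator and the region-averaging option are assumptions chosen for this illustration; the disclosure only specifies that the map represents the quality of each pixel, or an average value for each region of a predetermined size.

```python
import numpy as np

def make_noise_map(compressed: np.ndarray, original: np.ndarray,
                   region: int = 1) -> np.ndarray:
    """Illustrative ground-truth noise map with the same size as the input.
    The per-pixel quality indicator used here is simply the absolute
    difference from the original (an assumption made for this sketch).
    With region > 1, the map holds the average value for each region of a
    predetermined size instead of a value per pixel."""
    noise = np.abs(compressed.astype(np.float32) - original.astype(np.float32))
    if region > 1:
        h, w = noise.shape
        for i in range(0, h, region):
            for j in range(0, w, region):
                block = noise[i:i + region, j:j + region]
                noise[i:i + region, j:j + region] = block.mean()
    return noise

original = np.random.randint(0, 256, (4, 4))
compressed = np.clip(original + np.random.randint(-8, 9, (4, 4)), 0, 255)
print(make_noise_map(compressed, original).shape)   # (4, 4): same size as the input
```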
The processor120is electrically connected to the memory110and controls overall operations of the electronic apparatus100. According to an embodiment, the processor120may include a digital signal processor (DSP) for processing a digital image signal, a microprocessor, a time controller (TCON), or the like, but is not limited thereto. The processor120may include, for example, and without limitation, one or more of a central processing unit (CPU), a micro controller unit (MCU), a micro processing unit (MPU), a controller, an application processor (AP), a communication processor (CP), an Advanced Reduced instruction set computing (RISC) Machine (ARM) processor, or the like, or may be defined as a corresponding term. The processor120may be implemented in a system on chip (SoC) type or a large scale integration (LSI) type which a processing algorithm is built therein or in a field programmable gate array (FPGA) type. According to an embodiment, the processor120may obtain an output image in which quality of the input image is improved, by performing image-processing of the input image. For instance, the processor120may obtain a noise map representing the quality of the input image from the input image, and obtain an output image in which quality of the input image is improved by applying the input image and the noise map to the learning network model including a plurality of layers. By using the noise map obtained from the input image in the process of improving the quality of the input image, the quality of the image is improved in an adaptive manner according to the type of the input image, and the overall effect of the quality improvement may be increased. Here, the learning network model may be an artificial intelligence (AI) model which is obtained by learning, through an AI algorithm, a relationship among a plurality of sample images, a noise map for each sample image, and an original image corresponding to each sample image. Further, the plurality of layers of the learning network model may include an input layer, an intermediate layer, and an output layer. The input layer is a layer in which operation may be performed first among the plurality of layers, the output layer is a layer in which operation may be performed last, and the intermediate layer is a layer disposed between the input layer and the output layer. According to an embodiment, processor120may provide a noise map to at least one intermediate layer among the plurality of layers. For example, the processor120may provide the noise map to each of the plurality of layers, or provide a noise map to each of the remaining layers other than the input layer among the plurality of layers. Through this operation, performance of quality improvement on an image may be improved since the quality of the image is continuously reflected in the process of the quality improvement of the input image. For example, if the noise map is provided only to the input layer, among a plurality of layers, the noise map is not reflected while the image passes through the other layers, among the plurality of layers, the characteristic of the noise map may be weakened, and quality improvement performance may be degraded. Alternatively, if the noise map is provided to only the output layer, among the plurality of layers, the noise map is not reflected while the image passes through the other layers, among the plurality of layers, and accordingly, quality improvement is performed in a state where the noise map is reflected only in the output layer. 
In general, the higher the number of layers in the learning network model, the higher the image processing performance; thus, if the noise map is reflected only in the output layer, the overall performance of the image quality improvement may be degraded. Therefore, when the processor120provides the noise map to at least one intermediate layer, quality improvement performance may be improved more than in the case where the noise map is provided to only the input layer or the output layer. The learning network model may further include at least one sub-layer, and the processor120may process the noise map using at least one sub-layer and provide the processed noise map to the at least one intermediate layer. The processor120may provide, to each of the at least one intermediate layer, a plurality of channels corresponding to the output data output from a previous layer of the corresponding intermediate layer, together with an additional channel. Here, the additional channel may be a processed noise map output from the sub-layer corresponding to each of the at least one intermediate layer. According to an embodiment, the processor120may not mix the output data output from the previous layer of each of the at least one intermediate layer with the processed noise map output from the sub-layer corresponding to each of the at least one intermediate layer, but may concatenate the output data and the processed noise map in parallel and provide them to each of the at least one intermediate layer. The processor120may provide the input image to the input layer among the plurality of layers included in the learning network model. In this case, the learning network model may be an AI model which is obtained by learning, through the AI algorithm, a relationship between an output image, which is obtained by sequentially processing, by the plurality of layers, each of a plurality of sample images provided to the input layer among the plurality of layers and a noise map of each of the plurality of sample images provided to the at least one intermediate layer, and an original image corresponding to each of the plurality of sample images. Alternatively, the processor120may provide an input image to the input layer, and may mix the output data of the output layer and the input image to obtain an output image. In other words, the processor120may provide the input image not only to the input layer but also to a rear end of the output layer. In this case, the learning network model may be an AI model which is acquired by learning, through the AI algorithm, the relationship between an output image, which is obtained by mixing output data with each of a plurality of sample images, and an original image corresponding to each of the plurality of sample images, the output data being obtained by sequentially processing, by the plurality of layers, each of the plurality of sample images provided to the input layer, among the plurality of layers, and a noise map of each of the plurality of sample images provided to the at least one intermediate layer. Here, each of the plurality of sample images may be a compressed image of an original image, and the noise map for each sample image may be a noise map obtained from each sample image and an original image corresponding to each sample image. The processor120may obtain a noise map by applying the input image to a noise map generation model including a plurality of layers.
Here, the noise map generation model may be an AI model obtained by learning a relationship between the plurality of sample images and a noise map for each of the plurality of sample images through an AI algorithm. When learning a learning network model for improving the quality of an image and learning a noise map generation model, the same plurality of sample images and the same noise map for each of the plurality of sample images may be used for learning both the learning network model and the noise map generation model. However, the embodiment is not limited thereto, and the learning data used when learning a learning network model for improving the quality of an image and the learning data used when learning a noise map generation model may be different from each other. The electronic apparatus100may further include a display. According to an embodiment, the electronic apparatus100may convert the resolution of the output image based on a resolution of the display, and control the display to display the image of which the resolution is converted. Here, the image with converted resolution may be a 4K ultra high definition (UHD) or 8K UHD image. The processor120may apply each of the plurality of frames included in the video as an input image to a learning network model to obtain an output video in which the quality of the video is improved. For example, the processor120may decode the video, apply each frame of the decoded video as an input image to a learning network model to improve quality, and combine frames with improved quality to obtain an output video with improved quality. Here, the processor120may obtain a noise map of each frame and use the obtained noise map in improving the quality of each frame. As described above, the processor120may adaptively improve the quality of the input image, as the noise map is obtained from the input image. In addition, as the noise map is provided to at least one intermediate layer among the plurality of layers included in the learning network model, the processor120may perform the image processing while continuously reflecting the quality of the input image. Accordingly, the quality improvement performance of the input image may be improved. Hereinbelow, the operation of the processor120will be further described through the drawings. FIGS.3A and3Bare views illustrating a learning network model and a noise map generation model according to various embodiments. According to an embodiment inFIG.3A, the processor120may obtain the noise map by applying the input image to a noise map generation model (quality estimation convolution neural network (QECNN))310. The noise map generation model310may include a plurality of convolution layers. For example, each of the plurality of convolution layers may perform convolution on the input data using a kernel of 5×5. However, the embodiment is not limited thereto, and any other types of kernels may be used. Further, one convolutional layer may perform convolution on the input data using each of a plurality of kernels. After the convolution is performed, some of the plurality of convolution layers may process the input data using a Rectified Linear Unit (ReLU) function. The ReLU function is a function of converting an input value to zero if the input value is less than zero, and outputting the input value as it is if the input value is greater than zero. However, the embodiment is not limited thereto, and some of the plurality of convolution layers may process the input data using the sigmoid function.
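For reference, the following is a minimal PyTorch sketch of a noise map generation model in the spirit of the QECNN310described above: a stack of 5×5 convolution layers with ReLU activations and a final sigmoid, producing one quality value per pixel. The depth, channel count, and single-channel input are illustrative assumptions, not the disclosed configuration.

```python
import torch
import torch.nn as nn

class NoiseMapEstimator(nn.Module):
    """Sketch of a QECNN-style noise map generation model: 5x5 convolutions with
    ReLU, and a final 5x5 convolution followed by a sigmoid."""
    def __init__(self, num_layers: int = 8, channels: int = 32):
        super().__init__()
        layers = [nn.Conv2d(1, channels, kernel_size=5, padding=2), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=5, padding=2), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, kernel_size=5, padding=2), nn.Sigmoid()]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The output has the same spatial size as the input: one quality value per pixel.
        return self.body(x)

print(NoiseMapEstimator()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```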
The processor120may apply the input image to a learning network model (compression artifact reduction convolution neural network (CARCNN))320-1to obtain an output image in which quality of the input image is improved. The processor120may provide the noise map to at least one intermediate layer among the plurality of layers. For example, the processor120, as shown inFIG.3A, may provide the noise map to each of the remaining layers except for an input layer, among the plurality of layers. The learning network model320-1may include a plurality of convolution layers and a plurality of sub-convolution layers330. For example, each of the plurality of convolution layers may perform convolution on the input data using a kernel of 3×3, and the plurality of sub-convolution layers330may perform a convolution on the noise map using a kernel of 1×1. However, the embodiment is not limited thereto, and any other types of kernels may be used. Further, one convolutional layer may perform convolution on the input data using each of a plurality of kernels. Some of the plurality of convolution layers may process the input data using the ReLU function after performing convolution. Another portion of the plurality of convolution layers, after performing convolution, may process the input data using the batch normalization and the ReLU function. The batch normalization is a task of equalizing the distribution of each layer to ensure a fast learning speed. The output data output from the input layer, from among the plurality of convolution layers, may be divided into channels corresponding to the number of kernels included in the input layer. The output data output from the sub-convolution layer330corresponding to the input layer among the plurality of sub-convolution layers330may be concatenated to the output data output from the input layer and input to the next layer of the input layer. For example, when output data composed of 36 channels is output from the input layer and output data composed of one channel is output from the sub-convolution layer330corresponding to the input layer, output data composed of a total of 37 channels may be input to the next layer of the input layer. The number of channels may vary according to the characteristics of each of the plurality of convolutional layers, and a similar operation is performed in the remaining convolution layers. The processor120may obtain the output image in which the quality of the input image is improved, by mixing the output data of the output layer, among the plurality of convolution layers, and the input image. As described above, the processor120may obtain the noise map corresponding to the input image, and the performance of the image quality improvement processing may be improved, since the noise map is continuously reflected in the quality improvement process of the input image. For instance, the noise map is continuously reflected in one or more of the intermediate layers of the processing operation of the de-noise convolution neural network (DnCNN). The learning network model320-1ofFIG.3Aillustrates that the plurality of sub-convolution layers330are added to continuously reflect the noise map in the de-noise convolution neural network (DnCNN), but as shown inFIG.3B, a learning network model320-2may be configured by adding the plurality of sub-convolution layers330for continuously reflecting the noise map to a residual dense network (RDN) format.
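A minimal PyTorch sketch in the spirit of the learning network model320-1is given below: 3×3 convolution layers form a DnCNN-style backbone, 1×1 sub-convolution layers process the noise map, the processed noise map is concatenated as an additional channel to the input of every layer after the input layer (for example, 36 + 1 = 37 channels), and the input image is mixed back into the output. The depth and channel counts are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

class QualityAwareDenoiser(nn.Module):
    """Sketch of a DnCNN-style learning network model that concatenates a
    1x1-convolved noise map to every layer after the input layer and mixes the
    input image back into the output (not the patented architecture)."""
    def __init__(self, depth: int = 5, channels: int = 36):
        super().__init__()
        self.input_layer = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # One 1x1 sub-convolution per layer that receives the noise map.
        self.sub_layers = nn.ModuleList([nn.Conv2d(1, 1, kernel_size=1) for _ in range(depth - 1)])
        self.mid_layers = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(channels + 1, channels, 3, padding=1),
                           nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
             for _ in range(depth - 2)])
        self.output_layer = nn.Conv2d(channels + 1, 1, 3, padding=1)

    def forward(self, image: torch.Tensor, noise_map: torch.Tensor) -> torch.Tensor:
        feat = self.input_layer(image)                       # e.g., 36 feature channels
        for i, layer in enumerate(self.mid_layers):
            extra = self.sub_layers[i](noise_map)            # processed noise map, 1 channel
            feat = layer(torch.cat([feat, extra], dim=1))    # 36 + 1 = 37 channels in
        extra = self.sub_layers[-1](noise_map)
        residual = self.output_layer(torch.cat([feat, extra], dim=1))
        return image + residual                              # mix the input image with the output data

out = QualityAwareDenoiser()(torch.randn(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```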
A residual dense block (RDB) layer ofFIG.3Bincludes a plurality of convolution layers in a form in which a residual block and a dense block are combined, and the output of each of the plurality of convolution layers may be sequentially input into the next convolutional layer and may be additionally input to a convolutional layer disposed at another location. The output data in which the initial input data of the RDB layer is mixed with the data which passes through the last layer may be output from the RDB layer. In the case ofFIG.3B, the same noise map generation model310asFIG.3Amay be used. The embodiment is not limited thereto, and the learning network model320-1may use any basic model as long as the model can continuously reflect the noise map. According to an embodiment, the models ofFIGS.3A and3Bmay be implemented as software and stored in the memory110, and the processor120may read out the data for performing an operation of each layer and perform processing for the input image. FIG.4is a view illustrating a learning method of the noise map generation model according to an embodiment. The noise map generation model may be an AI model obtained by learning a relationship between a plurality of sample images and a noise map for each of the plurality of sample images through an AI algorithm. For example, as shown inFIG.4, the noise map generation model may learn, through the AI algorithm, the relationship between the output data according to an input of a first sample image410-1and a first noise map420-1of the first sample image410-1. For the remaining pairs of data (410-2,420-2), (410-3,420-3), (410-4,420-4), or the like, the same learning process may be repeated and the noise map generation model may be obtained. Here, the noise map for each sample image may be a noise map obtained through a predetermined rule-based algorithm. For example, the first sample image410-1to the fourth sample image410-4may be images of JPEG quality of 10, 30, 50, and 90, respectively, and the first noise map420-1to the fourth noise map420-4may be noise maps for the first sample image410-1to the fourth sample image410-4, respectively. The noise map generation model may be a model learned by another device, other than the electronic apparatus100. The embodiment is not limited thereto, and the processor120of the electronic apparatus100may learn the noise map generation model. FIG.5is a view illustrating a learning method of a learning network model to improve quality of an image according to an embodiment. The learning network model may be an AI model obtained by learning, through the AI algorithm, a relationship among a plurality of sample images, a noise map for each sample image, and an original image corresponding to each sample image. For example, as shown inFIG.5, the learning network model may learn, through the AI algorithm, the relationship between the output data, which is obtained according to the input of the first sample image520-1and the input of the first noise map530-1for the first sample image520-1, and the original image510corresponding to the first sample image520-1. For the relationship of the remaining data groups (520-2,530-2,510) and (520-3,530-3,510), the learning network model may be obtained by repeating the same learning process. Here, the noise map for each sample image may be a noise map obtained through a predetermined rule-based algorithm. FIG.5illustrates that only one original image510is used, but in the actual learning process, a plurality of original images may be used.
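For illustration, a minimal sketch of an RDB-style layer as described above is given below; the channel count, growth rate, and number of convolutions are assumptions, and the 1×1 fusion convolution is one common way of combining the densely concatenated outputs before mixing in the block's initial input.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Sketch of an RDB layer: each 3x3 convolution receives the concatenation of
    the block input and all earlier outputs, and the block's initial input is
    added back into the fused result."""
    def __init__(self, channels: int = 32, growth: int = 16, num_convs: int = 3):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(num_convs):
            self.convs.append(nn.Sequential(nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            in_ch += growth
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)  # local feature fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))  # dense connections
        return x + self.fuse(torch.cat(features, dim=1))        # local residual

print(ResidualDenseBlock()(torch.randn(1, 32, 16, 16)).shape)  # torch.Size([1, 32, 16, 16])
```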
That is, in addition to the original image510ofFIG.5, additional original images, a plurality of sample images which are obtained by compressing the additional original images with various compression rates, and noise maps for each sample image may be used in the learning process. The learning network model may be a model that is learned by another apparatus, instead of the electronic apparatus100. The embodiment is not limited thereto, and the processor120of the electronic apparatus100may learn the learning network model. FIG.6is a block diagram illustrating a specific configuration of an electronic apparatus100according to an embodiment. Referring toFIG.6, the electronic apparatus100may include a memory110, a processor120, an inputter130, a display140, and a user interface150. The memory110may store one or more instructions. The processor120may execute the one or more instructions stored in the memory110to perform a noise map acquisition operation of the input image as described above, a quality improvement operation of the input image, a learning operation of each AI model, or the like. The configurations ofFIG.6that overlap with the configurations ofFIG.2will not be further described. The inputter130receives various types of contents, such as image signals. For example, the inputter130may receive an image signal in a streaming or downloading manner from an external apparatus (for example, a source apparatus), an external storage medium (for example, universal serial bus (USB)), an external server (for example, a web hard), and the like through a communication method such as access point (AP)-based Wi-Fi (wireless LAN network), Bluetooth, Zigbee, wired/wireless LAN, wide area network (WAN), Ethernet, IEEE1394, high definition multimedia interface (HDMI), mobile high definition link (MHL), universal serial bus (USB), display port (DP), Thunderbolt, video graphics array (VGA) port, RGB port, d-subminiature (D-SUB), digital visual interface (DVI), and the like. Here, the image signal may be a digital signal, but is not limited thereto. In addition, a video may be received through the inputter130. The display140may be implemented in various formats such as a liquid crystal display (LCD), an organic light-emitting diode (OLED), a light-emitting diode (LED), a micro LED, Liquid Crystal on Silicon (LCoS), Digital Light Processing (DLP), a quantum dot (QD) display panel, or the like. The processor120may control the display140to display the output image in which quality of the input image is improved. The user interface150may be implemented as a device such as a button, a touch pad, a mouse, a keyboard, or a remote control receiver, or as a touch screen capable of performing the above-described display function and input function. The button may be various types of buttons such as a mechanical button, a touch pad, a wheel, and the like formed in an arbitrary area such as a front surface portion, a side surface portion, and a back surface portion of the main body of the electronic apparatus100. The processor120may perform a quality improvement operation for the input image according to a user command input through the user interface150. FIG.7is a block diagram illustrating a configuration of a processor120for learning and using a learning network model according to an embodiment. Learning and image processing may be performed by separate devices, but inFIG.7, for convenience of description, it is described that the electronic apparatus100learns the learning network model.
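A minimal sketch of the kind of training step suggested byFIGS.4 and5is given below, reusing the QualityAwareDenoiser class from the earlier sketch. The rule-based noise map is approximated here as the per-pixel absolute difference between a compressed sample image and its original, which is only one plausible choice; the disclosure leaves the predetermined rule-based algorithm unspecified.

```python
import torch
import torch.nn as nn

# Assumes the QualityAwareDenoiser class sketched earlier is available in scope.
model = QualityAwareDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(sample: torch.Tensor, original: torch.Tensor) -> float:
    noise_map = (sample - original).abs()       # stand-in for the rule-based noise map
    restored = model(sample, noise_map)         # sample image to input layer, noise map to intermediate layers
    loss = loss_fn(restored, original)          # learn the relationship with the original image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

original = torch.rand(4, 1, 64, 64)
sample = (original + 0.1 * torch.randn_like(original)).clamp(0, 1)  # stand-in for a compressed sample image
print(train_step(sample, original))
```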
Referring toFIG.7, the processor120may include at least one of a learning processor710or an image processor720. The learning processor710may generate or train a model for obtaining the noise map from the input image and a model for improving quality of the input image. The learning processor710may generate a recognition model having determination criteria using the collected learning data. For example, the learning processor710may generate, train, or update a model for obtaining a noise map from the input image using the input image and the noise map for the input image as learning data. In addition, the learning processor710may learn, train, or update a model for obtaining an original image from the input image and the noise map by using the input image, the noise map for the input image, and an original image corresponding to the input image as learning data. The image processor720may obtain output data in which the quality of input data is improved, by applying the input data to the learning network model. According to an embodiment, the input data may be predetermined data. For example, the image processor720may obtain the noise map of the input image and obtain the output image with improved quality of the input image based on the noise map. According to an embodiment, at least a portion of the learning processor710and at least a portion of the image processor720may be implemented as software modules or manufactured in the form of at least one hardware chip and mounted in the electronic apparatus100. For example, at least one of the learning processor710and the image processor720may be manufactured in the form of an exclusive-use hardware chip for AI, or as part of a conventional general purpose processor (e.g., a CPU or an application processor) or a graphics-only processor (e.g., a GPU), and may be mounted on the various electronic devices described above or object recognition devices. Herein, the exclusive-use hardware chip for AI is a dedicated processor for probability calculation, and it has higher parallel processing performance than an existing general purpose processor, so it can quickly process computation tasks in AI such as machine learning. When the learning processor710and the image processor720are implemented as a software module (or a program module including an instruction), the software module may be stored in a non-transitory computer-readable medium. In this case, the software module may be provided by an operating system (OS) or by a predetermined application. Alternatively, some of the software modules may be provided by an OS, and some of the software modules may be provided by a predetermined application. In this case, the learning processor710and the image processor720may be mounted on one electronic apparatus, or may be mounted on separate servers, respectively. For example, one of the learning processor710and the image processor720may be included in the electronic apparatus100, and the other one may be included in an external server. In addition, the model information constructed by the learning processor710may be provided to the image processor720via wired or wireless communication, and data which is input to the image processor720may be provided to the learning processor710as additional data. FIGS.8A and8Bare views illustrating a performance of a noise map generation model according to various embodiments.
FIG.8Aillustrates a mean square error between a first noise map according to the quality of the image and a second noise map output from the noise map generation model, using the LIVE1 video data set. For example, when an original image is compressed, the first noise map may be obtained from the compressed image through a predetermined rule-based algorithm, the second noise map may be obtained by applying the compressed image to the noise map generation model, and the mean square error between the first noise map and the second noise map may be obtained. The compression factor (Q) ofFIG.8Arepresents quality in accordance with compression, and the quality gets closer to that of the original image as Q increases from 10 to 90. In addition,FIG.8Adepicts 8 layers, 12 layers, and 16 layers, and the more the layers, the lower the mean square error. This is more clearly illustrated inFIG.8B. When the number of layers is equal to or more than a particular number, a second noise map which is very similar to the first noise map is obtained, regardless of Q. FIGS.9A and9Bare views illustrating a performance of a learning network model to improve quality of an input image according to various embodiments. FIG.9Aillustrates the average peak signal-to-noise ratio (PSNR)/structural similarity index (SSIM) results calculated after a compressed image of which Q is from 10 to 90 in the classic5 or LIVE1 video data set has been de-noised in various ways. The PSNR is the maximum signal-to-noise ratio, representing the ratio of the maximum possible signal power to the power of the noise, and the SSIM is a structural similarity index representing the similarity to the original image with respect to distortion caused by compression and conversion. As illustrated inFIG.9A, the higher the Q, the higher the quality improvement performance, and the performance of the QEDnCNN is improved more than the conventional DnCNN, and the performance of the QERDN is improved more than the conventional RDN. InFIG.9B, QEDnCNN and QERDN are compared, and it is seen that QEDnCNN has a better performance than QERDN, in general. FIGS.10A,10B,10C, and10Dare views illustrating various expanded examples of an embodiment. The processor120may divide the object region and the background region in the input image to improve the quality of the input image. For example, as shown inFIG.10A, the processor120may obtain the output image with improved quality by dividing the original image into an object image including only an object and a background image that includes the remaining region except for the object, improving quality of each of the object image and the background image, and synthesizing the object image with improved quality and the background image with improved quality. The processor120may identify an object region and a background region of the input image using various methods. For example, the processor120may identify an object with a particular shape in the input image based on a pixel value and identify the remaining region other than the region where the object is identified, as the background region. According to an embodiment, the shape may be a predetermined shape. Alternatively, the processor120may identify an object with a particular shape using an AI model for object recognition and identify the remaining region except the region where the object is identified, as the background region. The aforementioned embodiment is merely exemplary, and the processor120may identify the object region and the background region in the input image in many ways.
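Returning briefly to the results ofFIGS.9A and9B, the PSNR metric reported there can be illustrated with a short computation, since PSNR is 10·log10(MAX²/MSE); SSIM is more involved and is typically taken from an existing library such as scikit-image. The image sizes and noise level below are arbitrary.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE). Higher values
    mean the de-noised image is closer to the original."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_value ** 2) / mse)

original = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(original + np.random.normal(0, 5, original.shape), 0, 255).astype(np.uint8)
print(round(psnr(original, noisy), 2))
```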
The processor120may perform image processing by dividing the object region and the background region using a plurality of image quality improvement models. For example, the processor120, as illustrated inFIG.10B, may improve quality of the object region in the input image by applying the input image and the object region information in the input image to a first image quality improvement model1010and improve quality of the background region in the input image by applying the input image and the background region information in the input image to a second image quality improvement model1020. The processor120may combine the object region with improved quality and the background region with improved quality and obtain the output image with improved quality. Here, the object region information may include information corresponding to the lower left drawing inFIG.10A, and the background region information may include information corresponding to the lower right drawing inFIG.10A. According to an embodiment, the object region information may include pixel values of the image in the lower left drawing inFIG.10A, and the background region information may include pixel values of the image in the lower right drawing inFIG.10A. Alternatively, the object region information and the background region information may not include pixel values, and may include only region information to distinguish between the object region and the background region. For example, the object region information may be an image that indicates the object region as 1 and indicates the background region as 0, and the background region information may be an image that indicates the background region as 1 and the object region as 0. The first image quality improvement model1010may include the first noise map generation model to obtain the noise map from the input image and the first learning network model to improve quality of the object region of the input image. The second image quality improvement model1020may include the second noise map generation model to obtain the noise map from the input image and the second learning network model to improve the quality of the background region of the input image. The first noise map generation model may be a model for generating the noise map of the object region, and the second noise map generation model may be a model for generating the noise map of the background region. The first learning network model may be a model for improving image quality of the object region, and the second learning network model may be a model for improving image quality of the background region. According to an embodiment, for the first learning network model and the second learning network model, different sample images may be used during a learning process. For example, the first learning network model may be generated by learning the original image and an up-scaled image which is obtained after lowering the resolution of the original image, and the second learning network model may be generated by learning the original image and a noisy image which is obtained by adding noise to the original image. In this case, the processor120may obtain a sharp resultant output, as if the resolution of the object region were improved, by using the first learning network model, and obtain a resultant output in which the background region is de-noised by using the second learning network model. The processor120may perform different image processing on the object region and the background region in the ways described above.
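A minimal sketch of the two-model scheme ofFIG.10Bis given below; the two improvement functions are placeholders standing in for the first and second image quality improvement models1010and1020, and the binary mask plays the role of the object region information and background region information described above.

```python
import numpy as np

def improve_by_region(image: np.ndarray, object_mask: np.ndarray,
                      improve_object, improve_background) -> np.ndarray:
    """Improve the object region and the background region with separate
    (placeholder) models and composite the results; `object_mask` marks the
    object region as 1 and the background region as 0."""
    improved_object = improve_object(image)          # stands in for the first image quality improvement model
    improved_background = improve_background(image)  # stands in for the second image quality improvement model
    return object_mask * improved_object + (1 - object_mask) * improved_background

image = np.random.rand(64, 64)
mask = np.zeros_like(image)
mask[16:48, 16:48] = 1.0  # hypothetical object region
output = improve_by_region(image, mask,
                           improve_object=lambda img: np.clip(img * 1.1, 0.0, 1.0),  # placeholder "sharpen"
                           improve_background=lambda img: img)                        # placeholder "de-noise"
print(output.shape)  # (64, 64)
```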
InFIG.10B, it has been described that only an input image is applied to the first noise map generation model and the second noise map generation model, but the embodiment is not limited thereto. For example, not only the input image but also the object region information may be additionally applied to the first noise map generation model, and not only the input image but also the background region information may be additionally applied to the second noise map generation model. Alternatively, the processor120may perform image processing by dividing the object region and the background region using one image quality improvement model. For example, the processor120, as illustrated inFIG.10C, may obtain the output image with improved quality of the input image by additionally applying not only the input image and the noise map but also at least one of the object region information or the background region information to the learning network model (CARCNN). The object region information and the background region information may be the same as inFIG.10B. Alternatively, a single image in which the object region is represented as 1 and the background region is represented as 0 may be used as both the object region information and the background region information. However, the embodiment is not limited thereto, and any method which may divide the object region and the background region may be used. The learning network model may be a model learned to correspond to the types of the object region information and the background region information. For example, when an image in which the object region is represented as 1 and the background region is represented as 0 is used, the same type of image may be used in the learning process. In addition, the plurality of sample images used in the learning process of the learning network model may also be sample images in which the quality improvement scheme differs between the object region and the background region. For example, an object region of a plurality of sample images may be a region with quality improved to a higher level than that of the background region. Here, the plurality of sample images may be obtained through degradation of the original image. That is, the plurality of sample images may be obtained through a method in which an object region and a background region of the original image are degraded to different levels. Alternatively, each of the plurality of sample images may be a compressed image in which the object region and the background region of the corresponding original image are compressed in a different manner. That is, the learning network model ofFIG.10Cmay perform quality improvement by identifying the object region and the background region of the input image, and dividing the object region and the background region. InFIG.10C, the image quality improvement model includes a noise map generation model (QECNN) and a learning network model, wherein only the input image is applied to the noise map generation model, but the embodiment is not limited thereto. For example, the processor120may further apply not only the input image but also at least one of the object region information or the background region information to the noise map generation model and obtain a noise map for the input image. Alternatively, the processor120may divide the input image into a plurality of blocks, divide each block into an object region and a background region, and process the object region and the background region through a separate AI model.
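A minimal sketch of this block-based alternative (elaborated with reference toFIG.10Dbelow) might look as follows; the block size, the block classifier, and the two improvement functions are placeholders.

```python
import numpy as np

def blockwise_improve(image: np.ndarray, is_object_block, improve_object,
                      improve_background, block: int = 16) -> np.ndarray:
    """Divide the input image into fixed-size blocks, identify each block as an
    object block or a background block with a placeholder classifier, process it
    with the corresponding placeholder model, and reassemble the output blocks."""
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = image[y:y + block, x:x + block]
            improver = improve_object if is_object_block(patch) else improve_background
            out[y:y + block, x:x + block] = improver(patch)
    return out

image = np.random.rand(64, 64)
result = blockwise_improve(image,
                           is_object_block=lambda p: p.mean() > 0.5,         # placeholder classifier
                           improve_object=lambda p: np.clip(p * 1.1, 0, 1),  # placeholder object model
                           improve_background=lambda p: p)                    # placeholder background model
print(result.shape)  # (64, 64)
```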
For example, as illustrated inFIG.10D, the processor120may sequentially divide the input image into blocks of a predetermined size, and identify whether each block is an object region or a background region. The processor120may obtain a first output block1050-1by applying the block identified as the object region to the first image quality improvement model1030, obtain a second output block1050-2by applying the block identified as the background region to the second image quality improvement model1040, and obtain an output image by combining the first output block1050-1and the second output block1050-2. Here, in the learning process of the first image quality improvement model1030, a plurality of sample blocks representing the object region may be used, and in the learning process of the second image quality improvement model1040, a plurality of sample blocks representing the background region may be used. As described above, the processor120may divide the object region and the background region in the input image and improve quality of the input image. FIG.11is a flowchart illustrating an image processing method of the electronic apparatus according to an embodiment. In operation S1110, the noise map representing the quality of the input image is obtained from the input image. In operation S1120, the input image is provided to an input layer among a plurality of layers included in the learning network model, and the noise map is provided to at least one intermediate layer among the plurality of layers. In operation S1130, the output image in which the quality of the input image is improved is obtained by applying the input image and the noise map to the learning network model. Here, the learning network model may be the AI model which is obtained by learning the relationship among a plurality of sample images, a noise map for each sample image, and an original image corresponding to each sample image through the AI algorithm. Here, the learning network model may further include at least one sub-layer, and operation S1120may further include processing the noise map using at least one sub-layer and providing the processed noise map to the at least one intermediate layer. According to an embodiment, operation S1120may further include providing, to each of the at least one intermediate layer, a plurality of channels corresponding to output data output from the previous layer of the corresponding intermediate layer and an additional channel, and the additional channel may be a processed noise map output from the sub-layer corresponding to each of the at least one intermediate layer. According to an embodiment, operation S1130may further include mixing the output data of the output layer, among the plurality of layers, and the input image to obtain the output image. According to an embodiment, operation S1110may further include obtaining the noise map by applying the input image to the noise map generation model including a plurality of layers, and the noise map generation model may be an AI model which is obtained by learning the relationship between the plurality of sample images and the noise map for each sample image. According to an embodiment, operation S1120may further include providing the noise map to each of the plurality of layers, or providing the noise map to each of the remaining layers except the input layer, among the plurality of layers.
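Putting the pieces together, a minimal sketch of the flow of operations S1110 to S1130 is given below, reusing the NoiseMapEstimator and QualityAwareDenoiser classes sketched earlier (both of which are illustrative assumptions rather than the disclosed architecture).

```python
import torch

# Assumes the NoiseMapEstimator and QualityAwareDenoiser classes sketched earlier.
estimator = NoiseMapEstimator().eval()
denoiser = QualityAwareDenoiser().eval()

input_image = torch.rand(1, 1, 64, 64)
with torch.no_grad():
    noise_map = estimator(input_image)               # S1110: obtain the noise map from the input image
    output_image = denoiser(input_image, noise_map)  # S1120/S1130: provide image and noise map, obtain output
print(output_image.shape)  # torch.Size([1, 1, 64, 64])
```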
The learning network model may be the AI model which is obtained by learning, through the AI algorithm, a relationship between an output image, which is obtained by sequentially processing, by the plurality of layers, each of a plurality of sample images provided to the input layer among the plurality of layers and a noise map of each of the plurality of sample images provided to at least one intermediate layer, and an original image corresponding to each of the plurality of sample images. Each of the plurality of sample images may be a compressed image in which the original image is compressed, and the noise map for each sample image may be a noise map obtained from each sample image and the original image corresponding to each sample image. According to an embodiment, operation S1130may further include obtaining an output video with improved quality by applying each of a plurality of frames included in the video to the learning network model as the input image. According to an embodiment, the method illustrated inFIG.11may further include converting the resolution of the output image based on the resolution of the display of the electronic apparatus and displaying the image with converted resolution, and the image with converted resolution may be a 4K UHD image or an 8K UHD image. According to an embodiment, an electronic apparatus having a memory storing one or more instructions and a processor, electrically connected to the memory, and configured to execute the one or more instructions is provided. According to an embodiment, the processor may obtain a first region from an input image, adjust a feature in the first region of the input image based on a relationship between information about the first region and at least one of a plurality of candidate images, and obtain an output image based on the adjusted first region and the input image. The plurality of candidate images may be obtained by providing one or more original images to a learning network model, the learning network model being an artificial intelligence (AI) model that is obtained by learning, through an AI algorithm, a relationship between a plurality of sample images corresponding to the one or more original images, a respective noise map of each of the plurality of sample images, and the one or more original images. According to an embodiment, adjusting the feature in the first region of the input image may include upscaling the resolution of the first region of the input image based on the at least one of the plurality of candidate images. The plurality of candidate images may be obtained based on an original image and an up-scaled image, which is obtained after lowering the resolution of the original image. According to an embodiment, adjusting the feature in the first region of the input image may include de-noising the first region of the input image based on the at least one of the plurality of candidate images. The plurality of candidate images may be obtained based on an original image and a noisy image, which is obtained after adding noise to the original image. According to an embodiment, an electronic apparatus having a memory storing one or more instructions and a processor, electrically connected to the memory, and configured to execute the one or more instructions is provided.
According to an embodiment, the processor may obtain a first object from an input image, the first object being different from a second object in the input image, individually adjust a feature in the first object of the input image by processing the first object separately from the second object in the input image, and obtain an output image based on the adjusted first object and the input image. The processing the first object may include processing the first object based on a learning network model, the learning network model being an artificial intelligence (AI) model that is obtained by learning, through an AI algorithm, a relationship between a plurality of sample images corresponding to one or more original images, a respective noise map of each of the plurality of sample images, and the one or more original images. The adjusting the feature in the first object of the input image may include adjusting the resolution of the first object of the input image based on a relationship between information about the first object and at least one of a plurality of candidate images obtained by training the learning network model. The plurality of candidate images may be obtained based on an original image and an upscaled image, which is obtained after lowering the resolution of the original image. The adjusting the feature in the first object of the input image may include de-noising the first object of the input image based on a relationship between information about the first object and at least one of a plurality of candidate images obtained by training the learning network model. The plurality of candidate images may be obtained based on an original image and a noisy image, which is obtained after adding noise to the original image. According to various embodiments of the disclosure, the electronic apparatus may identify the quality of the input image more accurately by obtaining the noise map from the input image, and improve the quality of the input image using the learning network model which operates adaptively based on the noise map. That is, the electronic apparatus may obtain the noise map from the input image for de-noising, and thus may have an excellent de-noising effect for a spatially varying image, and may reduce the compression artifacts. According to an embodiment of the disclosure, the various example embodiments described above may be implemented with software including instructions stored in machine-readable storage media readable by a machine (e.g., a computer). According to one or more embodiments, an apparatus may call instructions from the storage medium and operate according to the called instructions. When an instruction is executed by a processor, the processor may perform functions corresponding to the instruction, either directly or using other components under the control of the processor. The instructions may include a code generated by a compiler or executed by an interpreter. A machine-readable storage medium may be provided in the form of a non-transitory storage medium. According to one or more embodiments of the disclosure, a method may be provided in a computer program product. A computer program product may be exchanged between a seller and a purchaser as a commodity. A computer program product may be distributed in the form of a machine-readable storage medium (e.g., compact disc read only memory (CD-ROM)) or distributed online through an application store (e.g.
PlayStore™). In the case of on-line distribution, at least a portion of the computer program product may be at least temporarily stored in a storage medium such as a manufacturer's server, a server of an application store, or a memory of a relay server. In addition, one or more embodiments of the disclosure described above may be implemented in a recording medium readable by a computer or a similar device, using software, hardware, or a combination thereof. In some cases, the one or more embodiments described herein may be implemented by the processor itself. According to a software implementation, embodiments such as the procedures and functions described herein may be implemented with separate software modules. Each of the software modules may perform one or more of the functions and operations described herein. According to one or more embodiments of the disclosure, computer instructions for performing the processing operations of the apparatus may be stored in a non-transitory computer-readable medium. The computer instructions stored in the non-transitory computer-readable medium may cause a particular apparatus to perform the processing operations on the apparatus according to the one or more embodiments described above when executed by the processor of the particular apparatus. A non-transitory computer-readable medium is a medium that semi-permanently stores data and is readable by the apparatus. Examples of non-transitory computer-readable media include a CD, a DVD, a hard disk, a Blu-ray disk, a USB, a memory card, a ROM, and the like. Each of the elements (for example, a module or a program) according to one or more embodiments may be comprised of a single entity or a plurality of entities, and some of the abovementioned sub-elements may be omitted, or other sub-elements may be further included in various embodiments. Alternatively or additionally, some elements (e.g., modules or programs) may be integrated into one entity to perform the same or similar functions performed by each respective element prior to integration. Operations performed by a module, program, or other element, in accordance with various embodiments, may be performed sequentially, in parallel, repetitively, or in a heuristic manner, or at least some operations may be performed in a different order. While various embodiments have been illustrated and described with reference to certain drawings, the disclosure is not limited to the specific embodiments or the drawings, and it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope as defined, for example, by the following claims and their equivalents.
58,412
11861810
DESCRIPTION OF EMBODIMENTS
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes this application in detail with reference to the accompanying drawings, and the described embodiments are not intended to limit this application. All other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of this application. The following descriptions involve “some embodiments”, which describe a subset of all possible embodiments. “Some embodiments” may be the same subset of all possible embodiments or may be different subsets of all possible embodiments, and may be mutually combined in a non-conflict case. In the following descriptions, the terms “first” and “second” are merely intended to distinguish similar objects, but do not necessarily indicate a specific order. Where permitted, “first” and “second” may interchange a specific order or sequence, so that the embodiments of this application described herein can be implemented in a sequence other than the sequence shown or described herein. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by a person skilled in the art to which this application belongs. Terms used in the specification of this application are merely intended to describe the objectives of the specific embodiments, but are not intended to limit this application. Before the embodiments of this application are further described in detail, a description is made of the nouns and terms involved in the embodiments of this application, and the following explanations are applicable to the nouns and terms involved in the embodiments of this application. 1) AI is a theory, method, technology, and application system that uses a computer or a machine controlled by a computer to simulate, extend, and expand human intelligence, acquire knowledge, and use the knowledge to obtain an optimal result. In other words, AI is a comprehensive technology of computer science, which attempts to understand the essence of intelligence and produce a new type of intelligent machine that can react in a similar way to human intelligence. AI involves studying the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning, and decision-making. The AI technology is a comprehensive discipline, covering a wide range of fields including both hardware-level technologies and software-level technologies. The basic AI technology generally includes technologies such as sensors, dedicated AI chips, cloud computing, distributed storage, big data processing technologies, operating/interaction systems, and mechatronics. AI software technologies mainly include a computer vision (CV) technology, a speech processing technology, a natural language processing technology, machine learning/deep learning, and the like. 2) CV technology is a science that studies how to enable a machine to “see”, that is, to implement machine vision such as recognition, tracking, and measurement for a target by using a camera and a computer in place of human eyes, and further perform graphic processing, so that the computer processes the target into an image more suitable for human eyes to observe, or more suitable to be transmitted to an instrument for detection.
As a scientific subject, CV studies related theories and technologies, and attempts to establish an AI system that can obtain information from images or multidimensional data. The CV technologies generally include technologies such as image processing, image recognition, image semantic understanding, image retrieval, optical character recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, 3D object reconstruction, a 3D technology, virtual reality, augmented reality, synchronous positioning, and map construction, and further include biometric feature recognition technologies such as common face recognition and fingerprint recognition. 3) Image dehazing is used because, when an image acquisition device is being operated, it is inevitable to encounter hazy weather, which causes an image acquired by the image acquisition device to present a “gray” state (or a “cloudy” state, or a “foggy” state) and have low quality, and affects a subsequent process such as an image-based biometric feature recognition technology. Image dehazing belongs to the field of the CV technology, that is, the CV technology is used to remove a haze effect in an image, to improve the quality of the image. 4) Haze density information is a parameter that is used for representing haze density in an image and that is generated using the haze penetration of light with different wavelengths. For example, infrared light has a wavelength greater than that of visible light and may penetrate haze to form an image, while light with a shorter wavelength, such as blue light, has a penetration degree varying with haze density. Therefore, imaging of infrared light and visible light may be used to obtain haze density information in the image. 5) An image dehazing instruction is an instruction used for triggering an image dehazing function when it is determined that haze exists. In an actual situation, haze does not exist all the time, and therefore, an instruction needs to be set, and dehazing is performed when it is determined that haze exists. 6) Cloud technology refers to unifying a series of resources such as hardware, software, or networks in a wide area network or a local area network, to implement hosting technologies for computing, storing, processing, and sharing data. The cloud technology is a general term of a network technology, an information technology, an integration technology, a management platform technology, and an application technology that are applied based on a cloud computing business model; these technologies may form a resource pool to be used as needed, which is flexible and convenient. Cloud computing technology is becoming an important support. In various embodiments of the present disclosure, a “haze” may refer to a “foggy” state, wherein particles in the atmosphere may include mostly liquid droplets. In some embodiments of the present disclosure, a “haze” may refer to a “smoky” state, wherein particles in the atmosphere may include mostly solid particles. In some other embodiments of the present disclosure, a “haze” may refer to a “smoggy” state, wherein particles in the atmosphere may include liquid droplets and/or solid particles. Embodiments of this application provide an image dehazing method, apparatus, and device, and a computer storage medium, which can improve the quality of a dehazed image. Exemplary applications of the image dehazing device provided in the embodiments of this application are described below.
The image dehazing device provided in the embodiments of this application may be implemented as a terminal of various types, or may be implemented as a server. The server may be an independent physical server, or may be a server cluster including a plurality of physical servers or a distributed system, or may be a cloud server providing basic cloud computing services, such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a content delivery network (CDN), big data, and an AI platform. The terminal may be a smartphone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smartwatch, or the like, but is not limited thereto. The terminal and the server may be directly or indirectly connected in a wired or wireless communication manner. This is not limited in this application. The image dehazing device may be implemented as two devices, for example, the terminal and the server for performing the image dehazing method are both used as the image dehazing device. In this case, the terminal and the server may respectively implement some functions of the image dehazing method, for example, the terminal acquires a color image and an infrared image corresponding to a target scene and transmits the color image and the infrared image to the server, and the server further processes the color image and the infrared image, to generate a dehazed image. The image dehazing device may alternatively be implemented as an independent device, for example, the terminal is used as the image dehazing device, to acquire a color image and an infrared image using the terminal, and the terminal further processes the color image and the infrared image, to obtain a dehazed image. The following describes an exemplary application of the image dehazing device. In the exemplary application, the image dehazing device is implemented as an independent device. FIG.1is a schematic structural diagram of an image dehazing system100according to an embodiment of this application, to support an image dehazing application based on AI. An infrared image pickup device (or an infrared camera)410and a color image pickup device (or a color camera)420are disposed on the image dehazing device400, to respectively obtain an infrared image and a color image of a target scene500. In some implementations, the infrared camera and/or the color camera may be either a still-image camera or a video camera. In some implementations, the infrared image pickup device and the color image pickup device may be integrated within a single camera that is capable of taking either an infrared image or a color image. For the sake of simplifying the description, an infrared image pickup device may be referred to as an infrared camera, and a color image pickup device may be referred to as a color camera, in various embodiments. The color image and the infrared image of the target scene500may be captured at a same moment or at a substantially same time. Here, in the present disclosure, a substantially same time may refer to times within a duration shorter than 1 second. In some other implementations, the infrared camera and the color camera may be very close to each other (e.g., within six inches), so that they may obtain a substantially same view for the color image and the infrared image of the target scene500from a substantially same position with a substantially same angle (or a substantially same perspective).
When the image dehazing device400obtains an image dehazing instruction, the image dehazing device400acquires the color image corresponding to the target scene500using the color image pickup device420and acquires the infrared image corresponding to the target scene500using the infrared image pickup device410at the same moment in response to the obtained image dehazing instruction, where the image dehazing instruction is triggered when haze exists in the target scene500. The image dehazing device400calculates, based on a pixel value of each pixel point of the color image and a pixel value of each pixel point of the infrared image, haze density information of each pixel point, to describe the distribution of haze density in the target scene500using the haze density information of each pixel point. Next, the image dehazing device400generates an image fusion factor of each pixel point according to the haze density information, to control a fusion degree of the color image and the infrared image at each pixel point. Finally, the image dehazing device400fuses the color image and the infrared image pixel by pixel according to the image fusion factor, to obtain a dehazed image. In this way, the image dehazing device400completes the entire image dehazing process. FIG.2is a schematic structural diagram of the image dehazing device400shown inFIG.1according to an embodiment of this application. The image dehazing device400shown inFIG.2includes: at least one processor410, a memory450, at least one network interface420, and a user interface430. All components in the image dehazing device400are coupled together through a bus system440. The bus system440is configured to implement connection and communication among these components. In addition to a data bus, the bus system440further includes a power supply bus, a control bus, and a state signal bus. However, for clarity of description, all the buses are marked as the bus system440inFIG.2. The processor410may be an integrated circuit chip having a signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), or another programmable logic device, discrete gate, transistor logic device, or discrete hardware component. The general-purpose processor may be a microprocessor or any conventional processor. The user interface430includes one or more output apparatuses431that can present media content, including one or more speakers and/or one or more visual display screens. The user interface430further includes one or more input apparatuses432, including a user interface component helpful for a user input, such as a keyboard, a mouse, a touch display screen, a camera, or another input button or control component. The memory450includes a volatile memory or a non-volatile memory, or may include both a volatile memory and a non-volatile memory. The non-volatile memory may be a read-only memory (ROM), and the volatile memory may be a random access memory (RAM). The memory450described in the embodiments of this application aims to include any memory of an appropriate type. The memory450optionally includes one or more storage devices physically located away from the processor410. In some embodiments, the memory450may store data to support various operations, an example of the data includes a program, a module, a data structure, or a subset or a superset thereof, and an exemplary description is provided below.
An operating system451includes system programs configured to process various basic system services and perform hardware-related tasks, such as a framework layer, a core library layer, and a driver layer, to implement various basic services and process hardware-based tasks. A network communication module452is configured to connect to other computing devices through one or more (wired or wireless) network interfaces420, and an exemplary network interface420includes: Bluetooth, wireless fidelity (Wi-Fi), a universal serial bus (USB), or the like. A display module453is configured to present information through the one or more output apparatuses431(such as a display screen and a speaker) related to the user interface430(such as a user interface used for operating peripheral equipment and displaying content and information). An input processing module454is configured to detect one or more inputs or interactions of one or more users from one of the one or more input apparatuses432, and translate the detected inputs or interactions. In some embodiments, the image dehazing apparatus provided in the embodiments of this application may be implemented in a software manner.FIG.2shows an image dehazing apparatus455stored in the memory450, and the image dehazing apparatus may be software in a form of a program or a plug-in, including the following software modules: an image acquisition part4551, a haze density determining part4552, a factor generating part4553, an image fusion part4554, and an instruction generating part4555, and functions of the modules are described below. In some other embodiments, the image dehazing apparatus provided in the embodiments of this application may be implemented in a hardware manner. As an example, the image dehazing apparatus provided in the embodiments of this application may be a processor in a form of a hardware decoding processor, which is programmed to perform the image dehazing method based on AI provided in the embodiments of this application. For example, the processor in the form of the hardware decoding processor may adopt one or more application specific integrated circuits (ASICs), a DSP, a programmable logic device (PLD), a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or another electronic element. Exemplarily, the embodiments of this application provide an image dehazing device, including: a memory, configured to store an executable image dehazing instruction; and a processor, configured to implement the image dehazing method provided in the embodiments of this application when the executable image dehazing instruction stored in the memory is executed. The image dehazing method provided in the embodiments of this application is described below with reference to an exemplary application and implementation of the image dehazing device provided in the embodiments of this application. This application may be implemented by means of a cloud technology. FIG.3is a schematic flowchart1of an image dehazing method according to an embodiment of this application, and descriptions are provided with reference to steps shown inFIG.3. S101. Acquire, when an image dehazing instruction is obtained, a color image and an infrared image corresponding to a target scene at the same moment in response to the image dehazing instruction, where the image dehazing instruction is triggered when haze exists in the target scene.
In some implementations, S101may include acquiring, by a device comprising a memory storing instructions and a processor in communication with the memory, a first image and a second image corresponding to a target scene. In various embodiments in the present disclosure, a first image may be referred to as a color image and a color image may be referred to as a first image; and/or a second image may be referred to as an infrared image and an infrared image may be referred to as a second image; and/or the first image and the second image may be acquired at a same moment or at a substantially same moment. The embodiments of this application are implemented in a scenario where a haze effect in an image is removed. During running, the image dehazing device monitors for the image dehazing instruction. When the image dehazing device obtains the image dehazing instruction, which specifies that image dehazing needs to be started, the image dehazing device acquires the color image and the infrared image of the target scene respectively using a color image pickup device and an infrared image pickup device at the same moment in response to the image dehazing instruction, to make pixel points of the color image correspond strictly to those of the infrared image. In the embodiments of this application, the image dehazing instruction is triggered when the image dehazing device determines that haze exists at a current moment, that is, the image dehazing device starts image dehazing only when haze exists in the target scene, and does not start the image dehazing process when no haze exists in the target scene, so that the dehazing function of the image dehazing device can adapt to different scenes. In some embodiments of this application, the image dehazing device may determine that haze exists in the target scene according to a weather warning. For example, the image dehazing device may pull weather information at the current time in the target scene from a network, and when the pulled weather information indicates that haze exists at the current time in the target scene, the image dehazing instruction is triggered, so that the image dehazing device starts the image dehazing. In some other embodiments of this application, the image dehazing device may alternatively first acquire several temporary images of a target scene, and then determine whether haze exists in these temporary images, and the image dehazing instruction is triggered when haze exists, so that the image dehazing device starts the image dehazing. Certainly, the image dehazing device may alternatively trigger the image dehazing instruction in other manners that can achieve the same objective. This is not limited in the embodiments of this application. It may be understood that, the target scene may be any scene in an actual application. For example, the target scene may be a scene including a user face, so that the image dehazing device may use a dehazed and clear face image to verify a user identity; for another example, the target scene may be a scene including a specific building or a specific site, so that the image dehazing device may use a dehazed and clear image to perform security monitoring on the scene. Certainly, the target scene may alternatively be another scene, for example, each crossroad of a city or a hazy scenic spot. This is not limited in the embodiments of this application.
In the embodiments of this application, to facilitate obtaining a dehazed image using the color image and the infrared image, the resolution of the color image needs to be the same as the resolution of the infrared image. In this case, the resolution of the infrared image pickup device may be the same as the resolution of the color image pickup device, that is, the infrared image pickup device and the color image pickup device are directly used to acquire the color image and the infrared image of the same resolution. Alternatively, the resolution of the infrared image pickup device may be different from the resolution of the color image pickup device, in which case the color image and the infrared image of the same resolution are obtained by image scaling. Certainly, the image dehazing device may alternatively obtain the color image and the infrared image of the same resolution in other manners. This is not limited in the embodiments of this application. S102. Calculate, based on a pixel value of each pixel point of the color image and a pixel value of each pixel point of the infrared image, haze density information of the each pixel point, where the haze density information describes distribution of haze density in the target scene. In some implementations, S102may include calculating, by the device, based on a first pixel value of each pixel of the first image and a second pixel value of each pixel of the second image, haze density information of the each pixel. After obtaining the color image and the infrared image, the image dehazing device first extracts the pixel value of the each pixel point of the color image and extracts the pixel value of the each pixel point of the infrared image, and then calculates the haze density information of the each pixel point according to a color pixel value and an infrared pixel value of the each pixel point. In this way, the image dehazing device can learn the distribution of haze in the target scene. In the embodiments of this application, the image dehazing device may calculate the haze density information according to a difference between infrared light and visible light in haze penetration. When the haze density is low, or even no haze exists, both the infrared light and the visible light penetrate the haze well. However, when the haze density is high, the infrared light still penetrates the haze well, so the brightness of the infrared image captured by the infrared image pickup device remains high, whereas the visible light, especially blue light with its shorter wavelength, penetrates the haze poorly, so the brightness of the image obtained by the color image pickup device in the blue-light channel is correspondingly lower. The pixel value of the infrared image includes brightness information of an infrared channel, and the pixel value of the color image includes pixel information of the blue-light channel. Based on this, the image dehazing device may analyze the haze density information of the each pixel point using only the pixel value of the each pixel point of the color image and the pixel value of the each pixel point of the infrared image.
Based on this, in some embodiments of this application, the image dehazing device may obtain the brightness information of the blue-light channel of the each pixel point from the pixel value of the each pixel point of the color image, obtain the brightness information of the infrared channel of the each pixel point from the pixel value of the each pixel point of the infrared image, and then obtain the distribution of the haze density of the each pixel point based on the brightness information of the blue-light channel and the brightness information of the infrared channel. In some other embodiments, the image dehazing device may alternatively input the color image and the infrared image into a trained haze density prediction model, extract features of the pixel value of the each pixel point of the color image and the pixel value of the each pixel point of the infrared image through the trained haze density prediction model, and perform recognition and prediction on the features, to obtain the haze density information of the each pixel point. It may be understood that, in the target scene, the distribution of haze may be uneven. Therefore, to accurately describe the distribution of haze in the target scene, the haze density information calculated in the embodiments of this application is not for the entire target scene, but corresponds one by one to the each pixel point of the obtained image in the target scene. In other words, the number of pixel points in the image in the target scene is the same as the number of pieces of haze density information. S103. Generate an image fusion factor of the each pixel point according to the haze density information, where the image fusion factor is used for controlling a fusion degree of the color image and the infrared image. In some implementations, S103may include generating an image fusion factor of the each pixel according to the haze density information, the image fusion factor indicating a fusion degree between the color image and the infrared image. In some other implementations, S103may include generating, by the device, an image fusion factor of the each pixel according to the haze density information, the image fusion factor indicating a fusion degree between the first image and the second image. The image dehazing device generates the image fusion factor of the each pixel point according to the calculated haze density information of the each pixel point, to control fusion of the color image and the infrared image using the image fusion factor and use information in the infrared image to complement missing information in the color image due to haze. Because the distribution of haze is uneven, in the color image, the degree of information missing at the each pixel point is different. Therefore, the image dehazing device generates the image fusion factor for the each pixel point, to use more information of the color image in an area with lower haze density and more information of the infrared image in an area with higher haze density. In some embodiments of this application, the image dehazing device may directly compare the haze density information of the each pixel point with a preset fusion parameter, to generate the image fusion factor of the each pixel point, or may compare the haze density information of the each pixel point with haze density information of other pixel points, to generate the image fusion factor of the each pixel point. This is not limited in the embodiments of this application. S104.
Fuse the color image and the infrared image according to the image fusion factor, to obtain a dehazed image. In some implementations, S104may include fusing, by the device, the first image and the second image according to the image fusion factor to obtain a dehazed image. After generating the image fusion factor, the image dehazing device may assign fusion weights to the pixel value of the each pixel point of the color image and the pixel value of the each pixel point of the infrared image in combination with the image fusion factor of the each pixel point. Then, the image dehazing device fuses the pixel value of the each pixel point of the color image and the pixel value of the each pixel point of the infrared image according to the fusion weights, and an image finally obtained is the dehazed image. In this way, the image dehazing device completes the entire image dehazing process. In the embodiments of this application, the image dehazing device may fuse the color image and the infrared image in an RGB space according to the fusion weights, to obtain the dehazed image; or map the color image to YUV space, then fuse the color image and the infrared image, and map the fused image reversely to the RGB space, to obtain the dehazed image. Exemplarily, the embodiments of this application provide a schematic diagram of an image dehazing process.FIG.4(a)is a schematic diagram of a pre-dehazed color image according to an embodiment of this application. It can be learned from the figure that the entire image is hazy due to a haze effect of the image and quality of the image is low. The image dehazing device calculates the haze density information of the each pixel point according to the pixel value of the each pixel point of the color image and the pixel value of the each pixel point of the infrared image obtained at the same moment with the color image, then generates the image fusion factor of the each pixel point based on the haze density information, and finally fuses the color image and the infrared image according to the image fusion factor, to obtain the dehazed image.FIG.4(b)is a schematic diagram of a dehazed image according to an embodiment of this application. It can be learned that the dehazed image has a clearer image effect and higher quality by comparingFIG.4(a)andFIG.4(b). In the embodiments of this application, when haze exists in the target scene, the terminal receives an image dehazing instruction, acquires the color image and the infrared image corresponding to the target scene at the same moment in response to the image dehazing instruction, and then calculates the haze density information of the each pixel point according to the pixel value of the each pixel point of the color image and the pixel value of the each pixel point of the infrared image. Next, the terminal generates the image fusion factor of the each pixel point based on the haze density information, and finally fuses the color image and the infrared image according to the image fusion factor, to obtain the dehazed image. In this way, the terminal can control the fusion degree of the color image and the infrared image using the image fusion factor, so that when the haze density is higher, information missing in the color image due to haze is complemented using more information of the infrared image, to improve quality of the dehazed image, thereby improving an image dehazing effect. 
In some embodiments of this application, the haze density information of the each pixel point is calculated based on the pixel value of the each pixel point of the color image and the pixel value of the each pixel point of the infrared image, which is the implementation process of S102, and may include S1021to S1022as follows: S1021. Extract a blue-light brightness value of the each pixel point from the pixel value of the each pixel point of the color image, and use the pixel value of the each pixel point of the infrared image as an infrared brightness value of the each pixel point. Because blue light has the shortest wavelength among visible light and is most likely to be affected by haze, the penetration degree of the blue light varies greatly with the haze density. Therefore, to make the difference between the infrared light and the visible light in haze penetration more pronounced, the image dehazing device may extract only the brightness of the blue-light channel in the visible light. In this case, the image dehazing device reads the pixel value of the each pixel point of the color image one by one, and extracts the pixel value of the blue-light channel of the each pixel point from the pixel value of the each pixel point as a blue-light brightness value. In addition, the image dehazing device further directly uses the pixel value of the each pixel point of the infrared image as an infrared brightness value of the each pixel point, so that the haze density information is subsequently calculated according to the blue-light brightness value and the infrared brightness value. Exemplarily, when a pixel value of a pixel point of the color image is (0, 125, 255), the image dehazing device extracts the pixel value 255 of the blue-light channel as a blue-light brightness value of the pixel point. S1022. Calculate the haze density information of the each pixel point according to a difference between the blue-light brightness value and the infrared brightness value. After obtaining the blue-light brightness value and the infrared brightness value of the each pixel point, the image dehazing device first calculates the difference between the blue-light brightness value and the infrared brightness value for the each pixel point, and then calculates the haze density information of the each pixel point according to the calculated difference. In some embodiments of this application, the image dehazing device may perform subtraction on the blue-light brightness value and the infrared brightness value of the each pixel point, and directly use the obtained difference result as the difference between the blue-light brightness value and the infrared brightness value of the each pixel point. In some other embodiments of this application, after performing subtraction on the blue-light brightness value and the infrared brightness value of the each pixel point to obtain the difference result, the image dehazing device may further perform other processing such as scaling or normalization on the difference result, and the processed difference result is used as the difference between the blue-light brightness value and the infrared brightness value of the each pixel point. A manner of calculating the difference between the blue-light brightness value and the infrared brightness value may be set according to an actual situation. This is not limited in the embodiments of this application.
In the embodiments of this application, the image dehazing device may first extract the blue-light brightness value from the pixel value of the color image and directly use the pixel value of the infrared image as the infrared brightness value, to obtain the blue-light brightness value and the infrared brightness value of the each pixel point. Next, the image dehazing device calculates the haze density information of the each pixel point based on the difference between the blue-light brightness value and the infrared brightness value of the each pixel point. In this way, the image dehazing device can obtain the haze density information of the each pixel point, and further learn the distribution of the haze density in the target scene. FIG.5is a schematic flowchart2of an image dehazing method according to an embodiment of this application. In some embodiments of this application, the haze density information of the each pixel point is calculated based on the difference between the blue-light brightness value and the infrared brightness value, which is the implementation process of S1022, and may include S1022ato S1022cas follows: S1022a. Calculate the difference between the blue-light brightness value and the infrared brightness value, to obtain a difference result of the each pixel point. When calculating the haze density information, the image dehazing device may first subtract the infrared brightness value from the blue-light brightness value, to obtain a temporary difference of the each pixel point, and then take the absolute value of the temporary difference, to obtain a non-negative difference result. The image dehazing device may alternatively subtract the blue-light brightness value from the infrared brightness value, to obtain the difference result of the each pixel point. In this way, the image dehazing device can visually and clearly learn the difference between the visible light and the infrared light in the haze penetration through the magnitude of the numeric value of the difference result. Exemplarily, when the blue-light brightness value of the each pixel point is represented as Iblue(x, y), and the infrared brightness value of the each pixel point is represented as Inir(x, y), a non-negative difference result of the each pixel point may be represented as |Iblue(x, y)−Inir(x, y)|, where (x, y) is coordinates of the pixel point. S1022b. Extract a maximum difference result from the difference result of the each pixel point as a haze density information calculation factor. After obtaining the difference result of the each pixel point, the image dehazing device compares the difference results of all pixel points, determines a maximum difference result from the difference results of all pixel points, and extracts the maximum difference result as the haze density information calculation factor. Exemplarily, when the difference result of the each pixel point is |Iblue(x, y)−Inir(x, y)|, the haze density information calculation factor may be represented as max(x,y)∈S(|Iblue(x, y)−Inir(x, y)|), (x, y) is coordinates of the pixel point, and S is a set including all the pixel points. S1022c. Compare the difference result of the each pixel point with the haze density information calculation factor, to obtain the haze density information of the each pixel point.
The image dehazing device calculates a ratio of the difference result to the haze density information calculation factor for the each pixel point using the difference result of the each pixel point as a dividend and using the haze density information calculation factor as a divisor, where the ratio is the haze density information. In some embodiments, the image dehazing device may alternatively perform scaling using the haze density information calculation factor as a dividend and using the difference result of the each pixel point as a divisor, where the obtained ratio is the haze density information. Exemplarily, the embodiments of this application provide a formula for calculating the haze density information of the each pixel point, as shown in formula (1): dN-B(x, y) = (Iblue(x, y) − Inir(x, y)) / max(x,y)∈S(Iblue(x, y) − Inir(x, y))   (1). Iblue(x, y) is the blue-light brightness value, Inir(x, y) is the infrared brightness value, (x, y) is coordinates of the pixel point, S is a set including all pixel points, and dN-B(x, y) is the haze density information. After obtaining numeric values of the blue-light brightness value Iblue(x, y) and the infrared brightness value Inir(x, y), the image dehazing device may calculate the numeric value of the haze density information dN-B(x, y) for the each pixel point. In the embodiments of this application, the image dehazing device may first calculate the difference result of the blue-light brightness value and the infrared brightness value of the each pixel point, then extract the maximum difference result from the difference result of the each pixel point as the haze density information calculation factor, and finally compare the difference result of the each pixel point with the haze density information calculation factor, where the finally obtained ratio is the haze density information. In this way, the image dehazing device can obtain the haze density information through calculation, and further learn the distribution of the haze density in the target scene. In some embodiments of this application, after the calculating, based on a pixel value of each pixel point of the color image and a pixel value of each pixel point of the infrared image, haze density information of the each pixel point, and before the generating an image fusion factor of the each pixel point according to the haze density information, that is, after S102, and before S103, the method may further include S105to S106as follows: S105. Calculate a dark channel value of the each pixel point according to the pixel value of the each pixel point of the color image. After calculating the haze density information of the each pixel point, the image dehazing device may further calculate the dark channel value for the each pixel point, and optimize the haze density information using the calculated dark channel value. The dark channel value is based on the observation that, in a non-sky area, some pixels have at least one color channel with a low value. However, when haze exists in the target scene, transmittance of the visible light decreases, so that overall brightness of an image captured by a color image pickup device increases, and the dark channel value increases. Therefore, the image dehazing device can use the dark channel value to assist in determining the haze density in the target scene. S106. Compare the haze density information with the dark channel value, to obtain a comparison result, and generate optimized haze density information for the each pixel point according to the comparison result.
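Before turning to the dark channel optimization of S105 and S106, the per-pixel haze density computation of formula (1) can be illustrated with a minimal sketch. The sketch assumes a NumPy environment, an H×W×3 RGB color image with the blue channel last, an H×W infrared image of the same resolution, and the non-negative (absolute) difference described in S1022a; the function and array names are illustrative only and are not part of the embodiments.

```python
import numpy as np

def haze_density_map(color_rgb: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Per-pixel haze density information dN-B(x, y), following formula (1)."""
    # S1021: blue-light brightness value (RGB channel order assumed) and
    # infrared brightness value of the each pixel point.
    i_blue = color_rgb[..., 2].astype(np.float64)
    i_nir = nir.astype(np.float64)

    # S1022a: non-negative difference result of the each pixel point.
    diff = np.abs(i_blue - i_nir)

    # S1022b: maximum difference result as the haze density information calculation factor.
    factor = diff.max()
    if factor == 0:  # uniform scene; assumed guard, not specified in the text
        return np.zeros_like(diff)

    # S1022c / formula (1): ratio of each difference result to the calculation factor.
    return diff / factor
```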
The image dehazing device compares the haze density information with the dark channel value, to obtain the comparison result for the each pixel point. When the comparison result indicates that the dark channel value is less than the haze density information, the image dehazing device selects the dark channel value as the optimized haze density information; and when the comparison result indicates that the dark channel value is greater than the haze density information, the image dehazing device still uses the haze density information as the optimized haze density information. In other words, the image dehazing device may select the smaller one from the dark channel value and the haze density information as the optimized haze density information, to correct and optimize the haze density information using the dark channel value. Exemplarily, the embodiments of this application provide a formula for calculating the optimized haze density information, as shown in formula (2): Dhaze(x,y)=min(dN-B(x,y),Jdark(x,y))  (2) dN-B(x, y) is the haze density information, Jdark(x, y) is the dark channel value, Dhaze(x, y) is the optimized haze density information, and (x, y) is coordinates of the pixel point. After learning numeric values of the haze density information dN-B(x, y) and the dark channel value Jdark(x, y), the image dehazing device may substitute the numeric values of the above parameters into formula (2), to obtain the numeric value of the optimized haze density information Dhaze(x, y). It may be understood that, after the image dehazing device obtains the optimized haze density information, the process of generating an image fusion factor of the each pixel point according to the haze density information, that is, the implementation process of S103, correspondingly changes into the process of generating the image fusion factor of the each pixel point according to the optimized haze density information. In the embodiments of this application, the image dehazing device may alternatively calculate the dark channel value for the each pixel point, then compare the haze density information with the dark channel value, and select the smaller one as the optimized haze density information. In this way, the image dehazing device can correct and optimize the haze density information using the dark channel value, so that the obtained optimized haze density information can more accurately describe the distribution of the haze density in the target scene, and the subsequently calculated image fusion factor is more accurate. In some embodiments of this application, the dark channel value of the each pixel point is calculated according to the pixel value of the each pixel point of the color image, which is the implementation process of S105, and may include S1051to S1054as follows: S1051. Obtain a blue-light brightness value, a red-light brightness value, and a green-light brightness value respectively from the pixel value of the each pixel point of the color image. For the pixel value of the each pixel point of the color image, the image dehazing device reads a pixel value of a blue-light channel as a blue-light brightness value, and reads a pixel value of a red-light channel as a red-light brightness value, and reads a pixel value of a green-light channel as a green-light brightness value, so that the image dehazing device subsequently computes the dark channel value according to the blue-light brightness value, the green-light brightness value, and red-light brightness value. S1052. 
Select a minimum brightness value from the blue-light brightness value, the red-light brightness value, and the green-light brightness value as a candidate brightness value of the each pixel point. The image dehazing device compares magnitudes of the blue-light brightness value, the red-light brightness value, and the green-light brightness value of the each pixel point, to determine a magnitude relationship among the blue-light brightness value, the red-light brightness value, and the green-light brightness value of the each pixel point, and then selects the minimum brightness value from the brightness values of the three channels as the candidate brightness value of the each pixel point, so that the dark channel value is subsequently determined from the candidate brightness value. Exemplarily, when the brightness value of each channel at the each pixel point is represented as JC(x, y), the candidate brightness value of the each pixel point may be represented as minC∈{R,G,B} JC(x, y). R is the red-light channel, G is the green-light channel, and B is the blue-light channel. S1053. Determine all pixel points in a preset pixel range as neighborhood pixel points for the each pixel point. The image dehazing device, using a pixel point as a center, determines all the pixel points in the preset pixel range as the neighborhood pixel points of the pixel point. The image dehazing device may obtain the neighborhood pixel points corresponding to the each pixel point by performing this operation on the each pixel point. In the embodiments of this application, the preset pixel range may be set according to an actual situation. For example, the preset pixel range may be set as 2×2, that is, the image dehazing device uses four pixel points including an upper pixel point, a lower pixel point, a left pixel point, and a right pixel point of the each pixel point as the neighborhood pixel points, or the preset pixel range may be set as 3×3, that is, the image dehazing device uses eight pixel points surrounding the each pixel point as the neighborhood pixel points. This is not limited in the embodiments of this application. S1054. Select a minimum candidate brightness value from candidate brightness values of the neighborhood pixel points as the dark channel value of the each pixel point. Because each of the neighborhood pixel points has a corresponding candidate brightness value, the image dehazing device selects, for the each pixel point, the minimum candidate brightness value from the candidate brightness values corresponding to the neighborhood pixel points of the each pixel point as the dark channel value of the each pixel point. Exemplarily, when four pixel points including an upper pixel point, a lower pixel point, a left pixel point, and a right pixel point of a pixel point are used as the neighborhood pixel points, the image dehazing device selects a minimum candidate brightness value from the candidate brightness values respectively corresponding to the four pixel points including the upper pixel point, the lower pixel point, the left pixel point, and the right pixel point as the dark channel value of the pixel point.
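The following is a minimal sketch of S1051 to S1054, assuming a NumPy/SciPy environment and a 3×3 preset pixel range; the window size and the use of scipy.ndimage.minimum_filter are illustrative assumptions rather than requirements of the embodiments. Formula (3) below formalizes the same computation.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_map(color_rgb: np.ndarray, window: int = 3) -> np.ndarray:
    """Dark channel value of the each pixel point (S1051 to S1054)."""
    # S1051/S1052: candidate brightness value = minimum of the red-, green-,
    # and blue-light brightness values at the each pixel point.
    candidate = color_rgb.astype(np.float64).min(axis=2)

    # S1053/S1054: minimum candidate brightness value over the neighborhood
    # window W determined by the preset pixel range (3 x 3 assumed here).
    return minimum_filter(candidate, size=window, mode="nearest")
```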
Exemplarily, the embodiments of this application provide a formula for calculating the dark channel value, as shown in formula (3): Jdark(x, y) = min(x,y)∈W(minC∈{R,G,B} JC(x, y))   (3). JC(x, y) is the brightness value of each channel of the each pixel point, minC∈{R,G,B} JC(x, y) is the candidate brightness value of the each pixel point, W is a window determined using the each pixel point and the preset pixel range, and Jdark(x, y) is the dark channel value. After learning numeric values of the above parameters, the image dehazing device may substitute the numeric values of the above parameters into formula (3), to obtain the dark channel value of the each pixel point. In the embodiments of this application, the image dehazing device may first obtain the blue-light brightness value, the red-light brightness value, and the green-light brightness value of the each pixel point of the color image, select the minimum brightness value from the brightness values of the three channels as the candidate brightness value, then determine the neighborhood pixel points for the each pixel point, and select the minimum candidate brightness value from the candidate brightness values corresponding to the neighborhood pixel points as the dark channel value of the each pixel point. In this way, the image dehazing device can complete the process of calculating the dark channel value for the each pixel point, so that the haze density information is subsequently optimized using the dark channel value. In some embodiments of this application, the image fusion factor of the each pixel point is generated according to the haze density information, which is the implementation process of S103, and may include S1031to S1032as follows: S1031. Select maximum haze density information from the haze density information. S1032. Compare the haze density information with the maximum haze density information, to obtain the image fusion factor of the each pixel point. The image dehazing device first compares the magnitudes of the haze density information of the pixel points, and selects and marks the maximum haze density information. Then, the image dehazing device compares all the haze density information of the each pixel point with the selected maximum haze density information, where the obtained ratio result is the image fusion factor of the each pixel point. When fusing the color image and the infrared image, the image dehazing device uses information in the infrared image to complement brightness information in the color image, instead of simply superimposing brightness of the infrared image and the color image. Therefore, the image fusion factor needs to be no greater than 1, and the haze density information may be compared with the maximum haze density information to achieve this objective. Exemplarily, the embodiments of this application provide a formula for calculating the image fusion factor, as shown in formula (4): w(x, y) = dN-B(x, y) / max(x,y)∈S(dN-B(x, y))   (4). dN-B(x, y) is the haze density information, max(x,y)∈S(dN-B(x, y)) is the maximum haze density information, w(x, y) is the image fusion factor, and S is a set including all the pixel points. After learning numeric values of the haze density information and the maximum haze density information, the image dehazing device substitutes the numeric values into formula (4), to obtain the numeric value of the image fusion factor.
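A short sketch of formulas (2) and (4) is given below, combining the dark channel optimization with the generation of the image fusion factor. It assumes the haze density information is already normalized to [0, 1] (as produced by formula (1)) and that the dark channel value comes from 8-bit pixel values, so it is rescaled before the comparison; these scaling choices are assumptions of the sketch, not requirements of the embodiments.

```python
import numpy as np

def fusion_factor_map(haze_density: np.ndarray, dark_channel: np.ndarray) -> np.ndarray:
    """Optimized haze density (formula (2)) and image fusion factor (formula (4))."""
    # Formula (2): take the smaller of the haze density information and the dark
    # channel value; the dark channel is rescaled to 0..1 here (8-bit input assumed).
    d_haze = np.minimum(haze_density, dark_channel / 255.0)

    # Formula (4) (equivalently formula (5) when the optimized density is used):
    # divide by the maximum haze density information so no factor exceeds 1.
    max_d = d_haze.max()
    return d_haze / max_d if max_d > 0 else np.zeros_like(d_haze)
```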
In the embodiments of this application, the image dehazing device may first select the maximum haze density information from the haze density information, then compare the haze density information with the maximum haze density information, and use the ratio result as the image fusion factor. In this way, the image dehazing device can obtain the image fusion factor, so that fusion of the color image and the infrared image is subsequently controlled using the image fusion factor. FIG.6is a schematic flowchart3of an image dehazing method according to an embodiment of this application. In some embodiments of this application, the color image and the infrared image are fused according to the image fusion factor, to obtain the dehazed image, which is the implementation process of S104, and may include S1041to S1044as follows: S1041. Map the color image from an RGB space to a YUV space, to obtain mapping brightness information and mapping chrominance information of the each pixel point. In some implementations, the mapping brightness information may be referred to as a mapped brightness value; and/or the mapping chrominance information may be referred to as a mapped chrominance value. S1041may include mapping the color image from an RGB space to a YUV space, to obtain a mapped brightness value and a mapped chrominance value of the each pixel. In the embodiments of this application, when fusing the infrared image and the color image, the image dehazing device first maps the color image from the RGB space to the YUV space, to obtain the mapping brightness information of the each pixel point, that is, Y-channel information, and the mapping chrominance information, that is, U-channel and V-channel information. It may be understood that, when mapping the color image to the YUV space, the image dehazing device may calculate the mapping brightness information and the mapping chrominance information of the each pixel point respectively according to Y=0.3R+0.59G+0.11B, U=0.493(B-Y), and V=0.877(R-Y). S1042. Calculate a first brightness result of the each pixel point using the image fusion factor and the infrared brightness value of the each pixel point of the infrared image, and calculate a second brightness result of the each pixel point using the image fusion factor and the mapping brightness information. In some implementations, S1042may include calculating a first brightness result of the each pixel based on the image fusion factor and the second pixel value of the each pixel of the infrared image, and calculating a second brightness result of the each pixel using the image fusion factor and the mapped brightness value. After obtaining the mapping brightness information and the mapping chrominance information, the image dehazing device may fuse the infrared image and the color image using the mapping brightness information. In this case, the image dehazing device first weights the infrared brightness value using the image fusion factor, or performs an exponent operation on the infrared brightness value using the image fusion factor, to obtain the first brightness result of the each pixel point, that is, brightness information needing to be complemented from the infrared image. Then, the image dehazing device calculates the second brightness result of the each pixel point using the image fusion factor and the mapping brightness information, the second brightness result being brightness information needing to be provided from the color image.
For example, the mapping brightness information and the image fusion factor are multiplied, to obtain the second brightness result. S1043. Superpose the first brightness result and the second brightness result, to obtain fused brightness information of the each pixel point. S1044. Map the fused brightness information and the mapping chrominance information reversely to the RGB space, to obtain the dehazed image. In some implementations, S1044may include reversely mapping the fused brightness information and the mapped chrominance value from the YUV space to the RGB space, to obtain the dehazed image. After obtaining the first brightness result and the second brightness result, the image dehazing device superposes the first brightness result and the second brightness result, and uses the finally superposed result as the fused brightness information of the each pixel point. Then, the image dehazing device combines the fused brightness information and the original mapping chrominance information, and uses the reverse mapping from YUV to RGB, to obtain the final dehazed image. In the embodiments of this application, the image dehazing device may first map the color image to the YUV space, to obtain the mapping brightness information and the mapping chrominance information of the each pixel point, then fuse the infrared brightness value and the mapping brightness information in combination with the image fusion factor, to obtain the fused brightness information of the each pixel point, and finally map the fused brightness information in combination with the mapping chrominance information reversely to the RGB space, to obtain the dehazed image. In this way, the image dehazing device can use the brightness information of the infrared image to complement the brightness information of the color image, to obtain a dehazed image of higher quality. Based onFIG.3,FIG.7is a schematic flowchart4of an image dehazing method according to an embodiment of this application. In some embodiments of this application, before the acquiring, when an image dehazing instruction is obtained, a color image and an infrared image corresponding to a target scene at the same moment in response to the image dehazing instruction, that is, before S101, the method may further include S107to S109as follows: S107. Acquire an initial color image of the target scene when an image acquisition instruction is received. In some implementations, the initial color image of the target scene does not need to be in color; it may be a black-and-white (or grayscale) image, or it may be an infrared image (e.g., under low-light conditions). In some implementations, S107may include acquiring an initial image of the target scene. S108. Perform haze detection on the initial color image, to obtain a detection result. In some implementations, S108may include performing haze detection on the initial image, to obtain a detection result. Before obtaining the image dehazing instruction, the image dehazing device needs to first generate the image dehazing instruction. The image dehazing device first acquires the initial color image of the target scene when the image acquisition instruction is received, and detects whether haze exists in the initial color image, to obtain the detection result. It may be understood that, the image acquisition instruction may be an instruction triggered by a user and indicates that the user needs to start to acquire an image in the target scene.
Exemplarily, when identity verification is performed on an image including a human face in the target scene, the image acquisition instruction may be triggered after the user enables an identity verification function. The image acquisition instruction may alternatively be triggered regularly by an image acquisition device and indicates that the image acquisition device needs to regularly acquire an image in the target scene. Exemplarily, the vehicle situation at each crossroad may be acquired regularly at 7 a.m. every day. The image dehazing device may automatically detect whether haze exists in the initial color image using a machine learning method or a deep learning method. For example, the image dehazing device may train a deep learning model using hazy training pictures. After obtaining the initial color image, the image dehazing device uses the deep learning model to perform haze detection on the initial color image, where a result outputted by the deep learning model is the detection result. The image dehazing device may alternatively automatically determine whether haze exists in the initial color image based on the overall brightness and chrominance of the image. For example, when the overall brightness is high and the chrominance is low, it is determined that haze exists in the initial color image. S109. Generate the image dehazing instruction when the detection result indicates that haze exists in the initial color image. In some implementations, S109may include determining whether the detection result indicates that haze exists in the initial image; and/or in response to determining that the detection result indicates that haze exists in the initial image, generating the image dehazing instruction. When the detection result obtained by the image dehazing device indicates that haze exists in the color image, the image dehazing device generates the image dehazing instruction, so that an image dehazing function is subsequently triggered according to the image dehazing instruction, to obtain the dehazed image of the target scene. When the detection result obtained by the image dehazing device indicates that no haze exists in the color image, the image dehazing device does not generate the image dehazing instruction. In this case, the image dehazing device directly completes a subsequent function such as identity verification or security monitoring based on the initial color image without entering the image dehazing process. In the embodiments of this application, when the image acquisition instruction is received, the image dehazing device acquires the initial color image of the target scene, and performs the haze detection on the initial color image, to obtain the detection result. Only when the detection result indicates that haze exists in the initial color image does the image dehazing device generate the image dehazing instruction, to trigger the image dehazing function. In this way, the image dehazing device can determine whether to enter the image dehazing process according to an actual situation of the target scene, so that when no haze exists in the target scene, no image dehazing is performed, to save processing time.
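As one possible illustration of the brightness/chrominance heuristic mentioned above for S108 and S109, the sketch below flags haze when the overall brightness is high and a simple chrominance measure is low. The thresholds and the chrominance measure are assumptions chosen for illustration only; a trained haze detection model, as also described above, could be used instead.

```python
import numpy as np

# Illustrative thresholds only (assumed values); real deployments would tune
# them or rely on a trained haze detection model instead.
BRIGHTNESS_THRESHOLD = 160.0   # mean brightness on a 0..255 scale
CHROMINANCE_THRESHOLD = 20.0   # mean per-pixel channel spread

def haze_detected(initial_rgb: np.ndarray) -> bool:
    """Rough detection result for triggering the image dehazing instruction (S108/S109)."""
    rgb = initial_rgb.astype(np.float64)
    brightness = rgb.mean()
    # Use the spread between the largest and smallest channel as a crude
    # chrominance measure: hazy frames tend to look bright and washed out.
    chroma = (rgb.max(axis=2) - rgb.min(axis=2)).mean()
    return brightness > BRIGHTNESS_THRESHOLD and chroma < CHROMINANCE_THRESHOLD
```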
In some embodiments of this application, the first brightness result of the each pixel point is calculated using the image fusion factor and the infrared brightness value of the each pixel point of the infrared image, and the second brightness result of the each pixel point is calculated using the image fusion factor and the mapping brightness information, which is the implementation process of S1042, and may include S1042ato S1042bas follows: S1042a. Multiply the image fusion factor by the infrared brightness value, to obtain the first brightness result. The image dehazing device directly multiplies the image fusion factor by the infrared brightness value, and uses the obtained product as the first brightness result. Exemplarily, when the image fusion factor is represented as w(x, y) and the infrared brightness value is represented as Inir(x, y), the obtained first brightness result may be represented as w(x, y)×Inir(x, y). S1042b. Multiply the mapping brightness information by the image fusion factor, to obtain a product result, and perform subtraction on the mapping brightness information and the product result, to obtain the second brightness result. The image dehazing device first multiplies the mapping brightness information by the image fusion factor, to obtain a product result, and then subtracts the product result from the mapping brightness information, where the obtained difference is the second brightness result. Exemplarily, when the image fusion factor is represented as w(x, y) and the mapping brightness information is represented as Ivis(x, y), the second brightness result may be represented as (1−w(x, y))×Ivis(x, y). In the embodiments of this application, the image dehazing device first multiplies the image fusion factor by the infrared brightness value, to obtain the first brightness result, then multiplies the mapping brightness information by the image fusion factor, to obtain the product result, and finally performs subtraction on the mapping brightness information and the product result, to obtain the second brightness result. In this way, the image dehazing device can calculate the first brightness result and the second brightness result, so that the fused brightness information is subsequently calculated according to the first brightness result and the second brightness result. In some embodiments of this application, after the comparing the haze density information with the maximum haze density information, to obtain the image fusion factor of the each pixel point, that is, after S1032, the method may further include S1033as follows: S1033. Perform filter optimization on the image fusion factor, to obtain an optimized image fusion factor. When the haze density information is directly compared with the maximum haze density information, and the infrared image and the color image are fused using the obtained image fusion factor, there may be an effect such as a halo or a double image in the dehazed image, affecting quality of the dehazed image. To avoid the problem, in the embodiments of this application, after obtaining the image fusion factor, the image dehazing device may perform the filter optimization on the image fusion factor using a filter, to obtain the optimized image fusion factor, and subsequently fuse the color image and the infrared image according to the optimized image fusion factor, to obtain the dehazed image, to further improve the quality of the dehazed image. 
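A minimal sketch of the filter optimization of S1033 is shown below, using a mean filter; the embodiments also allow a guided image filter or another filter type, as noted in the following paragraph. The 7×7 window size is an assumption of the sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_fusion_factor(w: np.ndarray, window: int = 7) -> np.ndarray:
    """Filter optimization of the image fusion factor (S1033), using a mean filter."""
    # Smooth the per-pixel fusion factor to suppress halos and double images.
    w_smooth = uniform_filter(w.astype(np.float64), size=window, mode="nearest")
    # Keep the optimized factor within the [0, 1] range expected by the fusion rule.
    return np.clip(w_smooth, 0.0, 1.0)
```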
That is, after the image dehazing device obtains the optimized image fusion factor, the process of fusing the color image and the infrared image according to the image fusion factor, that is, the implementation process of S104, correspondingly changes to the process of fusing the color image and the infrared image according to the optimized image fusion factor, to obtain the dehazed image. In the embodiments of this application, the image dehazing device may select a guided image filter to smooth the image fusion factor, or select a mean filter to smooth the image fusion factor, or may select a filter of another type to smooth the image fusion factor. This is not limited in the embodiments of this application. In the embodiments of this application, after obtaining the image fusion factor, the image dehazing device may perform the filter optimization on the image fusion factor, to obtain the optimized image fusion factor, so that the image dehazing device may subsequently control the fusion of the color image and the infrared image based on the optimized image fusion factor, to obtain the dehazed image of higher quality. An exemplary application of the embodiments of this application in an actual application scenario is described below. The embodiments of this application are implemented using a scenario in which a human face is used to perform identity verification.FIG.8is a schematic diagram of an image pickup system according to an embodiment of this application. The image pickup system (the image dehazing device) includes an infrared light imaging system8-1and a visible light imaging system8-2. The infrared light imaging system8-1acquires an infrared light human face picture8-3(an infrared image) of a human face, and the visible light imaging system8-2acquires a visible light human face picture8-4(a color image) of the human face. Then, the image pickup system transmits the infrared light human face picture8-3and the visible light human face picture8-4together to a dehazing system8-5, to obtain a hazeless picture8-6(a dehazed image) of the human face. Because infrared light has a wavelength greater than that of all visible light and can penetrate haze to form an image, its penetration degree changes only slightly when the depth of field and the haze density change. However, among the blue light, red light, and green light of the visible light, the blue light has the shortest wavelength, and the penetration degree of the blue light changes greatly as the depth of field and the haze density change. Therefore, by analyzing a brightness difference between the infrared channel and the blue-light channel, the haze density information may be obtained. In this case, the image pickup system may calculate the brightness difference between the infrared channel and the blue-light channel according to formula (1). In the embodiments of this application, the result obtained through formula (1) is a brightness difference factor (the haze density information). Because, in most non-sky areas, some pixel points have at least one color channel with a low value, the image pickup system may further calculate a dark channel value, to represent the haze density. In the embodiments of this application, the image pickup system may calculate the dark channel value according to formula (3).
Then, the image pickup system selects the smaller one from the brightness difference factor and the dark channel value of a pixel point as the atmosphere haze particle distribution (the optimized haze density information) of the pixel point, that is, the atmosphere haze particle distribution of the each pixel point is obtained through formula (2), to obtain an atmosphere haze particle distribution diagram. Although the infrared light human face picture8-3may clearly reflect scene information in a hazy area, it suffers from problems of heavy noise, low contrast, and insufficient detail, and when the image pickup system uses an inappropriate noise reduction method, the final quality of the image is reduced due to these inherent shortcomings. Compared with the infrared light human face picture8-3, the visible light human face picture8-4has higher definition, and infrared information does not need to be added during noise reduction in a way that would degrade the visible light information. Based on this, the image pickup system may fuse the infrared light human face picture8-3and the visible light human face picture8-4, using more infrared information in an area with higher haze density and more visible light information in an area with lower haze density, that is, the haze density determines to what degree the infrared information and the visible light information are used. In this case, the image pickup system may calculate a fusion weighting factor (the image fusion factor) according to formula (5): w(x, y) = Dhaze(x, y) / max(x,y)∈S(Dhaze(x, y))   (5). Dhaze(x, y) is the atmosphere haze particle distribution, and w(x, y) is the fusion weighting factor. Because an artifact such as a halo or a double image appears when the fusion weighting factor is used directly, the image pickup system uses the guided image filter to smooth the fusion weighting factor (obtaining a filter-optimized image fusion factor), to suppress the artifact and improve the image quality. After the image pickup system obtains the fusion weighting factor and performs the filter optimization on the fusion weighting factor, a fusion rule is established according to formula (6), to obtain the hazeless picture8-6: I(x,y)=w(x,y)×Inir(x,y)+(1−w(x,y))×Ivis(x,y)  (6) w(x, y) is the fusion weighting factor, Inir(x, y) is the brightness of the infrared light human face picture8-3(the infrared brightness value), Ivis(x, y) is the brightness of the visible light human face picture8-4(the mapping brightness information), and I(x, y) is the hazeless picture8-6. Because the visible light human face picture8-4includes brightness information and chrominance information (the mapping chrominance information), and the infrared light human face picture8-3includes no chrominance information, before the image fusion, the image pickup system needs to separate the brightness information and the chrominance information of the visible light human face picture8-4and extract the brightness information. That is, the image pickup system maps the visible light human face picture8-4from the RGB space to the YUV space, superposes the brightness information and the brightness of the infrared light human face picture8-3, to obtain a new brightness component (the fused brightness information), and maps the new brightness component in combination with the previous chrominance information reversely to the RGB space, to obtain the dehazed picture8-6of the human face.
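To tie the exemplary application together, the sketch below implements the fusion rule of formula (6) with the YUV separation described above, using the Y, U, and V coefficients given for S1041. The inverse RGB mapping is obtained by algebraically inverting that forward transform and is therefore an assumption of the sketch rather than a formula quoted from the embodiments; 0..255 inputs and RGB channel order are also assumed.

```python
import numpy as np

def fuse_dehaze(color_rgb: np.ndarray, nir: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Fuse the visible and infrared pictures according to formula (6)."""
    rgb = color_rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # S1041: map the color image from the RGB space to the YUV space.
    y = 0.3 * r + 0.59 * g + 0.11 * b    # mapping brightness information
    u = 0.493 * (b - y)                  # mapping chrominance information
    v = 0.877 * (r - y)

    # S1042/S1043 and formula (6): weighted superposition of the infrared
    # brightness value and the mapping brightness information, using the
    # (filter-optimized) fusion weighting factor w.
    y_fused = w * nir.astype(np.float64) + (1.0 - w) * y

    # S1044: reverse mapping to the RGB space, derived by inverting the
    # forward transform above (derived here, not quoted from the text).
    r_out = y_fused + v / 0.877
    b_out = y_fused + u / 0.493
    g_out = (y_fused - 0.3 * r_out - 0.11 * b_out) / 0.59

    dehazed = np.stack([r_out, g_out, b_out], axis=-1)
    return np.clip(dehazed, 0.0, 255.0).astype(np.uint8)
```

Under these assumptions, the complete pipeline would chain haze_density_map, dark_channel_map, fusion_factor_map, smooth_fusion_factor, and fuse_dehaze from the earlier sketches in this description.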
Through the foregoing manner, the image pickup system can control the fusion of the infrared light human face picture8-3and the visible light human face picture8-4using the fusion weighting factor, so that the brightness information of the infrared light is used to complement the brightness information of the visible light when the haze density is high, and the quality of the dehazed picture8-6is improved. An exemplary structure of the image dehazing apparatus455provided in the embodiments of this application, implemented as a software module, continues to be described below. In some embodiments, as shown inFIG.2, the software module of the image dehazing apparatus455stored in the memory450may include: an image acquisition part4551, configured to acquire, when an image dehazing instruction is obtained, a color image and an infrared image corresponding to a target scene at the same moment in response to the image dehazing instruction, where the image dehazing instruction is triggered when haze exists in the target scene; a haze density determining part4552, configured to calculate, based on a pixel value of each pixel point of the color image and a pixel value of each pixel point of the infrared image, haze density information of each pixel point, where the haze density information describes the distribution of haze density in the target scene; a factor generating part4553, configured to generate an image fusion factor of each pixel point according to the haze density information, where the image fusion factor is used for controlling a fusion degree of the color image and the infrared image; and an image fusion part4554, configured to fuse the color image and the infrared image according to the image fusion factor, to obtain a dehazed image. In some embodiments of this application, the haze density determining part4552is configured to: extract a blue-light brightness value of each pixel point from the pixel value of each pixel point of the color image, use the pixel value of each pixel point of the infrared image as an infrared brightness value of each pixel point, and calculate the haze density information of each pixel point according to a difference between the blue-light brightness value and the infrared brightness value. In some embodiments of this application, the haze density determining part4552is configured to: calculate a difference between the blue-light brightness value and the infrared brightness value, to obtain a difference result of each pixel point; extract a maximum difference result from the difference results of the pixel points as a haze density information calculation factor; and compare the difference result of each pixel point with the haze density information calculation factor, to obtain the haze density information of each pixel point. In some embodiments of this application, the haze density determining part4552is further configured to: calculate a dark channel value of each pixel point according to the pixel value of each pixel point of the color image; compare the haze density information with the dark channel value, to obtain a comparison result; and generate optimized haze density information for each pixel point according to the comparison result. Correspondingly, the factor generating part4553is further configured to generate the image fusion factor of each pixel point according to the optimized haze density information.
In some embodiments of this application, the haze density determining part4552is configured to: obtain a blue-light brightness value, a red-light brightness value, and a green-light brightness value respectively from the pixel value of each pixel point of the color image; select a minimum brightness value from the blue-light brightness value, the red-light brightness value, and the green-light brightness value as a candidate brightness value of each pixel point; determine all pixel points in a preset pixel range as neighborhood pixel points for each pixel point; and select a minimum candidate brightness value from the candidate brightness values of the neighborhood pixel points as the dark channel value of each pixel point. In some embodiments of this application, the factor generating part4553is configured to: select maximum haze density information from the haze density information; and compare the haze density information with the maximum haze density information, to obtain the image fusion factor of each pixel point. In some embodiments of this application, the image fusion part4554is configured to: map the color image from an RGB space to a YUV space, to obtain mapping brightness information and mapping chrominance information of each pixel point; calculate a first brightness result of each pixel point using the image fusion factor and the infrared brightness value of each pixel point of the infrared image, and calculate a second brightness result of each pixel point using the image fusion factor and the mapping brightness information; superpose the first brightness result and the second brightness result, to obtain fused brightness information of each pixel point; and map the fused brightness information and the mapping chrominance information back to the RGB space, to obtain the dehazed image. In some embodiments of this application, the image dehazing apparatus455further includes an instruction generating part4555. The instruction generating part4555is configured to: acquire an initial color image of the target scene when an image acquisition instruction is received; perform haze detection on the initial color image, to obtain a detection result; and generate the image dehazing instruction when the detection result indicates that haze exists in the initial color image. In some embodiments of this application, the image fusion part4554is configured to: multiply the image fusion factor by the infrared brightness value, to obtain the first brightness result; and multiply the mapping brightness information by the image fusion factor, to obtain a product result, and subtract the product result from the mapping brightness information, to obtain the second brightness result. In some embodiments of this application, the factor generating part4553is configured to perform filter optimization on the image fusion factor, to obtain an optimized image fusion factor. Correspondingly, the image fusion part4554is further configured to fuse the color image and the infrared image according to the optimized image fusion factor, to obtain the dehazed image.
An embodiment of this application provides a computer storage medium storing an executable image dehazing instruction, the computer storage medium being configured to cause, when the executable image dehazing instruction is executed by a processor, the processor to implement the image dehazing method provided in the embodiments of this application, such as the method shown inFIG.3,FIG.5,FIG.6, orFIG.7. In some embodiments, the computer storage medium may be a memory such as an FRAM, a ROM, a PROM, an EPROM, an EEPROM, a flash memory, a magnetic surface memory, a disc, or a CD-ROM; or may be any device including one of or any combination of the foregoing memories. In various embodiments in the present disclosure, a unit may refer to a software unit, a hardware unit, or a combination thereof. A software unit may include a computer program or part of the computer program that has a predefined function and works together with other related parts to achieve a predefined goal, such as those functions described in this disclosure. A hardware unit may be implemented using processing circuitry and/or memory configured to perform the functions described in this disclosure. Each unit can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more units. Moreover, each unit can be part of an overall unit that includes the functionalities of the unit. The description here also applies to the term unit and other equivalent terms. In various embodiments in the present disclosure, a module may refer to a software module, a hardware module, or a combination thereof. A software module may include a computer program or part of the computer program that has a predefined function and works together with other related parts to achieve a predefined goal, such as those functions described in this disclosure. A hardware module may be implemented using processing circuitry and/or memory configured to perform the functions described in this disclosure. Each module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules. Moreover, each module can be part of an overall module that includes the functionalities of the module. The description here also applies to the term module and other equivalent terms. In some embodiments, the executable image dehazing instruction may adopt the form of programs, software, software modules, scripts, or code, be written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and be deployed in any form, including being deployed as an independent program or as a module, component, subroutine, or other unit suitable for use in a computing environment. As an example, the executable instruction may, but does not necessarily, correspond to a file in the file system, and may be stored as part of a file that saves other programs or data, for example, in one or more scripts of a hypertext markup language (HTML) document, in a single file dedicated to the program discussed, or in multiple coordinated files (for example, files storing one or more modules, subprograms, or code parts).
As an example, the executable instruction may be deployed to be executed on one computing device, on a plurality of computing devices located at one location, or on a plurality of computing devices that are distributed across a plurality of locations and interconnected by a communication network. The foregoing descriptions are merely embodiments of this application, but are not intended to limit this application. Any modification, equivalent replacement, or improvement made without departing from the spirit and scope of this application shall fall within the protection scope of this application.
82,430
11861811
DETAILED DESCRIPTION A neural network structure, namely a warped external recurrent neural network, may be used for reconstructing data. The warped external recurrent neural network can be applied to reconstruct image data and non-image data, such as audio data, data acquired by depth sensors (e.g., lidar, radar, and the like), data acquired by temperature sensors, density data (e.g., medical imaging and geological), and the like. The warped external recurrent neural network is not recurrent at each layer and has a feed-forward flow, warping only the external state output by the final layer. In contrast, in a conventional recurrent neural network, the hidden state generated at each layer is provided as a feedback input to the generating layer. The warped external recurrent neural network is trained end-to-end to minimize the error between pairs of aliased and antialiased images. During supervised training, the warped external recurrent neural network learns to identify aliased image features and to adaptively remove (i.e., filter out) the undesirable artifacts (e.g., aliased image features) and/or modify areas with missing and incorrect information. After being trained, the warped external recurrent neural network may be deployed to reconstruct data. FIG.1Aillustrates a block diagram of a warped external recurrent neural network100, in accordance with an embodiment. The warped external recurrent neural network includes an encoder/decoder neural network model110, a temporal warp function115, and a combiner function120. Although the warped external recurrent neural network100is described in the context of processing units, one or more of the encoder/decoder neural network model110, the temporal warp function115, and the combiner function120may be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the encoder/decoder neural network model110may be implemented by a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of implementing layers of a neural network. Furthermore, persons of ordinary skill in the art will understand that any system that performs the operations of the warped external recurrent neural network100is within the scope and spirit of embodiments of the present invention. The encoder/decoder neural network model110receives input data at time t and warped external state from the previous iteration, i.e., at time t−1. The input data includes artifacts that are removed during the reconstruction process to produce output data that approximates the input data without the artifacts. In the context of the following description, output data that approximates the input data without the artifacts has fewer artifacts compared with the input data. The warped external state from time t−1 includes warped reconstructed data from time t−1. The encoder/decoder neural network model110processes the input data and the warped external state using multiple layers to produce at least one filter kernel. In an embodiment, the at least one filter kernel is a collection of filter kernels corresponding to different spatial areas. The combiner function120receives the at least one filter kernel, the input data from time t, and warped reconstructed data from time t−1. The combiner function120applies at least a first portion of the at least one filter kernel to the reconstructed data to produce filtered first input data.
The combiner function120applies at least a second portion of the at least one filter kernel to the input data to produce filtered second input data. In an embodiment, the filtered first input data corresponds to a portion of the input data from time t−1 without artifacts and the filtered second input data corresponds to a portion of the input data from time t. The combiner function120then sums the filtered first input data and the filtered second input data to produce at least a portion of the external state at time t. The external state at time t includes a portion of the reconstructed input data at time t−1. In an embodiment, the at least one filter kernel is applied to different portions of the reconstructed data and the input data to produce remaining portions of the external state. The temporal warp function115also receives per-datum differences (difference data) corresponding to the input data at time t and the external state from time t−1. Note that when the input data is image data, the per-datum differences may be motion flow or motion vectors. The temporal warp function115warps the external state based on the per-datum differences to produce the warped external state for time t−1. The warping aligns the external state from time t−1 to the input data at time t. In the context of the following description, the external state includes hidden state from only the last layer of the encoder/decoder neural network model110and reconstructed data that approximates the input data without artifacts. The hidden state generated by the last layer may implicitly include information from previous input data frames that is incorporated into the hidden state for each timestep. In an embodiment, the external state comprises features of the input data that are extracted by the encoder/decoder neural network model110. In an embodiment, the encoder/decoder neural network model110receives the warped external state including the reconstructed data from time t−1 and the combiner function120receives the reconstructed data from time t−1. More illustrative information will now be set forth regarding various optional architectures and features with which the foregoing framework may be implemented, per the desires of the user. It should be strongly noted that the following information is set forth for illustrative purposes and should not be construed as limiting in any manner. Any of the following features may be optionally incorporated with or without the exclusion of other features described. When the warped external recurrent neural network100is used to generate antialiased images, the input images include aliasing artifacts and the external state includes a reconstructed image that is antialiased. As shown inFIG.1A, the external state is warped by the temporal warp function115according to difference data to produce warped external state including a processed warped reconstructed image for a first image (time t−1) in a sequence. The warping aligns the external state to the next aliased input image (time t) in the sequence. The next aliased input image and the warped external state are processed by the encoder/decoder neural network model110to produce second external state including a reconstructed second image that approximates the next aliased input image (time t) in the sequence without artifacts (i.e., the antialiased next image). In an embodiment, the external state includes the reconstructed input for only one image (the last processed image).
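The data flow just described can be summarized by a short sketch. The encoder/decoder model 110, combiner 120, and temporal warp 115 are abstracted as placeholder callables; the function and argument names are illustrative assumptions, not the patent's interfaces.

```python
import numpy as np

def run_warped_external_recurrence(frames, motion, predict_kernels, apply_kernels, warp):
    """Hedged sketch of the external-recurrence data flow in FIG. 1A.
    `predict_kernels`, `apply_kernels`, and `warp` stand in for the
    encoder/decoder model 110, combiner function 120, and temporal warp 115."""
    outputs = []
    # Initialize the external state (reconstructed previous frame) to the first input.
    state = frames[0].copy()
    for t, frame in enumerate(frames):
        # Align the previous reconstruction to the current frame (temporal warp 115).
        warped_state = warp(state, motion[t])
        # The network sees only the current frame and the warped external state;
        # no per-layer hidden state is carried between frames.
        kernels = predict_kernels(frame, warped_state)
        # The combiner filters both inputs and sums them (combiner function 120).
        state = apply_kernels(kernels, frame, warped_state)
        outputs.append(state)
    return outputs
```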
By applying the warped external recurrent neural network100over an image sequence, one image at a time, the neural network model outputs a sequence of temporally-stable reconstructed images, one image at a time. In an embodiment, the external state carries information about one or more previous images. The encoder/decoder neural network model110is trained to predict spatially variant filter kernels using supervised learning techniques to learn parameters (e.g., weights and bias values) that maximize image quality and temporal stability. In an embodiment, the encoder/decoder neural network model110computes a dynamic per-pixel kernel filter and the combiner function120applies the filter kernel to the input image and the reconstructed previous image. The warped external recurrent neural network100achieves high-quality, temporally-stable antialiasing by integrating information from current and prior frames. To maximize temporal reuse, the temporal warp function115warps prior reconstructed image data using the per-pixel motion vectors. To achieve reuse without incurring the storage and performance overheads of conventional recurrent neural networks, the warped external recurrent neural network100uses no additional storage during inferencing and incurs no slowdown compared to a feed-forward neural network. In an embodiment, a sequence of one sample-per-pixel, temporally-unstable images is converted into a temporally-stable image sequence equivalent in quality to images rendered at 16 samples per pixel (or some other value greater than one). When performing antialiasing, the output image sequence does not suffer from the over-blurring, ghosting, and flickering artifacts of current solutions, and the warped external recurrent neural network100produces a temporally-stable image sequence in real time. Reconstruction performance of the warped external recurrent neural network100is dependent on image size and is independent of scene complexity. FIG.1Billustrates a flowchart of a method130for temporally stable data restoration, in accordance with an embodiment. Although method130is described in the context of a processing unit, the method130may also be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method130may be executed by a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of implementing the warped external recurrent neural network100. Furthermore, persons of ordinary skill in the art will understand that any system that performs method130is within the scope and spirit of embodiments of the present invention. At step135, a sequence of input data including artifacts is received by the warped external recurrent neural network100. The sequence includes a first input data frame and a second input data frame. In the context of the following description, an input data frame may include image data or other types of data. In an embodiment, the input data comprises rendered image frames. At step140, the first input data frame is processed using layers of a neural network model to produce external state including a reconstructed first data frame that approximates the first input data frame without artifacts. In an embodiment, the first input data frame is processed by layers of the encoder/decoder neural network model110.
Hidden state generated by a particular one of the layers during processing of the first input data frame is not provided as an input to the particular one of the layers to process the second input data frame. In other words, each of the layers does not incorporate a feedback connection to use hidden state generated for a previous frame to generate outputs and/or hidden state for the current frame. At step145, the external state is warped by the temporal warp function115, using difference data corresponding to changes between the first input data frame and the second input data frame (e.g., optical flow, motion vectors, or the like), to produce warped external state. Warping the external state anchors individual characteristics or features to regions within the data frames. The warped external state enables improved tracking over time by integrating information associated with changing features over multiple frames in a sequence, producing more temporally stable and higher quality reconstructed data. At step150, the second input data frame is processed, based on the warped external state, using the layers of the neural network model to produce a reconstructed second data frame that approximates the second input data frame without artifacts. The encoder/decoder neural network model110is trained using training datasets including sequences of data frames and inter-frame per-datum differences to predict kernels that produce temporally stable sequences of reconstructed data frames. Importantly, when deployed, the warped external recurrent neural network100may perform reconstruction in real-time, for example, keeping pace with image rendering or image capture. In an embodiment, the parameters (e.g., weights and biases) of the encoder/decoder neural network model110, determined during training, are immutable at inference time and cannot adapt to particular data regions within each frame. In another embodiment, the parameters of the encoder/decoder neural network model110determined during training may be modified or calculated during inferencing. Kernel-predicting networks (KPNs) enable a neural network model to generate adaptive spatially-varying kernels by training the neural network model coupled to a filtering module, such as the combiner function120. As shown inFIG.1A, a kernel-predicting autoencoder may be constructed by feeding the spatially-varying kernels output by the encoder/decoder neural network model110to the combiner function120. In contrast with conventional KPNs, image sequences generated by the warped external recurrent neural network100are temporally stable and the spatially-varying kernels output by the encoder/decoder neural network model110are temporally stable. In an embodiment, the warped external state and the processed second input data frame are processed by the encoder/decoder neural network model110to generate spatially-varying filter kernels. When the sequence of input data is an image sequence, the combiner function120applies a first filter kernel to pixels of the reconstructed first data frame (from time t−1) and applies a second filter kernel to pixels of the processed second input data frame (from time t). In an embodiment, the second filter kernel is a hierarchy of filter kernels. The filtered pixels are combined (i.e., summed) to produce external state including pixels of the reconstructed second data frame.
In an embodiment, the combiner function120computes the color of a reconstructed pixel p by performing the dot product of the spatially-varying kernel-predicted weights Ap against a 5×5 pixel patch of the input x around p:

ŷp = ⟨Ap,5×5, Np(x)⟩,  (1)

where Np(x) denotes the pixel patch of the input x centered at pixel p (here 5×5). While a larger adaptive filter might improve image quality, a 5×5 kernel may be used to provide a good balance between computational cost and reconstruction quality, especially because aliasing artifacts tend to occur at small scale. In an embodiment, one spatially-varying kernel-predicted weight is removed from the 5×5 frame kernel filter Ap to ensure that the encoder/decoder neural network model110output has 32 channels. Processing slowdown may be avoided on particular hardware platforms by rounding down the output channels of the final convolution layer from 34 to 32. FIG.1Cillustrates a flowchart of the method160for temporally stable data restoration, in accordance with an embodiment. Although method160is described in the context of a processing unit, the method160may also be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method160may be executed by a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of implementing the warped external recurrent neural network100. Furthermore, persons of ordinary skill in the art will understand that any system that performs method160is within the scope and spirit of embodiments of the present invention. The method160includes steps135,145, and150from the method130. At step135, a sequence of input data including artifacts is received by the warped external recurrent neural network100. The sequence includes a first input data frame and a second input data frame. In the context of the following description, an input data frame may include image data or other types of data. Step140of the method130may include steps142,144,146, and148. At step142, the first input data frame and warped external state for a previous data frame are processed using layers of the encoder/decoder neural network model110to produce spatially varying kernels. The warped external state includes a reconstructed previous data frame that approximates the previous input data frame without artifacts. Hidden state generated by each one of the layers during processing of the reconstructed previous data frame and the first input data frame is not provided as an input to the layer that generated the hidden state to process the second input data frame. In an embodiment, the spatially varying kernels include a first filter kernel and a second filter kernel. At step144, the combiner function120applies the first filter kernel to the warped reconstructed previous data frame to produce a filtered portion of the reconstructed previous data frame. At step146, the combiner function120applies the second filter kernel to the first input data frame to produce a filtered portion of the first input data frame. Steps144and146may be repeated to produce additional filtered portions of the reconstructed previous data frame and additional filtered portions of the first input data frame. At step148, the combiner function120produces the reconstructed first data frame by summing the filtered portion of the reconstructed previous data frame and the filtered portion of the first input data frame. Step148may be repeated to sum the additional filtered portions of the warped reconstructed previous data frame and the additional filtered portions of the first input data frame.
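The following sketch illustrates equation (1) and the kernel application of steps 144-148: a predicted per-pixel kernel is dotted against a patch around each pixel, and the combiner sums the two filtered terms. The kernel tensor layout and function name are assumptions for illustration, not the patent's data format.

```python
import numpy as np

def apply_predicted_kernels(kernels, image, size=5):
    """Hedged sketch of equation (1): apply a predicted per-pixel kernel to a
    patch around each pixel. `kernels` is assumed to have shape (H, W, size*size)
    and `image` shape (H, W, C)."""
    h, w, c = image.shape
    pad = size // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + size, x:x + size, :].reshape(size * size, c)
            # Dot product of the predicted weights with the patch, per channel.
            out[y, x, :] = kernels[y, x, :] @ patch
    return out

# The combiner (step 148) sums the two filtered terms, e.g.:
#   y_hat = apply_predicted_kernels(A, current_frame, 5) \
#         + apply_predicted_kernels(B, warped_previous_reconstruction, 3)
```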
Steps145and150are completed as previously described. To best understand the encoder/decoder neural network model110, a review of an implementation using a convolutional neural network (CNN) is instructive. A CNN stacks convolutional layers that are trained via stochastic gradient descent (SGD). Each convolutional layer Cln×n applies a convolution Wl over an n×n region of the previous layer activation al−1 and offsets the result via a bias vector bl, followed by a non-linear function σ:

al = Cln×n(al−1) = σ(Wl * al−1 + bl).

Complex CNNs stack dozens of layers and generate very deep data representations of hundreds of feature channels per datum. Achieving state-of-the-art results requires substantial computational resources, partially explaining the lack of deep neural networks (DNNs) in real-time rendering. The recent introduction of GPU tensor cores that accelerate mixed precision matrix multiplication enables the use of DNNs for real-time processing. Convolutional autoencoders are a class of deep neural networks that learn end-to-end mappings between images. A first encoder block extracts a progressively compressed representation of the input x through a sequence of convolutional layers followed by a pooling operator Pm×m, which downsamples by computing the largest activation in an m×m region. The pooling operator may implement max pooling, average pooling, or other types of pooling. Starting with e0 = x, successive encoder stages can be computed as:

ei+1 = Pm×m(Cn×n( . . . Cn×n(ei))).

The last encoder stage generates a latent variables representation of the input, which is uncompressed by a succession of decoder stages:

di+1 = Uk×k(Cn×n( . . . Cn×n(di))),

where Uk×k is a k×k upsampling operator. Finally, the output image is computed as:

ŷ = Cn×n( . . . Cn×n(d0)).

Therefore, antialiasing may be modeled as an image reconstruction problem and a convolutional autoencoder may be used as a starting point to develop the encoder/decoder neural network model110. FIG.2Aillustrates a block diagram of the encoder/decoder neural network model110fromFIG.1A, in accordance with an embodiment. In an embodiment, each stage of the encoder portion of the encoder/decoder neural network model110uses one convolutional layer and a pooling layer. In an embodiment, 3×3 convolutions are used in the encoder/decoder neural network model110. In an embodiment, the convolutional layers of the encoder portion are N×N, each followed by a 2×2 max pooling layer, where N=32, 64, 96, 128, 160, and 160 in a feed-forward sequence with the output of each max pooling layer input to each convolutional layer. In an embodiment, strided convolutions are used instead of pooling. In an embodiment, each stage of the decoder portion of the encoder/decoder neural network model110uses a nearest upsampling layer followed by two convolutional layers. In an embodiment, a pair of convolutional layers of the decoder portion are N×N, each preceded by a 2×2 nearest upsampling layer, where N=160, 128, 96, 64, and 32 in a feed-forward sequence with the output of each upsampling layer input to each convolutional layer pair. In an embodiment, the output of each encoder stage is propagated to the corresponding decoder stage via residual skip connections and accumulated with the output of an upsampling layer. The residual skip connections improve reconstruction quality and may enable faster training of deep convolutional networks by improving the back propagation of gradients.
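A short sketch of the Pm×m and Uk×k operators used in the encoder and decoder stage equations above follows. Nearest-neighbor upsampling and 2×2 max pooling are chosen to match the embodiment described; the array layout is an illustrative assumption.

```python
import numpy as np

def max_pool(a, m=2):
    """Hedged sketch of the pooling operator Pm×m: keep the largest activation
    in each m×m region. `a` has shape (H, W, C) with H and W divisible by m."""
    h, w, c = a.shape
    return a.reshape(h // m, m, w // m, m, c).max(axis=(1, 3))

def nearest_upsample(d, k=2):
    """Hedged sketch of the upsampling operator Uk×k: nearest-neighbor
    upsampling by a factor of k in each spatial dimension."""
    return d.repeat(k, axis=0).repeat(k, axis=1)

# Example: a 2×2 max pool followed by a 2×2 nearest upsample restores the
# original spatial size, as in corresponding encoder/decoder stages.
x = np.random.rand(8, 8, 32).astype(np.float32)
assert nearest_upsample(max_pool(x)).shape == x.shape
```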
In an embodiment, the encoder/decoder neural network model110utilizes specialized tensor cores within a GPU, such as the tensor cores within the PPU300shown inFIG.3. To take advantage of the tensor cores, 16-bit tensors and weights are used. In an embodiment, the slowest layers of an encoder/decoder neural network model110are the outermost, high resolution layers. The cost of a 3×3 convolution layer at 1080 pixel resolution varies with input and output channel counts. In particular, little variation occurs as the output channel count changes from 8 to 64, but a significant slowdown occurs at 40 input channels. Thus, to maximize performance, in an embodiment, the number of input channels for convolution layers is limited to 32. In an embodiment, when the CUDA API is used to implement matrix operations on the tensor cores, each nearest upsampling layer is fused with the following residual skip connection, such that both operations happen within the same CUDA kernel. Defining specific layer and channel counts is less critical for achieving the high image quality and temporal stability of an encoder/decoder neural network model, as real-time performance constraints mostly dictate how many layers can be used at higher resolutions. The external recursion and warping features of the encoder/decoder neural network model110are more important in order to achieve high image quality and temporal stability. A convolutional autoencoder is a stateless feedforward neural network that does not remember information from past frames. Furthermore, a conventional autoencoder cannot learn a temporally coherent representation of the input data when trained only on single image pairs (each image pair includes an image with artifacts and a "ground truth" image without the artifacts). Feeding the neural network sequences of several frames adds computational and memory overheads that can easily prevent optimizing inference performance for real-time rendering applications. FIG.2Billustrates a diagram of a prior art recurrent neural network layer201. Neurons or nodes (represented by circles) in a first recurrent convolutional layer generate outputs to neurons or nodes of a second convolutional layer. The recurrent convolutional layer is a stateful machine that can learn how to use past information generated by the recurrent convolutional layer and stored in hidden state hi−1 to process a new input xi while generating hidden state hi. RNN layers process arbitrarily long sequences of inputs, such as image sequences, and are natural candidates for temporally stable image reconstruction. The hidden state generated by the first recurrent convolutional layer during processing of a first frame is stored and provided as an input to the first recurrent convolutional layer to process a second frame. FIG.2Cillustrates a diagram of layers of an external neural network layer202without hidden state recursion, in accordance with an embodiment. Instead of receiving hidden state generated by the first convolutional layer during processing of a previous frame, the first convolutional layer receives the warped external state generated by the encoder/decoder neural network model110during processing of a previous frame. The encoder/decoder neural network model110functions as a single recurrent layer.
The warped external state generated by the encoder/decoder neural network model110, combiner function120, and temporal warp function115acts as the hidden state for the encoder/decoder neural network model110and is used as input along with the next data frame. The external recursion configuration has four key advantages. First, the encoder/decoder neural network model110may be trained with data frame sequences, allowing the encoder/decoder neural network model110to output high quality as well as temporally coherent results. Second, the external state is reduced to just a few megabytes per data frame. Third, the additional cost to process the external state is virtually zero on modern GPUs. Lastly, at inference time the encoder/decoder neural network model110acts like a non-recurrent neural network, allowing use of simpler layer models since no internal state is stored or updated. However, using external recurrence alone is insufficient to guarantee temporally coherent results. The max pooling layers within the encoder/decoder neural network model110effectively force learning of translational invariance of data features within the frames. While useful for classification, translational invariance reduces a neural network's ability to precisely identify correspondences between features from current and prior data frames, degrading temporal reuse along edges (where temporal reuse is most important). Therefore, motion data, such as per-pixel motion vectors generated by modern real-time applications or obtained from other sources, such as optical flow analysis, are used to warp the external state. In an embodiment, the external state is warped with a reprojection filter θ to align the external state at t−1 to the current data frame at t. In an embodiment, the reprojection filter is bi-linear, bi-cubic, or any other type of filter. When the warped external recurrent neural network100is used to perform antialiasing, the warping aligns pixels from prior image frames to the same surfaces in the current image frame, removing range limitations imposed by the receptive field of the encoder/decoder neural network model110and enabling small kernels to find correspondences with prior image frames. At the beginning of each training iteration, difference data for the input data frame are used by the temporal warp function115to fetch pixels from the previous reconstructed data frame, using bilinear filtering to smooth the fetched results. Relying on the difference data enables efficient searching to identify correspondences between features. The warped (i.e., reprojected) external state coincides with the final antialiased image. However, the warping is not restricted to antialiasing and applies to any hidden state containing higher level representations of images or other data. Because the external state and the current data frame are aligned, the encoder/decoder neural network model110is explicitly trained to generate spatio-temporal filters that integrate the previously reconstructed data frame with the current data frame, while "hallucinating" new samples. Equation (1) is modified to include a second 3×3 pixel kernel B generated by the encoder/decoder neural network model110that acts on the warped hidden state θ(h):

ŷp = ⟨Ap,5×5, Np(x)⟩ + ⟨Bp,3×3, Np(θ(h))⟩.  (2)

A smaller pixel footprint is used in kernel B because image features stored in the hidden state are less likely to contain aliasing artifacts and the smaller kernel enables faster performance.
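A minimal sketch of the temporal warp θ follows: the previous reconstruction is fetched along per-pixel motion vectors with bilinear smoothing, as described above. The motion-vector convention (in pixels, pointing from the current frame back to the previous frame) is an assumption for illustration, and the function name is hypothetical.

```python
import numpy as np

def warp_external_state(state, motion, bilinear=True):
    """Hedged sketch of the reprojection filter theta: fetch pixels of the
    previous reconstruction `state` (H, W, C) along per-pixel motion vectors
    `motion` (H, W, 2)."""
    h, w, _ = state.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    # Source coordinates in the previous frame.
    sy = np.clip(ys + motion[..., 1], 0, h - 1)
    sx = np.clip(xs + motion[..., 0], 0, w - 1)
    if not bilinear:
        return state[np.round(sy).astype(int), np.round(sx).astype(int)]
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    fy, fx = (sy - y0)[..., None], (sx - x0)[..., None]
    # Bilinear blend of the four neighboring texels.
    return ((1 - fy) * (1 - fx) * state[y0, x0] + (1 - fy) * fx * state[y0, x1] +
            fy * (1 - fx) * state[y1, x0] + fy * fx * state[y1, x1])
```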
In an embodiment, one spatially-varying kernel-predicted weight is removed from each of the (5×5) pixel kernel Ap and the (3×3) pixel kernel B to ensure that the encoder/decoder neural network model110output has 32 channels. The warped external recurrent neural network100driven by Equation (1) and Equation (2) yields significantly lower spatial and temporal error, improving reconstructed data quality and temporal stability. The approach in Equation (2) does not dictate any particular integration method, letting the encoder/decoder neural network model110learn how to best accumulate and reject individual samples. Adaptive Sampling and Denoising of Rendered Sequences In an embodiment, the encoder/decoder neural network model110is configured to perform hierarchical kernel prediction for denoising sequences of image frames rendered using adaptive sampling. While one level of kernel prediction is used for creating anti-aliased images from aliased input, one level may not be adequate for denoising sparsely sampled ray traced images. Typically, much larger filters, covering large regions on screen, are needed to denoise sparsely sampled ray traced images. Therefore, in an embodiment, one level of kernel prediction is replaced with three levels of kernel prediction for denoising sparsely sampled ray traced images. In other embodiments, fewer or more levels of kernel prediction may be used. Specifically, the outputs of the last three hierarchical levels of the encoder/decoder neural network model110are each fed through a small network to predict filter weights of a multi-scale kernel. In an embodiment, the small network is implemented using a fully-connected network (e.g., 1×1 convolutions) or a few layers with rectified linear units (ReLUs) between each layer to map the outputs to filter weights. In an embodiment, the filter weights are a set of unique 5×5 weights (e.g., a filter kernel) for each pixel in a scaled image frame. In an embodiment, the scaled image frames correspond to the outputs of the last three hierarchical levels of the encoder/decoder neural network model110for the input image. The last three hierarchical levels, working back from the output, correspond to a full resolution noisy image, a 2× downscaled version of the full resolution noisy image, and a 4× downscaled version of the full resolution noisy image, respectively. The unique filter kernels are applied at each respective pixel in the scaled image frames to produce filtered current frames. The filtered current frames are combined by the combiner function120to produce a multi-scale filtered current frame. In an embodiment, the filtered current frames are combined by upscaling the filtered current frames to full resolution and summing the resulting three full resolution multi-scale filtered current frames. In an embodiment, the upscaled filtered current frames Fi are combined using a weighted sum operation sum(Fi*wi), where the weight wi for each pixel is determined during training instead of simply computing sum(Fi) for the upscaled filtered current frames. Additionally, a predicted temporal 5×5 pixel kernel is applied to the warped external state including the denoised previous frame to produce filtered warped external state that is then combined with the multi-scale filtered current frame to produce a reconstructed image frame that approximates the image frame without artifacts. FIG.2Dillustrates a block diagram of a temporal adaptive sampling and denoising system200, in accordance with an embodiment.
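The weighted multi-scale combination sum(Fi*wi) can be sketched as follows. Nearest-neighbor upscaling and the array layouts are illustrative assumptions; the per-pixel weights wi are assumed to be predicted by the network as described above.

```python
import numpy as np

def combine_multiscale(filtered, weights):
    """Hedged sketch of sum(Fi*wi): `filtered` is a list of filtered frames at
    full, 1/2, and 1/4 resolution (H, W, C) and `weights` is a list of matching
    per-pixel weight maps (H, W)."""
    full_h = filtered[0].shape[0]
    out = np.zeros_like(filtered[0])
    for f, w in zip(filtered, weights):
        scale = full_h // f.shape[0]
        # Upscale the filtered frame and its weights to full resolution.
        f_up = f.repeat(scale, axis=0).repeat(scale, axis=1)
        w_up = w.repeat(scale, axis=0).repeat(scale, axis=1)
        out += w_up[..., None] * f_up      # weighted sum over scales
    return out
```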
The temporal adaptive sampling and denoising system200includes a sample map estimator neural network model210, a renderer205, a denoiser neural network model and combiner220, and a temporal warp function215. Although the temporal adaptive sampling and denoising system200is described in the context of processing units, one or more of the sample map estimator neural network model210, denoiser neural network model and combiner220, and the temporal warp function215may be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the sample map estimator neural network model210may be implemented by a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of implementing layers of a neural network. Furthermore, persons of ordinary skill in the art will understand that any system that performs the operations of the temporal adaptive sampling and denoising system200is within the scope and spirit of embodiments of the present invention. The temporal adaptive sampling and denoising system200is a warped external recurrent neural network structure that reconstructs sparsely sampled ray traced images. Rendered images, including images rendered using ray tracing, may include artifacts resulting from inadequate sampling. The artifacts may include a loss of high-frequency details and residual noise. For example, to produce nice-looking soft shadows, many rays directed towards the light sources in the scene are needed, but that many rays cannot necessarily be traced in real time. Therefore, an insufficient number of rays may be traced, resulting in a rendered image that may be very noisy. The denoiser neural network model and combiner220is applied to adaptively sampled rendered images that may include artifacts. The denoiser neural network model and combiner220receives rendered image frames (e.g., adaptive samples) output by the renderer205or receives rendered image frames from other sources to produce reconstructed images. The rendered image frame (time=t) and the warped external state for the reconstructed image frame from the previous timestep (time=t−1) are processed by the denoiser neural network model and combiner220to produce external state including the reconstructed image frame (time=t). In an embodiment, the external state comprises features of the rendered image frame that are extracted by the denoiser neural network model and combiner220. The reconstructed image frame has a higher effective sample count compared with the rendered image frame, meaning the reconstructed image frame appears to have fewer artifacts compared with the rendered image frame. The external state is transmitted by the recursive feedback loop and warped by the temporal warp function215according to per-pixel difference data (e.g., motion vectors) to produce warped external state including a processed warped reconstructed image frame (time=t−1). The warping aligns the external state to the next image frame in the sequence to be rendered. In an embodiment, the external state is the denoised (e.g., reconstructed) rendered image frame (e.g., three channels, RGB). In another embodiment, the external state comprises more than 3 channels, such as 32 or 64 channels that are re-projected (e.g., warped according to the difference data).
Parameters of the sample map estimator neural network model210and the denoiser neural network model and combiner220that contribute to the external state are learned during training of the temporal adaptive sampling and denoising system200. The reconstructed rendered image frame may be generated for any learned representation of the external state. In this embodiment, the external state consists of just three channels: the denoised color (R,G,B). In another embodiment, the external state could be a greater number of channels, such as 32 or 64 channels. The external state includes at least reconstructed data, which is the reconstructed (e.g., denoised) adaptively sampled ray traced image frame corresponding to the input data. The external state output by the denoiser neural network model and combiner220for each rendered image frame is re-projected by the temporal warp function215to produce the warped external state. In an embodiment, the temporal warp function215may be configured to perform the same operations as the temporal warp function115. The re-projection is accomplished using per-pixel difference data (e.g., motion vectors) that indicate changes between the rendered image and a subsequent rendered image in a sequence (e.g., video frames). As shown inFIG.2D, the rendered image is an adaptively sampled noisy image, rendered as dictated by a sample map. The sample map specifies a number of samples to be generated for each pixel in the rendered image. When the renderer205is configured to generate the adaptive samples using ray tracing, the sample map specifies a number of paths to be traced for each pixel in the rendered image. Generally, the quality of the rendered image improves as the number of samples increases, but the processing performance (e.g., pixels per second) may limit the quality to maintain an interactive (e.g., real time) frame rate. Compared with a fixed sampling rate of 4 spp (samples per pixel), adaptive sampling redistributes the samples across the entire rendered image and may specify an average sampling rate of 4 spp with individual pixel sampling rates ranging from zero to eight or even higher. The sample map estimator neural network model210processes the input data for each image in a sequence based on the warped external state that is also used by the denoiser neural network model and combiner220. The input data comprises per-pixel guide data such as depth, surface normal vectors, and motion vectors. Generally, guide data are data defined per-pixel to help the reconstruction process. However, in contrast with conventional adaptive sampling techniques, the guide data does not include sampled color data, such as uniformly sampled one sample-per-pixel "seed" color data. Instead of receiving a low resolution version of the image, the sample map estimator neural network model210receives the warped reconstructed image included in the warped external state. Therefore, an initial rendering pass is not required to generate the low resolution color data. In an embodiment, the guide data does not include three-dimensional primitives. As previously described, the input to the denoiser neural network model and combiner220is the adaptively sampled noisy image (e.g., adaptive samples), rendered as dictated by the sample map. The denoiser neural network model and combiner220also receives the same inputs as provided to the sample map estimator neural network model210, namely the input data and the warped external state.
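The per-frame data flow of FIG. 2D can be summarized with a short sketch. The estimator 210, renderer 205, denoiser/combiner 220, and temporal warp 215 are passed in as placeholder callables; the names and signatures are assumptions for illustration only.

```python
def adaptive_sampling_denoising_loop(guide_frames, motion, estimate_sample_map,
                                     render, denoise, warp, init_state):
    """Hedged sketch of the data flow in FIG. 2D: sample map estimation,
    adaptive rendering, denoising, and temporal warping over a sequence."""
    state = init_state
    reconstructed = []
    for guide, mv in zip(guide_frames, motion):
        warped_state = warp(state, mv)                         # temporal warp 215
        sample_map = estimate_sample_map(guide, warped_state)  # estimator 210
        noisy = render(sample_map)                             # adaptive renderer 205
        state = denoise(noisy, guide, warped_state)            # denoiser/combiner 220
        reconstructed.append(state)
    return reconstructed
```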
The sample map estimator neural network model210and the denoiser neural network model and combiner220are jointly trained to learn the parameters for generating the sample maps and the external state, respectively. FIG.2Eillustrates a high resolution (e.g., 1024 spp) reference image, the reference image at 4 spp, and a sample map and a reconstructed image generated by the temporal adaptive sampling and denoising system200, in accordance with an embodiment. The reference image is sampled at 4 spp to produce the 4 sample-per-pixel image. Within the sample map, the darker portions indicate pixels to be sampled at a low rate and the brighter portions indicate pixels to be sampled at higher rates. Note that the brighter portions correspond to edges and areas of more detail or greater color variation in the reference image and the darker portions correspond to areas in the reference image of constant or similar color. As previously described, the sample map estimator neural network model210receives input data including guide data that is processed to generate the sample map. The guide data may include the surface normal at the primary hit point intersecting each pixel, the depth value for each pixel, and/or a motion vector per pixel. In an embodiment, other per-pixel guide data may be used, such as material parameters. Through training, the sample map estimator neural network model210learns spatio-temporal sampling strategies such as placing more samples in disoccluded regions, tracking specular highlights, and the like, boosting the effective sample count and increasing temporal stability. Use of the warped external state alleviates the need for an initial uniform sampling step, as is often used in conventional adaptive sampling techniques. The renderer205generates each rendered image in the sequence according to the respective sample map. The rendered image is processed by the denoiser neural network model and combiner220, removing artifacts, to produce the reconstructed image shown inFIG.2E. The reconstructed image approximates the reference image and has fewer artifacts compared with the rendered image. FIG.2Fillustrates a flowchart of a method230for spatio-temporal adaptive sampling, in accordance with an embodiment. Although method230is described in the context of a processing unit, the method230may also be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method230may be executed by a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of implementing the sample map estimator neural network model210. Furthermore, persons of ordinary skill in the art will understand that any system that performs method230is within the scope and spirit of embodiments of the present invention. At step235, guide data for a rendered image frame in a sequence of rendered image frames is received by the sample map estimator neural network model210. The sequence includes a previous rendered image frame and the rendered image frame. In an embodiment, the guide data comprises per-pixel feature guides including one or more feature buffers for the current rendered image frame (e.g., surface normal vectors, depth, motion vectors, and albedo at the first hit). To denoise a frame in the sequence, guide data for the current frame and information for the previous frame in the sequence that is encoded in the external state are processed by the sample map estimator neural network model210.
However, only a single image frame needs to be rendered in sequence to estimate each sample map. Therefore, denoising can be performed in real-time. A minimal amount of temporary data (guide data and warped external state) is generated in real-time for each frame to produce a reconstructed/denoised image frame, and the temporary data may then be discarded rather than stored to memory. In contrast, for off-line (e.g., non-real-time) denoising, the sequence of image frames and corresponding guide data is generated and stored and later read as each image frame is reconstructed/denoised. At step237, external state including a reconstructed first rendered image frame that approximates the first rendered image frame without artifacts is received by the sample map estimator neural network model210. In other words, the reconstructed first rendered image has fewer artifacts compared with the first rendered image. The external state is warped by the temporal warp function215, using difference data corresponding to changes between the first rendered image frame and the second rendered image frame (e.g., optical flow, motion vectors, or the like), to produce warped external state. Warping the external state anchors individual characteristics or features to regions within the rendered image frames. The warped external state enables improved tracking over time by integrating information associated with changing features over multiple frames in a sequence, producing more temporally stable and higher quality reconstructed images. In an embodiment, the external state is produced by the temporal adaptive sampling and denoising system200processing guide data for the first rendered image frame. In an embodiment, an earlier (e.g., t−1) rendered image frame in the sequence is not available and the external state is initialized to zero or another predetermined value for processing the guide data for the first rendered image frame. Because the external state is initialized, there may be a short transition period of 2-3 frames between different sequences (e.g., clips) before the sample maps and reconstructed rendered image frames produced by the temporal adaptive sampling and denoising system200achieve a desired level of quality. At step239, the guide data for the second rendered image frame is processed, based on the warped external state, using the layers of the sample map estimator neural network model210to produce a sample map that indicates a number of samples to be computed for each pixel in the second rendered image. In an embodiment, the warped external state is input to one or more layers of the sample map estimator neural network model210. In an embodiment, during processing of the guide data for the second rendered image, the warped external state replaces hidden state generated by a particular one of the layers during processing of the guide data for the first rendered image frame as an input to the one or more layers. In other words, each of the layers does not incorporate a feedback connection to use hidden state generated for a previous sample map to generate outputs and/or hidden state for the current sample map. In an embodiment, the sample map is produced using direct prediction, by combining all output features into a single-component gray-scale image using a final convolutional layer.
The sample map may be normalized to reach a desired average sample count:

ŝ(p) = round( n · M · exp(s(p)) / Σ_{i=1..M} exp(s(i)) ),  (3)

where M is the number of pixels in the image, n is the average number of samples per pixel, and s(p) is the unnormalized output of the network. In an embodiment, the normalization calculation is similar to a softmax operation. For training the temporal adaptive sampling and denoising system200, implementing the normalization by applying a softmax operation over the unrolled image tensor may improve stability of the gradient computation for backpropagation as opposed to chaining the individual operations. In an embodiment, color inputs (e.g., albedo and the warped t−1 reconstructed rendered image frame) to the sample map estimator neural network model210are converted to gray scale using CCIR 601 weights ν=0.2989R+0.587G+0.114B. Using gray scale encourages the adaptive sampling generated by the sample map estimator neural network model210to be independent of chroma and motivated by noise levels, geometric complexity, and/or animation/disocclusion. The sample map for the second rendered image frame may be used by the renderer205to generate adaptive samples representing the second rendered image frame including artifacts. The artifacts may include a loss of high-frequency details and residual noise. The adaptive samples and the warped external state may then be processed using layers of the denoiser neural network model and combiner220to produce external state including a reconstructed second image frame that approximates the second image frame without artifacts. FIG.2Gillustrates a block diagram of a configuration240for training the temporal adaptive sampling and denoising system200ofFIG.2D, in accordance with an embodiment. The training configuration240includes the temporal adaptive sampling and denoising system200and a parameter adjustment unit245. Although the training configuration240is described in the context of processing units, one or more of the temporal adaptive sampling and denoising system200and the parameter adjustment unit245may be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. Furthermore, persons of ordinary skill in the art will understand that any system that performs the operations of the training configuration240is within the scope and spirit of embodiments of the present invention. The training dataset includes guide data for sequences of image frames and corresponding sequences of target image frames without artifacts (e.g., reference image frames). Performance of the temporal adaptive sampling and denoising system200is improved when the training dataset includes the types of artifacts that exist in the sequences of input data to be restored when the temporal adaptive sampling and denoising system200is deployed. The temporal adaptive sampling and denoising system200processes the guide data in the training dataset to generate reconstructed image frames. The parameter adjustment unit245receives the reconstructed image frames and target image frames included in the training dataset and adjusts parameters of the temporal adaptive sampling and denoising system200based on errors between the reconstructed data and the target data frames. After training is complete, the parameters are fixed and the temporal adaptive sampling and denoising system200may be deployed to perform data reconstruction. During deployment the parameter adjustment unit245is not used.
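A short sketch of equation (3) and the CCIR 601 gray-scale conversion follows. The subtraction of the maximum before exponentiation is a numerical-stability detail added for illustration; it does not change the softmax ratio.

```python
import numpy as np

def normalize_sample_map(s, n):
    """Hedged sketch of equation (3): turn the unnormalized network output `s`
    (H, W) into per-pixel sample counts averaging roughly `n` samples per pixel,
    using a softmax-style normalization."""
    m = s.size                          # number of pixels M
    e = np.exp(s - s.max())             # shifted for numerical stability
    return np.round(m * n * e / e.sum()).astype(int)

def to_grayscale(rgb):
    """CCIR 601 luma used for the color inputs of the sample map estimator."""
    return 0.2989 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

# Example usage (illustrative): sample_map = normalize_sample_map(raw_output, n=4)
```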
In an embodiment, the sample map estimator neural network model210is deployed without the denoiser neural network model and combiner220and/or the renderer205. In an embodiment, the denoiser neural network model and combiner220is deployed without the sample map estimator neural network model210and/or the renderer205. The objective for the temporal adaptive sampling and denoising system200is to generate reconstructed data ŷ that matches the reference solution y as closely as possible. In other words, the objective is to find the vector of all convolutional parameters (e.g., weights and biases) of the temporal adaptive sampling and denoising system200that minimizes the error or loss function on the training data without overfitting. Because the choice of loss function can significantly alter the outcome of the training process, the loss function may vary. To account for the recurrent term (the warped reconstructed rendered image frame), back-propagation through time may be employed, training on sequences of N frames. When sequences of five frames are used, one frame is used for initialization and the four subsequent frames are used to train unrolled iterations of the temporal adaptive sampling and denoising system200. For the first frame, the recurrent term may be initialized to the noisy uniformly sampled image at a target sample count. Because the sample map estimator neural network model210and the denoiser neural network model and combiner220are trained end-to-end together, the loss term is only computed on the final reconstructed image, and there is no specific loss computed for the sample map. In an embodiment, both a spatial L1 loss term and a temporal L1 loss term are used, weighted equally and both computed in gamma corrected logarithmic space. The temporal L1 term is intended to suppress temporal flickering during animation and is computed as the L1 norm of the temporal finite differences between frame i and frame i−1. Let $x_i$ be the denoised frame and $y_i$ be the corresponding reference frame, where i is the current time step. The temporal gradients are $\Delta x_i = x_i - x_{i-1}$ and $\Delta y_i = y_i - y_{i-1}$, and the loss, L, is:

$L = L_1(x_i, y_i) + L_1(\Delta x_i, \Delta y_i)$  (4)

In an embodiment, when training, the spatial loss is only applied to the last reconstructed image frame of each sequence, and the temporal loss by necessity involves the last two reconstructed image frames. The training procedure ensures that the temporal adaptive sampling and denoising system200has access to a reasonable amount of temporal information and avoids transients from initializing the recurrent term.

FIG.2Hillustrates a flowchart of a method250for training the temporal adaptive sampling and denoising system200ofFIG.2D, in accordance with an embodiment. Although method250is described in the context of a processing unit, the method250may also be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method250may be executed by a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of implementing the temporal adaptive sampling and denoising system200. Furthermore, persons of ordinary skill in the art will understand that any system that performs method250is within the scope and spirit of embodiments of the present invention. The method250includes steps235,237, and239from the method230. At step235, guide data corresponding to a sequence of target image frames in a training dataset is received by the temporal adaptive sampling and denoising system200.
At step255, the renderer205renders the image according to the sample map. At step260, the denoiser neural network model and combiner220processes the rendered image to produce external state including a reconstructed image frame. At step265, the parameter adjustment unit245determines if the training is complete. A loss function may be computed by the parameter adjustment unit245to measure distances (i.e., differences or gradients) between the target output data and the reconstructed data. The temporal adaptive sampling and denoising system200is deemed to be sufficiently trained when the reconstructed data generated for the input data from the training dataset match the target output data or a threshold accuracy is achieved for the training dataset. If the training is not complete at step265, then at step270the parameter adjustment unit245adjusts the parameters based on differences between the target output data frames and the output data frames before returning to step235to process additional sequences. If the training is not complete at step265, then at step275, the temporal warp function215warps the external state using the difference data to produce warped external state. The parameter adjustment unit245is configured to adjust the parameter values to reduce differences between the target output data and the reconstructed data. If the training is complete at step265, then at step280, the temporal adaptive sampling and denoising system200is deployed to reconstruct sequences of data frames. The warped external recurrent neural networks100and temporal adaptive sampling and denoising system200are designed with three key goals: reconstructed image quality, temporal stability, and real-time performance. To achieve the goals, recurrence is utilized to help improve the resulting temporal stability, and to ensure efficiency. However, in contrast with conventional autoencoders, the recurrence is externalized such that the previous output is an additional input to the encoder/decoder neural network model110, the sample map estimator neural network model210, and the denoiser neural network model and combiner220. Finally, a temporal warp is applied to align past output with the current frame, significantly improving the ability of encoder/decoder neural network model110or210to use past information. The relative visual quality of the image sequences output by the warped external recurrent neural networks100and the temporal adaptive sampling and denoising system200is substantially better than real-time alternatives and approaches the quality of supersampled image sequences. To produce the high-quality and highly temporally stable results for anti-aliasing, only a single sample-per-pixel color is required by the warped external recurrent neural networks100for input image frames along with motion vectors. In contrast, for supersample antialiasing (SSAA) techniques, at least 16 samples-per-pixel color are needed for each pixel to produce similar quality output image frames. The single sample-per-pixel color input is replaced with guide data for the temporal adaptive sampling and denoising system200to reconstruct adaptively sample sequences of rendered images. The use of external recurrence with temporal warping generates consistently better static image quality and rendered image sequences that are more temporally stable. In summary, warped external recurrent neural networks100are able to perform antialiasing with high-quality results and without any additional artifacts. 
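The combined spatial and temporal L1 objective of Equation 4 can be illustrated with a short sketch. This is a NumPy illustration under stated assumptions (a log1p transform stands in for the gamma-corrected logarithmic space, and the helper names are invented for the sketch); it is not the training code of the system200.

```python
import numpy as np

def to_log_space(img):
    # Stand-in for the gamma-corrected logarithmic space mentioned above;
    # the exact transform is an assumption of this sketch.
    return np.log1p(np.clip(img, 0.0, None))

def l1(a, b):
    return np.mean(np.abs(a - b))

def sequence_loss(x_prev, x_curr, y_prev, y_curr):
    """Spatial + temporal L1 loss of Equation 4 for the last two frames of an
    unrolled sequence: x_* are reconstructed frames, y_* are reference frames."""
    x_prev, x_curr = to_log_space(x_prev), to_log_space(x_curr)
    y_prev, y_curr = to_log_space(y_prev), to_log_space(y_curr)
    spatial = l1(x_curr, y_curr)
    temporal = l1(x_curr - x_prev, y_curr - y_prev)
    return spatial + temporal   # equal weighting, as described above
```

As described above, the spatial term is evaluated only on the last reconstructed frame of each unrolled sequence, and the temporal term uses the last two reconstructed frames.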
When the encoder/decoder neural network model110is trained end-to-end to minimize the L2 distance between pairs of aliased and antialiased images, the encoder/decoder neural network model110learns to identify aliased image features and to adaptively filter the aliased image features out (i.e., remove or reduce the aliased image features). Similarly, when the temporal adaptive sampling and denoising system200is trained end-to-end to minimize the L1 spatial and temporal distance between target images and reconstructed rendered images, the sample map estimator neural network model210learns to determine a number of samples for each pixel needed to provide spatio-temporal image fidelity and the denoiser neural network model and combiner220learns to adaptively filter spatial and temporal artifacts resulting from adaptive sampling, respectively. Rather than relying on initial uniformly sampled images as inputs to the sample map estimator neural network model210, the warped external state is used to produce low sample count sample maps to preserve high-frequency details while achieving temporally stable adaptive sampling. In sum, the temporal adaptive sampling and denoising system200uses temporal reprojection and adaptive sampling to achieve high-quality, temporally stable denoising of path traced animation sequences at interactive rates. By combining temporal reuse and adaptive sampling, the temporal adaptive sampling and denoising system200learns to reconstruct difficult temporal effects, such as disocclusion and view dependent shading (e.g., specular highlights).

Parallel Processing Architecture

FIG.3illustrates a parallel processing unit (PPU)300, in accordance with an embodiment. In an embodiment, the PPU300is a multi-threaded processor that is implemented on one or more integrated circuit devices. The PPU300is a latency hiding architecture designed to process many threads in parallel. A thread (i.e., a thread of execution) is an instantiation of a set of instructions configured to be executed by the PPU300. In an embodiment, the PPU300is a graphics processing unit (GPU) configured to implement a graphics rendering pipeline for processing three-dimensional (3D) graphics data in order to generate two-dimensional (2D) image data for display on a display device such as a liquid crystal display (LCD) device. In other embodiments, the PPU300may be utilized for performing general-purpose computations. While one exemplary parallel processor is provided herein for illustrative purposes, it should be strongly noted that such processor is set forth for illustrative purposes only, and that any processor may be employed to supplement and/or substitute for the same. One or more PPUs300may be configured to accelerate thousands of High Performance Computing (HPC), data center, and machine learning applications. The PPU300may be configured to accelerate numerous deep learning systems and applications including autonomous vehicle platforms, deep learning, high-accuracy speech, image, and text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, and personalized user recommendations, and the like.
As shown inFIG.3, the PPU300includes an Input/Output (I/O) unit305, a front end unit315, a scheduler unit320, a work distribution unit325, a hub330, a crossbar (Xbar)370, one or more general processing clusters (GPCs)350, and one or more partition units380. The PPU300may be connected to a host processor or other PPUs300via one or more high-speed NVLink310interconnect. The PPU300may be connected to a host processor or other peripheral devices via an interconnect302. The PPU300may also be connected to a local memory comprising a number of memory devices304. In an embodiment, the local memory may comprise a number of dynamic random access memory (DRAM) devices. The DRAM devices may be configured as a high-bandwidth memory (HBM) subsystem, with multiple DRAM dies stacked within each device. The NVLink310interconnect enables systems to scale and include one or more PPUs300combined with one or more CPUs, supports cache coherence between the PPUs300and CPUs, and CPU mastering. Data and/or commands may be transmitted by the NVLink310through the hub330to/from other units of the PPU300such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly shown). The NVLink310is described in more detail in conjunction withFIG.5B. The I/O unit305is configured to transmit and receive communications (i.e., commands, data, etc.) from a host processor (not shown) over the interconnect302. The I/O unit305may communicate with the host processor directly via the interconnect302or through one or more intermediate devices such as a memory bridge. In an embodiment, the I/O unit305may communicate with one or more other processors, such as one or more the PPUs300via the interconnect302. In an embodiment, the I/O unit305implements a Peripheral Component Interconnect Express (PCIe) interface for communications over a PCIe bus and the interconnect302is a PCIe bus. In alternative embodiments, the I/O unit305may implement other types of well-known interfaces for communicating with external devices. The I/O unit305decodes packets received via the interconnect302. In an embodiment, the packets represent commands configured to cause the PPU300to perform various operations. The I/O unit305transmits the decoded commands to various other units of the PPU300as the commands may specify. For example, some commands may be transmitted to the front end unit315. Other commands may be transmitted to the hub330or other units of the PPU300such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly shown). In other words, the I/O unit305is configured to route communications between and among the various logical units of the PPU300. In an embodiment, a program executed by the host processor encodes a command stream in a buffer that provides workloads to the PPU300for processing. A workload may comprise several instructions and data to be processed by those instructions. The buffer is a region in a memory that is accessible (i.e., read/write) by both the host processor and the PPU300. For example, the I/O unit305may be configured to access the buffer in a system memory connected to the interconnect302via memory requests transmitted over the interconnect302. In an embodiment, the host processor writes the command stream to the buffer and then transmits a pointer to the start of the command stream to the PPU300. The front end unit315receives pointers to one or more command streams. 
The front end unit315manages the one or more streams, reading commands from the streams and forwarding commands to the various units of the PPU300. The front end unit315is coupled to a scheduler unit320that configures the various GPCs350to process tasks defined by the one or more streams. The scheduler unit320is configured to track state information related to the various tasks managed by the scheduler unit320. The state may indicate which GPC350a task is assigned to, whether the task is active or inactive, a priority level associated with the task, and so forth. The scheduler unit320manages the execution of a plurality of tasks on the one or more GPCs350. The scheduler unit320is coupled to a work distribution unit325that is configured to dispatch tasks for execution on the GPCs350. The work distribution unit325may track a number of scheduled tasks received from the scheduler unit320. In an embodiment, the work distribution unit325manages a pending task pool and an active task pool for each of the GPCs350. The pending task pool may comprise a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC350. The active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by the GPCs350. As a GPC350finishes the execution of a task, that task is evicted from the active task pool for the GPC350and one of the other tasks from the pending task pool is selected and scheduled for execution on the GPC350. If an active task has been idle on the GPC350, such as while waiting for a data dependency to be resolved, then the active task may be evicted from the GPC350and returned to the pending task pool while another task in the pending task pool is selected and scheduled for execution on the GPC350. The work distribution unit325communicates with the one or more GPCs350via XBar370. The XBar370is an interconnect network that couples many of the units of the PPU300to other units of the PPU300. For example, the XBar370may be configured to couple the work distribution unit325to a particular GPC350. Although not shown explicitly, one or more other units of the PPU300may also be connected to the XBar370via the hub330. The tasks are managed by the scheduler unit320and dispatched to a GPC350by the work distribution unit325. The GPC350is configured to process the task and generate results. The results may be consumed by other tasks within the GPC350, routed to a different GPC350via the XBar370, or stored in the memory304. The results can be written to the memory304via the partition units380, which implement a memory interface for reading and writing data to/from the memory304. The results can be transmitted to another PPU304or CPU via the NVLink310. In an embodiment, the PPU300includes a number U of partition units380that is equal to the number of separate and distinct memory devices304coupled to the PPU300. A partition unit380will be described in more detail below in conjunction withFIG.4B. In an embodiment, a host processor executes a driver kernel that implements an application programming interface (API) that enables one or more applications executing on the host processor to schedule operations for execution on the PPU300. In an embodiment, multiple compute applications are simultaneously executed by the PPU300and the PPU300provides isolation, quality of service (QoS), and independent address spaces for the multiple compute applications. 
An application may generate instructions (i.e., API calls) that cause the driver kernel to generate one or more tasks for execution by the PPU300. The driver kernel outputs tasks to one or more streams being processed by the PPU300. Each task may comprise one or more groups of related threads, referred to herein as a warp. In an embodiment, a warp comprises 32 related threads that may be executed in parallel. Cooperating threads may refer to a plurality of threads including instructions to perform the task and that may exchange data through shared memory. Threads and cooperating threads are described in more detail in conjunction withFIG.5A. FIG.4Aillustrates a GPC350of the PPU300ofFIG.3, in accordance with an embodiment. As shown inFIG.4A, each GPC350includes a number of hardware units for processing tasks. In an embodiment, each GPC350includes a pipeline manager410, a pre-raster operations unit (PROP)415, a raster engine425, a work distribution crossbar (WDX)480, a memory management unit (MMU)490, and one or more Data Processing Clusters (DPCs)420. It will be appreciated that the GPC350ofFIG.4Amay include other hardware units in lieu of or in addition to the units shown inFIG.4A. In an embodiment, the operation of the GPC350is controlled by the pipeline manager410. The pipeline manager410manages the configuration of the one or more DPCs420for processing tasks allocated to the GPC350. In an embodiment, the pipeline manager410may configure at least one of the one or more DPCs420to implement at least a portion of a graphics rendering pipeline. For example, a DPC420may be configured to execute a vertex shader program on the programmable streaming multiprocessor (SM)440. The pipeline manager410may also be configured to route packets received from the work distribution unit325to the appropriate logical units within the GPC350. For example, some packets may be routed to fixed function hardware units in the PROP415and/or raster engine425while other packets may be routed to the DPCs420for processing by the primitive engine435or the SM440. In an embodiment, the pipeline manager410may configure at least one of the one or more DPCs420to implement a neural network model and/or a computing pipeline. The PROP unit415is configured to route data generated by the raster engine425and the DPCs420to a Raster Operations (ROP) unit, described in more detail in conjunction withFIG.4B. The PROP unit415may also be configured to perform optimizations for color blending, organize pixel data, perform address translations, and the like. The raster engine425includes a number of fixed function hardware units configured to perform various raster operations. In an embodiment, the raster engine425includes a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, and a tile coalescing engine. The setup engine receives transformed vertices and generates plane equations associated with the geometric primitive defined by the vertices. The plane equations are transmitted to the coarse raster engine to generate coverage information (e.g., an x,y coverage mask for a tile) for the primitive. The output of the coarse raster engine is transmitted to the culling engine where fragments associated with the primitive that fail a z-test are culled, and transmitted to a clipping engine where fragments lying outside a viewing frustum are clipped. 
Those fragments that survive clipping and culling may be passed to the fine raster engine to generate attributes for the pixel fragments based on the plane equations generated by the setup engine. The output of the raster engine425comprises fragments to be processed, for example, by a fragment shader implemented within a DPC420. Each DPC420included in the GPC350includes an M-Pipe Controller (MPC)430, a primitive engine435, and one or more SMs440. The MPC430controls the operation of the DPC420, routing packets received from the pipeline manager410to the appropriate units in the DPC420. For example, packets associated with a vertex may be routed to the primitive engine435, which is configured to fetch vertex attributes associated with the vertex from the memory304. In contrast, packets associated with a shader program may be transmitted to the SM440. The SM440comprises a programmable streaming processor that is configured to process tasks represented by a number of threads. Each SM440is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular group of threads concurrently. In an embodiment, the SM440implements a SIMD (Single-Instruction, Multiple-Data) architecture where each thread in a group of threads (i.e., a warp) is configured to process a different set of data based on the same set of instructions. All threads in the group of threads execute the same instructions. In another embodiment, the SM440implements a SIMT (Single-Instruction, Multiple Thread) architecture where each thread in a group of threads is configured to process a different set of data based on the same set of instructions, but where individual threads in the group of threads are allowed to diverge during execution. In an embodiment, a program counter, call stack, and execution state is maintained for each warp, enabling concurrency between warps and serial execution within warps when threads within the warp diverge. In another embodiment, a program counter, call stack, and execution state is maintained for each individual thread, enabling equal concurrency between all threads, within and between warps. When execution state is maintained for each individual thread, threads executing the same instructions may be converged and executed in parallel for maximum efficiency. The SM440will be described in more detail below in conjunction withFIG.5A. The MMU490provides an interface between the GPC350and the partition unit380. The MMU490may provide translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests. In an embodiment, the MMU490provides one or more translation lookaside buffers (TLBs) for performing translation of virtual addresses into physical addresses in the memory304. FIG.4Billustrates a memory partition unit380of the PPU300ofFIG.3, in accordance with an embodiment. As shown inFIG.4B, the memory partition unit380includes a Raster Operations (ROP) unit450, a level two (L2) cache460, and a memory interface470. The memory interface470is coupled to the memory304. Memory interface470may implement 32, 64, 128, 1024-bit data buses, or the like, for high-speed data transfer. In an embodiment, the PPU300incorporates U memory interfaces470, one memory interface470per pair of partition units380, where each pair of partition units380is connected to a corresponding memory device304. 
For example, PPU300may be connected to up to Y memory devices304, such as high bandwidth memory stacks or graphics double-data-rate, version 5, synchronous dynamic random access memory, or other types of persistent storage. In an embodiment, the memory interface470implements an HBM2 memory interface and Y equals half U. In an embodiment, the HBM2 memory stacks are located on the same physical package as the PPU300, providing substantial power and area savings compared with conventional GDDR5 SDRAM systems. In an embodiment, each HBM2 stack includes four memory dies and Y equals 4, with HBM2 stack including two 128-bit channels per die for a total of 8 channels and a data bus width of 1024 bits. In an embodiment, the memory304supports Single-Error Correcting Double-Error Detecting (SECDED) Error Correction Code (ECC) to protect data. ECC provides higher reliability for compute applications that are sensitive to data corruption. Reliability is especially important in large-scale cluster computing environments where PPUs300process very large datasets and/or run applications for extended periods. In an embodiment, the PPU300implements a multi-level memory hierarchy. In an embodiment, the memory partition unit380supports a unified memory to provide a single unified virtual address space for CPU and PPU300memory, enabling data sharing between virtual memory systems. In an embodiment the frequency of accesses by a PPU300to memory located on other processors is traced to ensure that memory pages are moved to the physical memory of the PPU300that is accessing the pages more frequently. In an embodiment, the NVLink310supports address translation services allowing the PPU300to directly access a CPU's page tables and providing full access to CPU memory by the PPU300. In an embodiment, copy engines transfer data between multiple PPUs300or between PPUs300and CPUs. The copy engines can generate page faults for addresses that are not mapped into the page tables. The memory partition unit380can then service the page faults, mapping the addresses into the page table, after which the copy engine can perform the transfer. In a conventional system, memory is pinned (i.e., non-pageable) for multiple copy engine operations between multiple processors, substantially reducing the available memory. With hardware page faulting, addresses can be passed to the copy engines without worrying if the memory pages are resident, and the copy process is transparent. Data from the memory304or other system memory may be fetched by the memory partition unit380and stored in the L2 cache460, which is located on-chip and is shared between the various GPCs350. As shown, each memory partition unit380includes a portion of the L2 cache460associated with a corresponding memory device304. Lower level caches may then be implemented in various units within the GPCs350. For example, each of the SMs440may implement a level one (L1) cache. The L1 cache is private memory that is dedicated to a particular SM440. Data from the L2 cache460may be fetched and stored in each of the L1 caches for processing in the functional units of the SMs440. The L2 cache460is coupled to the memory interface470and the XBar370. The ROP unit450performs graphics raster operations related to pixel color, such as color compression, pixel blending, and the like. The ROP unit450also implements depth testing in conjunction with the raster engine425, receiving a depth for a sample location associated with a pixel fragment from the culling engine of the raster engine425. 
The depth is tested against a corresponding depth in a depth buffer for a sample location associated with the fragment. If the fragment passes the depth test for the sample location, then the ROP unit450updates the depth buffer and transmits a result of the depth test to the raster engine425. It will be appreciated that the number of partition units380may be different than the number of GPCs350and, therefore, each ROP unit450may be coupled to each of the GPCs350. The ROP unit450tracks packets received from the different GPCs350and determines which GPC350that a result generated by the ROP unit450is routed to through the Xbar370. Although the ROP unit450is included within the memory partition unit380inFIG.4B, in other embodiment, the ROP unit450may be outside of the memory partition unit380. For example, the ROP unit450may reside in the GPC350or another unit. FIG.5Aillustrates the streaming multi-processor440ofFIG.4A, in accordance with an embodiment. As shown inFIG.5A, the SM440includes an instruction cache505, one or more scheduler units510, a register file520, one or more processing cores550, one or more special function units (SFUs)552, one or more load/store units (LSUs)554, an interconnect network580, a shared memory/L1 cache570. As described above, the work distribution unit325dispatches tasks for execution on the GPCs350of the PPU300. The tasks are allocated to a particular DPC420within a GPC350and, if the task is associated with a shader program, the task may be allocated to an SM440. The scheduler unit510receives the tasks from the work distribution unit325and manages instruction scheduling for one or more thread blocks assigned to the SM440. The scheduler unit510schedules thread blocks for execution as warps of parallel threads, where each thread block is allocated at least one warp. In an embodiment, each warp executes 32 threads. The scheduler unit510may manage a plurality of different thread blocks, allocating the warps to the different thread blocks and then dispatching instructions from the plurality of different cooperative groups to the various functional units (i.e., cores550, SFUs552, and LSUs554) during each clock cycle. Cooperative Groups is a programming model for organizing groups of communicating threads that allows developers to express the granularity at which threads are communicating, enabling the expression of richer, more efficient parallel decompositions. Cooperative launch APIs support synchronization amongst thread blocks for the execution of parallel algorithms. Conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (i.e., the syncthreads( ) function). However, programmers would often like to define groups of threads at smaller than thread block granularities and synchronize within the defined groups to enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces. Cooperative Groups enables programmers to define groups of threads explicitly at sub-block (i.e., as small as a single thread) and multi-block granularities, and to perform collective operations such as synchronization on the threads in a cooperative group. The programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence. 
Cooperative Groups primitives enable new patterns of cooperative parallelism, including producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks. A dispatch unit515is configured to transmit instructions to one or more of the functional units. In the embodiment, the scheduler unit510includes two dispatch units515that enable two different instructions from the same warp to be dispatched during each clock cycle. In alternative embodiments, each scheduler unit510may include a single dispatch unit515or additional dispatch units515. Each SM440includes a register file520that provides a set of registers for the functional units of the SM440. In an embodiment, the register file520is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file520. In another embodiment, the register file520is divided between the different warps being executed by the SM440. The register file520provides temporary storage for operands connected to the data paths of the functional units. Each SM440comprises L processing cores550. In an embodiment, the SM440includes a large number (e.g., 128, etc.) of distinct processing cores550. Each core550may include a fully-pipelined, single-precision, double-precision, and/or mixed precision processing unit that includes a floating point arithmetic logic unit and an integer arithmetic logic unit. In an embodiment, the floating point arithmetic logic units implement the IEEE 754-2008 standard for floating point arithmetic. In an embodiment, the cores550include 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores. Tensor cores configured to perform matrix operations, and, in an embodiment, one or more tensor cores are included in the cores550. In particular, the tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In an embodiment, each tensor core operates on a 4×4 matrix and performs a matrix multiply and accumulate operation D=A×B+C, where A, B, C, and D are 4×4 matrices. In an embodiment, the matrix multiply inputs A and B are 16-bit floating point matrices, while the accumulation matrices C and D may be 16-bit floating point or 32-bit floating point matrices. Tensor Cores operate on 16-bit floating point input data with 32-bit floating point accumulation. The 16-bit floating point multiply requires 64 operations and results in a full precision product that is then accumulated using 32-bit floating point addition with the other intermediate products for a 4×4×4 matrix multiply. In practice, Tensor Cores are used to perform much larger two-dimensional or higher dimensional matrix operations, built up from these smaller elements. An API, such as CUDA 9C++ API, exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use Tensor Cores from a CUDA-C++ program. At the CUDA level, the warp-level interface assumes 16×16 size matrices spanning all 32 threads of the warp. Each SM440also comprises M SFUs552that perform special functions (e.g., attribute evaluation, reciprocal square root, and the like). In an embodiment, the SFUs552may include a tree traversal unit configured to traverse a hierarchical tree data structure. 
In an embodiment, the SFUs552may include a texture unit configured to perform texture map filtering operations. In an embodiment, the texture units are configured to load texture maps (e.g., a 2D array of texels) from the memory304and sample the texture maps to produce sampled texture values for use in shader programs executed by the SM440. In an embodiment, the texture maps are stored in the shared memory/L1 cache570. The texture units implement texture operations such as filtering operations using mip-maps (i.e., texture maps of varying levels of detail). In an embodiment, each SM440includes two texture units. Each SM440also comprises N LSUs554that implement load and store operations between the shared memory/L1 cache570and the register file520. Each SM440includes an interconnect network580that connects each of the functional units to the register file520and the LSUs554to the register file520and shared memory/L1 cache570. In an embodiment, the interconnect network580is a crossbar that can be configured to connect any of the functional units to any of the registers in the register file520and connect the LSUs554to the register file and memory locations in shared memory/L1 cache570. The shared memory/L1 cache570is an array of on-chip memory that allows for data storage and communication between the SM440and the primitive engine435and between threads in the SM440. In an embodiment, the shared memory/L1 cache570comprises 128 KB of storage capacity and is in the path from the SM440to the partition unit380. The shared memory/L1 cache570can be used to cache reads and writes. One or more of the shared memory/L1 cache570, L2 cache460, and memory304are backing stores. Combining data cache and shared memory functionality into a single memory block provides the best overall performance for both types of memory accesses. The capacity is usable as a cache by programs that do not use shared memory. For example, if shared memory is configured to use half of the capacity, texture and load/store operations can use the remaining capacity. Integration within the shared memory/L1 cache570enables the shared memory/L1 cache570to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth and low-latency access to frequently reused data. When configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. Specifically, the fixed function graphics processing units shown inFIG.3are bypassed, creating a much simpler programming model. In the general purpose parallel computation configuration, the work distribution unit325assigns and distributes blocks of threads directly to the DPCs420. The threads in a block execute the same program, using a unique thread ID in the calculation to ensure each thread generates unique results, using the SM440to execute the program and perform calculations, shared memory/L1 cache570to communicate between threads, and the LSU554to read and write global memory through the shared memory/L1 cache570and the memory partition unit380. When configured for general purpose parallel computation, the SM440can also write commands that the scheduler unit320can use to launch new work on the DPCs420. The PPU300may be included in a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (PDA), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, and the like.
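To make the thread-indexing scheme described above for the general purpose parallel computation configuration concrete, the following is a conceptual, sequential simulation written in Python (not CUDA); the kernel, the grid and block sizes, and the SAXPY-style computation are illustrative assumptions, and real execution on the SM440is parallel rather than a loop.

```python
# Conceptual simulation of a grid of thread blocks. Each simulated thread derives a
# unique global index from its block ID and thread ID and uses it to produce a
# unique result, as described above.
def run_grid(kernel, grid_dim, block_dim, *args):
    for block_id in range(grid_dim):
        for thread_id in range(block_dim):
            kernel(block_id, block_dim, thread_id, *args)

def saxpy_kernel(block_id, block_dim, thread_id, a, x, y, out):
    i = block_id * block_dim + thread_id   # unique global thread index
    if i < len(out):                       # guard threads that fall past the end of the data
        out[i] = a * x[i] + y[i]

n = 10
x = [float(i) for i in range(n)]
y = [1.0] * n
out = [0.0] * n
run_grid(saxpy_kernel, 4, 3, 2.0, x, y, out)   # 4 blocks of 3 threads cover 10 elements
```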
In an embodiment, the PPU300is embodied on a single semiconductor substrate. In another embodiment, the PPU300is included in a system-on-a-chip (SoC) along with one or more other devices such as additional PPUs300, the memory304, a reduced instruction set computer (RISC) CPU, a memory management unit (MMU), a digital-to-analog converter (DAC), and the like. In an embodiment, the PPU300may be included on a graphics card that includes one or more memory devices304. The graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. In yet another embodiment, the PPU300may be an integrated graphics processing unit (iGPU) or parallel processor included in the chipset of the motherboard.

Exemplary Computing System

Systems with multiple GPUs and CPUs are used in a variety of industries as developers expose and leverage more parallelism in applications such as artificial intelligence computing. High-performance GPU-accelerated systems with tens to many thousands of compute nodes are deployed in data centers, research facilities, and supercomputers to solve ever larger problems. As the number of processing devices within the high-performance systems increases, the communication and data transfer mechanisms need to scale to support the increased bandwidth.

FIG.5Bis a conceptual diagram of a processing system500implemented using the PPU300ofFIG.3, in accordance with an embodiment. The exemplary system565may be configured to implement the methods130,160,230, and250shown inFIGS.1B,1C,2F, and2Hrespectively. The processing system500includes a CPU530, switch510, and multiple PPUs300, each with a respective memory304. The NVLink310provides high-speed communication links between each of the PPUs300. Although a particular number of NVLink310and interconnect302connections are illustrated inFIG.5B, the number of connections to each PPU300and the CPU530may vary. The switch510interfaces between the interconnect302and the CPU530. The PPUs300, memories304, and NVLinks310may be situated on a single semiconductor platform to form a parallel processing module525. In an embodiment, the switch510supports two or more protocols to interface between various different connections and/or links. In another embodiment (not shown), the NVLink310provides one or more high-speed communication links between each of the PPUs300and the CPU530and the switch510interfaces between the interconnect302and each of the PPUs300. The PPUs300, memories304, and interconnect302may be situated on a single semiconductor platform to form a parallel processing module525. In yet another embodiment (not shown), the interconnect302provides one or more communication links between each of the PPUs300and the CPU530and the switch510interfaces between each of the PPUs300using the NVLink310to provide one or more high-speed communication links between the PPUs300. In another embodiment (not shown), the NVLink310provides one or more high-speed communication links between the PPUs300and the CPU530through the switch510. In yet another embodiment (not shown), the interconnect302provides one or more communication links between each of the PPUs300directly. One or more of the NVLink310high-speed communication links may be implemented as a physical NVLink interconnect or either an on-chip or on-die interconnect using the same protocol as the NVLink310. In the context of the present description, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit fabricated on a die or chip.
It should be noted that the term single semiconductor platform may also refer to multi-chip modules with increased connectivity which simulate on-chip operation and make substantial improvements over utilizing a conventional bus implementation. Of course, the various circuits or devices may also be situated separately or in various combinations of semiconductor platforms per the desires of the user. Alternately, the parallel processing module525may be implemented as a circuit board substrate and each of the PPUs300and/or memories304may be packaged devices. In an embodiment, the CPU530, switch510, and the parallel processing module525are situated on a single semiconductor platform. In an embodiment, the signaling rate of each NVLink310is 20 to 25 Gigabits/second and each PPU300includes six NVLink310interfaces (as shown inFIG.5B, five NVLink310interfaces are included for each PPU300). Each NVLink310provides a data transfer rate of 25 Gigabytes/second in each direction, with six links providing 300 Gigabytes/second. The NVLinks310can be used exclusively for PPU-to-PPU communication as shown inFIG.5B, or some combination of PPU-to-PPU and PPU-to-CPU, when the CPU530also includes one or more NVLink310interfaces. In an embodiment, the NVLink310allows direct load/store/atomic access from the CPU530to each PPU's300memory304. In an embodiment, the NVLink310supports coherency operations, allowing data read from the memories304to be stored in the cache hierarchy of the CPU530, reducing cache access latency for the CPU530. In an embodiment, the NVLink310includes support for Address Translation Services (ATS), allowing the PPU300to directly access page tables within the CPU530. One or more of the NVLinks310may also be configured to operate in a low-power mode. FIG.5Cillustrates an exemplary system565in which the various architecture and/or functionality of the various previous embodiments may be implemented. The exemplary system565may be configured to implement the methods130,160,230, and250shown inFIGS.1B,1C,2F, and2Hrespectively. As shown, a system565is provided including at least one central processing unit530that is connected to a communication bus575. The communication bus575may be implemented using any suitable protocol, such as PCI (Peripheral Component Interconnect), PCI-Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol(s). The system565also includes a main memory540. Control logic (software) and data are stored in the main memory540which may take the form of random access memory (RAM). The system565also includes input devices560, the parallel processing system525, and display devices545, i.e. a conventional CRT (cathode ray tube), LCD (liquid crystal display), LED (light emitting diode), plasma display or the like. User input may be received from the input devices560, e.g., keyboard, mouse, touchpad, microphone, and the like. Each of the foregoing modules and/or devices may even be situated on a single semiconductor platform to form the system565. Alternately, the various modules may also be situated separately or in various combinations of semiconductor platforms per the desires of the user. Further, the system565may be coupled to a network (e.g., a telecommunications network, local area network (LAN), wireless network, wide area network (WAN) such as the Internet, peer-to-peer network, cable network, or the like) through a network interface535for communication purposes. 
The system565may also include a secondary storage (not shown). The secondary storage includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, digital versatile disk (DVD) drive, recording device, universal serial bus (USB) flash memory. The removable storage drive reads from and/or writes to a removable storage unit in a well-known manner. Computer programs, or computer control logic algorithms, may be stored in the main memory540and/or the secondary storage. Such computer programs, when executed, enable the system565to perform various functions. The memory540, the storage, and/or any other storage are possible examples of computer-readable media. The architecture and/or functionality of the various previous figures may be implemented in the context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and/or any other desired system. For example, the system565may take the form of a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (PDA), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, a mobile phone device, a television, workstation, game consoles, embedded system, and/or any other type of logic. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Machine Learning

Deep neural networks (DNNs) developed on processors, such as the PPU300, have been used for diverse use cases, from self-driving cars to faster drug development, from automatic image captioning in online image databases to smart real-time language translation in video chat applications. Deep learning is a technique that models the neural learning process of the human brain, continually learning, continually getting smarter, and delivering more accurate results more quickly over time. A child is initially taught by an adult to correctly identify and classify various shapes, eventually being able to identify shapes without any coaching. Similarly, a deep learning or neural learning system needs to be trained in object recognition and classification for it to get smarter and more efficient at identifying basic objects, occluded objects, etc., while also assigning context to objects. At the simplest level, neurons in the human brain look at various inputs that are received, importance levels are assigned to each of these inputs, and output is passed on to other neurons to act upon. An artificial neuron or perceptron is the most basic model of a neural network. In one example, a perceptron may receive one or more inputs that represent various features of an object that the perceptron is being trained to recognize and classify, and each of these features is assigned a certain weight based on the importance of that feature in defining the shape of an object. A deep neural network (DNN) model includes multiple layers of many connected perceptrons (e.g., nodes) that can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy.
In one example, a first layer of the DNN model breaks down an input image of an automobile into various sections and looks for basic patterns such as lines and angles. The second layer assembles the lines to look for higher level patterns such as wheels, windshields, and mirrors. The next layer identifies the type of vehicle, and the final few layers generate a label for the input image, identifying the model of a specific automobile brand. Once the DNN is trained, the DNN can be deployed and used to identify and classify objects or patterns in a process known as inference. Examples of inference (the process through which a DNN extracts useful information from a given input) include identifying handwritten numbers on checks deposited into ATM machines, identifying images of friends in photos, delivering movie recommendations to over fifty million users, identifying and classifying different types of automobiles, pedestrians, and road hazards in driverless cars, or translating human speech in real-time. During training, data flows through the DNN in a forward propagation phase until a prediction is produced that indicates a label corresponding to the input. If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset. Training complex neural networks requires massive amounts of parallel computing performance, including floating-point multiplications and additions that are supported by the PPU300. Inferencing is less compute-intensive than training, being a latency-sensitive process where a trained neural network is applied to new inputs it has not seen before to classify images, translate speech, and generally infer new information. Neural networks rely heavily on matrix math operations, and complex multi-layered networks require tremendous amounts of floating-point performance and bandwidth for both efficiency and speed. With thousands of processing cores, optimized for matrix math operations, and delivering tens to hundreds of TFLOPS of performance, the PPU300is a computing platform capable of delivering performance required for deep neural network-based artificial intelligence and machine learning applications.
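The forward-propagation/backward-propagation cycle described above can be illustrated with a minimal single-layer example. This is a conceptual sketch on synthetic data, assuming NumPy and a logistic output; it is not the training procedure of any network described in this document.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))                 # 64 training inputs, 8 features each
w_true = rng.normal(size=(8, 1))
y = (x @ w_true > 0).astype(float)           # synthetic target labels

w = np.zeros((8, 1))                         # weights to be learned
lr = 0.1
for step in range(200):
    # forward propagation: produce a prediction for each input
    pred = 1.0 / (1.0 + np.exp(-(x @ w)))
    # backward propagation: gradient of the cross-entropy error w.r.t. the weights
    grad = x.T @ (pred - y) / len(x)
    # adjust the weights to reduce the error between prediction and label
    w -= lr * grad
```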
EXAMPLE EMBODIMENT Example Embodiment Following is a description of a camera parameter estimation apparatus, a camera parameter estimation method, and a program according to an example embodiment of the invention, with reference toFIGS.1to4. [Apparatus Configuration] First, the configuration of a camera parameter estimation apparatus according to the example embodiment will be described with reference toFIG.1.FIG.1is a block diagram showing a configuration of the camera parameter estimation apparatus according to the example embodiment of the invention. A camera parameter estimation apparatus10according to this example embodiment shown inFIG.1estimates a geometric parameter of a camera that has shot an image of an object and a lens distortion parameter of a lens distortion model expressed by a single unknown. As shown inFIG.1, the camera parameter estimation apparatus10is provided with a data obtaining unit11and a parameter estimation unit12. The data obtaining unit11obtains image corresponding points of an object and an approximation order for polynomial approximation of a lens distortion model expressed by a single unknown. The parameter estimation unit12estimates a geometric parameter and a lens distortion parameter based on the image corresponding points and the approximation order obtained by the data obtaining unit11. Here, “geometric parameter” refers to a parameter defining transformation of image corresponding points. Specific examples of a geometric parameter include a homography matrix that represents planar transformation between two images, a basic matrix that represents a relative projection relation of a camera between two images, and a projection transformation matrix that represents transformation between 3D points and corresponding 2D points projected onto one image. The input image corresponding points differ according to the geometric parameter to be estimated. For example, if the geometric parameter is a homography matrix or a basic matrix, the image corresponding points are corresponding 2D points on images. Also, if the geometric parameter is a projection transformation matrix, the image corresponding points are 3D points and 2D points obtained by projecting these 3D points. The number of corresponding points required for estimation differs according to the degree of freedom of the geometric parameter. Note that this is a widely known fact and thus details thereof are omitted. Also, these corresponding points are not those of a degenerate configuration also known as a critical configuration. The reason being that a geometric parameter cannot be logically estimated for such corresponding points. Thus, in this example embodiment, a geometric parameter and a lens distortion parameter are estimated based on the image corresponding points and an approximation order for polynomial approximation of a lens distortion model. With this example embodiment, it is possible to estimate a lens distortion parameter and a geometric parameter of a camera by using a model that can express radial distortion of a lens with a single unknown. Also, in this example embodiment, the parameter estimation unit12can estimate a plurality of geometric parameter candidates and a plurality of lens distortion parameter candidates based on the image corresponding points and the approximation order. 
In this case, the parameter estimation unit12selects, from the candidates, the geometric parameter candidate and the lens distortion parameter candidate that minimize an error function, and outputs the selected candidates as the geometric parameter and the lens distortion parameter. Also, the error function is a function that represents the transformation relation between the image corresponding points and the geometric parameter and lens distortion parameter. Furthermore, in this example embodiment, if the number of image corresponding points does not meet a set condition (for example, the number of image corresponding points is excessively high), the parameter estimation unit12selects the geometric parameter candidate and the lens distortion parameter candidate that minimize the error function through non-linear optimization.

[Apparatus Operations] Next, operations of the camera parameter estimation apparatus10according to this example embodiment will be described with reference toFIG.2.FIG.2is a flowchart showing operation of the camera parameter estimation apparatus according to this example embodiment of the invention. The following description refers toFIG.1as appropriate. Also, in the first example embodiment, a camera parameter estimation method is implemented by operating the camera parameter estimation apparatus10. Thus, the description of the camera parameter estimation method in this example embodiment is replaced with the description of operation of the camera parameter estimation apparatus below. As shown inFIG.2, the data obtaining unit11obtains image corresponding points of an object and an approximation order for polynomial approximation of a lens distortion model expressed by a single unknown (step A1). Next, the parameter estimation unit12estimates geometric parameter candidates and lens distortion parameter candidates based on the image corresponding points and the approximation order obtained in step A1(step A2). Next, the parameter estimation unit12selects, from the candidates estimated in step A2, the geometric parameter candidate and the lens distortion parameter candidate that minimize the error function (step A3). Then, the parameter estimation unit12outputs the candidates selected in step A3as the geometric parameter and the lens distortion parameter (step A4).

SPECIFIC EXAMPLES Next, specific examples of this example embodiment will be described. Note that in the specific examples below, the distortion center is assumed to be the image center (that is, half of the size of an image). This assumption is also employed in above-described Non-Patent Documents 2 to 5, and is a valid assumption as recent digital cameras are manufactured to high levels of precision. Note that a known value calculated in advance may be used as the distortion center, and may be freely changed by a person skilled in the art according to the situation in which it is to be used.

(1) Lens Distortion Model First, the lens distortion model will be described. Also, in the following description, the superscript T represents a transposition of a matrix and a vector. Also, w represents the distortion coefficient, $m_u$ represents the undistorted image coordinates, and $m_d$ represents the distorted image coordinates. Furthermore, the undistorted image coordinates $m_u$ can be represented by the following Equation 1, and the distorted image coordinates $m_d$ can be represented by the following Equation 2.
m_u = [x_u, y_u]^T   (Equation 1)

m_d = [x_d, y_d]^T   (Equation 2)

Also, \tilde{m}_u and \tilde{m}_d represent the homogenized coordinates of the image coordinates. \tilde{m}_u can be represented by the following Equation 3, and \tilde{m}_d can be represented by the following Equation 4.

\tilde{m}_u = [x_u, y_u, 1]^T   (Equation 3)

\tilde{m}_d = [x_d, y_d, 1]^T   (Equation 4)

The distortion model is defined here. When the lens distortion is assumed to be radial distortion only, the transformation relation between m_u and m_d is represented by Equation 5 below.

m_d = (r_d / r_u) m_u   (Equation 5)

Also, the following Equations 6 and 7 hold true, and thus, following Non-Patent Document 5 mentioned above, the transformation relation between r_u and r_d based on the FOV model can be represented by Equations 8 and 9.

r_u = sqrt(x_u^2 + y_u^2)   (Equation 6)

r_d = sqrt(x_d^2 + y_d^2)   (Equation 7)

r_d = (1/w) arctan(2 r_u tan(w/2))   (Equation 8)

r_u = tan(r_d w) / (2 tan(w/2))   (Equation 9)

In Equations 5, 8, and 9, the undistorted values, that is, x_u, y_u, and r_u, are generally unknown and are to be estimated from the measurable distorted values x_d, y_d, and r_d. It is known that, once the distortion coefficient w in Equations 5, 8, and 9 is obtained, r_u can be calculated from r_d, and m_u can be directly calculated from m_d.

When Equation 5 above is rearranged and expressed using homogenized coordinates, Equation 10 below is obtained.

(r_u / r_d) m_d = m_u  ⟺  [x_d, y_d, z_d]^T ∝ [x_u, y_u, 1]^T   (Equation 10)

In Equation 10, z_d = r_d / r_u holds true, and the symbol ∝ indicates that the left side and the right side are equal up to a constant factor. Using the fact that the distortion coefficient w is close to zero (0), a Taylor expansion of z_d gives the following Equations 11 to 13. It should be noted that, in Equation 11, a_{2n} is the approximation coefficient corresponding to w^{2n}.

z_d = r_d / r_u = 2 r_d tan(w/2) / tan(r_d w) ≈ 1 + a_2 w^2 + a_4 w^4 + … + a_{2n} w^{2n}   (Equation 11)

a_2 = (1 − 4 r_d^2) / 12   (Equation 12)

a_4 = (3 − 8 r_d^4 − 10 r_d^2) / 360   (Equation 13)

When Equations 11 to 13 are used, Equation 10 is expressed by Equations 14 and 15.

[x_d, y_d, 1]^T + [0, 0, a_2]^T w^2 + [0, 0, a_4]^T w^4 + … ∝ [x_u, y_u, 1]^T  ⟺  \tilde{m}_d + Σ_{i=1}^{n} w^{2i} p_{2i} ∝ \tilde{m}_u   (Equation 14)

p_{2i} = [0, 0, a_{2i}]^T   (Equation 15)

The polynomial approximations of the FOV model represented by Equations 11 to 15 are the lens distortion models according to this specific example.

(2) Calculation of Distortion Coefficient and Camera Parameter

Next, the method for calculating the distortion coefficient w and the camera parameters (geometric parameter and lens distortion parameter) will be described. In this specific example, for convenience, a method for simultaneously estimating the distortion coefficient w and the basic matrix will be described assuming that n = 2, where n is the approximation order appearing in Equation 11. When n = 2, the distortion model is represented by Equation 16 below.

\tilde{m}_u ∝ \tilde{m}_d + w^2 p_2 + w^4 p_4   (Equation 16)

The epipolar equation that is satisfied by the basic matrix F is expressed by Equation 17 below.

\tilde{m}'_u^T F \tilde{m}_u = 0   (Equation 17)

Here, \tilde{m}'_u represents the homogenized coordinates of the corresponding point on the second image, relative to \tilde{m}_u. When Equation 16 is substituted into Equation 17, an epipolar equation including the distortion coefficient w is expressed by Equation 18 below.
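As a quick numerical check of Equations 8 to 13, the sketch below (plain Python, assuming normalized image radii and a small, illustrative distortion coefficient) compares the exact FOV ratio r_d/r_u with its polynomial approximation 1 + a_2 w^2 + a_4 w^4; it is an illustration of the model, not the patented estimator.

```python
import math

def fov_distort_radius(r_u, w):
    """Exact FOV model (Equation 8): distorted radius from undistorted radius."""
    return math.atan(2.0 * r_u * math.tan(w / 2.0)) / w

def fov_ratio_poly(r_d, w, order=2):
    """Polynomial approximation of z_d = r_d / r_u (Equations 11 to 13), for order n <= 2."""
    a2 = (1.0 - 4.0 * r_d ** 2) / 12.0          # Equation 12
    a4 = (3.0 - 8.0 * r_d ** 4 - 10.0 * r_d ** 2) / 360.0   # Equation 13
    z = 1.0 + a2 * w ** 2
    if order >= 2:
        z += a4 * w ** 4
    return z

if __name__ == "__main__":
    w = 0.3      # distortion coefficient in radians (assumed value for illustration)
    r_u = 0.8    # undistorted radius in normalized image coordinates (assumed)
    r_d = fov_distort_radius(r_u, w)
    print("exact r_d/r_u =", r_d / r_u)
    print("poly approx   =", fov_ratio_poly(r_d, w, order=2))
```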
(\tilde{m}'_d + w^2 p'_2 + w^4 p'_4)^T F (\tilde{m}_d + w^2 p_2 + w^4 p_4) = \tilde{m}'_d^T F \tilde{m}_d + w^2 (\tilde{m}'_d^T F p_2 + p'_2^T F \tilde{m}_d) + w^4 (\tilde{m}'_d^T F p_4 + p'_2^T F p_2 + p'_4^T F \tilde{m}_d) + w^6 (p'_2^T F p_4 + p'_4^T F p_2) + w^8 p'_4^T F p_4 = 0   (Equation 18)

Then, when nine corresponding points are provided and Equation 18 is expressed in matrix form, Equation 19 below is obtained.

(D_1 + λ D_2 + λ^2 D_3 + λ^3 D_4 + λ^4 D_5) f = 0   (Equation 19)

Here, λ = w^2 holds true, f denotes the vectorized F, and D_1 to D_5 are the coefficient matrices for the respective powers of λ. Equation 19 is a 9×9 matrix polynomial problem in which f is an eigenvector and λ is an eigenvalue, and can be solved by using a library of iterative solvers for linear systems. Also, since λ = w^2 holds true, it is sufficient to extract only the eigenvectors that correspond to positive eigenvalues. The distortion coefficient w can then be calculated as w = λ^{1/2}.

Also, when more than nine corresponding points are provided, it is sufficient to multiply Equation 19 from the left by the transposition of D_1 and similarly solve Equation 19 as a 9×9 matrix polynomial.

When solving Equation 19, nine eigenvalues and their corresponding eigenvectors can be obtained. If the number of corresponding points exceeds nine, it is sufficient to select the eigenvalue-eigenvector pair for which an algebraic error or a re-projection error based on Equation 17 is minimized. When the number of corresponding points is exactly nine, the algebraic error and the re-projection error are both zero, so the error cannot be used to discriminate between candidates, and a suitable λ may be selected empirically. That is, since λ = w^2 holds true and Equation 15 is part of a Taylor expansion of the FOV model around w = 0, λ may be set to the smallest positive real eigenvalue. Also, if the range of λ is known in advance, λ may be selected through threshold processing based on the known range.

Equation 19 is based on an approximation of the FOV model, and thus an algebraic error or a re-projection error based on Equation 17 may be further minimized through non-linear optimization, such as a Newton method, by using the obtained distortion coefficient w as an initial value and employing the true FOV model based on Equations 8 and 9. Also, a polynomial approximation with an increased approximation order may be used instead of the true FOV model.

In this specific example, n = 2, but a larger n may be used in order to increase the approximation accuracy, and in that case as well, the problem reduces to a similar matrix polynomial problem. On the other hand, when it is known in advance that there is little distortion, n may be set to 1 to keep the computation amount low. In that case, the order of λ in the matrix polynomial is small, and the problem can be solved with a reduced amount of calculation. If the obtained solution is inaccurate, the above-described non-linear optimization may be performed.

Effects According to Example Embodiment

As described above, according to the present example embodiment, regardless of whether the lens distortion is small or large, the lens distortion parameter can be expressed by a single unknown and estimated simultaneously with the geometric parameter of the camera. The reason for this is as follows. The FOV model is a formulation of the lens distortion of an extremely wide-angle fisheye lens. Equations 11 to 13 are polynomial approximations of the FOV model, and the distortion coefficient w takes a value close to 0 even for a wide-angle lens.
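Equation 19 is a polynomial eigenvalue problem in λ. One standard way to solve such a problem numerically (shown here instead of the iterative solvers mentioned above, purely as an illustration) is to linearize it into a generalized eigenvalue problem via a companion form. The Python sketch below assumes the five 9×9 coefficient matrices D_1 to D_5 have already been built from the corresponding points, and returns w = sqrt(λ) with the eigenvector f for every positive real eigenvalue.

```python
import numpy as np
from scipy.linalg import eig

def solve_matrix_polynomial(D):
    """Solve (D[0] + lam*D[1] + lam^2*D[2] + lam^3*D[3] + lam^4*D[4]) f = 0.

    D is a list of five 9x9 coefficient matrices (D_1..D_5 of Equation 19).
    Returns a list of (w, f) pairs for positive real eigenvalues lam = w**2.
    Companion-form linearization: A z = lam * B z with z = [f, lam*f, lam^2*f, lam^3*f].
    """
    n = D[0].shape[0]          # 9
    I = np.eye(n)
    Z = np.zeros((n, n))
    A = np.block([
        [Z,     I,     Z,     Z    ],
        [Z,     Z,     I,     Z    ],
        [Z,     Z,     Z,     I    ],
        [-D[0], -D[1], -D[2], -D[3]],
    ])
    B = np.block([
        [I, Z, Z, Z   ],
        [Z, I, Z, Z   ],
        [Z, Z, I, Z   ],
        [Z, Z, Z, D[4]],
    ])
    eigvals, eigvecs = eig(A, B)
    solutions = []
    for lam, z in zip(eigvals, eigvecs.T):
        if np.isfinite(lam) and abs(lam.imag) < 1e-9 and lam.real > 0:
            f = z[:n].real                 # top block of z is the eigenvector f
            f /= np.linalg.norm(f)
            solutions.append((np.sqrt(lam.real), f))   # w = sqrt(lambda)
    return solutions
```

The surviving (w, f) pairs are the candidates among which the algebraic or re-projection error of Equation 17 selects the final geometric parameter and lens distortion parameter.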
Accordingly, polynomial approximation is valid even when distortion is large, and thus the lens distortion parameter can be estimated by increasing the approximation order unlike the conventional method in which the unknown of the lens distortion parameter is increased. Conversely, if there is little distortion, it is sufficient to reduce the approximation order n. Thus, according to this example embodiment, the magnitude of distortion can be addressed by increasing or decreasing the approximation order without changing the number of parameters, and thus issues with the conventional method can be resolved. [Variations] The invention is not limited to only the example embodiments described above. In the invention, various modifications that will be appreciated by those skilled in the art can be made to the example embodiments described above. For example, the invention can also be carried out through modes disclosed in the following variations. (1) First Variation In the above-described embodiment, the geometric parameter of the camera is not limited to a basic matrix. For example, other geometric parameters of the camera including a homography matrix and a perspective projection matrix can be calculated using a similar method. As disclosed in the above Non-Patent Document 3, the problem is similarly reduced to a matrix polynomial in either case. (2) Second Variation In this example embodiment, as shown inFIG.3, the camera parameter estimation apparatus10may be provided with an approximation order determination unit13that calculates an approximation order from the focal length of a camera at the time of shooting and the resolution of the shot image. The approximation order determination unit13can obtain the image resolution and an approximate focal length at the time of shooting by using, for example, tag information and Exif information embedded in the image.FIG.3is a block diagram showing another example of the configuration of the camera parameter estimation apparatus according to the example embodiment of the invention. In the second variation, the data obtaining unit11only obtains the image corresponding points. Also, the approximation order determination unit13can calculate an angle of view based on a ratio between the width or height of an image and the focal distance of the camera at the time of shooting, and use an approximate lens distortion value that corresponds to the magnitude of the angle of view as prior information to determine the approximation order of the lens distortion model. (3) Third Variation In the above-described example embodiment, the method for calculating the geometric parameter of the camera and the lens distortion parameter is not limited to a matrix polynomial. For example, Non-Patent Documents 4 and 5 above disclose a method for calculating a perspective projection matrix when there is one lens distortion parameter, and the methods disclosed in the Non-Patent Documents 4 and 5 may be used in place of the matrix polynomial in this example embodiment as well. [Program] A program according to this example embodiment may be a program that causes a computer to execute steps A1to A4shown inFIG.2. By installing this program in the computer and executing the program, the camera parameter estimation apparatus10and the camera parameter estimation method according to this example embodiment can be realized. 
In this case, a processor of the computer performs processing while functioning as the data obtaining unit 11, the parameter estimation unit 12, and the approximation order determination unit 13. Also, the program according to this example embodiment may be executed by a computer system constructed using a plurality of computers. In this case, for example, each computer may function as any of the data obtaining unit 11, the parameter estimation unit 12, and the approximation order determination unit 13.

Here, a computer that realizes the camera parameter estimation apparatus 10 by executing the program according to this example embodiment will be described with reference to FIG. 4. FIG. 4 is a block diagram showing an example of a computer that realizes the camera parameter estimation apparatus according to the example embodiment of the invention.

As shown in FIG. 4, the computer 110 includes a CPU (Central Processing Unit) 111, a main memory 112, a storage apparatus 113, an input interface 114, a display controller 115, a data reader/writer 116, and a communication interface 117. These units are connected so as to be capable of performing data communication with each other through a bus 121. Also, the computer 110 may include a GPU (Graphics Processing Unit) or an FPGA (Field-Programmable Gate Array) in addition to the CPU 111 or instead of the CPU 111.

The CPU 111 loads the program (code) according to this example embodiment, which is stored in the storage apparatus 113, into the main memory 112 and performs various operations by executing the code in a predetermined order. The main memory 112 is typically a volatile storage apparatus such as a DRAM (Dynamic Random Access Memory). Also, the program according to this example embodiment is provided in a state of being stored in a computer-readable recording medium 120. Note that the program according to this example embodiment may also be distributed over the Internet, to which the computer is connected through the communication interface 117. Also, in addition to a hard disk drive, a semiconductor storage apparatus such as a flash memory is given as a specific example of the storage apparatus 113.

The input interface 114 mediates data transmission between the CPU 111 and an input device 118, which may be a keyboard or mouse. The display controller 115 is connected to a display device 119 and controls display by the display device 119. The data reader/writer 116 mediates data transmission between the CPU 111 and the recording medium 120, and executes reading out of the program from the recording medium 120 and writing of processing results in the computer 110 to the recording medium 120. The communication interface 117 mediates data transmission between the CPU 111 and other computers.

Also, general-purpose semiconductor storage devices such as CF (Compact Flash (registered trademark)) and SD (Secure Digital), a magnetic recording medium such as a flexible disk, and an optical recording medium such as a CD-ROM (Compact Disk Read-Only Memory) are given as specific examples of the recording medium 120.

Note that the camera parameter estimation apparatus 10 according to this example embodiment can be realized not only by a computer with the program installed, but also by using hardware corresponding to each part. Further, a configuration may be adopted in which a portion of the camera parameter estimation apparatus 10 is realized by a program and the remaining portions are realized by hardware.
A portion or all of the example embodiments described above can be realized according to (supplementary note 1) to (supplementary note 12) described below, but the following description does not limit the invention. (Supplementary Note 1) A camera parameter estimation apparatus for estimating a geometric parameter of a camera that has shot an image of an object and a lens distortion parameter of a lens distortion model represented by a single unknown, the apparatus including:a data obtaining unit configured to obtain image corresponding points relating to the object and an approximation order for polynomial approximation of the lens distortion model; anda parameter estimation unit configured to estimate the geometric parameter and the lens distortion parameter based on the image corresponding points and the approximation order. (Supplementary Note 2) The camera parameter estimation apparatus according to Supplementary Note 1,wherein the parameter estimation unit estimates a plurality of candidates for the geometric parameter and a plurality of candidates for the lens distortion parameter based on the image corresponding points and the approximation order, andselects the geometric parameter candidate and the lens distortion parameter candidate that minimize an error function that represents a transformation relation between the image corresponding points and the geometric parameter and lens distortion parameter from the candidates. (Supplementary Note 3) The camera parameter estimation apparatus according to Supplementary Note 2,wherein, if the number of the image corresponding points does not meet a set condition, the parameter estimation unit selects the geometric parameter candidate and the lens distortion parameter candidate that minimize the error function through non-linear optimization. (Supplementary Note 4) The camera parameter estimation apparatus according to any one of Supplementary Notes 1 to 3, further including:an approximation order determination unit configured to determine the approximation order of the lens distortion model based on a focal length of the camera at the time of shooting and a resolution of the shot image. (Supplementary Note 5) A camera parameter estimation method for estimating a geometric parameter of a camera that has shot an image of an object and a lens distortion parameter of a lens distortion model represented by a single unknown, the method including:(a) a step of obtaining image corresponding points relating to the object and an approximation order for polynomial approximation of the lens distortion model; and(b) a step of estimating the geometric parameter and the lens distortion parameter based on the image corresponding points and the approximation order. (Supplementary Note 6) The camera parameter estimation method according to Supplementary Note 5, wherein, in the (b) step, a plurality of candidates for the geometric parameter and a plurality of candidates for the lens distortion parameter are estimated based on the image corresponding points and the approximation order, andthe geometric parameter candidate and the lens distortion parameter candidate that minimize an error function that represents a transformation relation between the image corresponding points and the geometric parameter and lens distortion parameter are selected from the candidates. 
(Supplementary Note 7) The camera parameter estimation method according to Supplementary Note 6, wherein, in the (b) step, if the number of the image corresponding points does not meet a set condition, the geometric parameter candidate and the lens distortion parameter candidate that minimize the error function through non-linear optimization are selected. (Supplementary Note 8) The camera parameter estimation method according to any one of Supplementary Notes 5 to 7,further including a step (c) of determining an approximation order of the lens distortion model based on a focal length of the camera at the time of shooting and a resolution of the shot image. (Supplementary Note 9) A computer-readable recording medium that includes a program recorded thereon for, with a computer, estimating a geometric parameter of a camera that has shot an image of an object and a lens distortion parameter of a lens distortion model represented by a single unknown, the program including instructions that cause the computer to carry out the steps of:(a) obtaining image corresponding points relating to the object and an approximation order for polynomial approximation of the lens distortion model; and(b) estimating the geometric parameter and the lens distortion parameter based on the image corresponding points and the approximation order. (Supplementary Note 10) The computer-readable recording medium according to Supplementary Note 9,wherein, in the (b) step, a plurality of candidates for the geometric parameter and a plurality of candidates for the lens distortion parameter are estimated based on the image corresponding points and the approximation order, andthe geometric parameter candidate and the lens distortion parameter candidate that minimize an error function that represents a transformation relation between the image corresponding points and the geometric parameter and lens distortion parameter are selected from the candidates. (Supplementary Note 11) The computer-readable recording medium according to Supplementary Note 10,wherein, in the (b) step, if the number of the image corresponding points does not meet a set condition, the geometric parameter candidate and the lens distortion parameter candidate that minimize the error function through non-linear optimization are selected. (Supplementary Note 12) The computer-readable recording medium according to any one of Supplementary Notes 9 to 11, the program further including an instruction that causes the computer to carry out the step of:(c) determining an approximation order of the lens distortion model based on a focal length of the camera at the time of shooting and a resolution of the shot image. Although the invention is described above with reference to example embodiments, the invention is not limited by the above example embodiments. Various modifications that will be appreciated by those skilled in the art can be made to the configurations or details of the invention within the scope of the invention. INDUSTRIAL APPLICABILITY As described above, according to the invention, it is possible to estimate lens distortion parameters and geometric parameters of a camera by using a model that can express radial distortion of a lens with a single unknown. The invention is effective in various image processing systems in which lens distortion needs to be corrected. 
DESCRIPTION OF REFERENCE SIGNS

10 Camera parameter estimation apparatus
11 Data obtaining unit
12 Parameter estimation unit
13 Approximation order determination unit
110 Computer
111 CPU
112 Main memory
113 Storage apparatus
114 Input interface
115 Display controller
116 Data reader/writer
117 Communication interface
118 Input device
119 Display device
120 Recording medium
121 Bus
27,654
11861813
DETAILED DESCRIPTION

Embodiments of the present disclosure are described in detail below, and examples of embodiments are illustrated in the accompanying drawings, throughout which the same or similar labels represent the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present disclosure, and are not to be construed as a limitation to the present disclosure.

Compared with a conventional lens camera, a wide-angle camera has a larger field of vision (FOV); however, the wide-angle camera also exhibits large distortion, and the image edge may be seriously distorted. In the related art, in order to compensate for the distortion of an image shot by the wide-angle camera, the distortion of the image needs to be corrected. The disclosure is intended to solve the technical problem in the related art that a corrected distortion image has low resolution because the distorted image is processed directly based on an interpolation algorithm.

Referring to FIG. 2, the method for correcting a distorted image in the embodiment of the disclosure includes the following steps: acquiring a distorted image to be corrected and a first coordinate of each pixel in the distorted image; acquiring a second coordinate corresponding to the first coordinate, in which the second coordinate is an undistorted coordinate corresponding to the first coordinate; acquiring a distance between the first coordinate and a coordinate of a center point of the distorted image, and determining a smoothing processing coefficient corresponding to the distance based on a smoothing processing function, in which the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient; and acquiring a distortion correction image by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate.

In some embodiments, acquiring the second coordinate corresponding to the first coordinate includes: determining internal parameters of a camera module for shooting the distorted image; and acquiring the second coordinate by calculating the internal parameters and the first coordinate based on a preset algorithm.

In some embodiments, the smoothing processing function is: S(x) = 1 / (1 + e^(−20(x − 0.5))), where x is a normalized distance corresponding to the distance, and S(x) is the smoothing processing coefficient.

In some embodiments, acquiring the distortion correction image by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate includes: determining a floating point coordinate corresponding to each first coordinate by calculating the smoothing processing coefficient, the second coordinate and the first coordinate based on a preset algorithm; acquiring an integer coordinate and a pixel value of each pixel by performing interpolation calculation on the floating point coordinate; and acquiring the distortion correction image based on the integer coordinate and the pixel value.
In some embodiments, acquiring the distortion correction image by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate includes: determining a first weight of the second coordinate and a second weight of the first coordinate based on the smoothing processing coefficient, in which the first weight is proportional to the smoothing processing coefficient and the second weight is inversely proportional to the smoothing processing coefficient; calculating a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate; and acquiring the distortion correction image by performing smoothing correction on the first coordinate based on a sum of the first product and the second product.

Referring to FIG. 6, the apparatus for correcting a distorted image in the embodiment of the disclosure includes a first acquiring module 10, a second acquiring module 20, a third acquiring module 30, a determining module 40 and a correction module 50. The first acquiring module 10 is configured to acquire a distorted image to be corrected and a first coordinate of each pixel in the distorted image. The second acquiring module 20 is configured to calculate a second coordinate corresponding to the first coordinate, in which the second coordinate is an undistorted coordinate corresponding to the first coordinate. The third acquiring module 30 is configured to acquire a distance between the first coordinate and a coordinate of a center point of the distorted image. The determining module 40 is configured to determine a smoothing processing coefficient corresponding to the distance based on a preset smoothing processing function, in which the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient. The correction module 50 is configured to acquire a distortion correction image by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate.

Referring to FIG. 7, the second acquiring module 20 includes a first determining unit 21 and a first acquiring unit 22. The first determining unit 21 is configured to determine internal parameters of a camera module for shooting the distorted image. The first acquiring unit 22 is configured to acquire the second coordinate by calculating the internal parameters and the first coordinate based on a preset algorithm.

In some embodiments, the smoothing processing function is: S(x) = 1 / (1 + e^(−20(x − 0.5))), where x is a normalized distance corresponding to the distance, and S(x) is the smoothing processing coefficient.

Referring to FIG. 8, in some embodiments, the correction module 50 includes a second determining unit 51, a first calculating unit 52 and a second acquiring unit 53. The second determining unit 51 is configured to determine a floating point coordinate corresponding to each first coordinate by calculating the smoothing processing coefficient, the second coordinate and the first coordinate based on a preset algorithm. The first calculating unit 52 is configured to acquire an integer coordinate and a pixel value of each pixel by performing interpolation calculation on the floating point coordinate. The second acquiring unit 53 is configured to acquire the distortion correction image based on the integer coordinate and the pixel value.
Referring to FIG. 9, in some embodiments, the correction module 50 includes a third determining unit 54, a second calculating unit 55 and a correction unit 56. The third determining unit 54 is configured to determine a first weight of the second coordinate and a second weight of the first coordinate based on the smoothing processing coefficient, in which the first weight is proportional to the smoothing processing coefficient and the second weight is inversely proportional to the smoothing processing coefficient. The second calculating unit 55 is configured to calculate a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate. The correction unit 56 is configured to acquire the distortion correction image by performing smoothing correction on the first coordinate based on a sum of the first product and the second product.

The electronic device in the embodiment includes a memory, a processor and computer programs stored on the memory and executable by the processor. When the computer programs are executed by the processor, the following steps are implemented: acquiring a distorted image to be corrected and a first coordinate of each pixel in the distorted image; acquiring a second coordinate corresponding to the first coordinate, in which the second coordinate is an undistorted coordinate corresponding to the first coordinate; acquiring a distance between the first coordinate and a coordinate of a center point of the distorted image, and determining a smoothing processing coefficient corresponding to the distance based on a smoothing processing function, in which the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient; and acquiring a distortion correction image by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate.

In some embodiments, when the computer programs are executed by the processor, the following steps may be further implemented: determining internal parameters of a camera module for shooting the distorted image; and acquiring the second coordinate by calculating the internal parameters and the first coordinate based on a preset algorithm.

In some embodiments, the smoothing processing function is: S(x) = 1 / (1 + e^(−20(x − 0.5))), where x is a normalized distance corresponding to the distance, and S(x) is the smoothing processing coefficient.

In some embodiments, when the computer programs are executed by the processor, the following steps may be further implemented: determining a floating point coordinate corresponding to each first coordinate by calculating the smoothing processing coefficient, the second coordinate and the first coordinate based on a preset algorithm; acquiring an integer coordinate and a pixel value of each pixel by performing interpolation calculation on the floating point coordinate; and acquiring the distortion correction image based on the integer coordinate and the pixel value.
In some embodiments, when the computer programs are executed by the processor, the following steps may be further implemented: determining a first weight of the second coordinate and a second weight of the first coordinate based on the smoothing processing coefficient, the first weight being proportional to the smoothing processing coefficient and the second weight being inversely proportional to the smoothing processing coefficient; calculating a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate; and acquiring the distortion correction image by performing smoothing correction on the first coordinate based on a sum of the first product and the second product.

In the embodiment of the disclosure, a non-transitory computer readable storage medium has computer programs stored thereon. When the computer programs are executed by the processor, the following steps are implemented: acquiring a distorted image to be corrected and a first coordinate of each pixel in the distorted image; acquiring a second coordinate corresponding to the first coordinate, in which the second coordinate is an undistorted coordinate corresponding to the first coordinate; acquiring a distance between the first coordinate and a coordinate of a center point of the distorted image, and determining a smoothing processing coefficient corresponding to the distance based on a smoothing processing function, in which the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient; and acquiring a distortion correction image by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate.

In some embodiments, when the computer programs are executed by the processor, the following steps may be further implemented: determining internal parameters of a camera module for shooting the distorted image; and acquiring the second coordinate by calculating the internal parameters and the first coordinate based on a preset algorithm.

In some embodiments, the smoothing processing function is: S(x) = 1 / (1 + e^(−20(x − 0.5))), where x is a normalized distance corresponding to the distance, and S(x) is the smoothing processing coefficient.

In some embodiments, when the computer programs are executed by the processor, the following steps may be further implemented: determining a floating point coordinate corresponding to each first coordinate by calculating the smoothing processing coefficient, the second coordinate and the first coordinate based on a preset algorithm; acquiring an integer coordinate and a pixel value of each pixel by performing interpolation calculation on the floating point coordinate; and acquiring the distortion correction image based on the integer coordinate and the pixel value.
In some embodiments, when the computer programs are executed by the processor, the following steps may be further implemented: determining a first weight of the second coordinate and a second weight of the first coordinate based on the smoothing processing coefficient, the first weight being proportional to the smoothing processing coefficient and the second weight being inversely proportional to the smoothing processing coefficient; calculating a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate; and acquiring the distortion correction image by performing smoothing correction on the first coordinate based on a sum of the first product and the second product. A method and an apparatus for correcting a distorted image in the embodiments of the disclosure are described with reference to the drawings. The application body of the method for correcting a distorted image in the embodiment of the disclosure is a smart terminal with a camera module including wide-angle cameras, and the smart terminal may be a mobile phone, a notebook computer, a smart wearable device, etc. At present, on the smart terminal, if the influence of gray values of several points just adjacent to a sample point to be detected is only considered, but the influence of a change rate of the gray values among these adjacent points is not considered, a high frequency component of the interpolated image may be lost and the image edge may become blurred to a certain degree. Comparing the output image obtained by the method with the input image, there still exists a problem that the image quality is damaged and the calculation precision is not high due to improper consideration of the interpolation function design. For the technical problem in the related art that the definition of the corrected distortion image obtained directly with a bilinear interpolation algorithm is lost, the disclosure provides a novel method for distortion correction, which achieves correcting the distortion at different regions of the image in different degrees by introducing a weighted smoothing function, and reduces the loss of image definition as much as possible while ensuring high timeliness of the algorithm. The execution body for improving image distortion correction in the embodiments of the disclosure is a CPU of a smart terminal. As illustrated in a hardware flowchart of the solution inFIG.1, on the smart terminal, firstly, a cmos sensor in a wide-angle camera performs photosensitive processing to convert an optical signal into raw format data; the raw format data is then processed by an image signal processor (ISP) to convert an image into one in a yuv format; calculation is performed by the CPU, distortion correction is performed on the yuv image through previously known camera internal parameters; finally, after the distortion correction processing, the yuv data is sent to a display for displaying, and a Jpeg format coding is performed by a coder and stored in a memory of the smart terminal. Specifically,FIG.2is a flowchart of a method for correcting a distorted image according to an embodiment of the disclosure. The image distortion processing in the embodiment of the disclosure is described in taking a distorted image shot by a wide-angle camera as an example. As illustrated inFIG.2, the method includes: at101, a distorted image to be corrected and a first coordinate of each pixel in the distorted image are acquired. 
Specifically, a distorted image previously shot by the camera module may be read from a system memory, a distorted image shot by the camera module in real time may be obtained, and a distorted image may be an image that is processed by conventional de-distortion. Since the image after distortion processing in the related art is still distorted, it is defined as a distorted image in the disclosure. Further, the first coordinate of each pixel in the distorted image is acquired based on an image recognition algorithm. At102, a second coordinate corresponding to the first coordinate is acquired, in which the second coordinate is an undistorted coordinate corresponding to the first coordinate. Specifically, the first coordinate of the distorted image is a coordinate with a certain distortion, assuming that when there is no shooting distortion in the image shot by the camera module for shooting a distorted image, the coordinate corresponding to the first coordinate should be the second coordinate, at this time, in order to achieve the correction of the first coordinate, a second coordinate without distortion may be acquired based on a corresponding relationship between internal parameters of the camera module and image distortion degrees. The image distortion degrees are decided by the internal parameters of the camera module. In a possible implementation, internal parameters of the camera module for shooting the distorted image are determined, which determine a distortion degree of the first coordinate, and a second coordinate corresponding to the first coordinate is determined based on a corresponding relationship between the internal parameters and the distortion degrees. Specifically, in the embodiment, the camera module is controlled to shoot a trained object at a plurality of angles and obtain a plurality of reference images, in which the trained object has a regular shape, a contour mark, etc. so as to quickly find a reference point in the corresponding image for calibrating. For example, the trained object may be a checkerboard pattern, so that the pixel of each checkerboard corner is easily detected. The checkerboard corner in the checkerboard pattern may be served as a reference point. Further, an image coordinate corresponding to the reference point in the trained object in each reference image is acquired, and a world coordinate based on the reference point is pre-measured. The internal parameters of the camera module may be calculated based on world coordinates and image coordinates of pre-stored reference points. The internal parameters may include an x-coordinate cx and a y-coordinate cy of a principal point, a normalized focal length fx in x direction, a normalized focal length fy in y direction, radial distortion coefficients k1, k2, k3 and tangential distortion coefficients p1, p2. Further, the distorted first coordinate and the internal parameters are calculated based on a preset calculation equation to acquire a second coordinate. For example, when the trained object is a checkerboard, the plane checkerboard pattern plate is shot at different angles with a camera to obtain 6-9 full-size images, ensuring that the checkerboard pattern is full of the whole FOV of the camera, in which the pixel point of each checkerboard corner is easily detected and the checkerboard corner in the checkerboard pattern may be served as a reference point. 
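The checkerboard-based calibration just described (and detailed further in the following passage) is commonly performed with standard computer vision tooling. The Python sketch below uses OpenCV, which is an assumption of this illustration rather than a requirement of the disclosure; the board size and file names are hypothetical. It yields the intrinsics cx, cy, fx, fy and the distortion coefficients k1, k2, k3, p1, p2 referred to in the text.

```python
import glob
import cv2
import numpy as np

# Hypothetical 9x6 inner-corner checkerboard; world coordinates of the corners are
# known up to the square size (taken as 1 unit here).
pattern = (9, 6)
obj_grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj_grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("checkerboard_*.jpg"):      # the 6-9 full-size reference images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if not found:
        continue
    # Refine corner locations to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(obj_grid)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Zhang-style calibration: the camera matrix holds fx, fy, cx, cy;
# dist holds (k1, k2, p1, p2, k3).
ret, camera_matrix, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
cx, cy = camera_matrix[0, 2], camera_matrix[1, 2]
k1, k2, p1, p2, k3 = dist.ravel()[:5]
```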
The checkerboard corners are detected at a sub-pixel scale on the collected 6-9 full-size reference images to obtain the image coordinates of the checkerboard corners of each image. Since the checkerboard used for calibrating is customized and the coordinates of its corners in the three-dimensional world space are previously known, the world coordinates of the reference points in the checkerboard may be obtained. The internal parameters of the camera may then be obtained, based on the corresponding relationship between the image planes and the checkerboard planes, from the obtained image coordinates and world coordinates of the reference points.

Further, the original undistorted image is calculated from the known distorted image by using the obtained internal parameters of the camera, namely the x-coordinate cx and y-coordinate cy of the principal point, the normalized focal length fx in the x direction, the normalized focal length fy in the y direction, the radial distortion coefficients k1, k2, k3, and the tangential distortion coefficients p1, p2.

Specifically, for a second coordinate (u0, v0), the corresponding camera coordinate (that is, the coordinate corresponding to the undistorted coordinate in the camera coordinate system) is (x0, y0), where:

x0 = (u0 − cx) / fx;  y0 = (v0 − cy) / fy.

The coordinate of the distorted point corresponding to this camera coordinate is (x′, y′), where:

x′ = x0·(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p1·x0·y0 + p2·(r^2 + 2·x0^2);

y′ = y0·(1 + k1·r^2 + k2·r^4 + k3·r^6) + 2·p2·x0·y0 + p1·(r^2 + 2·y0^2);

where r^2 = x0^2 + y0^2.

Further, the distorted coordinate (the first coordinate) of the obtained distorted point in the distorted image is calculated as:

ud = fx·x′ + cx;  vd = fy·y′ + cy.

In this way, the distorted (first) coordinate (ud, vd) corresponding to the undistorted (second) coordinate (u0, v0) in the undistorted image is obtained, and based on this corresponding relationship, the second coordinate may be calculated (see the code sketch following this passage).

As another possible implementation, a depth model is pre-trained based on a large number of sample images, the input of the depth model being a distorted first coordinate and the output being an undistorted second coordinate. The trained depth model represents the corresponding relationship between the internal parameters of the camera module and the image distortion degrees. Therefore, the second coordinate may be determined by inputting the first coordinate into the trained depth model.

At 103, a distance between the first coordinate and a coordinate of a center point of the distorted image is acquired, and a smoothing processing coefficient corresponding to the distance is determined based on a smoothing processing function, in which the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient.

It may be understood that, due to the imaging characteristics of the camera module, the closer a pixel is to the edge of the image, the higher its distortion degree, and the closer it is to the central region, the smaller its distortion degree. Therefore, the distance between a distorted coordinate and the coordinate of the center point of the distorted image may be calculated, and a smoothing processing coefficient may be calculated based on the preset smoothing processing function and the distance, in which the smoothing processing coefficient is configured to correct the distorted image. It should be noted that the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient.
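The mapping from an undistorted coordinate (u0, v0) to its distorted counterpart (ud, vd) described above can be written directly in code. The following Python sketch implements the radial/tangential equations exactly as given; the intrinsic values are expected to come from the calibration step and are not hard-coded here.

```python
def distort_point(u0, v0, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    """Map an undistorted pixel (u0, v0) to the distorted pixel (ud, vd).

    Implements the equations of the disclosure: normalization by the intrinsics,
    radial terms k1, k2, k3 and tangential terms p1, p2, then re-projection.
    """
    # Normalized camera coordinates of the undistorted point.
    x0 = (u0 - cx) / fx
    y0 = (v0 - cy) / fy
    r2 = x0 * x0 + y0 * y0
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Distorted normalized coordinates (radial + tangential distortion).
    x_d = x0 * radial + 2.0 * p1 * x0 * y0 + p2 * (r2 + 2.0 * x0 * x0)
    y_d = y0 * radial + 2.0 * p2 * x0 * y0 + p1 * (r2 + 2.0 * y0 * y0)
    # Back to pixel coordinates in the distorted image.
    ud = fx * x_d + cx
    vd = fy * y_d + cy
    return ud, vd
```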
That is, the closer to the edge region of the image, the larger the distance, the larger the smoothing processing coefficient, and the stronger the correction processing; the farther from the edge region of the image, the smaller the distance, the smaller the smoothing processing coefficient, and the weaker the correction processing. Therefore, the smoothing processing function enables the correction degree to be gradually enhanced from the center of the distorted image to its edge, which ensures a smooth transition, enhances the authenticity of the processed image, and achieves smoothing correction of the distorted image.

In an embodiment of the disclosure, the distance may be calculated by equation (1) below, where x is a normalized Euclidean distance value between a current first coordinate (ud, vd) and the coordinate of the center point of the distorted image (u′, v′):

x = sqrt((ud − u′)^2 + (vd − v′)^2)   equation (1)

Further, as a possible example, the smoothing processing function is expressed by equation (2):

S(x) = 1 / (1 + e^(−20(x − 0.5)))   equation (2)

where x is the Euclidean distance corresponding to the distance, and S(x) is the smoothing processing function (a code sketch of equations (1) and (2) is given after this passage).

In this example, the corresponding smoothing processing function is illustrated in FIG. 3, where the horizontal axis is the Euclidean distance and the vertical axis is the value of the smoothing processing coefficient. As illustrated in FIG. 3, the larger the Euclidean distance, the larger the value of the smoothing processing coefficient. The value of the smoothing processing coefficient increases smoothly to ensure the processing quality of the subsequent image.

At 104, a distortion correction image is acquired by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate.

Specifically, smoothing correction is performed on the first coordinate in combination with the smoothing processing coefficient and the second coordinate. Since an undistorted coordinate is taken into account when the distortion is corrected, the definition of the image may be greatly improved. Furthermore, the smoothing processing coefficient is related to the distance and is basically a positive correlation function of the distance. In the image of a wide-angle camera, the distortion amplitude of the central region is relatively small and that of the edge region is relatively large, and the sensitivity of the human eye to the definition of the image central region is higher than its sensitivity to the definition of the edge region. Therefore, the smoothing processing coefficient reduces the correction degree in the central region and smoothly enhances the distortion correction degree from the image center point toward the image edge. In this way, the definition of the image center may be ensured, and the distortion correction degree of the image edge may also be ensured.

As a possible implementation, the first coordinate is corrected by a preset equation as illustrated in equation (3) below, where (u1, v1) is a floating point coordinate, (u0, v0) is an undistorted coordinate, (ud, vd) is a distorted coordinate, and s is a smoothing coefficient.
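To make equations (1) and (2) concrete, the following short Python sketch computes the smoothing coefficient for one pixel. Normalizing the Euclidean distance by the half-diagonal of the image is an assumption made here for illustration, since the disclosure only states that x is a normalized distance.

```python
import math

def smoothing_coefficient(ud, vd, width, height):
    """Smoothing coefficient s = S(x) of equation (2) for the distorted pixel (ud, vd).

    x is the Euclidean distance of equation (1) to the image center (u', v'),
    normalized here by the half-diagonal so that x lies in [0, 1] (an assumption).
    """
    uc, vc = (width - 1) / 2.0, (height - 1) / 2.0       # center point (u', v')
    dist = math.hypot(ud - uc, vd - vc)                  # equation (1)
    x = dist / math.hypot(uc, vc)                        # normalization (assumed)
    return 1.0 / (1.0 + math.exp(-20.0 * (x - 0.5)))     # equation (2)
```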
Based on the above description, the closer to the edge region, the larger s is and the closer the obtained (u1, v1) is to the distorted coordinate (ud, vd), and the higher the corresponding correction degree; the closer to the central region, the smaller s is and the closer the obtained (u1, v1) is to the undistorted coordinate (u0, v0), and the smaller the corresponding correction degree.

(u1, v1) = (ud, vd)·s + (u0, v0)·(1 − s)   equation (3)

Based on the imaging principle, an integer coordinate and a pixel value of each pixel are acquired by performing interpolation calculation on the floating point coordinate, and an undistorted image is acquired based on the integer coordinate and the pixel value. Specifically, (u1, v1) is often a floating point coordinate, whereas the actual image coordinate (u2, v2) is an integer coordinate. Therefore, the pixel gray value of the integer coordinate (u2, v2) is obtained by performing interpolation calculation on the pixels adjacent to the floating point coordinate (u1, v1). In an embodiment, the RGB value of the integer coordinate may be obtained by performing the interpolation calculation for each channel respectively.

A bilinear interpolation method may be used to perform linear interpolation in the x and y directions respectively, using the gray scales of the four pixels adjacent to the pixel to be solved. As illustrated in FIG. 4, the four known floating point coordinates adjacent to an unknown integer coordinate (u2, v2), calculated in the u and v directions, are (u1′, v1′), (u1″, v1′), (u1″, v1″), and (u1′, v1″), respectively. In the first step, (u2, v1′) is obtained by performing linear interpolation on (u1″, v1′) and (u1′, v1′) in the u direction, and (u2, v1″) is obtained by performing linear interpolation on (u1″, v1″) and (u1′, v1″) in the u direction; in the second step, the pixel gray value corresponding to the integer coordinate (u2, v2) is obtained by performing linear interpolation on (u2, v1′) and (u2, v1″) in the v direction. (u2, v2) sequentially traverses the coordinates of all pixels of the whole image, so as to obtain a distortion correction image. Therefore, while the high timeliness of the algorithm is ensured, the loss of image definition after interpolation is reduced as much as possible.

As an example, a specific flowchart framework of the algorithm is illustrated in FIG. 5: the internal parameters of a camera are obtained by Zhang's calibration method, including the principal point coordinates (cx, cy), the focal lengths (fx, fy), the radial distortion coefficients (k1, k2, k3) and the tangential distortion coefficients (p1, p2); the distorted coordinate (ud, vd) of the undistorted coordinate (u0, v0) in the distorted image is calculated; weighted fusion is performed on (ud, vd) and (u0, v0) by the smoothing function in the embodiments to obtain a fused floating point coordinate (u1, v1); and bilinear interpolation is performed on (u1, v1) to obtain the final coordinate (u2, v2) of the distortion correction image. A complete distortion correction image may be obtained by traversing all coordinates (see the code sketch following this passage).
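The weighted fusion of equation (3) followed by bilinear sampling can be sketched in Python with NumPy as below. The sketch follows equation (3) exactly as written (weight s on the distorted coordinate), assumes a single-channel image for brevity, and samples the distorted source image at the fused coordinate for every output pixel; the two callables can be, for example, the distort_point and smoothing_coefficient functions sketched earlier.

```python
import numpy as np

def bilinear_sample(img, u, v):
    """Bilinear interpolation of a single-channel image at the floating point (u, v)."""
    h, w = img.shape
    iu = min(max(int(np.floor(u)), 0), w - 2)
    iv = min(max(int(np.floor(v)), 0), h - 2)
    wu, wv = u - iu, v - iv
    top = (1 - wu) * img[iv, iu] + wu * img[iv, iu + 1]
    bot = (1 - wu) * img[iv + 1, iu] + wu * img[iv + 1, iu + 1]
    return (1 - wv) * top + wv * bot

def correct_image(distorted, distort_coord, smoothing_coefficient):
    """Smoothing correction: equation (3) fusion followed by bilinear sampling.

    distorted             : single-channel distorted image (H x W array)
    distort_coord         : callable (u0, v0) -> (ud, vd), e.g. the intrinsics-based mapping
    smoothing_coefficient : callable (ud, vd, W, H) -> s, e.g. equation (2)
    """
    h, w = distorted.shape
    out = np.zeros_like(distorted, dtype=np.float64)
    for v0 in range(h):
        for u0 in range(w):
            ud, vd = distort_coord(u0, v0)
            s = smoothing_coefficient(ud, vd, w, h)
            # Equation (3): weighted fusion of the distorted and undistorted coordinates.
            u1 = s * ud + (1 - s) * u0
            v1 = s * vd + (1 - s) * v0
            out[v0, u0] = bilinear_sample(distorted, u1, v1)
    return out
```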
As another possible implementation, a first weight of the second coordinate and a second weight of the first coordinate are determined based on the smoothing processing coefficient, in which the first weight is proportional to the smoothing processing coefficient and the second weight is inversely proportional to the smoothing processing coefficient; a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate are calculated; and a distortion correction image is acquired by performing smoothing correction on the first coordinate based on a sum of the first product and the second product. Therefore, the closer to the edge of the distorted image, the more stronger the coordinate correction of relevant pixels in a consideration ratio of second coordinate, the closer to the center, the more dependent on the original first coordinate so as to remain the relevant pixels, thereby ensuring the authenticity of the image and improving the smoothness of the corrected pattern. Of course, in an embodiment of the disclosure, considering that in different scenarios, when de-distortion processing is performed on the distorted image, the center region with a high definition and the edge region with a high distortion correction degree are different in size ratio, so correction adjustment degrees may be acquired and correction adjustment coefficients are determined based on the correction adjustment degrees. For example, a progress bar of the correction degree may be provided, and the correction adjustment coefficient may be determined based on a corresponding relationship between the progress bar and the correction degree. For another example, the shot object corresponding to the distorted image may be automatically detected, different correction degrees are determined based on different types and colors of the shot object, for example, a correction degree of the shot object being a face image is relatively high, and the correction degree is higher when the shot image is a night scene image compared to the correction degree when it is shot in daytime. In summary, with the method for correcting a distorted image in the embodiment of the disclosure, a distorted image to be corrected and a first coordinate of each pixel in the distorted image are acquired, and a second coordinate corresponding to the first coordinate is calculated, in which the second coordinate is an undistorted coordinate corresponding to the first coordinate, a distance between the first coordinate and a coordinate of a center point of the distorted image is further calculated, and a smoothing processing coefficient corresponding to the distance is determined based on a preset smoothing processing function, in which the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient; and a distortion correction image is acquired by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate. Thus, an improvement is performed based on a conventional wide-angle distortion correction algorithm, distortion correction processing is performed by additionally using a weighted smoothing function and still using a bilinear interpolation algorithm. 
Compared with the conventional distortion correction algorithm, a distribution of distortion in the whole image is considered, and it is achieved that the distortion for different regions of the image is distinguished for correction, on the basis of ensuring high timeliness of the algorithm, not only a loss degree of definition of the distortion correction image is weakened, but also the distortion of the image region with a large distortion is completely eliminated, which achieves an optimal photographing experience. In order to achieve the above embodiments, the disclosure provides an apparatus for correcting a distorted image.FIG.6is a structural diagram of an apparatus for correcting a distorted image according to an embodiment of the disclosure. As illustrated inFIG.6, the apparatus for correcting a distorted image includes a first acquiring module10, a second acquiring module20, a third acquiring module30, a determining module40and a correction module50. The first acquiring module10is configured to acquire a distorted image to be corrected and a first coordinate of each pixel in the distorted image. Specifically, the first acquiring module10may read a distorted image previously shot by the camera module from a system memory, and may also acquire a distorted image shot by the camera module in real time, and a distorted image may be an image that is processed by a conventional de-distortion. Since the image after distortion processing in the related art is still distorted, it is defined as a distorted image in the disclosure. Further, the first acquiring module10acquires a first coordinate of each pixel in the distorted image based on an image recognition algorithm. The second acquiring module20is configured to calculate a second coordinate corresponding to the first coordinate, in which the second coordinate is an undistorted coordinate corresponding to the first coordinate. Specifically, the first coordinate of the distorted image is a coordinate with a certain distortion, assuming that when there is no shooting distortion in the image shot by the camera module for shooting a distorted image, the coordinate corresponding to the first coordinate should be the second coordinate, at this time, in order to achieve the correction of the first coordinate, the second acquiring module20may acquire a second coordinate without distortion. In an embodiment of the disclosure, as illustrated inFIG.7, on the basis ofFIG.6, the second acquiring module20includes a first determining unit21and a first acquiring unit22. The first determining unit21is configured to determine internal parameters of the camera module for shooting the distorted image. The first acquiring unit22is configured to acquire the second coordinate by calculating the internal parameters and the first coordinate based on a preset algorithm. Specifically, in the embodiment, the camera module is controlled to shoot a trained object at a plurality of angles and obtain a plurality of reference images, in which the trained object has a regular shape, a contour mark, etc. so as to quickly find a reference point in the corresponding image for calibrating. For example, the trained object may be a checkerboard pattern, so that the pixel of each checkerboard corner is easily detected. The checkerboard corner in the checkerboard pattern may be served as a reference point. 
Further, the first determining unit21acquires an image coordinate corresponding to the reference point of the trained object in each reference image; a world coordinate of the reference point is pre-measured, and the internal parameters of the camera module may be calculated based on the world coordinates and image coordinates of the pre-stored reference points. The internal parameters may include an x-coordinate cx and a y-coordinate cy of a principal point, a normalized focal length fx in the x direction, a normalized focal length fy in the y direction, radial distortion coefficients k1, k2, k3 and tangential distortion coefficients p1, p2. Further, the first acquiring unit22calculates the distorted first coordinate and the internal parameters based on a preset equation to acquire the second coordinate. The third acquiring module30is configured to acquire a distance between a first coordinate and a coordinate of a center point of the distorted image. It may be understood that, due to the shooting mechanism of the camera module, the closer to the edge of the image, the higher the distortion degree, and the closer to the central region, the smaller the distortion degree. Therefore, the third acquiring module30may calculate a distance between a distorted coordinate and a coordinate of a center point of the distorted image, and calculate a smoothing processing coefficient based on the preset smoothing processing function and the distance, in which the smoothing processing coefficient is configured to correct the distorted image. It should be noted that the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient. That is, the closer to the edge region of the image, the larger the distance, the larger the smoothing processing coefficient and the stronger the correction processing; the closer to the central region of the image, the smaller the distance, the smaller the smoothing processing coefficient and the weaker the correction processing. Therefore, the smoothing processing function enables the correction degree of the distorted image to be gradually enhanced from the center to the edge, which ensures a smooth transition, enhances the authenticity of the processed image, and achieves smoothing correction on the distorted image based on the smoothing processing function. The determining module40is configured to determine a smoothing processing coefficient corresponding to the distance based on a preset smoothing processing function, in which the smoothing processing function is configured to indicate a proportional relationship between the distance and the smoothing processing coefficient. The correction module50is configured to acquire a distortion correction image by performing smoothing correction on the first coordinate based on the smoothing processing coefficient and the second coordinate. Specifically, smoothing correction is performed on the first coordinate in combination with the smoothing processing coefficient and the second coordinate. At this time, since an undistorted coordinate is combined when the distortion is corrected, the definition of the image may be greatly improved. Furthermore, the smoothing processing coefficient is related to the distance and is basically a positive correlation function of the distance. 
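The internal parameters above correspond to the pinhole-plus-distortion model used by common calibration toolkits. The sketch below assumes such intrinsics are already available (for instance from a standard checkerboard calibration such as cv2.calibrateCamera) and shows one way to obtain the undistorted second coordinates and a distance-proportional smoothing coefficient; the normalization by the half-diagonal is an illustrative choice, not the patented smoothing function, and the helper names are hypothetical.

```python
import cv2
import numpy as np

def second_coordinates(first_coords, K, dist_coeffs):
    """Map distorted pixel coordinates (first coordinates) to undistorted
    pixel coordinates (second coordinates).

    K is the 3x3 intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] and
    dist_coeffs holds (k1, k2, p1, p2, k3)."""
    pts = np.asarray(first_coords, dtype=np.float32).reshape(-1, 1, 2)
    # Passing P=K keeps the output in pixel units instead of normalized units.
    return cv2.undistortPoints(pts, K, dist_coeffs, P=K).reshape(-1, 2)

def smoothing_coefficients(first_coords, image_size):
    """Distance-proportional smoothing coefficient in [0, 1].

    The disclosure only requires the coefficient to grow with the distance
    from the image center; dividing by the half-diagonal is one simple,
    illustrative normalization."""
    h, w = image_size
    center = np.array([(w - 1) / 2.0, (h - 1) / 2.0])
    d = np.linalg.norm(np.asarray(first_coords, dtype=np.float64) - center, axis=1)
    return d / np.linalg.norm(center)   # 0 at the center, 1 at the corners
```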
In the image of the wide-angle camera, the distortion amplitude of the central region is relatively small and that of the edge region is relatively large, the sensitivity of the human eye on the definition of the image central region is higher than on the definition of the edge region. Therefore, the correction module50achieves reducing the correction degree of central distortion and may achieve that the distortion correction degree from the image center point to the image edge is sequentially smoothly enhanced by the smoothing processing coefficient. In this way, the definition of the image center may be ensured, and the distortion correction degree of the image edge may be also ensured. In an embodiment of the disclosure, as illustrated inFIG.8, on the basis ofFIG.6, the correction module50includes a second determining unit51, a first calculating unit52and a second acquiring unit53. The second determining unit51is configured to determine a floating point coordinate corresponding to each first coordinate by calculating the smoothing processing coefficient, the second coordinate and the first coordinate based on a preset algorithm. The first calculating unit52is configured to acquire an integer coordinate and a pixel value of each pixel by performing interpolation calculation on the floating point coordinate. The second acquiring unit53is configured to acquire the distortion correction image based on the integer coordinate point and pixel value. As illustrated inFIG.9, on the basis ofFIG.6, the correction module50includes a third determining unit54, a second calculating unit55and a correction unit56. The third determining unit54is configured to determine a first weight of the second coordinate and a second weight of the first coordinate based on the smoothing processing coefficient, in which the first weight is proportional to the smoothing processing coefficient and the second weight is inversely proportional to the smoothing processing coefficient. The second calculating unit55is configured to calculate a first product of the first weight and the second coordinate, and a second product of the second weight and the first coordinate. The correction unit56is configured to acquire the distortion correction image by performing smoothing correction on the first coordinate based on a sum of the first product and the second product. It should be noted that, the explanation of the embodiments of the method for correcting a distorted image is applied to an apparatus for correcting a distorted image, which will not be repeated here. In summary, the apparatus for correcting a distorted image in the embodiment of the disclosure performs improvements based on a conventional wide-angle distortion correction algorithm, and performs distortion correction processing by additionally using a weighted smoothing function and still using a bilinear interpolation algorithm. Compared with a conventional distortion correction algorithm, a distribution of distortion in the whole image is considered, and it is achieved that the distortion for different regions of the image is distinguished for correction, on the basis of ensuring high timeliness of the algorithm, not only a loss degree of definition of the distortion correction image is weakened, but also the distortion of the image region with a large distortion is completely eliminated, which achieves an optimal photographing experience. In order to achieve the above embodiments, the disclosure further provides an electronic device. 
The electronic device includes a memory, a processor and computer programs stored on the memory and executable by the processor. When the computer programs are executed by the processor, the method for correcting a distorted image as described in the above embodiments is implemented. In order to achieve the above embodiments, the disclosure further provides a non-transitory computer readable storage medium stored with computer programs thereon, and when the computer programs are executed by a processor, the method for correcting a distorted image as described in the above method embodiments is implemented. In the specification of the disclosure, descriptions with reference to terms “an embodiment”, “some embodiments”, “example”, “specific example” or “some examples” mean specific features, structures, materials or characteristics described in combination with the embodiment or example are included in at least one embodiment or example of the disclosure. In the specification, the schematic representations of the above terms do not have to be the same embodiment or example. Moreover, specific features, structures, materials or characteristics described may be combined in one or more embodiments or examples in a suitable manner. Furthermore, those skilled in the art may combine and integrate the different embodiments or examples described in the specification, as well as features of the different embodiments or examples without conflicting with each other. In addition, the terms “first” and “second” are only for describing purposes and are not to be construed as indicating or implying relative importance, or implicitly indicating the number of technical features indicated. Thus, the features defined with the terms “first” and “second” may explicitly or implicitly include at least one of features. In the description of the disclosure, “a plurality of” means at least two, for example, two, three, unless otherwise expressly and specifically stated. Any process or method described in the flowchart or otherwise described herein may be understood as representing one or more modules, segments, or portions of codes of executable instructions for implementing the steps of a customized logical function or process, and the scope of the preferred embodiments of the present disclosure includes additional implementations, in which the functions may be executed not in the sequence shown or discussed, the scope also including the functions are executed in a substantially simultaneous manner or in a reverse sequence, which will be appreciated by those skilled in the art who the embodiments of the disclosure belong to. The logics and/or steps represented in the flowchart or described in other ways herein, for example, may be considered as an ordered list of executable instructions configured to implement logic functions, which may be specifically implemented in any computer readable medium, for use of a system, an apparatus or a device for executing instructions (such as a computer-based system, a system including a processor, or other systems that may obtain and execute the instructions from the system, the apparatus or the device for executing instructions) or in combination with the system, the apparatus or the device for executing instructions. 
A “computer readable medium” in the disclosure may be an apparatus that may contain, store, communicate, propagate or transmit a program for use of a system, an apparatus or a device for executing instructions or in combination with the system, the apparatus or the device for executing instructions. A more specific example (a non-exhaustive list) of a computer readable medium includes the followings: an electronic connector (an electronic apparatus) with one or more cables, a portable computer disk box (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an electrically programmable read-only memory (an EPROM or a flash memory), an optical fiber apparatus, and a portable optical disk read-only memory (CDROM). In addition, a computer readable medium even may be paper or other suitable medium on which programs may be printed, since paper or other medium may be optically scanned, and then edited, interpreted or processed in other suitable ways if necessary to obtain electronically programs and store the programs in a computer memory. It should be understood that all parts of the present disclosure may be implemented with a hardware, a software, a firmware and their combination. In the above embodiments, a plurality of steps or methods may be stored in a memory and implemented by a software or a firmware executed by a suitable system for executing instructions. For example, if they are implemented with a hardware as in the another implementation, it may be implemented by any of the following technologies or their combinations known in the art: a discrete logic circuit with logic gate circuits configured to achieve logic functions on data signals, a special integrated circuit with appropriate combined logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), etc. Those skilled in the art may understand that all or part of steps in the above method embodiments may be implemented by instructing relevant hardwares with computer programs. The programs may be stored in a computer readable storage medium, and when the programs are executed, one of the steps in the method embodiments or their combination is implemented. In addition, various functional units in the embodiments of the disclosure may be integrated in one processing module, or each of the units may be physically existed alone, or two or more units may be integrated in one module. The integrated module may be achieved by a form of a hardware, and also may be achieved by a form of a software functional module. The integrated module may be stored in a computer readable storage medium when it is implemented in a form of a software functional module and sold or used as an independent product. The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk. Even though embodiments of the disclosure have been illustrated and described above, it may be understood that, the above embodiments are exemplary and cannot be constructed as a limitation to the disclosure, and various changes, modifications, substitutions and alterations may be made by those skilled in the art for the embodiments within the scope of the disclosure.
11861814
DESCRIPTION OF THE PREFERRED EMBODIMENTS The advantages and features of the present invention and methods of achieving the same will be apparent from the exemplary embodiments to be described below in more detail with reference to the accompanying drawings. However, it should be noted that the present invention is not limited to the following exemplary embodiments, and may be implemented in various forms. Accordingly, the exemplary embodiments are provided only to disclose the present invention and to let those skilled in the art know the category of the present invention, and the present invention is to be defined based only on the claims. The same reference numerals or the same reference designators denote the same elements throughout the specification. It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements are not intended to be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element discussed below could be referred to as a second element without departing from the technical spirit of the present invention. The terms used herein are for the purpose of describing particular embodiments only, and are not intended to limit the present invention. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Unless differently defined, all terms used herein, including technical or scientific terms, have the same meanings as terms generally understood by those skilled in the art to which the present invention pertains. Terms identical to those defined in generally used dictionaries should be interpreted as having meanings identical to contextual meanings of the related art, and are not to be interpreted as having ideal or excessively formal meanings unless they are definitively defined in the present specification. Hereinafter, an apparatus and method for sensing an image based on an event according to an embodiment will be described in detail with reference toFIGS.1to6. First, a dynamic vision sensor to which an apparatus for sensing an image based on an event according to an embodiment is applied will be briefly described. FIG.1is a comparison view illustrating an image acquisition method of a standard camera and that of a dynamic vision sensor to which an embodiment is applied. Referring toFIG.1, it is assumed that a black point is present on a circular plate and that the circular plate rotates. Here, the standard camera acquires image signals for all of the areas in the direction of a time axis and outputs the same. However, the dynamic vision sensor (DVS) selectively extracts only the point on the circular plate that corresponds to the area, the brightness of which changes along the time axis, and transmits location data pertaining thereto. This image acquisition method enables microsecond resolution on the time axis, thereby realizing time resolution higher than that provided by a high-speed camera capable of capturing thousands of frames per second. 
Furthermore, because power consumption and required data storage can be significantly reduced, there is an advantage in that the dynamic range of a sensor, which is the range of brightness capable of being sensed by the sensor, may be significantly increased. The disclosed embodiment proposes an apparatus and method for sensing an image based on an event, which may minimize the effects of noise generated due to a change in illumination into an unwanted form or due to the performance limits of an image acquisition device when brightness/color information is acquired in such a general dynamic vision sensor. Also, in order to overcome the limitation in which a general dynamic vision sensor is not able to use color information, the disclosed embodiment proposes an apparatus and method for sensing an image based on an event, the apparatus and method enabling event pixels to be selectively extracted depending on the color or pattern of an object of interest without the use of an additional device by effectively using the color information of an image signal. Also, the disclosed embodiment proposes an apparatus and method for sensing an image based on an event, through which the degradation of performance of neural-network-based object detection, which is caused due to a decrease in the number of extracted pixels when the speed of movement of an object is low or when there is little change in brightness in an image signal in a general dynamic vision sensor, may be prevented. FIG.2is a schematic block diagram of an apparatus for sensing an image based on an event according to an embodiment,FIG.3is a schematic block diagram of a change detection unit according to an embodiment, andFIG.4is a view illustrating the relationship between a quantized difference and an output bit according to an embodiment. Referring toFIG.2, an apparatus for sensing an image based on an event according to an embodiment may include an image acquisition unit110, an image conversion unit120, a change detection unit130, a bitstream generation unit140, and a converted image storage unit150. The image acquisition unit110may acquire image information including at least one of brightness information and color information from an input image signal. Here, the image acquisition unit110includes an optical lens and a photosensitive device, and shape information in a digital form may be acquired thereby. That is, for the area acquired through the optical lens, an image configured with2D pixels may be acquired using a photosensitive semiconductor device, such as a Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS). Here, the acquired image information may be a simple monochrome brightness image or a color image configured with RGB values. The image conversion unit120may perform conversion, including at least one of filtering of at least one of the acquired brightness information and color information, color conversion, and brightness conversion. That is, an embodiment may reduce the effects of a rapid change in brightness by further applying conversion in addition to logarithmic conversion, rather than applying only logarithmic conversion in order to represent only an increase or decrease in a change in the brightness according to the conventional art, and may minimize the number of event pixels generated due to noise. Also, an embodiment may enable event pixels to be selectively extracted depending on the color or pattern of an object of interest by performing color conversion. 
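The concrete conversion units that may make up such a chain are enumerated next. Purely as an illustration of consecutively operating a few selected conversions, the following sketch converts a color frame to monochrome brightness, compresses the brightness logarithmically, and applies a low-pass filter; the specific chain and parameters are assumptions for the example, not the claimed configuration.

```python
import cv2
import numpy as np

def convert(image_bgr):
    """One illustrative conversion chain: color -> monochrome brightness,
    logarithmic brightness compression, then low-pass filtering.

    Many other combinations are allowed (color-space conversion, gamma
    correction, wavelet or morphological filtering, ...); this chain is only
    an example of operating selected conversion units one after another."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    log_brightness = np.log1p(gray)                          # soften rapid brightness changes
    smoothed = cv2.GaussianBlur(log_brightness, (5, 5), 0)   # low-pass filter against noise
    return smoothed
```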
Here, the image conversion unit120includes multiple conversion units, and may consecutively operate one or more selected therefrom. According to an embodiment, the image conversion unit120may include at least one of a conversion unit for converting color information into monochrome brightness information, a conversion unit for converting original color information into other color information, a conversion unit for brightness-based conversion including gamma correction, increasing a brightness value, and decreasing a brightness value, a conversion unit for frequency-based conversion including at least one of a low-pass filter, a high-pass filter, and a band-pass filter, a conversion unit for conversion based on a wavelet filter, and a conversion unit for conversion based on a nonlinear filter including a bilateral filter and a morphological filter. The change detection unit130may calculate a quantized difference for each pixel between a first converted image, converted from the currently input image signal, and a second converted image, converted from the previously input image signal. That is, a change in each pixel between images on the time axis is calculated. Here, the first converted image may be output from the image conversion unit120, and the second converted image may be extracted from the converted image storage unit150. Here, the converted image storage unit150may store the image converted by the image conversion unit120along with time information pertaining thereto. Accordingly, over time, the converted images may be sequentially stored in the converted image storage unit150at a predetermined time interval. Here, the predetermined time interval may be, for example, one second. By storing the converted images as described above, it may be expected that an image event will be acquired and processed robustly in spite of momentarily occurring noise or camera movement. Referring toFIG.3, the change detection unit130may include a difference calculation unit131, a quantization unit133, and a filter unit135. The difference calculation unit131may calculate a difference between the first converted image and the second converted image for each pixel. Here, one or more converted images may be extracted as the second converted image. Here, when multiple second converted images are extracted, the difference calculation unit131may compute a weighted sum of the second converted images based on respective weights assigned to the second converted images, and may then calculate the difference from the first converted image. That is, the difference D(t) may be calculated as shown in the following Equation (1): D(t) = T(t) − Σ_{i=1}^{m} ω_i·T(t−i), (0 ≤ ω_i ≤ 1, Σ_{i=1}^{m} ω_i = 1)   (1) In Equation (1), T(t) denotes the first converted image value, T(t−1), T(t−2), . . . , T(t−m) denote the multiple second converted image values, and ω_i is the weight assigned to the i-th second converted image. Here, the sum of the weights may be ‘1’. Here, when multiple second converted images are extracted, the difference calculation unit131may perform at least one of binary operations including an ‘AND’ operation and an ‘OR’ operation, which are operations for binary images, on the multiple second converted images. That is, when an ‘OR’ operation is performed, the difference calculation unit131may select all of the pixels changed at least once on the time axis from the multiple second converted images, and may calculate the difference between the first converted image and the second converted images for each of the selected pixels. 
Also, when an ‘AND’ operation is performed, the difference calculation unit131may select only pixels that always change on the time axis from the multiple second converted images, and may calculate the difference between the first converted image and the second converted images for each of the selected pixels. Meanwhile, the quantization unit133quantizes the difference for each pixel, which is calculated by the difference calculation unit131. This serves to represent the difference using a limited number of bits while minimizing information loss. Here, the quantization unit133may quantize the difference, the absolute value of which is equal to or greater than a predetermined threshold. That is, when the range of the converted image value is T_min ≤ T(t) ≤ T_max, the range of the difference D(t) may be defined as shown in the following Equation (2): T_min − T_max ≤ D(t) ≤ T_max − T_min   (2) The quantization unit133excludes a dead zone, which is defined as a range of values, from which the distance to zero is less than the predetermined threshold, from the range of the difference D(t) specified in Equation (2), and quantizes only the difference, the absolute value of which is equal to or greater than the predetermined threshold. Here, the predetermined threshold may be adjusted depending on the speed of movement of an object included in the image signal or a change in brightness. When the speed of movement of an object is low or when a change in brightness is small, the number of generated event pixels may decrease. This decrease in the number of pixels may significantly degrade performance when this technology is combined with object detection technology using a neural network, which is receiving a lot of attention these days. This is because, when object detection using a neural network is attempted, the trustworthiness of the result output from the neural network can be guaranteed only when more than a certain amount of image information is provided as the input for the neural network. Therefore, according to an embodiment, the threshold is adjusted depending on the speed of movement of an object or a change in brightness, whereby an image having a number of event pixels sufficient to guarantee trustworthiness may be generated. Meanwhile, the quantization unit133may perform uniform quantization having a fixed quantization interval or non-uniform quantization having a variable quantization interval. The filter unit135filters the quantized difference for each pixel, which is output from the quantization unit133. The filter unit135deletes or copies the quantized difference of a specific pixel, thereby making the value similar to neighboring values. Also, the filter unit135may perform morphological filtering such that a cluster of pixels has a simple shape. Through the operation of the filter unit135, even when there is little motion or only a small change in brightness, event information required in various application fields may be appropriately adjusted. The bitstream generation unit140generates information about pixels having a change on the time axis as a bitstream based on the quantized difference. Here, information about a pixel having a change on the time axis may include information about the time at which the image signal is input, information about the location of a pixel, the quantized difference of which is nonzero, and binarized information of the quantized difference. 
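Before turning to how this location and sign information is packed, the difference calculation of Equation (1) and the dead-zone quantization described above can be sketched as follows. The weights, threshold, and quantization step are illustrative parameters only, and the function names are hypothetical.

```python
import numpy as np

def frame_difference(current, previous, weights):
    """Equation (1): D(t) = T(t) - sum_i w_i * T(t - i).

    current:  first converted image T(t)
    previous: list of second converted images [T(t-1), ..., T(t-m)]
    weights:  non-negative weights summing to 1"""
    w = np.asarray(weights, dtype=np.float64)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    stack = np.stack([np.asarray(p, dtype=np.float64) for p in previous])
    reference = np.tensordot(w, stack, axes=1)   # weighted sum over the stored frames
    return np.asarray(current, dtype=np.float64) - reference

def quantize_with_dead_zone(diff, threshold, step):
    """Uniform quantization with a dead zone around zero: differences whose
    absolute value is below `threshold` produce no event (quantized to 0);
    the rest are quantized with a fixed interval `step`. In practice the
    threshold would be tuned to the object's speed or brightness change."""
    diff = np.asarray(diff, dtype=np.float64)
    q = np.round(diff / step).astype(np.int32)
    q[np.abs(diff) < threshold] = 0   # dead zone: no event is generated
    return q
```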
For example, the information about the location of the pixel may be represented as an image frame configured with “0”s and “1”s by representing a pixel, the value of which is not specified, as “0” and representing a pixel, the value of which is specified, as “1”. For example, referring toFIG.4, when the quantized difference falls within the dead zone, binarized information therefor is not generated, when the quantized difference falls within a positive number range, a bit “1” may be assigned as the binarized information of the quantized difference, and when the quantized difference falls within a negative number range, a bit “0” may be assigned as the binarized information of the quantized difference. FIG.5is a flowchart for explaining a method for sensing an image based on an event according to an embodiment. Referring toFIG.5, the method for sensing an image based on an event according to an embodiment may include acquiring at least one of brightness information and color information from an input image signal at step S210, performing conversion including at least one of filtering of at least one of the acquired brightness information and color information, color conversion, and brightness conversion at step S220, calculating a quantized difference between a first converted image converted from the currently input image signal and a second converted image converted from a previously input image signal at step S230, and generating a bitstream for pixels having a change on the time axis based on the quantized difference at step S240. Here, performing the conversion at step S220may be configured to consecutively perform at least one of conversion of the color information into monochrome brightness information, conversion of original color information into other color information, brightness-based conversion including gamma correction, increasing a brightness value, and decreasing a brightness value, frequency-based conversion including conversion using at least one of a low-pass filter, a high-pass filter, and a band-pass filter, conversion based on a wavelet filter, and conversion based on a nonlinear filter including a bilateral filter and a morphological filter. Here, calculating the quantized difference at step S230is configured to calculate a change in each pixel between images on the time axis, and may include calculating a difference for each pixel between the first converted image and the second converted image at step S231, quantizing the calculated difference for each pixel at step S233, and filtering the quantized difference for each pixel at step S235. Here, the second converted image may be previously stored at a predetermined time interval. Here, one or more images may be extracted as the second converted image. Here, when multiple second converted images are extracted, calculating the difference for each pixel at step S231may be configured to compute the weighted sum of the second converted images based on respective weights assigned to the second converted images and to calculate the difference from the first converted image. Here, the sum of the weights may be ‘1’. That is, the difference D(t) may be calculated as shown in the above Equation (1). Also, quantizing the calculated difference for each pixel at step S233may be configured to quantize the difference, the absolute value of which is equal to or greater than a predetermined threshold. 
That is, when the difference D(t) falls within the range specified in the above Equation (2), a dead zone, which is defined as a range of values, from which the distance to zero is less than a predetermined threshold, is excluded therefrom, and the difference, the absolute value of which is equal to or greater than the predetermined threshold, is quantized. Here, the predetermined threshold may be adjusted depending on the speed of movement of an object included in the image signal or a change in brightness. Also, quantizing the calculated difference for each pixel at step S233may be configured to perform uniform quantization having a fixed quantization interval or non-uniform quantization having a varying quantization interval. Meanwhile, filtering the quantized difference for each pixel at step S235is configured such that pixels having a relatively small quantized difference are deleted, or the quantized difference of a specific pixel is deleted or copied, whereby the value may be made similar to neighboring values. Also, filtering the quantized difference for each pixel at step S235may be configured to perform morphological filtering such that a cluster of pixels has a simple shape. Here, generating the bitstream at step S240may include generating information about the time at which the image signal is input at step S241, generating information about the location of the pixel, the quantized difference of which is not 0, at step S243, and generating binarized information of the quantized difference for the pixel, the quantized difference of which is not 0, at step S245. FIG.6is a view illustrating a computer system configuration according to an embodiment. The apparatus for sensing an image based on an event according to an embodiment may be implemented in a computer system1000including a computer-readable recording medium. The computer system1000may include one or more processors1010, memory1030, a user-interface input device1040, a user-interface output device1050, and storage1060, which communicate with each other via a bus1020. Also, the computer system1000may further include a network interface1070connected with a network1080. The processor1010may be a central processing unit or a semiconductor device for executing a program or processing instructions stored in the memory1030or the storage1060. The memory1030and the storage1060may be storage media including at least one of a volatile medium, a nonvolatile medium, a detachable medium, a non-detachable medium, a communication medium, and an information delivery medium. For example, the memory1030may include ROM1031or RAM1032. According to an embodiment, the effect of noise, which is generated due to a change in illumination into an unwanted form or due to the performance limits of an image acquisition device when brightness/color information is acquired, may be minimized. According to an embodiment, event pixels may be selectively extracted depending on the color or pattern of an object of interest without the use of an additional device by effectively using the color information of an image signal. According to an embodiment, the degradation of performance of object detection based on a neural network, which results from a decrease in the number of extracted pixels when the speed of movement of an object is low in an image signal or when there is little change in brightness in the image signal, may be prevented. 
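Tying the pieces together, the bit assignment of FIG. 4 and the information generated at steps S241 to S245 could be collected per frame as in the sketch below. The record layout (a simple dictionary) is an assumption for illustration, not the claimed bitstream format.

```python
import numpy as np

def make_event_record(timestamp, quantized):
    """Collect event information for one frame.

    Returns the input time, the (y, x) locations whose quantized difference
    is nonzero, and one bit per event: 1 for a positive difference and
    0 for a negative one. Values inside the dead zone were already set to 0
    and therefore generate no event."""
    q = np.asarray(quantized)
    ys, xs = np.nonzero(q)
    bits = (q[ys, xs] > 0).astype(np.uint8)
    return {"time": timestamp,
            "locations": np.stack([ys, xs], axis=1),
            "bits": bits}
```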
Although embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art will appreciate that the present invention may be practiced in other specific forms without changing the technical spirit or essential features of the present invention. Therefore, the embodiments described above are illustrative in all aspects and should not be understood as limiting the present invention.
11861815
DETAILED DESCRIPTION Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the present embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the embodiments are merely described below, by referring to the figures, to explain aspects of the present description. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. Various modifications may be applied to the present embodiments, and particular embodiments will be illustrated in the drawings and described in the detailed description section. The effect and features of the present embodiments, and a method to achieve the same, will be clearer referring to the detailed descriptions below with the drawings. However, the present embodiments may be implemented in various forms, not by being limited to the embodiments presented below. Hereinafter, exemplary embodiments will be described in detail with reference to the accompanying drawings, and in the description with reference to the drawings, the same or corresponding constituents are indicated by the same reference numerals and redundant descriptions thereof are omitted. In the following embodiment, it will be understood that although the terms “first,” “second,” etc. may be used herein to describe various components, these components should not be limited by these terms. The expression of singularity in the specification includes the expression of plurality unless clearly specified otherwise in context. Furthermore, when a part may “include” or “have” a certain constituent element, unless specified otherwise, it may not be construed to exclude another constituent element but may be construed to further include other constituent elements. Sizes of elements in the drawings may be exaggerated for convenience of explanation. For example, since sizes and thicknesses of elements in the drawings are arbitrarily illustrated for convenience of explanation, the following embodiments are not limited thereto. In the following embodiment, it will be understood that when a layer, region, or component is referred to as being “formed on” another layer, region, or component, it can be directly or indirectly formed on the other layer, region, or component. That is, for example, intervening layers, regions, or components may be present. It will be understood that when a layer, region, or component is referred to as being “connected to” another layer, region, or component, it can be directly connected to the other layer, region, or component or indirectly connected to the other layer, region, or component via intervening layers, regions, or components. In the following description, the embodiment of the disclosure is described with reference to the accompanying drawings so that one skilled in the art to which the disclosure pertains can work the disclosure. FIG.1is a block diagram of the configuration and operation of a composite image creating apparatus100according to an embodiment, andFIG.2is a block diagram of the configuration of a processor of the composite image creating apparatus100, according to an embodiment. 
First, referring toFIG.1, the composite image creating apparatus100according to an embodiment may include a memory110, a processor120, a communication module130, and a camera140. However, the disclosure is not limited thereto, and the composite image creating apparatus100may further include other constituent elements or some constituent elements may be omitted. A constituent element of the composite image creating apparatus100may be divided into a plurality of apparatuses, or a plurality of constituent elements may be incorporated into one apparatus. The memory110, as a computer-readable recording medium, may include a permanent mass storage device, such as a random access memory (RAM), a read only memory (ROM), and a disk drive. Furthermore, the memory110may temporarily or permanently store a program code for controlling the composite image creating apparatus100and data for creating a composite image. The processor120may identify information of an input image by obtaining the input image, generate a projected image by projecting the input image based on information about a position in units of sub-pixels for a target image of the input image by using the information of the input image, generate a reduced image by reducing the projected image at a ratio corresponding to the target image, and synthesize the reduced image to the target image. The communication module130may provide a function to communicate with an external apparatus through a network. As an example, a request generated by the processor120of the composite image creating apparatus100according to a program code stored in a recording apparatus such as the memory110may be transmitted to an external apparatus through a network under the control of the communication module130. Conversely, control signals, instructions, contents, files, and the like provided under the control of a processor of the external apparatus may be received by the composite image creating apparatus100via the communication module130through a network. For example, control signals, instructions, and the like of the external server received via the communication module130may be transmitted to the processor120or the memory110, and contents, files, and the like may be stored in a storage medium that the composite image creating apparatus100may further include. A communication method is not limited, and may include not only a communication method using a communication network (e.g., a mobile communication network, wired Internet, wireless Internet, and a broadcast network) that a network may include, but also short-range wireless communication between devices. For example, the network may include one or more networks among a personal area network (PAN), a local area network (LAN), a campus area network (CAN), a metropolitan area network (MAN), a wide area network (WAN), a broadband network (BBN), and the Internet. Furthermore, the network may include one or more networks having a topology such as a bus network, a star network, a ring network, a mesh network, a star-bus network, a tree or hierarchical network, and the like, but the disclosure is not limited thereto. Furthermore, the communication module130may communicate with the external apparatus through a network. A communication method is not limited, but the network may be a short-range wireless communication network. For example, the network may be Bluetooth, Bluetooth Low Energy (BLE), or a WiFi communication network. The camera140may be an apparatus that obtains data of target and background images. 
For example, the camera140may be an infrared camera. For example, the camera140may obtain an image of a target. Furthermore, the camera140may obtain a background image of a target. Furthermore, the composite image creating apparatus100according to an embodiment may include an input/output interface. The input/output interface may be a device for interfacing with input/output devices. For example, the input device may include a device such as a keyboard, a mouse, or the like, and the output device may include a device such as a display to display a communication session of an application. In another example, the input/output interface may be a device for interfacing with a device such as a touch screen having a single incorporated function for input and output. In a detailed example, the processor120of the composite image creating apparatus100, when processing instructions of a computer program loaded in the memory110, may display a service screen or content formed by using data provided by the external apparatus, on a display, through the input/output interface. Furthermore, in some embodiments, the composite image creating apparatus100may include more constituent elements than the constituent elements ofFIG.1. For example, the composite image creating apparatus100may be implemented to include at least some of the input/output device described above or other constituent elements, such as a battery for supplying power to internal constituent elements, a charging apparatus, various sensors, a database, and the like. The internal configuration of the processor120of the composite image creating apparatus100according to an embodiment is described below in detail with reference toFIG.2. The processor120described below is assumed to be the processor120of the composite image creating apparatus100illustrated inFIG.1, for ease of understanding. The processor120of the composite image creating apparatus100according to an embodiment may include a target generator121, an infrared image generator122, a target tracker123, and a camera controller124. For example, the target generator121may generate a target according to the type, size, and position of a target. The infrared image generator122may generate a background according to the properties of an infrared camera and reproduce a target. The target tracker123may track the target based on an image. The camera controller124may change the position of a camera according to a target tracking result. For example, when the target generator121generates a target, the size of the target may vary depending on not only a size change of a real target, but also a distance between the target and a camera and the properties of the camera. Furthermore, the position of a target may vary depending on a distance between the target and a camera and the position of the camera. The composite image creating apparatus100may also represent not only a short-range target, but also a long-range target. In some embodiments, the constituent elements of the processor120may be selectively included or excluded. Furthermore, in some embodiments, the constituent elements of the processor120may be separated or incorporated for the representation of a function of the processor120. The processor120and the constituent elements of the processor120may control the composite image creating apparatus100to perform operations (S110to S140) included in the composite image creating method ofFIG.3. 
For example, the processor120and the constituent elements of the processor120may be implemented to execute instructions according to code of an operating system and code of at least one program included in the memory110. The constituent elements of the processor120may be different functions of the processor120performed by the processor120according to the instructions provided by the program code stored in the composite image creating apparatus100. The internal configuration and detailed operation of the processor120are described with reference to the flowchart of the composite image creating method ofFIG.3. FIG.3is a flowchart of a composite image creating method according to an embodiment. Referring toFIG.3, in operation S110, the processor120may identify information of an input image by obtaining the input image. For example, the processor120may obtain an input image including a target, and identify information about the minor-axis size and major-axis size of the input image, information about the size of the target, information about the position of the target in the input image, and the like. The processor120according to an embodiment may obtain an input image having a minor-axis size and a major-axis size of a preset threshold value or more. For example, the processor120may obtain an input image having a minor-axis size and a major-axis size of a preset threshold value or more, based on the size information of the obtained input image. Furthermore, the processor120may enlarge the size of an input image to a minor-axis size and a major-axis size of a preset threshold value or more, based on the size information of the obtained input image. Furthermore, the processor120may identify information about the size and position of the target included in the input image. For example, the processor120may identify the minor-axis size and major-axis size of the target included in the input image, and position information about the position of the target with respect to the input image. For example, the target's position information may include coordinates information about the position of the target with respect to the input image. In operation S120, the processor120may generate a projected image by projecting an input image based on information about the position in units of sub-pixels for a target image of an input image, by using the information of an input image. For example, the processor120may project the input image to a frame of the same size as that of the input image based on information about the position in units of sub-pixels where the input image is located in the target image, by using the minor-axis size and major-axis size of an input image, and generate a projected image. For example, the information about the position in units of sub-pixels may include coordinates information. The processor120according to an embodiment may generate a projected image by matching the input image with a position corresponding to the information about the position in units of sub-pixels, in a frame having the same size as that of the input image. For example, the processor120may generate a projected image by matching the input image with a position corresponding to the coordinates information in units of sub-pixels of a frame having the same size as that of the input image. The processor120according to an embodiment may generate a projected image that reflects a blur image by reflecting, in the input image, the blur image due to a blur phenomenon. 
For example, the processor120may generate a projected image that reflects a blur image by reflecting the blur image generated with respect to a target of the input image in the input image. For example, a blur image may represent an image in which the boundary of a target is blurred by light spreading. In operation S130, the processor120may generate a reduced image by reducing the projected image at a ratio corresponding to the target image. For example, the processor120may generate a reduced image by reducing the projected image based on a relative ratio of an input image to be synthesized in the target image and the projected image. The processor120according to an embodiment may generate a reduced image by reducing the projected image based on a ratio corresponding to the target image indicating a size ratio of the input image to the target image. For example, the ratio corresponding to the target image indicating a size ratio of the input image to the target image may be predetermined and stored in a memory. In operation S140, the processor120may synthesize the reduced image to the target image. For example, the processor120may synthesize the reduced image to the target image based on the position of a target in the target image according to the position of the target. According to an embodiment, a composite image creating apparatus according to an embodiment may represent the state (size, position, blur, etc.) of a target in units of sub-pixels during the creation of a composite image. FIG.4illustrates a comparison between a composite image creating method according to an embodiment and a composite image creating method according to the related art. Referring toFIG.4, in the creation of a composite image with respect to a moving target, when a target10exists on a boundary between a pixel30and a pixel50as illustrated inFIG.4, it is difficult to represent the target10. In this case, when an image is generated in units of pixels, a composite image410is generated while a change in units of sub-pixels is ignored. However, the state of the target10is an item that greatly affects target tracking performance, so that a precise description at the sub-pixel level is needed for accurate verification of an algorithm. According to an embodiment, a composite image420may be created by reflecting the state (size, position, blur, etc.) of the target10in units of sub-pixels. FIG.5illustrates a composite image creating method according to the related art. Referring toFIG.5, to explain the composite image creating method according to the related art, for a certain real number, its integer part is denoted with an overbar (e.g., B̄) and its decimal (sub-pixel) part with a dot (e.g., Ḃ). Furthermore, as illustrated inFIG.5, projecting an image (a) to an image (b) with a size of B×B at coordinates 510 of X (x-axis coordinate) and Y (y-axis coordinate) by using an image processing technique such as perspective projection and the like may be defined as <B, X, Y>. In the perspective projection, a transformation matrix is first obtained from the positions of the four corners of a rectangular area to be transformed and the positions of the four corners after transformation; after determining, from the obtained transformation matrix, which position of the original image the position of each pixel after transformation corresponds to, projection is performed by filling the pixels of the transformed image with the pixel values of the original image at the corresponding positions. 
For example, assuming that the pixel position of the original image corresponding to an n-th pixel of the transformed image is m, the value of the n-th pixel may be the value of the m-th pixel of the original image. However, when the position m is exactly an integer, the pixel value of that position may be used; otherwise, the values of the pixels surrounding the position may be used. Linear interpolation (inter-linear) is one of the most widely used methods. The linear interpolation is a weighted sum of the four pixel values closest to position m. In this case, each weight is inversely proportional to the distance between the position m and the corresponding pixel, and the sum of the four weights is 1. However, when the size of an object is reduced with a high magnification as illustrated inFIG.5, since each pixel of the transformed image (b) represents a value of a large area of the original image (a), using linear interpolation, which uses only four surrounding pixel values, may cause aliasing. FIG.6illustrates a composite image creating method according to an embodiment. Referring toFIG.6, the composite image creating method according to an embodiment may include projecting the original image through two separate operations: projection processing using linear interpolation and reduction processing using area interpolation (inter-area), which is widely used for image reduction. For example, as illustrated inFIG.6, an image1(a) includes an input image, that is, the shape of a target to be projected, and an image2(b) is a projected image reflecting the decimal parts of the position to which the image1(a) is to be projected. Furthermore, an image3(c) is a reduced image reflecting the size at which the input image is to be projected into a target image, and an image4(d) is the target image to which the input image is projected. InFIG.6, Ā denotes the size (e.g., the minor-axis or x-axis size) of the original image, B denotes the size of a reduced image to be projected, and X and Y denote coordinates of the position of a reduced image to be projected with respect to the target image. Here, B, X, and Y may be expressed by separating an integer part from a decimal part as shown in Equation 1: B = B̄ + Ḃ, X = X̄ + Ẋ, Y = Ȳ + Ẏ  <Equation 1> In an operation from the image1(a) to the image2(b) of the composite image creating method, to reflect the position of a decimal point in units of sub-pixels, a projected image may be generated by projecting (<a, cẊ, cẎ>) the input image to a position corresponding to the decimal parts within a frame having the same size Ā, as illustrated in the image2(b) ofFIG.6. Here, Ẋ and Ẏ are the decimal parts of X and Y as described above, a is c×B, and c is Ā/b̄. Here, b̄, which determines the value of c, is greater than B̄, the integer part of B, by an integer M̄ (M̄ will be described below in detail). In summary, the above description may be expressed by Equation 2 below: a = c×B, c = Ā/b̄, b̄ = B̄ + M̄  <Equation 2> Next, in an operation from the image2(b) to the image3(c) of the composite image creating method, the image3(c) may be generated as a reduced image having a size b̄ by reducing the image2(b) ofFIG.6by c times. Here, the size of the target (i.e., a dashed box in the image3(c) ofFIG.6) is B, and the position of the target (i.e., the coordinates in the upper left corner of the dashed box in the image3(c) ofFIG.6) is Ẋ and Ẏ. 
Next, in an operation from the image3(c) to the image4(d) of the composite image creating method, as illustrated in the image4(d) ofFIG.6, the target of the image1(a) is projected to the coordinates (X, Y) with the size of B through a paste operation that pastes the reduced image at the integer coordinates (X̄, Ȳ). Here, the distance between the position to which the image3(c) is pasted (i.e., the upper left corner of a solid box in the image4(d) ofFIG.6) and the position of the target (i.e., the upper left corner of the dashed box in the image4(d) ofFIG.6) is (Ẋ, Ẏ), and the maximum distance in each axis is 1 or less. Accordingly, when b̄ (i.e., the size of the solid box in the image4(d) ofFIG.6) is greater than B (i.e., the size of the dashed box in the image4(d) ofFIG.6) by 1 or more, the decimal position may be sufficiently represented. Accordingly, in the composite image creating method according to an embodiment, the conditions of Equation 3 below may be satisfied: b̄ ≥ B + 1, B̄ + M̄ ≥ B + 1, M̄ ≥ B − B̄ + 1 = Ḃ + 1  <Equation 3> In the Equation above, as the maximum value of Ḃ is less than 1, the conditions of Equation 3 are satisfied when M̄ is 2 or more. As such, in performing projection, by separating reduction from projection, performing the projection so that the decimal parts are reflected, and selecting a reduction size large enough to represent them, a target image may be projected to a decimal-point (i.e., sub-pixel) position without distortion of the image. In the embodiment, the following method may be considered to reflect the target state change in units of sub-pixels.
a) A target image pre-processing operation is added before projecting a target to a background, so that a change is represented even when the target state changes in units of sub-pixels.
b) In the target image pre-processing operation, an image having a relatively large size is received as a target image. Then, a process of reduction through a reduction interpolation is added after reflecting, in the state of the large-sized target, an area corresponding to the decimal parts of the coordinates and size at which the target is to be actually projected into the background image. As the large-sized image is used in the reduction, resolution is not lowered even when the target image is enlarged.
c) A target generated through the target image pre-processing is projected to the background image by using a perspective projection transform.
FIG.7illustrates a composite image creating method according to another embodiment. Referring toFIG.7, the composite image creating method according to an embodiment may also represent blur of an image in units of sub-pixels. Generally, when light comes into a camera, the image focused on the screen may be blurred due to the properties of light, the properties of a lens and a detector, and the like. When such a phenomenon is represented in a composite image generated as inFIG.5, the size and position of the blur are also distorted as much as the image is distorted. As a method to address the above issue, in the composite image creating method according to an embodiment described above, blur may be applied in the image2(b) ofFIG.6, in which the decimal position is reflected through projection. For example, as illustrated inFIG.7, when blur is applied to the image2(b), blur may be represented in units of decimal points. 
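A minimal sketch of the two-stage operation above (project at the fractional offset into a frame of size Ā, reduce by c with area interpolation, then paste at the integer coordinates) is given below. It is illustrative only: a square source and in-bounds paste region are assumed, M̄ defaults to the minimum of 2 from Equation 3, target-only compositing and the optional blur of FIG. 7 are omitted, and the interpolation choices simply mirror the linear/area interpolation named in the text. All names are hypothetical.

```python
import math
import cv2
import numpy as np

def composite_subpixel(target_img, background, B, X, Y, M=2):
    """Paste a square target image into `background` with size B and
    top-left position (X, Y), where B, X, Y may be non-integer."""
    A = target_img.shape[0]                       # square source assumed
    B_i, B_f = int(math.floor(B)), B - math.floor(B)
    X_i, X_f = int(math.floor(X)), X - math.floor(X)
    Y_i, Y_f = int(math.floor(Y)), Y - math.floor(Y)

    b = B_i + M                                   # reduced frame size (Eq. 3: b >= B + 1)
    c = A / b                                     # reduction ratio
    a = c * B                                     # projected target size inside the A x A frame

    # Stage 1: project the source to size a at offset (c*X_f, c*Y_f) inside an A x A frame.
    src_corners = np.float32([[0, 0], [A, 0], [A, A], [0, A]])
    dst_corners = np.float32([[c * X_f, c * Y_f],
                              [c * X_f + a, c * Y_f],
                              [c * X_f + a, c * Y_f + a],
                              [c * X_f, c * Y_f + a]])
    H = cv2.getPerspectiveTransform(src_corners, dst_corners)
    projected = cv2.warpPerspective(target_img, H, (A, A), flags=cv2.INTER_LINEAR)

    # (A blur modelling defocus could be applied to `projected` here.)

    # Stage 2: reduce the A x A frame to b x b using area interpolation.
    reduced = cv2.resize(projected, (b, b), interpolation=cv2.INTER_AREA)

    # Stage 3: paste the reduced patch at the integer coordinates (X_i, Y_i);
    # the target then sits at (X_i + X_f, Y_i + Y_f) = (X, Y) with size B.
    out = background.copy()
    out[Y_i:Y_i + b, X_i:X_i + b] = reduced
    return out
```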
In this case, as the size of the image to which blur is applied is increased compared with the original image, the $\bar{M}$ value may be set to be greater than the minimum reference value of 2 by the blur size, to reflect this increase. According to an embodiment, for a long-range small target, the state of the target may be generated in units of sub-pixels. A long-range small target has a size within one pixel in an infrared image, and in this state a target focused between pixels may be represented. Target tracking algorithm performance differs greatly between the case in which a small target lies entirely within one pixel and the case in which it is focused on a boundary between pixels, and this difference may be analyzed. Furthermore, according to an embodiment, changes in the size and movement of a target may be generated seamlessly. When a target is gradually enlarged or continuously moved, presentation in units of pixels makes the image appear discontinuous, whereas presentation in units of sub-pixels makes the image appear smoothly connected. Furthermore, according to an embodiment, the vibration of a system such as a gimbal or a camera may be represented. In a gimbal system, slight tremor is caused by the sensors and other components used therein. For a long-range target, even slight tremor produces a large signal change that affects tracking performance, and this may also be represented. The apparatus and/or system described above may be implemented by a hardware component, a software component, and/or a combination of a hardware component and a software component. The apparatus and constituent elements described in the above embodiments may be implemented by using one or more general purpose computers or special purpose computers, for example, a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor, a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), or any other apparatus capable of executing instructions and responding. The processing device may run an operating system (OS) and one or more software applications that run on the OS. The processing device also may access, store, manipulate, process, and create data in response to execution of the software. For purposes of simplicity, the processing device is described in the singular; however, one skilled in the art will appreciate that a processing device may include a plurality of processing elements and a plurality of types of processing elements. For example, a processing device may include multiple processors or a processor and a controller. In addition, different processing configurations are possible, such as parallel processors. The software may include a computer program, a piece of code, an instruction, or some combination thereof, for independently or collectively instructing or configuring the processing device to operate as desired. Software and data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, computer storage medium or device, or in a propagated signal wave capable of providing instructions or data to or being interpreted by the processing device. The software also may be distributed over network-coupled computer systems so that the software is stored and executed in a distributed fashion. In particular, the software and data may be stored by one or more computer-readable recording media.
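To make the smooth sub-pixel motion and the sensor-jitter effect described above concrete, a short driver loop around the hypothetical project_subpixel() sketch given earlier could generate a frame sequence; every numeric value below is an illustrative assumption.

```python
import numpy as np

# Hypothetical driver around the project_subpixel() sketch above:
# a 9x9 bright blob drifting 0.25 px/frame across a 128x128 background,
# with a small random jitter standing in for gimbal/camera vibration.
rng = np.random.default_rng(0)
target = np.full((9, 9), 255.0, dtype=np.float32)
frames = []
for k in range(40):
    x = 40.0 + 0.25 * k + rng.normal(scale=0.05)   # sub-pixel drift plus jitter
    y = 60.0 + rng.normal(scale=0.05)
    background = np.zeros((128, 128), dtype=np.float32)
    frames.append(project_subpixel(target, background, B=1.3, X=x, Y=y, M=2))
# Rendering in whole-pixel units would make the blob jump one pixel every four
# frames; the sub-pixel pipeline instead spreads its energy across neighbouring
# pixels, so the sequence appears smoothly connected.
```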
The example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The program instructions recorded on the media may be those specially designed and constructed for the purposes of the embodiments, or they may be of the kind well known and available to those having skill in the computer software arts. Examples of non-transitory computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape, optical media such as CD-ROM disks and DVDs, magneto-optical media such as floptical disks, and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Examples of program instructions include both machine code, such as that produced by a compiler, and files containing higher-level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described example embodiments. In the above, although the present disclosure has been described with reference to specific matters such as specific constituent elements, limited embodiments, and the drawings, those skilled in the art to which the present disclosure pertains could make various modifications and changes from these descriptions. For example, appropriate results may be achieved even if the described techniques are performed in an order different from that described, and/or the components of the described system, structure, device, circuit, etc. are coupled or combined in a form different from that described, or are replaced or substituted by other components or equivalents. Therefore, other implementations, other embodiments, and equivalents of the claims are within the scope of the claims described below. According to an embodiment configured as described above, a method and apparatus for creating a composite image, by which the state of a target may be effectively expressed, and a computer program stored in a recording medium to execute the method may be implemented. The scope of the disclosure is not limited by the above effects. It should be understood that embodiments described herein should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in other embodiments. While one or more embodiments have been described with reference to the figures, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the following claims.
30,871
11861816
DESCRIPTION OF EXEMPLARY EMBODIMENTS Since the present disclosure may be modified in various ways and may provide various embodiments, specific embodiments will be depicted in the appended drawings and described in detail with reference to the drawings. The effects and characteristics of the present disclosure and a method for achieving them will be clearly understood by referring to the embodiments described later in detail together with the appended drawings. However, it should be noted that the present disclosure is not limited to the embodiment disclosed below but may be implemented in various forms. In the following embodiments, the terms such as first and second are introduced to distinguish one element from the others, and thus the technical scope of the present disclosure should not be limited by those terms. Also, a singular expression should be understood to indicate a plural expression unless otherwise explicitly stated. The term include or have is used to indicate existence of an embodied feature or constituting element in the present specification; and should not be understood to preclude the possibility of adding one or more other features or constituting elements. Also, constituting elements in the figure may be exaggerated or shrunk for the convenience of descriptions. For example, since the size and thickness of each element in the figure has been arbitrarily modified for the convenience of descriptions, it should be noted that the present disclosure is not necessarily limited to what has been shown in the figure. In what follows, embodiments of the present disclosure will be described in detail with reference to appended drawings. Throughout the specification, the same or corresponding constituting element is assigned the same reference number, and repeated descriptions thereof will be omitted. <Terminal> The neural network structure of a system for detecting image forgery through a convolutional neural network according to embodiments of the present disclosure is constructed via a computer language. A terminal may perform the detection of the image forgery in a way that a processor reads out the computer language from a memory such as a RAM memory, and the processor detects the image forgery by using the convolutional neural network. In the same or similar way, a method for providing a manipulation or non-manipulation detection service according to embodiments of the present disclosure may also be implemented by the processor through execution of a manipulation or non-manipulation detection program. Therefore, in certain embodiments of the present disclosure, the subject that drives the system for detecting the image forgery through the convolutional neural network may be the processor of the terminal. The terminal of the system for detecting the image forgery through the convolutional neural network according to an embodiment may include, for example, but not limited to, a processor processing data, information and signals and a memory storing a program or instructions for driving image deep learning for executing image deep learning. The processor reads out the program or instructions for driving the image deep learning and performs the image deep learning described below according to the constructed neural network system. 
The processor may be implemented by using at least one of Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, and electric units for performing other functions. Also, the memory may be implemented in various forms of storage devices including, for example, but not limited to, ROM, RAM, EPROM, flash drive, and hard drive. Also, the terminal may further include an image capturing device, such as a camera for capturing an image that may be a target of the image forgery, or a communication unit. More specifically, the image capturing device may obtain an image from an image sensor (for example, a CMOS or CCD sensor). Also, a communication unit of the terminal may obtain images from the Internet through a wired or wireless network. The communication unit may include an RF communication interface which includes a communication port and a transmitter transmitting an RF signal. Depending on the embodiments, the system for detecting the image forgery through the convolutional neural network may be configured to include a main processor controlling all of the units and a graphic processing unit (GPU) which processes the computations needed to operate a neural network for image deep learning. The terminal may be a server computer, computer, smartphone, digital broadcast terminal, mobile phone, personal digital assistant (PDA), portable multimedia player (PMP), navigation terminal, tablet PC, or wearable device in which the program for detecting image forgery through a convolutional neural network is installed. A program for detecting the image forgery through the convolutional neural network and a program for providing the manipulation or non-manipulation detection service may be provided through a network, and a final service or output may be provided by a terminal through data exchange between the terminal and a server. FIG.1illustrates a neural network structure based on deep learning, which is characterized in that a Bayar filter specialized to image forensic techniques is included in the front layer of the convolutional neural network. The Bayar filter enables the convolutional neural network to better adapt to and learn image variations based only on a layered structure without involving any pre-processing or feature extraction step, thereby improving the performance of image forgery detection.
$$w_k^{(1)}(0,0) = -1, \qquad \sum_{l,m \neq 0} w_k^{(1)}(l,m) = 1 \qquad \text{[Equation 1]}$$
More specifically, as indicated in Equation 1, the central weight of the Bayar filter is fixed to −1, and learning proceeds so that the sum of its neighboring weight values is forced to be 1. Here, l and m denote the coordinates of a pixel. The neural network of the embodiment ofFIG.1employing the Bayar filter is very effective in detecting changes such as blurring, noise, median filtering, or resizing in an uncompressed image. However, in the embodiment ofFIG.1, it may be difficult to track traces of forgery in compressed images due to data loss caused during the compression process. Also, the embodiment ofFIG.1specializes in grayscale images and may not be effective for detecting traces of forgery in color images. First Embodiment—System for Detecting Image Forgery Through Convolutional Neural Network FIG.2illustrates a block diagram of a system for detecting image manipulation through a convolutional neural network according to a first embodiment of the present disclosure.
Referring toFIG.2, the system10for detecting image manipulation through a convolutional neural network according to the first embodiment may be built on a two-stream neural network structure. The system10may include a convolutional neural network100, a neural network200, a manipulated feature refining unit310, and a manipulation classifying unit320. The convolutional neural network100may be specialized to image forensic techniques so as to detect image manipulation attempted in various compression environments. The neural network200may be based on Markov statistics which take into account image compression. The manipulated feature refining unit310and manipulation classifying unit320may refine features of a corrected or manipulated image extracted from the neural network200and determining whether an image has been manipulated. 1. Two Stream Neural Network Structure and Input Data The structure of a two stream neural network of the system10for detecting image manipulation through the convolutional neural network according to an embodiment is shown inFIG.2. More specifically, the system10for detecting image manipulation through the convolutional neural network may be built on a two-stream network structure which combines the constrained convolutional neural network100detecting image manipulation and the neural network200based on Markov statistics taking into account compression. According to certain embodiments of the present disclosure, to solve the problem that existing convolutional neural networks are not be applied directly to the digital image forensic techniques, the two-stream neural network structure may deal with image manipulation even in the frequency domain different from previous researches that consider image manipulation only in the pixel domain and thus may be specialized to detect the manipulation of an image which has been compressed at least more than once. In the embodiment, an input image can be examined for image manipulation. In other words, the embodiment may distinguish a normal image block within a JPEG-compressed image from a manipulated image block within the JPEG-compressed image. In a real situation of image manipulation, a JPEG-compressed image is retrieved through image editing software, the image is manipulated locally, and then re-saved for distribution. Since the manipulated image again undergoes the JPEG compression process when it is re-saved, both of a manipulated and normal areas are doubly compressed, and in this regard, it is necessary to distinguish the two areas from each other. Among the types of image forgery techniques described above, copy-move and splicing employ blurring or median filtering to conceal traces of image manipulation and also involve resampling to make the pasted object look natural with the other portions of the image. Therefore, in what follows, descriptions will be given with respect to detection of image forgery in a JPEG-compressed image. Constrained Convolutional Neural Network (CNN)100 Referring toFIGS.2to3, a constrained convolutional neural network100according to an embodiment may include an image block unit110, manipulated feature pre-processing unit120, manipulated feature extraction unit130, and first feature refining unit140. More specifically, the image block unit110may partition an input image into a plurality of blocks with a size suitable to be used as inputs to the convolutional neural network or a predetermined size. 
For example, the image block unit110may divide the input image into the blocks so that the size of each individual image block can be sixty four by sixty four (64×64). Next, the image blocks are input to the manipulated feature pre-processing unit120, and the manipulated feature pre-processing unit120pre-processes the partitioned image blocks. More specifically, the manipulated feature pre-processing unit120includes a constrained convolution layer. For example, the constrained convolution layer may operate according to Equation 1, in the same or a similar way to the Bayar filter of the embodiment ofFIG.1. Next, the image pre-processed and output from the manipulated feature pre-processing unit120may be input to the manipulated feature extraction unit130. The manipulated feature extraction unit130may include a plurality of layers, and each layer may include a convolutional layer, a batch normalization, a rectified linear unit (ReLU), and a max pooling layer. More specifically, referring toFIG.3, the manipulated feature extraction unit130includes a first layer131, a second layer132, and a third layer133. Each of the layers131to133may include a convolutional layer135, a batch normalization136, and a rectified linear unit function137. The first layer131and the second layer132may further include a max pooling layer138. In other words, the feature extraction unit130comprises a plurality of convolutional layers. For instance, in order to stack a plurality of layers, the kernel size of a convolutional layer belonging to the manipulated feature extraction unit130is less than five-by-five (5×5). It is preferable that the kernel size of the convolutional layers after the second one is less than three-by-three (3×3). Also, the stride of each convolutional layer may be fixed to one (1) so as not to miss traces of image manipulation, and only the pooling layer may use a stride of two (2) or more. More specifically, the reason for using a small-sized filter for a convolutional layer is that several rectified linear units (ReLUs) may then be used. This scheme allows a layer which uses a single large filter to be replaced with a plurality of layers employing small-sized filters. Additionally, the use of small-sized filters may reduce the number of weights to be learned. Fewer weights are required when three three-by-three (3×3) convolutional layers are used than when a single seven-by-seven (7×7) convolutional layer is used as in the embodiment ofFIG.1. A small number of weights to be learned may give a great advantage in terms of normalization. Also, batch normalization may be used to prevent overfitting of the proposed network. The manipulated features extracted through the manipulated feature extraction unit130may be output or delivered to the first feature refining unit140. The first feature refining unit140may include one or more fully connected layers and may be trained to refine and distinguish the image manipulated features extracted by the manipulated feature extraction unit130. In the embodiment, the first feature refining unit140may comprise two fully connected layers (FC layers)141,142where a rectified linear unit (ReLU) is connected to the output of each fully connected layer. The manipulated features rectified by the first feature refining unit140as well as the manipulated features extracted by the Markov statistics-based network200are input or delivered to an integrated feature refining unit310.
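A compact sketch of this constrained stream, assuming PyTorch, is given below. The ConstrainedConv2d class enforces the Equation 1 constraint of the pre-processing layer, and the Sequential stack mirrors the layer pattern just described (three 3×3 convolution layers with stride 1, pooling after the first two, and two fully connected refining layers); the channel counts and the 512-dimensional width are guesses, since the text does not state them.

```python
import torch
import torch.nn as nn

class ConstrainedConv2d(nn.Conv2d):
    """Pre-processing convolution whose kernels are re-normalized before each
    forward pass so that the centre weight is -1 and the remaining weights sum
    to 1, i.e. the constraint of Equation 1. A sketch, not the patented code."""
    def forward(self, x):
        with torch.no_grad():
            w = self.weight                                  # (out_ch, in_ch, k, k)
            k = w.shape[-1] // 2                             # centre index
            w[:, :, k, k] = 0.0
            w /= w.sum(dim=(2, 3), keepdim=True) + 1e-8      # neighbours sum to 1
            w[:, :, k, k] = -1.0                             # centre fixed to -1
        return super().forward(x)

def conv_block(cin, cout, pool=True):
    # 3x3 convolution (stride 1) -> batch normalization -> ReLU (-> 2x2 max pooling)
    layers = [nn.Conv2d(cin, cout, kernel_size=3, stride=1, padding=1),
              nn.BatchNorm2d(cout),
              nn.ReLU(inplace=True)]
    if pool:
        layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)

# Constrained stream on 64x64 grey-scale blocks: pre-processing layer 120,
# feature-extraction layers 131-133 (pooling after the first two only), and
# the two fully connected refining layers 141-142.
constrained_stream = nn.Sequential(
    ConstrainedConv2d(1, 3, kernel_size=5, padding=2, bias=False),
    conv_block(3, 32, pool=True),       # layer 131
    conv_block(32, 64, pool=True),      # layer 132
    conv_block(64, 128, pool=False),    # layer 133
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 512), nn.ReLU(inplace=True),    # FC layer 141
    nn.Linear(512, 512), nn.ReLU(inplace=True),               # FC layer 142
)
```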
3. Markov Statistics-Based Network200 A Markov statistics-based network200according to an embodiment may distinguish single JPEG compression from double JPEG compression and detect whether an image has been manipulated during double JPEG compression. The Markov statistics-based network200may be used to detect or examine image forgery in the frequency domain as well, thereby detecting image manipulation effectively for JPEG-compressed images. Referring toFIGS.2and4, the Markov statistics-based network200according to the embodiment includes a domain conversion unit210, a pixel difference calculating unit220, a threshold function230, a matrix transition unit240, and a second feature refining unit250. More specifically, the domain conversion unit210processes the image blocks generated by the image block unit110in the frequency domain by applying the discrete cosine transform (DCT) to the image blocks. For example, the domain conversion unit210processes the original image blocks of the original image input to the network100in the frequency domain by applying the DCT to each eight-by-eight (8×8) image block. The pixel difference calculating unit220applies a block $B_{x,y}$, to which the discrete cosine transform has been applied, to Equation 2 below to obtain the arrays of pixel differences from neighboring pixels in the horizontal and vertical directions, $B_h$ and $B_v$.
$$B_h = B_{x,y} - B_{x+1,y}, \quad (x \in [0,63],\ y \in [0,64]), \qquad B_v = B_{x,y} - B_{x,y+1}, \quad (x \in [0,64],\ y \in [0,63]) \qquad \text{[Equation 2]}$$
Next, the threshold function230maps the array values into the threshold range of [−4, 4]. Next, the matrix transition unit240calculates a nine-by-nine (9×9) transition probability matrix (TPM) in the horizontal direction by using Equation 3 below from the values of the corresponding block, and combines the two transition probability matrices in the horizontal and vertical directions into a one-dimensional vector of size [1, 9×9×2].
$$\mathrm{TPM}_h(m,n) = \Pr\{B_{x+1,y}=n \mid B_{x,y}=m\} = \frac{\sum_{y=0}^{63}\sum_{x=0}^{62} \delta\{B_{x,y}=m,\ B_{x+1,y}=n\}}{\sum_{y=0}^{63}\sum_{x=0}^{62} \delta\{B_{x,y}=m\}} \qquad \text{[Equation 3]}$$
where $m, n \in [-4, 4]$ and $\delta\{A\} = 1$ if A holds, and 0 otherwise. Afterwards, the combined one-dimensional vector is provided to the second feature refining unit250. The second feature refining unit250may be built on fully connected layers. The Markov statistics-based network200may be a network tailored to be suitable for the two-stream neural network handling image forensic techniques, and may accurately detect image manipulation even in a compressed image by transforming a single- or double-JPEG-compressed image into the frequency domain and analyzing it there. 4. Integrated Feature Refining Unit310and Manipulation Classifying Unit320 The manipulated feature information output respectively from the constrained convolutional neural network100and the Markov statistics-based network200is provided to the integrated feature refining unit310. The integrated feature refining unit310comprises fully connected layers311,312and performs learning for final classification by combining the feature information. In order to prevent loss of minute data useful for image forensics, the integrated feature refining unit310according to an embodiment employs a structure of combining the vectors of the two fully connected layers without modification and delivering the combined vectors to the classifier, instead of using an operation or method that determines a feature as an average value of the vector elements. The manipulated feature information combined in the integrated feature refining unit310is output or delivered to the manipulation classifying unit320.
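The frequency-domain stream and the fusion step can be sketched as follows, assuming NumPy/SciPy and PyTorch. The markov_feature function follows Equations 2 and 3 for a single 64×64 block, computing the transition probabilities on the thresholded difference arrays (the usual reading of the description), and IntegratedHead concatenates the two streams' fully connected outputs without averaging, as the integrated feature refining unit310is described to do; all dimensions and layer widths are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.fftpack import dct

def markov_feature(block, T=4):
    """Equations 2 and 3 for one 64x64 block: 8x8 block-wise DCT, neighbouring
    differences, thresholding to [-T, T], and horizontal/vertical transition
    probability matrices flattened to a [1, 9*9*2] vector."""
    d = block.astype(np.float64)
    for i in range(0, d.shape[0], 8):                        # 8x8 DCT on the JPEG grid
        for j in range(0, d.shape[1], 8):
            d[i:i+8, j:j+8] = dct(dct(d[i:i+8, j:j+8], axis=0, norm='ortho'),
                                  axis=1, norm='ortho')
    d = np.round(d)

    def tpm(axis):
        diff = np.clip(-np.diff(d, axis=axis), -T, T)        # B(x,y) - B(x+1,y), thresholded
        cur = np.take(diff, range(diff.shape[axis] - 1), axis=axis)
        nxt = np.take(diff, range(1, diff.shape[axis]), axis=axis)
        P = np.zeros((2 * T + 1, 2 * T + 1))
        for m in range(-T, T + 1):
            sel = cur == m
            for n in range(-T, T + 1):
                P[m + T, n + T] = np.logical_and(sel, nxt == n).sum() / max(sel.sum(), 1)
        return P.ravel()

    return np.concatenate([tpm(axis=1), tpm(axis=0)])[None, :]   # horizontal, then vertical

class IntegratedHead(nn.Module):
    """Integrated feature refining unit 310 / manipulation classifying unit 320:
    concatenate both streams' feature vectors intact, refine through two FC
    layers, and classify with softmax (widths are illustrative)."""
    def __init__(self, d_cnn=512, d_markov=162):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(d_cnn + d_markov, 512), nn.ReLU(inplace=True),
                                nn.Linear(512, 256), nn.ReLU(inplace=True))
        self.out = nn.Linear(256, 2)

    def forward(self, f_cnn, f_markov):
        x = torch.cat([f_cnn, f_markov], dim=1)              # no averaging of vector elements
        return torch.softmax(self.out(self.fc(x)), dim=1)
```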
The manipulation classifying unit320may detect pixels that have probably been modified, calculate manipulation probabilities for those pixels, and output the calculated probabilities in the form of a forgery confirmation map. For example, as shown inFIG.5, pixels with a high probability of forgery may be identified via a forgery confirmation map in which different colors are assigned to the respective pixels according to their probability values. The manipulation classifying unit320may include, for example, but not limited to, a softmax function and/or an Adam optimizer that may escape from a local minimum faster than the conventional stochastic gradient descent method. More specifically, referring toFIG.5, the system10for detecting image forgery through a convolutional neural network according to the embodiment may output a forgery detection result such that, when an image altered from the original image (the right images ofFIGS.5(a), (c), and (e)) is input, the pixels of the altered image are displayed with manipulation probability values (FIGS.5(b), (d), and (f)). The system10for detecting image manipulation may readily detect image forgery even in a compressed image environment by combining the convolutional neural network100capable of detecting image forgery occurring in various compression environments and the Markov statistics-based neural network200that is able to take into account or analyze compression.
TABLE 1
Manipulations                 Q2    Q1: 70                  Q1: 80
                                    Bayar's    Proposed     Bayar's    Proposed
Gaussian blurring (α = 0.4)   60    70.35%     71.98%       94.25%     95.04%
                              70    —          —            84.30%     86.50%
                              80    88.23%     91.23%       —          —
                              90    85.32%     91.80%       82.19%     89.38%
Gaussian noise (α = 1)        60    52.30%     53.34%       86.94%     90.76%
                              70    —          —            57.23%     63.25%
                              80    76.83%     86.11%       —          —
                              90    79.94%     88.34%       73.84%     83.27%
Median filtering (3 × 3)      60    88.49%     90.46%       96.59%     97.08%
                              70    —          —            95.86%     96.69%
                              80    64.55%     96.85%       —          —
                              90    97.04%     98.10%       97.33%     98.58%
Resampling (120%)             60    91.58%     93.41%       96.41%     97.13%
                              70    —          —            96.34%     97.23%
                              80    98.10%     98.94%       —          —
                              90    99.01%     99.23%       98.91%     99.30%
More specifically, Table 1 shows the detection rates of image forgery via a deep learning network of the embodiment ofFIG.1(Bayar's) and a network according to the first embodiment ofFIGS.2-4. As shown in Table 1, the two networks have been used in experiments for detecting a total of four types of image alteration (Gaussian blurring, Gaussian noise, median filtering, and resampling) in a doubly compressed image. For all the alterations and various compression qualities (Q1=70, 80/Q2=60, 70, 80, 90), the network of the first embodiment ofFIGS.2-4shows superior performance to the Bayar method of the embodiment ofFIG.1. Also, referring toFIGS.6ato6d, the detection accuracy graphs show that the network of the first embodiment ofFIGS.2-4shows a higher detection rate than the embodiment ofFIG.1for each learning period according to the respective image alteration types. Second Embodiment—System for Detecting Image Forgery Through Convolutional Neural Network A system20for detecting image forgery through a convolutional neural network according to a second embodiment will be described, and descriptions repeating those of the first embodiment described above will be omitted.
Referring toFIGS.7and8, the system20for detecting the image forgery through the convolutional neural network according to the second embodiment includes an image block unit410, an image manipulated feature pre-processing unit420, an image manipulated feature extraction unit430, and an image manipulated feature refining and classifying unit440including an image manipulated feature refining unit441, and an image manipulation classifying unit442. More specifically, referring toFIGS.7to8, the image block unit410may adjust or change the size of an input image to be suitable as an input to the convolutional neural network. For instance, the suitable size of the input image may be preset. The system20for detecting the image forgery through the convolutional neural network according to the second embodiment may be suitable for analyzing color images, where an input image is a color image and may comprise a plurality of layers, each of which is configured for the corresponding color component. For example, the image block unit410may partition the original image into image blocks of two hundred fifty six by two hundred fifty six by three (256×256×3). Next, the image blocks are input to the manipulated feature pre-processing unit420. The manipulated feature pre-processing unit420may enhance manipulated features of the image by using, for example, but not limited to, a high-pass filter. For instance, the manipulated feature pre-processing unit420may enhance resize trace features by using the high-pass filter. The manipulated feature pre-processing unit420may use an operation for finding information hidden in an image which is similar to the image forensic technique. Image blocks with the enhanced resize feature may be input to the image manipulated feature extraction unit430. The manipulated feature extraction unit430may extract image manipulated features by using a pre-trained deep learning convolutional neural network model. More specifically, the manipulated feature extraction unit430may include a plurality of convolutional layers, batch normalization, rectified linear unit (ReLU), and a plurality of max pooling layers. The manipulated feature extraction unit430according to an embodiment may be, for instance, but not limited to, a pre-trained Visual Geometry Group (VGG) 19 convolutional neural network model. For example, the manipulated feature extraction unit430may have a structure in which two convolutional layers, a pooling layer, two convolutional layers, a pooling layer, four convolutional layers, a pooling layer, and two convolutional layers are stacked sequentially. VGG19 is a convolutional neural network that is trained on more than a million images from the ImageNet database (http://www.image-net.org). However, in various embodiments, any type of a pre-trained convolutional neural network model may be used. At this time, the manipulated feature extraction unit430may have a pre-trained convolutional neural network model and/or a model in which weights are varied according to an input image. For example, the manipulated feature extraction unit430may be trained to extract VGG features by modifying weights for a pre-trained VGG19 convolutional neural network according to the input image. Also, because a plurality of convolutional layers belonging to the manipulated feature extraction unit430are stacked one after another, it is preferable that the kernel size of all of the convolutional layers is less than three-by-three (3×3). 
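A hedged sketch of this second-embodiment pipeline, assuming PyTorch and torchvision, is shown below: a fixed high-pass pre-filter (the specific 3×3 kernel is an assumption; the text only says a high-pass filter is used), the pre-trained VGG19 convolutional stack, two fully connected refining layers, and a two-way softmax classifier. The layer widths are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19

class ResizeTraceDetector(nn.Module):
    """Sketch of the second embodiment: high-pass pre-processing of a
    256x256x3 block, VGG19 convolutional features (fine-tuned on the input
    images), two fully connected refining layers, and a 2-way classifier."""

    def __init__(self):
        super().__init__()
        hp = torch.tensor([[-1., -1., -1.],
                           [-1.,  8., -1.],
                           [-1., -1., -1.]]) / 8.0           # assumed high-pass kernel
        self.register_buffer("hp_kernel", hp.repeat(3, 1, 1, 1))  # one filter per RGB channel
        self.features = vgg19(weights="IMAGENET1K_V1").features   # pre-trained conv stack
        self.refine = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * 8 * 8, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 1024), nn.ReLU(inplace=True),
        )
        self.classify = nn.Linear(1024, 2)

    def forward(self, x):                                    # x: (N, 3, 256, 256)
        x = F.conv2d(x, self.hp_kernel, padding=1, groups=3) # enhance resize-trace features
        x = self.features(x)                                 # (N, 512, 8, 8) for 256x256 input
        x = self.refine(x)
        return F.softmax(self.classify(x), dim=1)            # manipulation probability per block
```

As the text notes, training such a head would typically use the Adam optimizer rather than plain stochastic gradient descent.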
Also, the stride of each convolutional layer may be fixed to one (1) not to miss traces of image manipulation, and only the pooling layer may be set to an integer of two (2) or more. And, the manipulated feature information extracted from the manipulated feature extraction unit430is input to the feature refining unit441. The feature refining unit441may include fully connected layers and may be trained to refine and distinguish the extracted image manipulated features. More specifically, the feature refining unit441may include at least one or more fully connected layers and may be trained to refine and distinguish the extracted image manipulated features. In the embodiment, the feature refining unit441may comprise two fully connected layers F where a rectified linear unit is connected to the output of each fully connected layer F. The manipulated feature information combined in the feature refining unit441is input or delivered to the manipulation classifying unit442. The manipulation classifying unit442may detect pixels that might have been probably modified and calculate manipulation probabilities of the pixels. And, the manipulation classifying unit442may calculate and represent a probability of manipulation of an image block ranging from 0 to 1 through the softmax function. More specifically, the manipulation classifying unit442may include the softmax function and the Adam optimizer that may escape from a local minimum faster than the conventional stochastic gradient descent method. The system20for detecting image forgery through a convolutional neural network as designed above may not only detect image manipulation frequently occurred in various image compression environments but also detect forgery of color images fast and reliably. In what follows, to check performance of a network according to the second embodiment ofFIGS.7and8, accuracy of forgery extraction is calculated by providing an image with a resize ratio different from the embodiment ofFIG.1. Referring toFIGS.9ato9c, it may be seen that for all resize ratios, the network according to the second embodiment ofFIGS.7and8is trained and operated faster than the network based on the embodiment ofFIG.1, and the accuracy of forgery extraction is also higher. <Non-Manipulation Detection Service> In what follows, described in detail will be a method for providing a non-manipulation service by using a system10,20for detecting image forgery through a convolutional neural network according to the embodiment described above. For example, the service may be provided through the user's terminal, and manipulation detection may be conducted by a non-manipulation detection service providing server. In other words, the user may receive a service in such a way that the user accesses the service providing server through the user's terminal, uploads an image in question, and receives an image confirmed by the service providing server. Referring toFIG.10, the user may provide an input image (I) suspected for forgery through the terminal. More specifically, the user may request confirmation of image forgery by transmitting an image stored in the terminal to the service providing server. The service providing server may partition the input image (I) to generate image blocks (IBs) and provide the generated image blocks (IBs) to the system10,20for detecting the image forgery through the convolutional neural network. 
The system10,20for detecting the image forgery through the convolutional neural network may perform deep learning of the input image blocks (IBs) and extract a probability map (PM) for confirming image forgery which displays pixels suspected for forgery and forgery probability of the pixels. The service providing server may again perform deep learning of the probability map (PM) for confirming image forgery through the convolutional neural network and output a label of Yes or No to indicate image forgery of the input image (I). At this time, if the image forgery is determined to be “Yes”, the convolutional neural network may determine which forgery type has been found from positions and arrays of pixels having a high probability of the image forgery and generate an image having highlighted regions at a high possibility of image forgery. More specifically, referring toFIG.11, if the image forgery is determined to be “No”, the service providing server may synthesize a stamp (Y) which confirms that image forgery has not been found in the input image (I) and provide the input image synthesized with the stamp to the user. Also, referring toFIG.12, if the image forgery is determined to be “Yes”, the service providing server may highlight regions (N) of the input image (I) with a high possibility for the image forgery and further display forgery type information. As described above, according to some embodiments of the present disclosure, the method for providing a non-manipulation detection service may be capable of accurately determining image forgery even when an input image is a color image, and enable the user to intuitively recognize the determination result of the image forgery by providing the probability map. According to some embodiments of the present disclosure, a system for detecting image forgery through a convolutional neural network may detect image manipulation with less resource and better performance even in a compressed image environment by combining a convolutional neural network detecting image manipulation frequently occurred in a various image compression environments and a neural network based on Markov statistics which takes into account compression. Also, according to certain embodiments of the present disclosure, a system for detecting image forgery through a convolutional neural network may detect image manipulation frequently occurred in various image compression environments as well as detect forgery of color images fast and reliably. Additionally, according to some embodiments of the present disclosure, a method for providing a non-manipulation detection service may provide an advantage that it is capable of accurately determining image manipulation even when an input image is a color image, and enable the user to intuitively recognize the determination result. In the present disclosure, a “unit” may refer to a hardware based unit, a software based unit or a combination of hardware and software. The hardware based unit may include self-contained components such as chipsets, specialized circuitry and one or more memory devices, while the software-based unit may be part of a program code or linked to the program code containing specific programed instructions, which may be loaded in memory. The “unit” (whether hardware, software, or a combination thereof) may be designed to implement or execute one or more particular functions or routines. 
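The service-side flow just described (tile the uploaded image, score each block, assemble a probability map, and derive a Yes/No label with highlighted regions) can be sketched as follows; the block size, the 0.5 decision threshold, and the block_prob callback are illustrative assumptions.

```python
import numpy as np

def forgery_report(image, block_prob, block=256, threshold=0.5):
    """Sketch of the service flow of FIG. 10: tile the input image (I), score
    each block with a detector callback `block_prob` (e.g. a network such as
    the sketch above), assemble a probability map (PM), and derive a Yes/No
    label plus the block coordinates to highlight."""
    h, w = image.shape[:2]
    prob_map = np.zeros((h // block, w // block))
    for i in range(prob_map.shape[0]):
        for j in range(prob_map.shape[1]):
            tile = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            prob_map[i, j] = block_prob(tile)                # manipulation probability in [0, 1]
    forged = bool((prob_map > threshold).any())
    highlights = np.argwhere(prob_map > threshold)           # block coordinates to outline
    return {"label": "Yes" if forged else "No",
            "probability_map": prob_map,
            "highlighted_blocks": highlights}
```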
The embodiments of the present disclosure may be implemented in the form of program commands which may be executed through various constituting elements of a computer and so may be recorded in a computer-readable recording medium. The computer-readable recording medium may include program commands, data files, and data structures separately or in combination thereof. The program commands recorded in the computer-readable recording medium may be those designed and composed specifically for the present disclosure or may be those commonly available for those skilled in the field of computer software. Examples of a computer-readable recoding medium may include magnetic media such as hard-disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; and hardware devices specially designed to store and execute program commands such as ROM, RAM, and flash memory. Examples of program commands include not only machine codes such as those generated by a compiler but also high-level language codes which may be executed by a computer through an interpreter and the like. The hardware device may be composed to be operated by one or more software modules to perform the operations of the present disclosure, and vice versa. Specific implementation of the present disclosure are embodiments, which does not limit the technical scope of the present disclosure in any way. For the clarity of the specification, descriptions of conventional electronic structures, control systems, software, and other functional aspects of the systems may be omitted. Also, connection of lines between constituting elements shown in the figure or connecting members illustrate functional connections and/or physical or circuit connections, which may be replaceable in an actual device or represented by additional, various functional, physical, or circuit connection. Also, if not explicitly stated otherwise, “essential” or “important” elements may not necessarily refer to constituting elements needed for application of the present disclosure. Also, although detailed descriptions of the present disclosure have been given with reference to preferred embodiments of the present disclosure, it should be understood by those skilled in the corresponding technical field or by those having common knowledge in the corresponding technical field that the present disclosure may be modified and changed in various ways without departing from the technical principles and scope specified in the appended claims. Therefore, the technical scope of the present disclosure is not limited to the specifications provided in the detailed descriptions of this document but has to be defined by the appended claims.
33,604
11861817
FIG.1(not to scale) is a highly schematic depiction of an embodiment of a charged-particle microscope M according to an embodiment of the invention. More specifically, it shows an embodiment of a transmission-type microscope M, which, in this case, is a TEM/STEM (though, in the context of the current invention, it could just as validly be a SEM (seeFIG.2), or an ion-based microscope, for example). InFIG.1, within a vacuum enclosure2, an electron source4produces a beam B of electrons that propagates along an electron-optical axis B′ and traverses an electron-optical illuminator6, serving to direct/focus the electrons onto a chosen part of a specimen S (which may, for example, be (locally) thinned/planarized). Also depicted is a deflector8, which (inter alia) can be used to effect scanning motion of the beam B. The specimen S is held on a specimen holder H that can be positioned in multiple degrees of freedom by a positioning device/stage A, which moves a cradle A′ into which holder H is (removably) affixed; for example, the specimen holder H may comprise a finger that can be moved (inter alia) in the XY plane (see the depicted Cartesian coordinate system; typically, motion parallel to Z and tilt about X/Y will also be possible). Such movement allows different parts of the specimen S to be illuminated/imaged/inspected by the electron beam B traveling along axis B′ (in the Z direction) (and/or allows scanning motion to be performed, as an alternative to beam scanning). If desired, an optional cooling device (not depicted) can be brought into intimate thermal contact with the specimen holder H, so as to maintain it (and the specimen S thereupon) at cryogenic temperatures, for example. The electron beam B will interact with the specimen S in such a manner as to cause various types of “stimulated” radiation to emanate from the specimen S, including (for example) secondary electrons, backscattered electrons, X-rays and optical radiation (cathodoluminescence). If desired, one or more of these radiation types can be detected with the aid of analysis device22, which might be a combined scintillator/photomultiplier or EDX or EDS (Energy-Dispersive X-Ray Spectroscopy) module, for instance; in such a case, an image could be constructed using basically the same principle as in a SEM. However, alternatively or supplement ally, one can study electrons that traverse (pass through) the specimen S, exit/emanate from it and continue to propagate (substantially, though generally with some deflection/scattering) along axis B′. Such a transmitted electron flux enters an imaging system (projection lens)24, which will generally comprise a variety of electrostatic/magnetic lenses, deflectors, correctors (such as stigmators), etc. In normal (non-scanning) TEM mode, this imaging system24can focus the transmitted electron flux onto a fluorescent screen26, which, if desired, can be retracted/withdrawn (as schematically indicated by arrows26′) so as to get it out of the way of axis B′. An image (or diffractogram) of (part of) the specimen S will be formed by imaging system24on screen26, and this may be viewed through viewing port28located in a suitable part of a wall of enclosure2. The retraction mechanism for screen26may, for example, be mechanical and/or electrical in nature, and is not depicted here. As an alternative to viewing an image on screen26, one can instead make use of the fact that the depth of focus of the electron flux leaving imaging system24is generally quite large (e.g. of the order of 1 meter). 
Consequently, various other types of analysis apparatus can be used downstream of screen26, such as:
TEM camera30. At camera30, the electron flux can form a static image (or diffractogram) that can be processed by controller/processor20and displayed on a display device14, such as a flat panel display, for example. When not required, camera30can be retracted/withdrawn (as schematically indicated by arrows30′) so as to get it out of the way of axis B′.
STEM camera32. An output from camera32can be recorded as a function of (X,Y) scanning position of the beam B on the specimen S, and an image can be constructed that is a "map" of output from camera32as a function of X,Y. Camera32can comprise a single pixel with a diameter of e.g. 20 mm, as opposed to the matrix of pixels characteristically present in camera30, although camera32can be an Electron Microscope Pixel Array Detector (EMPAD) as well. Moreover, camera32will generally have a much higher acquisition rate (e.g. 10⁶ points per second) than camera30(e.g. 10² images per second). Once again, when not required, camera32can be retracted/withdrawn (as schematically indicated by arrows32′) so as to get it out of the way of axis B′ (although such retraction would not be a necessity in the case of a donut-shaped annular dark field camera32, for example; in such a camera, a central hole would allow flux passage when the camera was not in use).
As an alternative to imaging using cameras30or32, one can also invoke spectroscopic apparatus34, which could be an EELS module, for example.
It should be noted that the order/location of items30,32and34is not strict, and many possible variations are conceivable. For example, spectroscopic apparatus34can also be integrated into the imaging system24. In the embodiment shown, the microscope M further comprises a retractable X-ray Computed Tomography (CT) module, generally indicated by reference40. In Computed Tomography (also referred to as tomographic imaging) the source and (diametrically opposed) detector are used to look through the specimen along different lines of sight, so as to acquire penetrative observations of the specimen from a variety of perspectives. Note that the controller (computer processor)20is connected to various illustrated components via control lines (buses)20′. This controller20can provide a variety of functions, such as synchronizing actions, providing setpoints, processing signals, performing calculations, and displaying messages/information on a display device (not depicted). Needless to say, the (schematically depicted) controller20may be (partially) inside or outside the enclosure2, and may have a unitary or composite structure, as desired. The controller comprises, as shown in this embodiment, a data processing apparatus P that is arranged for carrying out the method as defined herein. The skilled artisan will understand that the interior of the enclosure2does not have to be kept at a strict vacuum; for example, in a so-called "Environmental TEM/STEM", a background atmosphere of a given gas is deliberately introduced/maintained within the enclosure2. The skilled artisan will also understand that, in practice, it may be advantageous to confine the volume of enclosure2so that, where possible, it essentially hugs the axis B′, taking the form of a small tube (e.g. of the order of 1 cm in diameter) through which the employed electron beam passes, but widening out to accommodate structures such as the source4, specimen holder H, screen26, camera30, camera32, spectroscopic apparatus34, etc.
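The "map" construction mentioned for the STEM camera32— recording the detector output as a function of the (X, Y) scanning position — amounts to the following short sketch; the function names and the raster-scan driver are hypothetical.

```python
import numpy as np

def build_stem_map(scan_positions, detector_readout, shape):
    """Assemble a STEM image as a 'map' of single-pixel detector output versus
    (X, Y) scan position. `scan_positions` yields integer (x, y) raster
    coordinates and `detector_readout(x, y)` returns the detector signal for
    that beam position (both hypothetical callbacks)."""
    image = np.zeros(shape, dtype=np.float64)
    for x, y in scan_positions:
        image[y, x] = detector_readout(x, y)
    return image

# e.g. a 512 x 512 raster scan:
# positions = ((x, y) for y in range(512) for x in range(512))
# stem_image = build_stem_map(positions, readout_fn, (512, 512))
```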
Now referring toFIG.2, another embodiment of an apparatus according to the invention is shown.FIG.2(not to scale) is a highly schematic depiction of a charged-particle microscope M according to the present invention; more specifically, it shows an embodiment of a non-transmission-type microscope M, which, in this case, is a SEM (though, in the context of the current invention, it could just as validly be an ion-based microscope, for example). In the Figure, parts which correspond to items inFIG.1are indicated using identical reference symbols, and will not be separately discussed here. Additional toFIG.1are (inter alia) the following parts:
2a: A vacuum port, which may be opened so as to introduce/remove items (components, specimens) to/from the interior of vacuum chamber2, or onto which, for example, an ancillary device/module may be mounted. The microscope M may comprise a plurality of such ports2a, if desired;
10a,10b: Schematically depicted lenses/optical elements in illuminator6;
12: A voltage source, allowing the specimen holder H, or at least the specimen S, to be biased (floated) to an electrical potential with respect to ground, if desired;
14: A display, such as a FPD or CRT;
22a,22b: A segmented electron detector22a, comprising a plurality of independent detection segments (e.g. quadrants) disposed about a central aperture22b(allowing passage of the beam B). Such a detector can, for example, be used to investigate (the angular dependence of) a flux of output (secondary or backscattered) electrons emerging from the specimen S.
Here also, a controller20is present. The controller is connected to the display14, and the display14may be connectable to a data processing apparatus P that is arranged for carrying out the method as defined herein. In the embodiment shown, the data processing apparatus P is a separate structure that does not form part of the controller, and does not even form part of the microscope M. The data processing apparatus P may be local or cloud based, and is in principle not limited to any location. Now turning toFIG.3, a flow chart of the method100as defined herein is shown. The method, which is implemented by a data processing apparatus P, comprises the steps of:
receiving101an image;
providing111a set-point for a desired image quality parameter of said image;
processing102said image using an image analysis technique for determining a current image quality parameter of said image;
comparing103said current image quality parameter with said desired set-point111; and
generating104, based on said comparison, a modified image by using an image modification technique, wherein said generating104comprises the steps of:
improving104asaid image in terms of said image quality parameter in case said current image quality parameter is lower than said set-point; and
deteriorating104bsaid image in terms of said image quality parameter in case said current image quality parameter exceeds said set-point; and
outputting105said modified image.
Said step of generating104a modified image may comprise the step of using an artificial neural network (ANN) and/or a convolutional neural network (CNN). Other image modification techniques may be used as well. FIG.4shows a further embodiment of the method as defined herein. This embodiment is similar to the embodiment shown inFIG.3but includes the further step of analysing106the modified image.
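The comparison-driven branch of method100can be sketched in a few lines of Python. Laplacian variance as the quality parameter and blur/unsharp masking as the modification technique are simple stand-ins for the ANN/CNN-based techniques the text mentions; the tolerance value is an assumption.

```python
import cv2
import numpy as np

def normalize_quality(image, set_point, tol=0.05):
    """Sketch of method 100: measure a quality parameter (step 103 analysis),
    compare it with the set-point (111), then improve (104a), deteriorate
    (104b) or pass the image through, and output the result (105)."""
    def sharpness(img):
        # Variance of the Laplacian as a simple stand-in quality parameter.
        return cv2.Laplacian(img.astype(np.float64), cv2.CV_64F).var()

    current = sharpness(image)
    if abs(current - set_point) <= tol * set_point:
        return image                                         # already close to the target
    if current < set_point:                                  # 104a: improve (unsharp masking)
        blurred = cv2.GaussianBlur(image, (0, 0), 2.0)
        return cv2.addWeighted(image, 1.5, blurred, -0.5, 0)
    return cv2.GaussianBlur(image, (0, 0), 1.0)              # 104b: deteriorate (soften)
```

Whatever the three input images201-203 look like, driving each of them through such a routine with the same set-point is what pushes them toward the common moderate-quality output image211described below.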
The analysing can be done using an ANN and/or CNN, and may include segmentation of the modified image, for example, and/or identifying of one or more objects in said modified image. Analysis may include image reconstruction techniques as well. The image received by the data processing apparatus P may be provided by a charged particle microscope M as shown inFIG.1orFIG.2. Other ways of providing images to the data processing apparatus P are conceivable as well. FIG.5shows an example of how the method as defined herein operates, in an example. Here, three input images201-203are shown. The left image201has a low image quality (illustrated by lower contrast, lower sharpness, low detail, which can also be referred to as data where desired information can hardly be extracted with standard image processing techniques), the middle image202has an medium image quality (medium contrast, medium sharpness and medium details, which may also be referred to as data of sufficient quality to extract desired information in a complicated way with standard image processing techniques), and the right image203has a high image quality (high contrast, high sharpness, and high detail, which may also be referred to as data of sufficient quality to easily extract desired information with standard image processing techniques). The method as defined herein is able to determine one or more image parameters from the input images201-203, and then compare these one or more image parameters to a desired, targeted image quality parameter. In the method as defined herein, the desired image quality parameter corresponds to a targeted, more moderate image quality parameter value. Each of the images201-203is processed by the data processing apparatus and compared to a desired quality, and then an image modification technique is applied to generate an image that has the targeted image quality, for example. In the embodiment shown, the method is arranged for transforming the left input image201to a moderate quality image211by increasing the quality with respect to contrast, sharpness and detail. The method is also arranged for transforming the right input image203to a moderate quality image211by deteriorating the quality with respect to contrast, sharpness and detail. For the middle image202, where the determined quality parameter may not deviate much from the desired quality parameter, it is conceivable that no image transformation technique is applied. Hence, in an embodiment, the method comprises the step of maintaining the input image202as the output image211in case the determined image quality parameter is equal to, or within a limited threshold of, said desired image quality parameter. In other embodiments, the middle image202may nevertheless be transformed with respect to said quality parameter. In any event, the input images201-203will be processed, and may be improved, deteriorated, and/or passed through, eventually leading to (virtually) the same image211. Once the output image211is formed, a further analysis may be performed on the output image211, using a ANN and/or CNN, for example. InFIG.5, the ANN and/or CNN may be used to identify particles231-234and corresponding regional boundaries241-244. It is noted that this can be done for each of the three input images201-203. The resulting output image211should not be considered to be an averaged image211of the three input images201-203. It is noted that the method as defined herein is described in reference to images. 
The method as defined herein is in principle applicable to any 2D or 3D representation. The images as defined herein may relate in one embodiment to images that are obtainable by charged particle microscopy, including EM images, BSE images, spectral images such as EELS, etcetera. The method has been described above by means of several non-limiting examples. The desired protection is determined by the appended claims.
14,434
11861818
DESCRIPTION OF THE EMBODIMENTS Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims. The present application discloses a system and method for performing cascade inspection of defects. In particular, the disclosed system may employ different software modules to detect defects with different properties. When it is desirable to detect defects with multiple properties, the system may use the point-of-interest (POI) output of a first module as input for a second module. As used in the present disclosure, "POI" refers to a region or sub-region on a surface of a wafer that may contain a defect of a certain property. In this way, the second module only inspects the POIs in which the first module reports defects. Therefore, the efficiency and accuracy of defect inspection can be improved. FIG.1illustrates an exemplary electron beam inspection (EBI) system100consistent with embodiments of the present disclosure. As shown inFIG.1, EBI system100includes a main chamber101, a load/lock chamber102, an electron beam tool104, and an equipment front end module (EFEM)106. Electron beam tool104is located within main chamber101. EFEM106includes a first loading port106aand a second loading port106b. EFEM106may include additional loading port(s). First loading port106aand second loading port106breceive wafer cassettes that contain wafers (e.g., semiconductor wafers or wafers made of other material(s)) or samples to be inspected (wafers and samples are collectively referred to as "wafers" hereafter). One or more robot arms (not shown) in EFEM106transport the wafers to load/lock chamber102. Load/lock chamber102is connected to a load/lock vacuum pump system (not shown) which removes gas molecules in load/lock chamber102to reach a first pressure below the atmospheric pressure. After reaching the first pressure, one or more robot arms (not shown) transport the wafer from load/lock chamber102to main chamber101. Main chamber101is connected to a main chamber vacuum pump system (not shown) which removes gas molecules in main chamber101to reach a second pressure below the first pressure. After reaching the second pressure, the wafer is subject to inspection by electron beam tool104. FIG.2illustrates exemplary components of electron beam tool104, consistent with embodiments of the present disclosure. As shown inFIG.2, electron beam tool104includes a motorized stage200, and a wafer holder202supported by motorized stage200to hold a wafer203to be inspected. Electron beam tool104further includes an objective lens assembly204, electron detector206(which includes electron sensor surfaces206aand206b), an objective aperture208, a condenser lens210, a beam limit aperture212, a gun aperture214, an anode216, and a cathode218. Objective lens assembly204, in one embodiment, can include a modified swing objective retarding immersion lens (SORIL), which includes a pole piece204a, a control electrode204b, a deflector204c, and an exciting coil204d.
Electron beam tool104may additionally include an energy dispersive X-ray spectrometer (EDS) detector (not shown) to characterize the materials on the wafer. A primary electron beam220is emitted from cathode218by applying a voltage between anode216and cathode218. Primary electron beam220passes through gun aperture214and beam limit aperture212, both of which can determine the size of the electron beam entering condenser lens210, which resides below beam limit aperture212. Condenser lens210focuses primary electron beam220before the beam enters objective aperture208to set the size of the electron beam before entering objective lens assembly204. Deflector204cdeflects primary electron beam220to facilitate beam scanning on the wafer. For example, in a scanning process, deflector204ccan be controlled to deflect primary electron beam220sequentially onto different locations of the top surface of wafer203at different time points, to provide data for image reconstruction for different parts of wafer203. Moreover, deflector204ccan also be controlled to deflect primary electron beam220onto different sides of wafer203at a particular location, at different time points, to provide data for stereo image reconstruction of the wafer structure at that location. Further, in some embodiments, anode216and cathode218can be configured to generate multiple primary electron beams220, and electron beam tool104can include a plurality of deflectors204cto project the multiple primary electron beams220to different parts/sides of the wafer at the same time, to provide data for image reconstruction for different parts of wafer203. Exciting coil204dand pole piece204agenerate a magnetic field that begins at one end of pole piece204aand terminates at the other end of pole piece204a. A part of wafer203being scanned by primary electron beam220can be immersed in the magnetic field and can be electrically charged, which, in turn, creates an electric field. The electric field reduces the energy of impinging primary electron beam220near the surface of the wafer before it collides with the wafer. Control electrode204b, being electrically isolated from pole piece204a, controls an electric field on the wafer to prevent micro-arcing of the wafer and to ensure proper beam focus. A secondary electron beam222can be emitted from the part of wafer203upon receiving primary electron beam220. Secondary electron beam222can form a beam spot (e.g., one of beam spots240aand240b) on sensor surfaces206aand206bof electron detector206. Electron detector206can generate a signal (e.g., a voltage, a current, etc.) that represents an intensity of the beam spot, and provide the signal to a processing system (not shown inFIG.2). The intensity of secondary electron beam222, and the resultant beam spot, can vary according to the external and/or internal structure of wafer203. Moreover, as discussed above, primary electron beam220can be projected onto different locations of the top surface of the wafer, and/or different sides of the wafer at a particular location, to generate secondary electron beams222(and the resultant beam spot) of different intensities. Therefore, by mapping the intensities of the beam spots with the locations of wafer203, the processing system can reconstruct an image that reflects the internal and/or external structures of wafer203.
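The image-forming step described above amounts to accumulating the detected secondary-electron intensity at each scan location addressed by the deflector. The short sketch below (Python with NumPy) illustrates that bookkeeping; the (row, column, intensity) sample format and the fixed image size are assumptions made for illustration and are not details taken from the disclosure.

```python
import numpy as np

def reconstruct_image(samples, height, width):
    """Accumulate beam-spot intensities into a wafer image.

    `samples` is assumed to be an iterable of (row, col, intensity) tuples,
    one per detector reading, where (row, col) is the scan location on the
    wafer surface addressed by the deflector at that time point.
    """
    image = np.zeros((height, width), dtype=np.float32)
    counts = np.zeros((height, width), dtype=np.float32)
    for row, col, intensity in samples:
        image[row, col] += intensity   # secondary-electron intensity at this location
        counts[row, col] += 1.0
    # Average repeated readings; locations that were never scanned stay at zero.
    return image / np.maximum(counts, 1.0)
```

The same bookkeeping would apply whether the samples come from a single beam scanned sequentially or from multiple beamlets addressing different parts of the wafer at the same time.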
AlthoughFIG.2shows electron beam tool104as a single-beam inspection tool that uses only one primary electron beam to scan one location of wafer203at a time, it is contemplated that electron beam tool104may also be a multi-beam inspection tool that employs multiple primary electron beamlets to simultaneously scan multiple locations on wafer203. The present application does not limit the number of electron beams used in electron beam tool104. FIG.3is a block diagram of an exemplary defect inspection system300, consistent with embodiments of the present disclosure. Referring toFIG.3, defect inspection system300includes a wafer inspection system310and a controller320. Wafer inspection system310can be electron beam inspection (EBI) system100described in connection withFIG.1. It is appreciated that controller320can be part of and/or remote from EBI system100. Wafer inspection system310can be any inspection system that can generate inspection data representing an image of a wafer. The wafer can be a semiconductor wafer substrate, or a semiconductor wafer substrate having one or more epi-layers and/or process films. Wafer inspection system310can be any currently available or developing wafer inspection system. The embodiments of the present disclosure do not limit the specific type for wafer inspection system310as long as it can generate a wafer image having a resolution high enough to observe key features on the wafer (e.g., less than 20 nm). Controller320has a communication interface322that is electrically coupled to the wafer inspection system310to receive the inspection data. Controller320also includes a processor324that is configured to construct an image of the wafer based on the inspection data, analyze the wafer image, and detect wafer defects that appear on the wafer image. Processor324may include one or more of a central processing unit (CPU), an image processing unit, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), etc. In some embodiments, processor324may be one or more known or custom processing devices designed to perform functions of the disclosed defect inspection methods, such as a single core or multiple core processors capable of executing parallel processes simultaneously. For example, processor324may be a single core processor configured with virtual processing technologies. In certain embodiments, processor324may use logical processors to simultaneously execute and control multiple processes. Processor324may implement virtual machine technologies, or other known technologies to provide the ability to execute, control, run, manipulate, store, etc. multiple software processes, applications, programs, etc. In some embodiments, processor324may include a multiple-core processor arrangement (e.g., dual core, quad core, etc.) configured to provide parallel processing functionalities to execute multiple processes simultaneously. It is appreciated that other types of processor arrangements could be implemented that provide for the capabilities disclosed herein. Controller320may also include memory326that includes instructions to enable processor324to execute one or more applications, such as the disclosed defect inspection processes, and any other type of application or software known to be available on computer systems. Alternatively or additionally, the instructions, application programs, etc. may be stored in an internal database or an external storage (not shown) in direct communication with controller320. 
The internal database and/or external storage may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, removable, non-removable, or other type of storage device or tangible and/or non-transitory computer-readable medium. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. Consistent with the disclosed embodiments, memory326may include instructions that, when executed by processor324, perform one or more processes consistent with the functionalities disclosed herein. Moreover, processor324may execute one or more programs located remotely from controller320. For example, controller320may access one or more remote programs that, when executed, perform functions related to disclosed embodiments. Consistent with the disclosed embodiments, memory326may include instructions implemented as a plurality of modules, which can be hardware modules, software modules, and/or a combination of both. Each of the plurality of modules may be called by processor324to detect defects having a different property. For example, the plurality of modules may include a first module configured to detect bridges between lines, a second module configured to detect broken lines, a third module configured to detect certain types of critical-dimension (CD) errors, etc. To detect defects of a specified property, processor324may call the corresponding module and input the inspection data to the module, such that the module may output POIs that include defects of the specified property. Controller320may also include a user interface328. User interface328may include a display, such as a cathode ray tube (CRT), a liquid crystal display (LCD), or a touch screen, for displaying information to a computer user. For example, the display may be used to present the defect inspection result to a user. Interface328may also include an input device, including alphanumeric and other keys, for communicating information and command selections to processor324. Another type of user input device is a cursor control, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor324and for controlling cursor movement on the display. The input device typically has two degrees of freedom in two axes, a first axis (for example, x) and a second axis (for example, y), that allows the device to specify positions in a plane. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor. For example, a user may use the input device to select an inspection area of a wafer and/or enter the defect properties to be examined. In some embodiments, user interface328may be configured to implement a graphical user interface (GUI) that can be stored in a mass storage device as executable software codes that are executed by the one or more computing devices.
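Because each defect-detection module stored in memory326only needs to accept an inspection image (or a set of previously reported POIs) and return the POIs in which it finds defects of its own property, the modules can share a very small software interface. The sketch below is one possible arrangement under that assumption; the class names, the `detect` signature, and the bridge-detection criterion are illustrative and are not taken from the disclosure.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class POI:
    """A point of interest: a sub-region of the wafer image that may contain a defect."""
    x: int
    y: int
    width: int
    height: int
    properties: List[str] = field(default_factory=list)  # properties that flagged this region

class DefectModule(ABC):
    """One software module, configured to detect defects having a single property."""

    def __init__(self, property_name: str):
        self.property_name = property_name

    @abstractmethod
    def detect(self, image: np.ndarray, regions: Optional[List[POI]] = None) -> List[POI]:
        """Return POIs containing defects of this module's property.

        If `regions` is given, only those sub-regions are inspected;
        otherwise the whole inspection image is searched."""

class BridgeDetector(DefectModule):
    """Illustrative module for bridges between lines (placeholder criterion only)."""

    def detect(self, image, regions=None):
        height, width = image.shape
        candidates = regions if regions is not None else [POI(0, 0, width, height)]
        found = []
        for r in candidates:
            patch = image[r.y:r.y + r.height, r.x:r.x + r.width]
            if patch.size and patch.mean() > 0.8:  # stand-in for real bridge analysis
                found.append(POI(r.x, r.y, r.width, r.height,
                                 r.properties + [self.property_name]))
        return found
```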
This and other modules can include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, fields, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. FIG.4is a flowchart illustrating a process400for cascade defect inspection, consistent with embodiments of the present disclosure. Referring toFIG.4, process400may be performed by a controller, such as controller320described in connection withFIG.3. When it is desired to detect defects with multiple properties, the controller may call multiple modules, each of which is configured to detect defects with a different property. The controller may call the multiple modules in a serial manner, and use the defect output of a preceding module as the input for a subsequent module. This way, the surface area of a wafer can be inspected in a cascading style to detect multiple defect properties. Specifically, as shown inFIG.4, the controller may first call Module1and input inspection data received from a wafer inspection system, e.g., inspection system310(FIG.3), to Module1. As such, Module1outputs a first set of POIs, each of which is a sub-region of the wafer that includes the defects having a first property. The controller then calls Module2and inputs the first set of POIs to Module2, such that Module2outputs a second set of POIs that includes defects having a second property. Because Module2inspects only the areas in which Module1reports defects, i.e., the first set of POIs, the controller avoids applying Module2to the entire inspection area of the wafer. Thus, the inspection throughput can be improved. The controller may similarly call Module3and input the second set of POIs to Module3, such that Module3outputs a third set of POIs that includes defects with a third property. The controller may proceed in a similar manner to Module N, wherein N is an integer equal to or larger than 2. In this way, a subsequent module inspects only the POIs in which the preceding module reports defects. Finally, Module N outputs an Nth set of POIs that includes defects with the N properties corresponding to Modules1,2, . . . , N, respectively. Next, various embodiments of the disclosed cascade defect inspection method will be described.FIG.5Ais a flowchart illustrating an exemplary cascade defect inspection. Referring toFIG.5A, a controller may input inspection data of a wafer, e.g., inspection data received from an inspection tool or outputted by a preceding module, to Module M1. The controller calls Module M1to output a first set of POIs that includes defects having a first property, which is then used as input for Module M2. Subsequently, Module M2outputs a second set of POIs that includes defects having a second property. The controller may report the second set of POIs as defects having both the first and second properties. The controller may additionally use the second set of POIs as input for Module M3, which then outputs a third set of POIs that includes defects having a third property. The controller may report the third set of POIs as defects having the third property. In some embodiments, the controller may report the defects to a user via a user interface, such as the display in user interface328. FIG.5Bis a flowchart illustrating another exemplary cascade defect inspection.
UnlikeFIG.5A, in the embodiment shown inFIG.5B, the first set of POIs outputted by Module M1is not used as input for Module M2or M3, but rather reported by the controller as defects having the first property. Moreover, the controller inputs inspection data other than the data outputted by Module M1to Module M2, which outputs the second set of POIs that includes defects having the second property. The second set of POIs is reported as defects having the second property, and further used as input for Module M3. Finally, Module M3outputs the third set of POIs that includes defects having the third property, which is reported by the controller as defects having both the second and third properties. FIG.5Cis a flowchart illustrating another exemplary cascade defect inspection. Referring toFIG.5C, inspection data of the wafer is input to Modules M1and M2separately. The output of Module M1is not used as input for Module M2. Rather, the outputs of Modules M1and M2are combined and used as input for Module M3. The output of Module M1is reported as defects having the first property. The output of Module M2is reported as defects having the second property. And the output of Module M3is reported as defects having i) both the first and third properties, and/or ii) both the second and third properties. FIG.5Dis a flowchart illustrating another exemplary cascade defect inspection. Referring toFIG.5D, the output of Module M1is reported as defects having the first property and used as input for Module M2. The output of Module M2is reported as defects having both the first and second properties, and used as input for Module M6. Similarly, the output of Module M3is reported as defects having the third property and used as input for Module M4. The output of Module M4is reported as defects having both the third and fourth properties, and used as input for Module M6. Moreover, Module M5is called to output POIs that include defects having the fifth property. The outputs of Modules M2, M4, and M5are then combined as input for Module M6, which has an output reported as defects having i) the first, second, and sixth properties, ii) the third, fourth, and sixth properties, and/or iii) the fifth and sixth properties. Next, to further illustrate the application of the disclosed cascade defect inspection methods, two examples are described. In the first example,FIG.6Ais a schematic diagram illustrating an exemplary process for cascade defect inspection, andFIG.6Bis a schematic diagram illustrating the process shown inFIG.6A. For example, users may be interested in the sizes of via holes only within a small area in which the line gap is less than 30 nm. As such, referring toFIG.6A, the controller may first call a Module M1configured to measure line gaps, and input the inspection image of the wafer to Module M1. Module M1outputs a set of POIs that includes line gaps less than 30 nm. For example,FIG.6Bshows a part of a pattern printed on the wafer. The pattern includes multiple via holes and conductor lines. The designed line gap is 40 nm. Module M1outputs a POI centered at Point A, where the line gap is less than 30 nm. Referring toFIG.6A, the controller subsequently calls a Module M2configured to measure via hole sizes, and inputs the output of Module M1to Module M2. For example, referring toFIG.6B, the controller calls Module M2to inspect only those via holes in the POI(s) outputted by Module M1.
In this way, the controller avoids applying Module M2to the entire inspection image, and thus the inspection throughput can be improved. In the second example,FIG.7Ais a schematic diagram illustrating an exemplary process for cascade defect inspection, andFIG.7Bis a schematic diagram illustrating the process shown inFIG.7A. For example, users may be interested in the sizes of via holes only within a small area in which i) the line gap is less than 30 nm or ii) the line width is less than 10 nm. As such, referring toFIG.7A, the controller calls Module M1, which is configured to measure line gaps, and inputs the inspection image of the wafer to Module M1. Module M1outputs a set of POIs that includes line gaps less than 30 nm. The controller also calls a Module M3configured to measure line widths, and inputs the inspection image of the wafer to Module M3. Module M3outputs a set of POIs that includes line widths less than 10 nm. For example,FIG.7Bshows a part of a pattern printed on the wafer. The pattern includes multiple via holes and conductor lines. The designed line gap is 40 nm. Module M1outputs a POI centered at Point A, where the line gap is less than 30 nm. Moreover, the designed line width is 20 nm. Module M3outputs a POI centered at Point B, where the line width is less than 10 nm. Referring toFIG.7A, the controller subsequently combines the outputs of Modules M1and M3, and uses them as the input for Module M2, which is configured to measure via hole sizes. For example, referring toFIG.7B, Module M2is called to inspect only those via holes in the POIs outputted by Modules M1and M3. In this way, the controller avoids applying Module M2to the entire inspection image. According to the above disclosed embodiments, defects of a first property are used as POIs for detecting defects of a second property. Compared to a typical defect inspection method that inspects defects of different properties separately, the disclosed cascade defect inspection method can show the correlations of different defect properties. Moreover, because the disclosed method does not need to repeatedly inspect the entire wafer image, the throughput is improved. Moreover, a defect that is recognized by multiple modules is less likely to be a false positive. Therefore, the nuisance rate is lowered. The embodiments may further be described using the following clauses:1. A computer system comprising:a memory storing instructions implemented as a plurality of modules, each of the plurality of modules being configured to detect defects having a different property; and a controller configured to cause the computer system to:receive inspection data representing an image of a wafer;input the inspection data to a first module of the plurality of modules, the first module outputs a first set of points of interest (POIs) having a first property;input the first set of POIs to a second module of the plurality of modules, the second module outputs a second set of POIs having a second property; andreport the second set of POIs as defects having both the first property and the second property.2. The computer system of clause 1, wherein the controller is further configured to cause the computer system to:report the first set of POIs as defects having the first property.3.
The computer system of any one of clauses 1 and 2, wherein the controller is further configured to cause the computer system to:input the inspection data to a third module of the plurality of modules, the third module outputs a third set of POIs having a third property;input the first and third sets of POIs to the second module, the second module outputs a fourth set of POIs having the second property; andreport the fourth set of POIs as defects having i) the first property or the third property, and ii) the second property.4. The computer system of any one of clauses 1-3, wherein the controller is further configured to cause the computer system to:input the second set of POIs to a fourth module of the plurality of modules, the fourth module outputs a fifth set of POIs having a fourth property; andreport the fifth set of POIs as defects having the first, second, and fourth properties.5. The computer system of any one of clauses 1-4, wherein POIs outputted by each of the plurality of modules are sub-regions of the wafer that include possible defects.6. The computer system of any one of clauses 1-5, wherein the property of a defect includes at least one of a defect size and a defect type.7. The computer system of any one of clauses 1-6, wherein the computer system is coupled with an electron-beam inspection tool configured to scan the wafer with one or more primary electron beams and to generate the inspection data based on one or more sets of secondary electrons reflected from the wafer, wherein the controller is configured to cause the computer system to receive the inspection data from the electron-beam inspection tool and generate the inspection image based on the inspection data.8. A computer system comprising:a memory storing instructions; anda processor electronically coupled to the memory and configured to execute the instructions to cause the computer system to:receive inspection data representing an image of a wafer;determine, in the inspection image, a first set of points of interest (POIs) having a first property;determine, in the first set of POIs, a second set of POIs having a second property; andreport the second set of POIs as defects having both the first property and the second property.9. The computer system of clause 8, wherein the processor is further configured to execute the instructions to cause the computer system to:report the first set of POIs as defects having the first property.10. The computer system of any one of clauses 8 and 9, wherein the processor is further configured to execute the instructions to cause the computer system to:determine, in the inspection image, a third set of POIs having a third property;determine, in the first and third set of POIs, a fourth set of POIs having the second property; and report the fourth set of POIs as defects having i) the first property or the third property, and ii) the second property.11. The computer system of any one of clauses 8-10, wherein the processor is further configured to execute the instructions to cause the computer system to:determine, in the second set of POIs, a fifth set of POIs having a fourth property; and report the fifth set of POIs as defects having the first, second, and fourth properties.12. The computer system of any one of clauses 8-11, wherein the POIs determined by the processor are sub-regions of the wafer that include possible defects.13. The computer system of any one of clauses 8-12, wherein the property of a defect includes at least one of a defect size and a defect type.14.
The computer system of any one of clauses 8-13, wherein the computer system is coupled with an electron-beam inspection tool configured to scan the wafer with one or more primary electron beams and to generate the inspection data based on one or more sets of secondary electrons reflected from the wafer, wherein the processor is configured to cause the computer system to receive the inspection data from the electron-beam inspection tool and generate the inspection image based on the inspection data.15. A defect inspection system comprising:an inspection tool for inspecting a wafer;a memory storing instructions implemented as a plurality of modules, each of the plurality of modules being configured to detect defects with a different property; anda controller electronically coupled to the inspection tool and memory, the controller being configured to cause the defect inspection system to:receive, from the inspection tool, inspection data representing an image of the wafer; input the inspection data to a first module of the plurality of modules, the first module outputs a first set of points of interest (POIs) having the first property;input the first set of POIs to a second module of the plurality of modules, the second module outputs a second set of POIs having the second property; andreport the second set of POIs as defects having both the first property and the second property.16. The defect inspection system of clause 15, wherein the controller is further configured to cause the defect inspection system to:report the first set of POIs as defects having the first property.17. The defect inspection system of any one of clauses 15 and 16, wherein the controller is further configured to cause the defect inspection system to:input the inspection data to a third module of the plurality of modules, the third module outputs a third set of POIs having a third property;input the first and third sets of POIs to the second module, the second module outputs a fourth set of POIs having the second property; andreport the fourth set of POIs as defects having i) the first property or the third property, and ii) the second property.18. The defect inspection system of any one of clauses 15-17, wherein the controller is further configured to cause the defect inspection system to:input the second set of POIs to a fourth module of the plurality of modules, the fourth module outputs a fifth set of POIs having a fourth property; andreport the fifth set of POIs as defects having the first, second, and fourth properties.19. The defect inspection system of any one of clauses 15-18, wherein POIs outputted by each of the plurality of modules are sub-regions of the wafer that include possible defects.20. The defect inspection system of any one of clauses 15-19, wherein the property of a defect includes at least one of a defect size and a defect type.21. The defect inspection system of any one of clauses 15-20, wherein:the inspection tool is configured to scan the wafer with one or more primary electron beams and to generate the inspection data based on one or more sets of secondary electrons reflected from the wafer; andthe controller is configured to cause the defect inspection system to generate the inspection image based on the inspection data.22.
A defect inspection system comprising:an inspection tool for inspecting a wafer;a memory storing instructions; anda processor electronically coupled to the memory and the inspection tool, the processor being configured to execute the instructions to cause the defect inspection system to:receive, from the inspection tool, inspection data representing an image of the wafer;determine, in the inspection image, a first set of points of interest (POIs) having a first property;determine, in the first set of POIs, a second set of POIs having a second property; andreport the second set of POIs as defects having both the first property and the second property.23. The defect inspection system of clause 22, wherein the processor is further configured to execute the instructions to cause the defect inspection system to:report the first set of POIs as defects having the first property.24. The defect inspection system of any one of clauses 22 and 23, wherein the processor is further configured to execute the instructions to cause the defect inspection system to:determine, in the inspection image, a third set of POIs having a third property;determine, in the first and third set of POIs, a fourth set of POIs having the second property; andreport the fourth set of POIs as defects having i) the first property or the third property, and ii) the second property.25. The defect inspection system of any one of clauses 22-24, wherein the processor is further configured to execute the instructions to cause the defect inspection system to:determine, in the second set of POIs, a fifth set of POIs having a fourth property; and report the fifth set of POIs as defects having the first, second, and fourth properties.26. The defect inspection system of any one of clauses 22-25, wherein the POIs determined by the processor are sub-regions of the wafer that include possible defects.27. The defect inspection system of any one of clauses 22-26, wherein the property of a defect includes at least one of a defect size and a defect type.28. The defect inspection system of any one of clauses 22-27, wherein:the inspection tool is configured to scan the wafer with one or more primary electron beams and to generate the inspection data based on one or more sets of secondary electrons reflected from the wafer; andthe processor is configured to cause the defect inspection system to generate the inspection image based on the inspection data.29. A method comprising:receiving inspection data representing an image of a wafer;inputting the inspection data to a first module of a plurality of modules, each of the plurality of modules being configured to detect defects with a different property, the first module outputting a first set of points of interest (POIs) having a first property;inputting the first set of POIs to a second module of the plurality of modules, the second module outputting a second set of POIs having a second property; and reporting the second set of POIs as defects having both the first property and the second property.30. The method of clause 29, further comprising:reporting the first set of POIs as defects having the first property.31.
The method of any one of clauses 29 and 30, wherein the method further comprises: inputting the inspection data to a third module of the plurality of modules, the third module outputting a third set of POIs having a third property;inputting the first and third sets of POIs to the second module, the second module outputting a fourth set of POIs having the second property; andreporting the fourth set of POIs as defects having i) the first property or the third property, and ii) the second property.32. The method of any one of clauses 29-31, wherein the method further comprises:inputting the second set of POIs to a fourth module of the plurality of modules, the fourth module outputting a fifth set of POIs having a fourth property; andreporting the fifth set of POIs as defects having the first, second, and fourth properties.33. The method of any one of clauses 29-32, wherein POIs outputted by each of the plurality of modules are sub-regions of the wafer that include possible defects.34. The method of any one of clauses 29-33, wherein the property of a defect includes at least one of a defect size and a defect type.35. The method of any one of clauses 29-34, further comprising:receiving the inspection data from an electron-beam inspection tool that scans the wafer with one or more primary electron beams and generates the inspection data based on one or more sets of secondary electrons reflected from the wafer; andgenerating the inspection image based on the inspection data.36. A method comprising:receiving inspection data representing an image of a wafer;determining, in the inspection image, a first set of points of interest (POIs) having a first property;determining, in the first set of POIs, a second set of POIs having a second property; andreporting the second set of POIs as defects having both the first property and the second property.37. The method of clause 36, further comprising:reporting the first set of POIs as defects having the first property.38. The method of any one of clauses 36 and 37, wherein the method further comprises: determining, in the inspection image, a third set of POIs having a third property;determining, in the first and third set of POIs, a fourth set of POIs having the second property; andreporting the fourth set of POIs as defects having i) the first property or the third property, and ii) the second property.39. The method of any one of clauses 36-38, wherein the method further comprises:determining, in the second set of POIs, a fifth set of POIs having a fourth property; and reporting the fifth set of POIs as defects having the first, second, and fourth properties.40. The method of any one of clauses 36-39, wherein the POIs are sub-regions of the wafer that include possible defects.41. The method of any one of clauses 36-40, wherein the property of a defect includes at least one of a defect size and a defect type.42. The method of any one of clauses 36-41, further comprising:receiving the inspection data from an electron-beam inspection tool that scans the wafer with one or more primary electron beams and generates the inspection data based on one or more sets of secondary electrons reflected from the wafer; andgenerating the inspection image based on the inspection data.43.
A non-transitory computer-readable medium storing a set of instructions that is executable by one or more processors of one or more devices to cause the one or more devices to perform a method comprising:receiving inspection data representing an image of a wafer;inputting the inspection data to a first module of a plurality of modules, each of the plurality of modules being configured to detect defects with a different property, the first module outputting a first set of points of interest (POIs) having a first property;inputting the first set of POIs to a second module of the plurality of modules, the second module outputting a second set of POIs having a second property; andreporting the second set of POIs as defects having both the first property and the second property.44. The medium of clause 43, wherein the set of instructions that is executable by the one or more processors of the one or more devices to cause the one or more devices to further perform:reporting the first set of POIs as defects having the first property.45. The medium of any one of clauses 43 and 44, wherein the set of instructions that is executable by the one or more processors of the one or more devices to cause the one or more devices to further perform:inputting the inspection data to a third module of the plurality of modules, the third module outputting a third set of POIs having a third property;inputting the first and third sets of POIs to the second module, the second module outputting a fourth set of POIs having the second property; andreporting the fourth set of POIs as defects having i) the first property or the third property, and ii) the second property.46. The medium of any one of clauses 43-45, wherein the set of instructions that is executable by the one or more processors of the one or more devices to cause the one or more devices to further perform:inputting the second set of POIs to a fourth module of the plurality of modules, the fourth module outputting a fifth set of POIs having a fourth property; andreporting the fifth set of POIs as defects having the first, second, and fourth properties.47. The medium of any one of clauses 43-46, wherein POIs outputted by each of the plurality of modules are sub-regions of the wafer that include possible defects.48. The medium of any one of clauses 43-47, wherein the property of a defect includes at least one of a defect size and a defect type.49. The medium of any one of clauses 43-48, wherein the set of instructions that is executable by the one or more processors of the one or more devices to cause the one or more devices to further perform:receiving the inspection data from an electron-beam inspection tool that scans the wafer with one or more primary electron beams and generates the inspection data based on one or more sets of secondary electrons reflected from the wafer; andgenerating the inspection image based on the inspection data.50. A non-transitory computer-readable medium storing a set of instructions that is executable by one or more processors of one or more devices to cause the one or more devices to perform a method comprising:receiving inspection data representing an image of a wafer;determining, in the inspection image, a first set of points of interest (POIs) having a first property;determining, in the first set of POIs, a second set of POIs having a second property; and reporting the second set of POIs as defects having both the first property and the second property.51.
The medium of clause 50, wherein the set of instructions that is executable by the one or more processors of the one or more devices to cause the one or more devices to further perform: reporting the first set of POIs as defects having the first property.52. The medium of any one of clauses 50 and 51, wherein the set of instructions that is executable by the one or more processors of the one or more devices to cause the one or more devices to further perform:determining, in the inspection image, a third set of POIs having a third property;determining, in the first and third set of POIs, a fourth set of POIs having the second property; andreporting the fourth set of POIs as defects having i) the first property or the third property, and ii) the second property.53. The medium of any one of clauses 50-52, wherein the set of instructions that is executable by the one or more processors of the one or more devices to cause the one or more devices to further perform:determining, in the second set of POIs, a fifth set of POIs having a fourth property; and reporting the fifth set of POIs as defects having the first, second, and fourth properties.54. The medium of any one of clauses 50-53, wherein the POIs are sub-regions of the wafer that include possible defects.55. The medium of any one of clauses 50-54, wherein the property of a defect includes at least one of a defect size and a defect type.56. The medium of any one of clauses 50-55, wherein the set of instructions that is executable by the one or more processors of the one or more devices to cause the one or more devices to further perform:receiving the inspection data from an electron-beam inspection tool that scans the wafer with one or more primary electron beams and generates the inspection data based on one or more sets of secondary electrons reflected from the wafer; andgenerating the inspection image based on the inspection data. It will be appreciated that the present invention is not limited to the exact construction that has been described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from the scope thereof. It is intended that the scope of the invention should only be limited by the appended claims.
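The cascade flow described above with reference to FIGS. 4 and 5A-5D reduces, in software, to feeding the POIs reported by one module into the next so that later modules never rescan the entire image. The sketch below is a minimal illustration of that chaining; it assumes the illustrative `POI` and `DefectModule` types from the earlier sketch and is not presented as the only way to implement the disclosed method.

```python
from typing import Dict, List

def run_cascade(image, modules) -> Dict[str, List]:
    """Serial cascade per FIG. 4: module k inspects only the POIs reported by
    module k-1, so a POI surviving all stages carries every property that flagged it."""
    reports = {}
    pois = None                       # None => the first module scans the full image
    for module in modules:
        pois = module.detect(image, pois)
        reports[module.property_name] = pois
        if not pois:                  # nothing left to pass downstream; stop early
            break
    return reports
```

A configuration like FIG. 7A, where two independent modules both feed a third, would instead concatenate the two POI lists before calling the third module's `detect`.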
42,917
11861819
DETAILED DESCRIPTION The disclosure presented in the following written description and the various features and advantageous details thereof, are explained more fully with reference to the non-limiting examples included in the accompanying drawings and as detailed in the description, which follow. Descriptions of well-known components have been omitted to not unnecessarily obscure the principal features described herein. The examples used in the following description are intended to facilitate an understanding of the ways in which the disclosure can be implemented and practiced. A person of ordinary skill in the art would read this disclosure to mean that any suitable combination of the functionality or exemplary embodiments below could be combined to achieve the subject matter claimed. The disclosure includes either a representative number of species falling within the scope of the genus or structural features common to the members of the genus so that one of ordinary skill in the art can recognize the members of the genus. Accordingly, these examples should not be construed as limiting the scope of the claims. FIG.1illustrates an example system100for field calibrating an image capturing module140. The system100or portions thereof can be associated with an entity, which can include any entity, such as a business, company (e.g., a railway company, a transportation company, etc.), or a government agency (e.g., a department of transportation, a department of public safety, etc.) that field calibrates image capturing module140. The elements of the system100can be implemented using any suitable combination of hardware, firmware, and software. For example, the elements of the system100can be implemented using one or more components of the computer system ofFIG.5. The system100includes a vehicle110, a vehicle encoder120, a beam130, one or more image capturing modules140, a computer150, a network160, a target170, and end plates180. The vehicle110can include any machine capable of automated movement. Vehicle110can be a car, a rail vehicle, a truck, a bus, an aircraft, or any other machine suitable for mobility. The vehicle110can operate at any speed that allows one or more components (e.g., sensors, cameras, etc.) of beam130to capture images. For example, the vehicle110can be a rail bound vehicle that travels at 65 miles per hour (mph). The roadway112can include any path that accommodates the vehicle110. For example, the vehicle110can travel along the roadway112. The roadway112can include a road, a highway, a railroad track, a water way, and the like. The vehicle encoder120can include a rotary encoder or other timing device used to measure axle rotation. The vehicle encoder120can measure the number of times an axle makes a revolution. The vehicle encoder120can be attached to an axle of the vehicle110. The vehicle encoder120can be physically and/or logically connected to one or more components of the system100. For example, the vehicle encoder120can be physically and/or logically connected to one or more cameras and/or sensors of the image capturing module140. As another example, the vehicle encoder120can be physically and/or logically connected to the computer150. The vehicle encoder120can communicate with a camera controller of the image capturing module140to ensure that a camera captures images of the same perspective and proportion regardless of the speed of travel of the vehicle110. 
For example, the vehicle encoder120can be synchronized with multiple cameras of the image capturing modules140to ensure that all cameras are taking images at the same time. As another example, the vehicle encoder120can be synchronized with a camera of the image capturing module140to ensure that a camera traveling with the vehicle110at a first speed (e.g., 10 miles per hour) captures images of the same perspective and proportion as a camera traveling with the vehicle110at a second speed (e.g., 65 miles per hour). In another embodiment, the vehicle encoder120can couple with the vehicle110in a mechanical manner to reduce or eliminate lost motion resulting in undesirable artifacts in images generated from the image capturing module140. For example, the lost motion can include slack in the mechanical coupling resulting in distortion in the images. In another embodiment, the mechanical manner can reduce the lost motion using components machined specifically for the vehicle encoder. For example, the components machined specifically for the vehicle encoder can ensure flexible and rigid fitting to minimize vibration and other mechanical interference resulting in the lost motion. In another embodiment, the vehicle encoder120can couple with the image capturing module140in an electrical manner including an electronic filter. For example, the electronic filter can filter trigger signals sent to the camera of the image capturing module140, smoothing the trigger signal to compensate for asynchronous signal elements. In one embodiment, the asynchronous signal elements can be smoothed using an averaging filter to pass the trigger signal values over a user-defined time frame. For example, the averaging filter can recreate a smoothed trigger signal to distribute to the camera of the image capturing module140. In another embodiment, the electronic filter is executed on an encoder controller and receives a user-defined number of pulses from the vehicle encoder120. In one embodiment, the electronic filter is executed on an encoder and receives a variable number of pulses over a user-defined time frame. The beam130can include a structure that contains and orients components (e.g., the image capturing modules140) used to capture images. In certain embodiments, the beam130operates similar to a flatbed document scanner with the exception that the beam130is in motion while capturing images of stationary physical objects. The beam130can engage with the vehicle110. For example, the beam130can be bolted to a sub-frame attached to the vehicle110. In the illustrated embodiment ofFIG.1, the beam130has three sections that include two end sections and a center section. The beam130has a gullwing configuration such that the center section bends inward toward the center of the beam130. The gullwing configuration allows the image capturing components (e.g., sensors, cameras, etc.) of the image capturing modules140within the beam130to be properly oriented with respect to the physical objects being captured. In certain embodiments, the center section of the beam130is omitted, and each end section is connected to the vehicle110. The beam130can be made of metal (e.g., steel or aluminum), plastic, or any other material suitable for housing components of the beam130and for attaching the beam130to the vehicle110. The beam130can include one or more openings. Openings can provide for the placement of the image capturing modules140within the beam130.
Openings can allow for installation, adjustment, and maintenance of the image capturing modules140. While the beam130is illustrated inFIG.1as having a particular size and shape, the beam130can have any size and shape suitable to house and orient the image capturing modules140. Other factors that can contribute to the design of the beam130include shock resistance, vibration resistance, weatherproofing considerations, durability, ease of maintenance, calibration considerations, and ease of installation. In another embodiment, the beam130can include a plurality of sub-beams. For example, the beam130can include two separate sub-beams, each including a plurality of cameras. In one embodiment, the system100with the plurality of sub-beams can reduce complexity of maintenance and simplify construction of each of the sub-beams. In another embodiment, the system100with the plurality of sub-beams can reduce complexity of maintenance by reducing the number of personnel needed, resulting in the maintenance of control in construction tolerances. For example, the sub-beams can include 33% fewer welds and cuts to construct compared to a full beam. The image capturing modules140of system100are used to capture images while the vehicle110is in motion. Each image capturing module140can include one or more sensors, one or more cameras, and the like. One or more of the image capturing modules140can be attached to the vehicle110at any location that allows the image capturing modules140to capture images of the environment surrounding the vehicle110. In the illustrated embodiment ofFIG.1, the image capturing modules140are located within the beam130. In certain embodiments, each end section of the beam130houses one or more of the image capturing modules140. For example, a first end section of the beam130can house the image capturing module140that includes two downward facing cameras that capture images of tie and ballast areas of a rail. The first end section of the beam130can house the two downward facing cameras in a portion of the first end section that is substantially horizontal to the rail. The second end section of the beam130opposite the first end section can house two of the image capturing modules140that each include two angled cameras that capture images of both sides of the rail and rail fastening system. The second end section of the beam130can house the four angled cameras in portions of the second end section that are at an angle (e.g., a 45 degree angle) to the rail. The image capturing modules140can include various types of sensors depending on sensing and/or measuring requirements. In one embodiment, sensors housed by the image capturing modules140can include optical sensors (e.g., cameras for visible light (mono and color), infrared, ultraviolet, and/or thermal), motion sensors (e.g., gyroscopes and accelerometers), light detection and ranging (LIDAR) sensors, hyperspectral sensors, Global Positioning System (GPS) sensors, and the like. Optical sensors and lasers can be used together for laser triangulation to measure deflection or profile. LIDAR sensors can be used for generating three-dimensional (3D) point-cloud data. Hyperspectral sensors can be used for specific wavelength responses. An example of the image capturing module140is described inFIG.2below. The computer150can represent any suitable computing component that can be used to process information for system100. In one embodiment, the computer150can coordinate one or more components of system100.
In another embodiment, the computer150can receive data from the image capturing modules140and/or the vehicle encoder120. The computer150can monitor inputs and/or outputs of the image capturing modules140and/or the vehicle encoder120. In another embodiment the computer150can include a communications function that allows users (e.g., a technician) to engage the system100directly. For example, a user can access the computer150through an interface (e.g., a screen, a graphical user interface (GUI), or a panel) of the computer150. The computer150can be a laptop computer, a desktop computer, a smartphone, a tablet, a personal digital assistant, a wearable device, and the like. The computer150can be located inside or external to the vehicle110. The computer150can communicate with one or more components of the system100via the network160. The network160can include any type of network that facilitates communication between components of the system100. One or more portions of the network160can include an ad-hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, a 3G network, a 4G network, a 5G network, a Long Term Evolution (LTE) cellular network, a combination of two or more of these, or other suitable types of networks. One or more portions of the network160can include one or more access (e.g., mobile access), core, and/or edge networks. The network160can be any communications network, such as a private network, a public network, a connection through Internet, a mobile network, a Wi-Fi network, a Bluetooth network, etc. One or more components of system100can communicate over the network160. For example, the computer150can communicate over the network160, including receiving information from the image capturing modules140and/or the vehicle encoder120. The target170can include an object used to calibrate the image capturing module140and/or the vehicle encoder120. For example, the target170can include a calibration bar. In another embodiment, the calibration bar can include a cylindrical object made of a sturdy material. For example, the sturdy material can include aluminum (or some other metal), PVC, wood, or some other material suitable for stabilizing the calibration bar. In another embodiment, the target170can include a calibration pattern, which can be any suitable size, shape, and/or design. For example, the calibration pattern design can include alternating solid colors, a checkerboard pattern, a chessboard pattern, a circle grid pattern, a CharucoBoard pattern, and the like. For example, the calibration pattern can be a printed black-and-white alternating pattern that includes multiple black and white sections. In another embodiment, the calibration pattern can include units with an unequal length to width ratio. For example, the length of each unit can be twice as long as the width of each unit. The end plates180can include at least one object to which the target170attaches. For example, the end plates180can include structures to stabilize a position of the target170. In one embodiment, the structures can be metallic (e.g., aluminum), plastic, wooden, or some other sturdy material for stabilizing the target170. 
In another embodiment, the end plates180can be used to stabilize the target170for purposes of calibrating the image capturing module140. In one embodiment, the end plates180are placed along the rail by the operator. For example, the end plates180can include components small and light enough to be movable for the operator. Alternatively, the end plates180can be stationary, as part of the rail. In operation, a vehicle encoder rate is programmed into the vehicle encoder120. The vehicle encoder rate is a number of electrical pulses generated by vehicle encoder120in one revolution of a shaft of the vehicle encoder120. The vehicle encoder rate can be determined from calibration data previously generated during calibration procedures, as described inFIGS.3and5below. If calibration data is not available, an arbitrary initial value for the vehicle encoder rate can be programmed into the vehicle encoder120. In certain embodiments, the vehicle encoder rate that is programmed into the vehicle encoder120is an integer. In certain embodiments, an operator programs the vehicle encoder rate into the vehicle encoder120. The vehicle encoder120and the image capturing module140of the system100are secured to the vehicle110. The target170can be secured to the roadway112in view of the camera of the image capturing module140to be calibrated. The target170is located perpendicularly to the axis of the camera of the image capturing module140. The camera of the image capturing module140is activated, and an operator observes the current focus of the camera under constant lighting conditions. If the contrast between two pixels identifying the boundary of light and dark portions of the target170is less than a maximum obtainable contrast (or less than observed during bench calibration procedures), the operator unlocks the focus mechanism of the camera and adjusts the focus until a maximum contrast is achieved. The focus mechanism is then locked. The image capturing module140is connected to the computer150via the network160. The computer150includes image capturing software. The image capturing module140captures a first image of the target170, which is displayed on the computer150. The operator determines a number of lateral pixels in a lateral pitch distance of the first image of the target170and determines a lateral object pixel size (OPS) by dividing the pitch of the target170by the number of lateral pixels in the pitch region. A trial vehicle encoder rate is then determined by dividing the wheel circumference of the vehicle110by the lateral OPS. If the trial vehicle encoder rate is different than the initial vehicle encoder rate programmed into the vehicle encoder120, the trial vehicle encoder rate is programmed into the vehicle encoder120. The image capturing software of the computer150is triggered off of the vehicle encoder120and the vehicle110is moved forward or backward over the target170. The image capturing device140captures second images of the target170while the vehicle110is moved over the target170. An operator of the computer150determines (e.g., counts) a number of light or dark longitudinal pixels in one longitudinal pitch distance of each of the second images and compares the number of lateral pixels to the number of longitudinal pixels. If the number of lateral pixels matches the number of longitudinal pixels, the image capturing module140and the vehicle encoder120are calibrated. 
If the number of lateral pixels is different from the number of longitudinal pixels, the vehicle encoder rate is adjusted until number of lateral pixels matches the number of longitudinal pixels. As such, the system100can be used to calibrate the image capturing module140and the vehicle encoder120to ensure sufficient images are captured by the system100that can be used to accurately identify objects in the environment surrounding the vehicle110. AlthoughFIG.1illustrates a particular arrangement of the vehicle110, the vehicle encoder120, the beam130, the image capturing modules140, the computer150, the network160, and the target170, this disclosure contemplates any suitable arrangement of the vehicle110, the vehicle encoder120, the beam130, the image capturing modules140, the computer150, the network160, the target170, and the end plates180. For example, the computer150can be located inside the vehicle110. The vehicle110, the vehicle encoder120, the beam130, the image capturing modules140, and the computer150can be physically or logically co-located with each other in whole or in part. AlthoughFIG.1illustrates a particular number of the vehicles110, vehicle encoders120, beams130, image capturing modules140, computers150, networks160, and targets170, this disclosure contemplates any suitable number of the vehicles110, vehicle encoders120, beams130, image capturing modules140, computers150, networks160, targets170, and end plates180. For example, the system100can include a first beam at a front end of the vehicle110and a second beam at a rear end of the vehicle110. As another example, the system100can include multiple computers150. One or more components of the system100can be implemented using one or more components of the computer system ofFIG.5. FIG.2illustrates an example image capturing module140that can be used by the system100. Image capturing module140includes a camera210, a lens220, a top plate230, a base plate240, a cover plate250, bolts260, and an opening270. Camera210is any device that captures images. For example, camera210can capture images of the target170and end plates180ofFIG.1. As another example, camera210can capture images of a rail component (e.g., a rail joint, a switch, a frog, a fastener, ballast, a rail head, and/or a rail tie). In certain embodiments, camera210includes one or more sensors. One or more cameras210can capture images from different angles. For example, one or more cameras210can capture images of both rails of a railway system at any given location. Each beam (e.g., beam130ofFIG.1) can include multiple cameras210. The beam can include first camera210aimed straight down to capture an overhead image of a target (e.g., target170ofFIG.1), a physical object, etc. The beam can include second camera210aimed downward and outward to capture an angled image of the target, a physical object, etc. Camera210can be a line scan camera. A line scan camera includes a single row of pixels. Camera210can be a dual line scan camera. A dual line scan camera includes two rows of pixels that can be captured and/or processed simultaneously. As camera210moves over a physical object, camera210can capture images such that a complete image of the physical object can be reconstructed in software line by line. Camera210can have a capture rate up to 140 kilohertz. Camera210can have a resolution and optics to detect physical objects of at least 1/16 inches in size. In one or more embodiments, camera210includes lens220that focuses and directs incident light to a sensor of camera210. 
Lens220can be a piece of glass or other transparent substance. Lens220can be made of any suitable material (e.g., steel, aluminum, glass, plastic, or a combination thereof). Top plate230and base plate240are structural elements used to position, support, and/or stabilize one or more components of image capturing module140(e.g., camera210or a sensor). Top plate230and base plate240can be made of any suitable material (e.g., steel, aluminum, plastic, glass, and the like). Top plate230can be connected to base plate240with one or more bolts260. Bolts260(e.g., jack bolts) can be used to alter a pitch and/or roll orientation of camera210. For example, bolts260can be used to change an effective height between top plate230and base plate240. Top plate230and/or base plate240can be adjusted to reduce vibration and/or shock of image capturing module140. Top plate230and/or base plate240can include resistive heating elements to provide a warm environment for camera210and lens220to operate during cooler weather. Cover plate250can be a plate that covers base plate240. Cover plate250can be made of any suitable material (e.g., glass, steel, aluminum, and the like). Cover plate250includes an opening270. Opening270can serve as an aperture through which a lens of camera210views the physical object. Opening270allows for transmission of a sensed signal from the surrounding environment to reach a sensor of camera210. Opening270can be any suitable shape (e.g., oval, rectangular, and the like) to accommodate views of camera210. Lens220of camera210can be positioned directly over opening270. AlthoughFIG.2illustrates a particular arrangement of camera210, lens220, top plate230, base plate240, cover plate250, bolts260, and opening270, this disclosure contemplates any suitable arrangement of camera210, lens220, top plate230, base plate240, cover plate250, bolts260, and opening270. AlthoughFIG.2illustrates a particular number of cameras210, lenses220, top plates230, base plates240, cover plates250, bolts260, and openings270, this disclosure contemplates any suitable number of cameras210, lenses220, top plates230, base plates240, cover plates250, bolts260, and openings270. For example, image capturing module140can include multiple cameras210. As another example, in certain embodiments, image capturing module140might not include certain components (e.g., base plate240) illustrated inFIG.2. One or more components of image capturing module140can be implemented using one or more elements of the computer system ofFIG.5. FIG.3illustrates an example system300for an adaptable calibration target. System300includes a roadway (e.g., roadway112ofFIG.1) moving under a rail vehicle. System300or portions thereof can be associated with an entity, which can include any entity, such as a business, company (e.g., a railway company, a transportation company, etc.), or a government agency (e.g., a department of transportation, a department of public safety, etc.) that calibrates an image capturing module in the field. System300ofFIG.3includes the target170, the end plates180, screw caps302, a marker strip304, an attachment apparatus306, and a fastener hole308. The screw caps302can couple the target170to the end plates180. For example, the screw caps302can include a mechanical coupler, such as a screw, bolt, cotter pin, or another mechanical coupler.
In one embodiment, the operator of the rail vehicle will exit the rail vehicle, attach the end plates180to a rail, and couple each end of the target170to each of the end plates180using the screw caps302. In another embodiment, the operator can rotate the screw caps302to attach and detach the system in the field. The marker strip304can include a solid black strip on a top of the end plates. For example, the marker strip304can include a strip of known length for calibration purposes. In one embodiment, the marker strip304can be used to calibrate an image system on the rail vehicle by providing a known lateral distance. For example, the marker strip304can include a length of 5 inches. In another embodiment, the image system can capture an image of the marker strip304and analyze the image to determine whether the image system is calibrated. In another embodiment, the image with the marker strip304can provide a number of lateral pixels for analysis. The attachment apparatus306can couple the end plates180to the rail. For example, the attachment apparatus306can couple the end plates180to the rail by a mechanical, electrical, or magnetic manner. In one embodiment, the attachment apparatus306can include a mechanical component to couple the end plates180to the rail. For example, the mechanical component can include a clamp, bolt, screw, cotter pin, or some other mechanical coupler. In another embodiment, the attachment apparatus306can include an electrical component to couple the end plates180to the rail. For example, the electrical component can include an electromechanical clamp, electromagnetic coupler, or some other electrical coupler. In another embodiment, the attachment apparatus306can include a magnetic component to couple the end plates180to the rail. For example, the magnetic component can include a magnetic disc, strip, or paint manually placed by the operator. In another embodiment, the attachment apparatus306can be removable from the end plates180. Alternatively, the attachment apparatus306can be permanently attached to the end plates180. The fastener hole308can couple the target170to the end plates180. For example, the fastener hole308can interconnect the target170to the screw caps302. In another embodiment, the fastener hole308can be part of the end plates180or another base to which the target170is applied. In operation, a user (e.g., an operator) installs an image capturing module (e.g., image capturing module140or portions thereof such as camera210ofFIG.2) on an undercarriage of a rail vehicle and connects one or more components of the image capturing module to a computer (e.g., computer150). The computer can include image capturing software. The user turns (e.g., switches) on the power of the image capturing module. The user unlocks the focus locking mechanism of the image capturing module and focuses a camera of the image capturing module on target170under constant lighting conditions. In an embodiment, the operator can perform a field calibration assessment discussed below. For example, the field calibration assessment can include a successful focus achieved when maximum contrast is obtained between two pixels identifying the boundary of the light and dark portion of calibration pattern of the target170(e.g., alternating colors or a checkerboard pattern). In one embodiment, the user then locks the focusing mechanism of the image capturing module. 
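For illustration only, the following Python sketch shows one way the focus check of the field calibration assessment could be expressed: the contrast metric is taken as the maximum difference between adjacent pixels along a line crossing the light/dark boundary of the calibration pattern, and the focus is accepted when that contrast approaches a reference value recorded during bench calibration. The metric, the reference value, and the acceptance fraction are illustrative assumptions rather than requirements of the disclosure.

```python
# Illustrative focus check: treat the maximum absolute difference between adjacent
# pixels along a line crossing the light/dark boundary of the calibration pattern as
# the contrast metric, and compare it to a bench-calibration reference value.
# The reference value and acceptance fraction below are assumptions.

import numpy as np

def boundary_contrast(profile):
    """Maximum adjacent-pixel difference along a 1-D intensity profile."""
    profile = np.asarray(profile, dtype=float)
    return float(np.max(np.abs(np.diff(profile))))

def focus_acceptable(profile, bench_contrast, fraction=0.95):
    """Accept the focus when field contrast reaches a fraction of the bench-measured contrast."""
    return boundary_contrast(profile) >= fraction * bench_contrast

sharp   = np.array([20, 20, 20, 235, 235, 235], dtype=float)   # crisp light/dark edge
blurred = np.array([20, 60, 120, 170, 210, 235], dtype=float)  # soft edge, poor focus
print(focus_acceptable(sharp, bench_contrast=215))    # True
print(focus_acceptable(blurred, bench_contrast=215))  # False
```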
In another embodiment, based on an image displayed on the computer, the operator observes a black or white region on the target170in the middle of a field of view of the camera. For example, the field of view can represent an angle through which the camera of the image capturing module picks up electromagnetic radiation. In one embodiment, the field of view can be limited by the area of the image displayed on the computer. In another embodiment, the operator of the computer can count the number of light or dark pixels in a first direction for a lateral pitch distance of the end plates180. In one embodiment, the first direction is parallel to an axis of the end plates180. In another embodiment, a lateral OPS is calculated by dividing the lateral pitch distance by the number of pixels in the lateral pitch distance. For example, if the lateral pitch distance equals one inch and the number of pixels for the one-inch pitch distance is 52, the lateral OPS equals one inch divided by 52, which equals 0.01923 inches per pixel. In one embodiment, the lateral OPS can indicate a true physical dimension represented by one pixel at a prescribed working distance. For example, the working distance can include a distance between the camera and the target. In another embodiment, the lateral OPS can be determined based on a field calculation as follows: OPS_lateral = P_target / n_pixels, where P_target is the pitch of the target in units of length and n_pixels is a determined number of pixels. For example, the determined number of pixels can include a number of pixels counted by the operator. Alternatively, the determined number of pixels can include a number of pixels based on characteristics of the camera, such as image size, lens dimensions, and image resolution. In one embodiment, measuring and calibrating the lateral OPS ensures that the objects depicted in images captured by the image capturing module are properly proportioned and that no data is lost between pixels when the image capturing module is in field operation. In another embodiment, the pixels are square or approximately square (e.g., having an equal length and width within a two percent tolerance). For example, an allowance can be permitted due to the limitations of the camera of the image capturing module and/or a vehicle encoder (e.g., vehicle encoder120). In another embodiment, the field calibration assessment can include determining a vehicle encoder rate for the vehicle encoder based on the lateral OPS. In one embodiment, the vehicle encoder rate can equal the number of electrical pulses generated by the vehicle encoder in one revolution of the shaft of the wheel. For example, the vehicle encoder rate can be calculated as the circumference of the wheel divided by the lateral OPS. In another embodiment, the vehicle encoder rate for the vehicle encoder is determined based on the lateral OPS. For example, the encoder rate is based on the following equation: R_encoder,wheel = (k_fg * c_wheel) / OPS_lateral, where k_fg is a triggering factor set in the camera or in software and c_wheel is the circumference of the wheel. In certain embodiments, the vehicle encoder rate is programmed into the vehicle encoder as an integer value. In one embodiment, the vehicle encoder can be programmed to 1715 or 1716 pulses per revolution. For example, an operator can operate a vehicle (e.g., the vehicle110) over the target170at a low speed.
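The two field calculations above can be illustrated with a short Python sketch. The one-inch pitch and 52-pixel count follow the example in the text; the 33-inch wheel circumference and the default triggering factor k_fg = 1 are assumed values chosen so that the result lands near the 1715 to 1716 pulses per revolution mentioned above.

```python
# Minimal sketch of the field calibration calculations described above:
# OPS_lateral = P_target / n_pixels and R_encoder,wheel = (k_fg * c_wheel) / OPS_lateral.
# The wheel circumference and k_fg below are illustrative assumptions.

def lateral_ops(pitch_length_in, lateral_pixel_count):
    """Object pixel size (inches per pixel) over one target pitch."""
    return pitch_length_in / lateral_pixel_count

def encoder_rate(wheel_circumference_in, ops_lateral_in, k_fg=1.0):
    """Encoder rate in pulses per revolution, rounded to an integer for programming."""
    return int(round(k_fg * wheel_circumference_in / ops_lateral_in))

ops = lateral_ops(1.0, 52)        # 0.01923 inches per pixel, as in the example above
rate = encoder_rate(33.0, ops)    # ~1716 pulses per revolution with an assumed 33-inch wheel
print(f"lateral OPS = {ops:.5f} in/pixel, encoder rate = {rate} pulses/rev")
```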
In one embodiment, the low speed can be within a range of five to twenty mph (e.g., 10 mph). In another embodiment, the image capturing module captures images while the rail vehicle is traveling at the low speed and communicates the collected images to the computer. In one embodiment, the operator of the computer determines (e.g., counts) the number of light or dark pixels in a second direction in one longitudinal pitch distance on the target170. For example, in the illustrated embodiment ofFIG.3, the second direction is parallel to an axis of the target170. The operator then operates the vehicle at a high speed. The high speed can be within a range of fifty to eighty miles per hour (mph) (e.g., 65 mph). The high speed can represent the maximum speed of the vehicle. The image capturing module collects images while the vehicle is at the high speed and communicates the collected images to the computer. The operator of the computer determines (e.g., counts) the number of light or dark pixels in one pitch distance on the target170in the second direction. The high and low speed longitudinal pixel counts are compared to the lateral pixel counts to determine if the camera pixels are representing physical space equally in the lateral and longitudinal directions. If the longitudinal pixel counts are different from the lateral pixel counts, a different encoder rate can be programmed into the vehicle encoder, and the above process can be repeated to compare the effects of the new encoder rate on the pixel counts in the lateral and longitudinal directions. FIG.4illustrates a flowchart exemplifying field calibration control logic400, in accordance with one or more embodiments of the present disclosure. The field calibration control logic400can be implemented as an algorithm on a server, a machine learning module, a client, a database, or other suitable system. Additionally, the field calibration control logic400can implement or incorporate one or more features of the image capturing module140. The field calibration control logic400can be achieved with software, hardware, an application programming interface (API), a network connection, a network transfer protocol, HTML, DHTML, JavaScript, Dojo, Ruby, Rails, other suitable applications, or a suitable combination thereof. The field calibration control logic400can leverage the ability of a computer platform to spawn multiple processes and threads by processing data simultaneously. The speed and efficiency of the field calibration control logic400can be greatly improved by instantiating more than one process to implement data lifecycle management. However, one skilled in the art of programming will appreciate that use of a single processing thread can also be utilized and is within the scope of the present disclosure. In one embodiment, commands or data can be received via user input generated on a client or server, such as a screen tap, swipe, mouse click, key press, voice command, or other suitable mechanism. In another embodiment, the inspection commands or data can include inspection data having one or more fields, parameters, characteristics, or metadata, related to an inspection. The field calibration control logic400then proceeds to step410. At step410, in an embodiment, the control logic400can capture a first image of a target. For example, a camera of an image capturing module (e.g., camera210of image capturing module140ofFIG.2) captures a first image of a target (e.g., target170ofFIG.1).
In one embodiment, the image capturing module can be secured to a vehicle (e.g., vehicle110ofFIG.1) and the target can be secured to a roadway (e.g., roadway112ofFIG.1). In another embodiment, the target is perpendicular to the axis of the camera of the image capturing module. In one embodiment, the image captured by the camera of the image capturing module can be displayed on a computer (e.g., computer150ofFIG.1) communicatively coupled to the image capturing module. The control logic400proceeds to step415. At step415, in an embodiment, the control logic400can determine a number of lateral pixels in a lateral pitch distance of the image of the target. For example, the number of lateral pixels can correspond to a known distance, such as a length of the marker strip304. In one embodiment, the control logic400can include a determination by an operator of a number of lateral pixels in the lateral pitch distance of the image of the target. Alternatively, In one embodiment, the number of lateral pixels is automatically determined using a software tool. For example, the software tool can identify a characteristic of the camera to determine a width of pixels of the image and calculate the number of lateral pixels based on a ratio of the lateral pitch distance to the width of the image. In one embodiment, the characteristic of the camera is the resolution of the camera, such as a number of pixels in the image. In one embodiment, the operator can observe the current focus of the camera under constant lighting conditions. In another embodiment, if the contrast between two pixels identifying the boundary of light and dark portions of the focus target is less than observed in bench testing, the operator can unlock the focus mechanism and adjust the focus until a satisfactory result is obtained. The focus mechanism is then locked. In another embodiment, the operator can count the number of light or dark pixels in a lateral pitch distance of the target at the center of the field of view for the camera. The control logic400proceeds to step420. At step420, in an embodiment, the control logic400can determine a lateral OPS. For example, the lateral OPS can be determined using the determined number of lateral pixels. In one embodiment, the control logic400can include the operator calculating the lateral OPS by dividing the pitch (e.g., one inch) of the target170by the number of lateral pixels in the pitch region. In one embodiment, the computer can calculate the vehicle encoder rate based on an OPS of the image. In another embodiment, the vehicle encoder rate can be calculated based on the operator observing a black or white region on the target170in the middle of a field of view of the camera. For example, the field of view can represent an angle through which the camera of the image capturing module picks up electromagnetic radiation. The field of view can be limited by the area of the image displayed on the computer. In one embodiment, the operator of the computer can count the number of light or dark pixels in a first direction for a lateral pitch distance of the end plates180. For example, the first direction is parallel to an axis of the end plates180. In one embodiment, the lateral OPS is calculated by dividing the lateral pitch distance by the number of pixels in the lateral pitch distance. For example, if the lateral pitch distance equals one inch and the number of pixels for the one-inch pitch distance is 52, the OPS equals one inch divided by 52, which equals 0.01923 inches per pixel. 
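As one hypothetical illustration of such a software tool, the short sketch below thresholds a one-dimensional intensity profile taken across the target and measures the length of the first dark run, which corresponds to the pixel count for one pitch. The thresholding approach and the synthetic 52-pixel example are assumptions; the disclosure does not prescribe a particular counting algorithm.

```python
# Illustrative pixel-counting tool: threshold a 1-D intensity profile at the midpoint
# between its darkest and brightest values and measure the length of the first dark run.
# This is an assumed approach, not the specific tool referenced in the text.

import numpy as np

def count_pitch_pixels(profile):
    """Return the length (in pixels) of the first contiguous dark run in a 1-D intensity profile."""
    profile = np.asarray(profile, dtype=float)
    threshold = (profile.min() + profile.max()) / 2.0
    dark = profile < threshold
    count, best = 0, 0
    for is_dark in dark:
        count = count + 1 if is_dark else 0
        best = max(best, count)
        if not is_dark and best:
            break                    # stop after the first complete dark run
    return best

# Example: a synthetic profile with a 52-pixel dark band between bright regions.
profile = np.concatenate([np.full(20, 240.0), np.full(52, 15.0), np.full(20, 240.0)])
print(count_pitch_pixels(profile))   # -> 52
```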
In one embodiment, the lateral OPS indicates the true physical dimension represented by one pixel at the prescribed working distance. In another embodiment, the working distance can include a distance between the camera and the target. In another embodiment, the lateral OPS can be determined based on a field calculation as follows: OPS_lateral = P_target / n_pixels, where P_target is the pitch of the target in units of length and n_pixels is a determined number of pixels. For example, the determined number of pixels can include a number of pixels counted by the operator. Alternatively, the determined number of pixels can include a number of pixels based on characteristics of the camera, such as image size, lens dimensions, and image resolution. In one embodiment, measuring and calibrating the lateral OPS ensures that the objects depicted in images captured by the image capturing module are properly proportioned and that no data is lost between pixels when the image capturing module is in field operation. In another embodiment, the pixels are square or approximately square (e.g., having an equal length and width within a two percent tolerance). For example, an allowance can be permitted due to the limitations of the camera of the image capturing module and/or a vehicle encoder (e.g., vehicle encoder120). The control logic400proceeds to step425. At step425, in an embodiment, the control logic400can determine a number of longitudinal pixels in a longitudinal pitch distance of the image. For example, the number of longitudinal pixels can correspond to a known distance, such as a length of at least one section of the target170. In one embodiment, the control logic400can include a determination by an operator of the number of longitudinal pixels in the longitudinal pitch distance of the image of the target. Alternatively, in one embodiment, the number of longitudinal pixels is automatically determined using a software tool. For example, the software tool can identify a characteristic of the camera to determine a length of pixels of the image and calculate the number of longitudinal pixels based on a ratio of the longitudinal pitch distance to the length of the image. In one embodiment, the characteristic of the camera is the resolution of the camera, such as a number of pixels in the image. In one embodiment, the operator can observe the current focus of the camera under constant lighting conditions. In another embodiment, if the contrast between two pixels identifying the boundary of light and dark portions of the focus target is less than observed in bench testing, the operator can unlock the focus mechanism and adjust the focus until a satisfactory result is obtained. The focus mechanism is then locked. In another embodiment, the operator can count the number of light or dark pixels in the longitudinal pitch distance at the center of the field of view for the camera. The control logic400proceeds to step430. At step430, in an embodiment, the control logic400can compare the number of lateral pixels to the number of longitudinal pixels. For example, the control logic400can include a computer to compare the number of lateral pixels to a number of longitudinal pixels. In one embodiment, the computer can determine whether the number of lateral pixels is larger than the number of longitudinal pixels. Alternatively, an operator can count the pixels in the image to determine the number of lateral pixels and the number of longitudinal pixels.
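The approximately-square-pixel condition noted above (equal length and width within a two percent tolerance) can be expressed as a simple comparison of the lateral and longitudinal object pixel sizes, as in the following sketch; the example OPS values and the use of the lateral OPS as the reference are assumptions.

```python
# Minimal sketch of the "approximately square pixel" check: the lateral and
# longitudinal OPS values are accepted if they agree within a stated tolerance
# (two percent here, matching the example tolerance in the text).

def pixels_approximately_square(ops_lateral, ops_longitudinal, tolerance=0.02):
    """Return True when the two object pixel sizes differ by no more than `tolerance` (fractional)."""
    return abs(ops_lateral - ops_longitudinal) <= tolerance * ops_lateral

print(pixels_approximately_square(0.01923, 0.01950))   # True: ~1.4% difference
print(pixels_approximately_square(0.01923, 0.02100))   # False: ~9% difference
```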
In another embodiment, the camera is formatted to capture the image with an equal number of lateral and longitudinal pixels. The control logic400proceeds to step435. At step435, in an embodiment, the control logic400can compare the number of lateral pixels to the number of longitudinal pixels. For example, the control logic400can include a computer to compare the number of lateral pixels to a number of longitudinal pixels. In one embodiment, the computer can determine whether the number of lateral pixels is larger than the number of longitudinal pixels. Alternatively, the operator can count the number of lateral pixels and the number of longitudinal pixels to determine whether the pixel numbers are different. In one embodiment, the image can include a difference between the number of lateral pixels and the number of longitudinal pixels. For example, the difference in the number of pixels can indicate further calibration procedures are needed. If the number of lateral pixels does not equal the number of longitudinal pixels, the control logic400proceeds to step460. If the number of lateral pixels is equal to the number of longitudinal pixels, the control logic400proceeds to step440. At step440, in an embodiment, the control logic400can calculate a vehicle encoder rate. For example, the control logic400can include the computer to calculate a vehicle encoder rate. In one embodiment, the vehicle encoder rate can equal the number of electrical pulses generated by the vehicle encoder in one revolution of the shaft of the wheel. For example, the vehicle encoder rate can be calculated as the circumference of the wheel divided by the lateral OPS. In another embodiment, the vehicle encoder rate for the vehicle encoder is determined based on the lateral OPS. For example, the encoder rate is based on the following equation: R_encoder,wheel = (k_fg * c_wheel) / OPS_lateral, where k_fg is a triggering factor set in the camera or in software and c_wheel is the circumference of the wheel. The control logic400proceeds to step445. At step445, in an embodiment, the control logic400can program the vehicle encoder rate into a vehicle encoder. For example, the control logic400can include the computer to program the vehicle encoder rate into the vehicle encoder. For example, the vehicle encoder rate is programmed into the vehicle encoder as an integer value. In one embodiment, the vehicle encoder can be programmed to 1715 or 1716 pulses per revolution. For example, an operator can operate a vehicle (e.g., the vehicle110) over the target170at a low speed. In one embodiment, the low speed can be within a range of five to twenty mph (e.g., 10 mph). In another embodiment, the image capturing module captures images while the rail vehicle is traveling at the low speed and communicates the collected images to the computer. In one embodiment, the operator of the computer determines (e.g., counts) the number of light or dark pixels in a second direction in one longitudinal pitch distance on the target170. For example, in the illustrated embodiment ofFIG.3, the second direction is parallel to an axis of the target170. The control logic400proceeds to step450. At step450, in an embodiment, the control logic400can focus the camera on the target. For example, the control logic400can include the computer to focus the camera on the target under constant lighting conditions. In one embodiment, independent lighting sources can be included under the rail vehicle to illuminate the target.
In another embodiment, the computer can focus the camera in a manual or an automatic manner. For example, the manual manner of focusing the camera can involve the operator generating a virtual focal point on a display of the computer controlling the camera. Alternatively, the automatic manner of focusing the camera can include a software tool that assesses the image and identifies optimum focal distances for the lighting environment. For example, the software tool can identify the optimum focal distances based on physical factors of the camera and various environment settings. In one embodiment, the physical factors can include the lens dimensions of the camera and the resolution of the camera, among other camera factors. In another embodiment, the various environment settings can include a low lighting environment, supplemented with software filters to increase the contrast of the pixels of the image. The control logic400proceeds to step455. At step455, in an embodiment, the control logic400can obtain a maximum contrast between two pixels of the image. For example, the control logic400can include the computer to obtain the maximum contrast between the two pixels. For example, a successful focus is achieved when maximum contrast is obtained between two pixels identifying a boundary of light and dark portions of a calibration pattern of the target170(e.g., alternating colors or a checkerboard pattern). In one embodiment, an operator can lock a focusing mechanism of the image capturing module. For example, the operator can lock the focusing mechanism manually or automatically. In one embodiment, the operator can lock the focusing mechanism manually using various mechanical and electrical components, such as a torque-driven cap to the image capturing module, or other mechanical and electrical locking components. At step460, in an embodiment, the control logic400can resolve a difference in pixel values between the number of lateral pixels and the number of longitudinal pixels. For example, the control logic400can include a computer to resolve the difference between the number of lateral pixels and the number of longitudinal pixels. In one embodiment, the computer can resolve the difference by adjusting the vehicle encoder rate to a new value to compensate for the difference in pixel values. In one embodiment, the rail vehicle can repeat the calibration process to capture images at various speeds and subsequently compare pixel values of the images to determine whether the image capturing module is appropriately calibrated. In another embodiment, an object of measuring and calibrating an OPS is to ensure the objects depicted in the image are properly proportioned and that no real-world space is lost between pixels of the image capturing module when the image capturing module is in operation. For example, the pixels can be generally square, or slightly larger in the lateral direction. In one embodiment, by making the pixels square or slightly larger in the lateral direction, no real-world space is lost. In another embodiment, some small allowance in the lateral to longitudinal pixel size is permitted given the desired field-of-view, actual working distance, and limitations of the camera and vehicle encoder. The control logic400proceeds to step465. At step465, in an embodiment, the control logic400can transmit new calibration information. For example, the control logic400can include a computer to transmit the new calibration information.
For example, the new calibration information can correspond to the adjusted vehicle encoder rate. In another embodiment, the computer transmits the new calibration information over a network. Modifications, additions, or omissions can be made to method400depicted inFIG.4. Method400can include more, fewer, or other steps. For example, method400can include programming the initial vehicle encoder rate into the vehicle encoder. As another example, method400can include activating the camera of the image capturing module. Steps can be performed in parallel or in any suitable order. While discussed as specific components completing the steps of method400, any suitable component can perform any step of method400. FIG.5shows an example computer system that can be used by the systems and methods described herein. For example, one or more components (e.g., computer150) of system100ofFIG.1and/or system300ofFIG.3can include one or more interface(s)510, processing circuitry520, memory(ies)530, and/or other suitable element(s). Interface510receives input, sends output, processes the input and/or output, and/or performs other suitable operation. Interface510can comprise hardware and/or software. Processing circuitry520performs or manages the operations of the component. Processing circuitry520can include hardware and/or software. Examples of a processing circuitry include one or more computers, one or more microprocessors, one or more applications, etc. In certain embodiments, processing circuitry520executes logic (e.g., instructions) to perform actions (e.g., operations), such as generating output from input. The logic executed by processing circuitry520can be encoded in one or more tangible, non-transitory computer readable media (such as memory530). For example, the logic can comprise a computer program, software, computer executable instructions, and/or instructions capable of being executed by a computer. In particular embodiments, the operations of the embodiments can be performed by one or more computer readable media storing, embodied with, and/or encoded with a computer program and/or having a stored and/or an encoded computer program. Memory530(or memory unit) stores information. Memory530can comprise one or more non-transitory, tangible, computer-readable, and/or computer-executable storage media. Examples of memory530include computer memory (for example, RAM or ROM), mass storage media (for example, a hard disk), removable storage media (for example, a Compact Disk (CD) or a Digital Video Disk (DVD)), database and/or network storage (for example, a server), and/or other computer-readable medium. FIG.6illustrates an example field calibration system600for calibrating an image capturing module140. The system600can include the vehicle110, the beam130, one or more image capturing modules140(not shown), target170, and end plates180. In one embodiment, the beam130can include a structure that contains and orients components (e.g., the image capturing module140) used to capture images. The system600can provide a method for the vehicle to travel overtop the roadway112, the target170, and the end plates180. For example, the vehicle110can travel along the roadway112, where the target170and endplates180are located. In one embodiment, the endplates180can position the target170in place along the roadway112beneath a clearance level of the vehicle110. In one embodiment, the roadway112can include structures along the edge of the roadway112to provide support to hold the endplates180. 
In this manner, the endplates180can stabilize the target170to support the target170as the vehicle110passes over. For example, the endplates180can include fasteners to couple the endplates180to the support of the roadway112. The image capturing module140can capture a first image of the target170, which can be displayed on a computer. An operator can determine a number of lateral pixels in a lateral pitch distance of the first image of the target170and determines a lateral OPS by dividing the pitch of the target170by the number of lateral pixels in the pitch region. A trial vehicle encoder rate is then determined by dividing the wheel circumference of the vehicle110by the lateral OPS. If the trial vehicle encoder rate is different than the initial vehicle encoder rate programmed into a vehicle encoder, the trial vehicle encoder rate is programmed into the vehicle encoder. Image capturing software of the computer is triggered based on a signal from the vehicle encoder and the vehicle110is moved forward or backward over the target170. The image capturing device140can capture second images of the target170while the vehicle110is moved over the target170. An operator of the computer can determine (e.g., counts) a number of light or dark longitudinal pixels in one longitudinal pitch distance of each of the second images and can compare the number of lateral pixels to the number of longitudinal pixels. If the number of lateral pixels matches the number of longitudinal pixels, the image capturing module140and the vehicle encoder are calibrated. If the number of lateral pixels is different from the number of longitudinal pixels, the vehicle encoder rate can be adjusted until number of lateral pixels matches the number of longitudinal pixels. As such, the system600can be used to calibrate the image capturing module140to ensure sufficient images are captured by the system600that can be used to accurately identify objects in the environment surrounding the vehicle110. In one embodiment, the end plates180can include components small and light enough to be movable for the operator. For example, the end plates180can couple to the roadway112by a mechanical, electrical, or magnetic manner. In one embodiment, the end plates180can include a mechanical component such as a clamp, bolt, screw, cotter pin, or some other mechanical coupler. In another embodiment, the end plates180can include an electrical component such as an electromechanical clamp, electromagnetic coupler, or some other electrical coupler. In another embodiment, the end plates180can include a magnetic component such as a magnetic disc, strip, or paint manually placed by the operator. In another embodiment, the end plates180can be removable from the roadway112. In one embodiment, the endplates180can provide sufficient structure such that the target170remains parallel with the direction of travel of the roadway112. For example, the image capture module140can capture images of the roadway112as the vehicle110travels over the target170and the endplates180. In this manner, the image capture module140can capture at least one image of the target170and the endplates180along the roadway112. The vehicle110can travel over the target170and endplates180to capture subsequent images providing a quantity of images for calibrating the image capture module140. The present disclosure achieves at least the following advantages:1. enables accurate calibration of an image capturing module in the field of use;2. 
enables modular calibration based on a calibration bar attachable and detachable to a railway; and3. provides a portable system for calibration simplifying current calibration techniques. Persons skilled in the art will readily understand that advantages and objectives described above would not be possible without the particular combination of computer hardware and other structural components and mechanisms assembled in this inventive system and described herein. Additionally, the algorithms, methods, and processes disclosed herein improve and transform any general-purpose computer or processor disclosed in this specification and drawings into a special purpose computer programmed to perform the disclosed algorithms, methods, and processes to achieve the aforementioned functionality, advantages, and objectives. It will be further understood that a variety of programming tools, known to persons skilled in the art, are available for generating and implementing the features and operations described in the foregoing. Moreover, the particular choice of programming tool(s) can be governed by the specific objectives and constraints placed on the implementation selected for realizing the concepts set forth herein and in the appended claims. The description in this patent document should not be read as implying that any particular element, step, or function can be an essential or critical element that must be included in the claim scope. Also, none of the claims can be intended to invoke 35 U.S.C. § 112(f) with respect to any of the appended claims or claim elements unless the exact words “means for” or “step for” are explicitly used in the particular claim, followed by a participle phrase identifying a function. Use of terms such as (but not limited to) “mechanism,” “module,” “device,” “unit,” “component,” “element,” “member,” “apparatus,” “machine,” “system,” “processor,” “processing device,” or “controller” within a claim can be understood and intended to refer to structures known to those skilled in the relevant art, as further modified or enhanced by the features of the claims themselves, and can be not intended to invoke 35 U.S.C. § 112(f). Even under the broadest reasonable interpretation, in light of this paragraph of this specification, the claims are not intended to invoke 35 U.S.C. § 112(f) absent the specific language described above. The disclosure can be embodied in other specific forms without departing from the spirit or essential characteristics thereof. For example, each of the new structures described herein, can be modified to suit particular local variations or requirements while retaining their basic configurations or structural relationships with each other or while performing the same or similar functions described herein. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the inventions can be established by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Further, the individual elements of the claims are not well-understood, routine, or conventional. Instead, the claims are directed to the unconventional inventive concept described in the specification.
59,109
11861820
MULTIPLE EMBODIMENTS AND ALTERNATIVES
In some embodiments within the scope of subject matter claimed herein, a system is provided for evaluating equipment or civil structures undergoing motion on a periodic interval as defined by a PdM program. When sampled data is acquired, such as with a video acquisition device, the data may exist in a video recording having a plurality of video images of the moving object which are divisible into individual video image frames, and with each frame being divisible into a plurality of pixels. Such a system may comprise one or more video acquisition devices, such as but not limited to one or more video cameras, webcams, or digital cameras integral in cell phones. In this way, one or more video acquisition devices may be positioned with an unobstructed view of a selected portion of an object to obtain a video recording of the object in motion. Also, such a system or method may comprise or utilize a processor and a memory for storage of the individual video image frames as well as any that are modified through the processes described herein, and a computer program operating in the processor, as well as one or more video acquisition devices. Embodiments are not limited to a particular type of video acquisition device, but may include one or more video cameras, webcams, or digital cameras sensitive to other wavelengths in the electromagnetic spectrum. A video acquisition device in the embodiments herein may be configured with an adjustable frame rate that allows the video images to be acquired at a sampling rate that is sufficient to capture a plurality of frequencies present in the periodic motion. That is, video images are acquired by a video acquisition device at a rate expressed in frames per second (fps), wherein for example at 120 fps there would be 1200 frames acquired in 10 sec. A computer program in the embodiments herein comprises computer-readable program instructions executed by the processor and may be configured to operate on a subset of pixels from the plurality of pixels in a field of view of the video recording. A system in accordance with present embodiments, when collecting PdM data, may be augmented with data input from other sensors including infrared cameras, airborne or contact ultrasonic sensors, accelerometers, or force sensors, electric current or voltage sensors, flux coils, electrical discharge sensors or Hall effect probes, tachometers, and other process or environmental measurements appropriate to specific applications and arranged as a sensor payload on a particular data acquisition unit. These sensors may represent data that is desired in combination with the vibration information collected by the cameras to detect incipient fault conditions, or they may serve as trigger sources which determine if data is to be collected and reviewed. In other cases they may be used to establish operational states of the equipment under test to be used by the fault detection algorithms to improve accuracy under variable operating conditions. Installed wired sensors may be used when the camera is mounted at a fixed location, but noncontact sensors or wirelessly installed sensors would be the preferred choice for mobile collection units. When video data is collected from a fixed location with a fixed orientation, a fixed lens, and stable lighting, then the data acquisition portion of the monitoring process is simplified.
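The sampling relationship noted above can be made concrete with a brief sketch: the frame count follows directly from the frame rate and duration (120 fps for 10 seconds yields 1200 frames, as in the example), while the fps/2 limit on the highest analyzable frequency is the standard Nyquist assumption rather than a figure stated in this disclosure.

```python
# Sketch relating camera frame rate to recording length and to the highest vibration
# frequency that can be resolved. The Nyquist limit (fps / 2) is a standard sampling
# assumption, not a limit recited by the disclosure.

def frames_acquired(fps, duration_s):
    """Number of frames collected at a given frame rate over a recording duration."""
    return int(fps * duration_s)

def max_analyzable_frequency_hz(fps):
    """Highest frequency representable without aliasing at this frame rate."""
    return fps / 2.0

print(frames_acquired(120, 10))            # 1200 frames in 10 s, as in the example above
print(max_analyzable_frequency_hz(120))    # 60 Hz
```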
However, in the general case, which certainly includes mobile applications, variable lighting, focus and aperture of the camera, distance to the equipment under test, and camera shake must be carefully controlled or determined to make repeatable measurements. As illustrated inFIGS.1and2, robots and UAVs are commercially available to transport the sensor monitoring payload to the test location. For installed data acquisition units, panoramic rotational and pan/tilt mounts are available to allow the sensor payload to be pointed in the proper direction under computer control. The technology that enables robots or UAVs to follow programmed routes is not part of this invention and is provided by the vendors of such equipment. Examples of such companies are Boston Dynamics, which manufactures the quadruped shown inFIG.1, and DJI, which sells the Spreading Wings S1000 shown inFIG.2. The robot and UAV vendors provide sophisticated tools to program their devices to follow specific routes and pause at predefined locations to collect data. Some survey sites may have landing structures that the mobile DAUs will locate and position themselves upon during the survey measurements. The landing structure could be a simple flat platform, or a mechanical structure formed to mate with one or more sections of the body of the mobile transport vehicle. In some embodiments, the mobile transport vehicle might be secured magnetically to the landing structure, or a mechanical clamping mechanism might be used to secure the transport vehicle. In some embodiments, the landing stations might be intelligent and communicate with the mobile transport vehicle to engage and disengage the latching mechanism. Installations such as these would require power, and a preferable embodiment would use a solar panel or an energy harvesting mechanism to maintain the charge on a battery. When using a landing structure, the DAU may turn off all motors and navigation electronics that consume power and contribute to camera shake. This methodology offers potential to achieve higher accuracy measurements and reduce the power required to execute a survey or allow more equipment to be monitored during a survey. In some embodiments within the scope of the present disclosure, a PdM system is configured to monitor many locations in one or more geographically diverse plant sites as illustrated inFIG.4. The specification defining what measurements are to be acquired and how each location is to be screened is established by the PdM program manager or analyst, and this information is stored in a database on the PdM program server. This PdM server may be in a physical or cloud-based data center and the fleet of DAUs may be distributed at one or more physical sites. The DAUs may consist of mobile units transported by robots or UAVs, or DAUs mounted at fixed sites which can be electronically positioned to perform surveys of multiple locations with equipment or structures to be evaluated. The DAUs will include an instrument package that supports multiple sensors, at least one of those sensors being a video camera that can make dynamic measurements of the motion of the objects in the FoV at each test location. In some embodiments, one or more of the optional sensors will be aligned with the line of view of the video camera such that data from these other sensors may be overlayed upon a visual image.
In other alternate embodiments, a laser pointer may be used to identify the location of a fault detected by another sensor on a visual image recorded by the video camera. In one of the preferred embodiments, one or more fiducial marks are located in the field of view of the camera at each monitoring site as shown inFIG.4. The fiducial marks, labelled101through104, can serve several functions. They may contain equipment identifiers or QR codes, such as101, which confirms the identity of the test location. The fiducial marks also contain line segments of a known length that can establish the mm/pixel calibration from the exact testing position. The sides of the perimeter surrounding101or the sides of the squares in102-104are of a known precise length and can provide precise calibration as the software exactly determines the pixels falling at the endpoints of the respective sides. Fiducial marks can also serve as test locations themselves or as reference points from which other spatial regions of interest (ROIs) are established for test measurements. The rectangular ROIs labelled as105and106are examples of these graphically defined spatial measurement locations. Measurement ROIs may be located on the test object or a nearby stationary structure to aid in stability corrections applied to the video data to remove camera shake. The target labelled103is attached to a structural wall behind the equipment under test and can be used to determine the amount of camera shake present and to remove this motion by applying stabilization algorithms to the video recording. The fiducial marks also represent the maximum contrast available since they contain pure white and black colors on adjacent pixels. They can also be used in algorithms to determine if adequate lighting is present or to adjust the external lighting included as part of the mobile monitoring system or mounted supplemental lighting that can be controlled wirelessly to achieve an acceptable level of brightness in the recorded video. One preferred embodiment of the PdM survey process is outlined in the flowcharts shown inFIGS.5A-C. A database of predefined settings establishes how data is acquired at each test location as defined in step201. This PdM database will define the parameters that establish how data is collected, screened, and stored as outlined in Table 1.
Table 1. Predefined Information in the PdM Monitoring Database
1. How frequently data is collected
2. Number of test locations to be collected in the FoV
3. Whether test locations are established by fiducial marks, spatially located ROIs, or by object recognition and/or edge detection techniques
4. Maximum frequency of analysis for each test measurement
5. Duration of data collection for each test measurement
6. Triggered or non-triggered measurement for each test measurement
7. Trigger source and specification, pre-trigger buffer for each test measurement
8. Other data to be collected, including data that may be needed to establish operational states
9. Screening/analysis methods to be applied
10. What measurements to include in the screening data summary
11. Data storage/transmission criteria
The data acquisition process employed by each DAU is outlined in the flowcharts provided inFIGS.5A-C. Once the mobile DAU arrives at the test location or the mounted DAU is activated for a data collection session, the data acquisition (DAC) process is controlled by an onboard DAC computer as indicated at step202.
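A minimal sketch of the mm/pixel calibration derived from a fiducial mark follows: given the pixel coordinates of the endpoints of a side of known physical length, the scale is the known length divided by the pixel distance between the endpoints. The coordinates and the 50 mm side length used in the example are illustrative assumptions.

```python
# Minimal sketch of the mm/pixel calibration from a fiducial line segment of known
# length. The endpoint coordinates and 50 mm length below are assumptions.

import math

def mm_per_pixel(endpoint_a, endpoint_b, known_length_mm):
    """Scale factor from a line segment of known length located in the image."""
    dx = endpoint_b[0] - endpoint_a[0]
    dy = endpoint_b[1] - endpoint_a[1]
    pixel_distance = math.hypot(dx, dy)
    return known_length_mm / pixel_distance

# Example: endpoints of a 50 mm fiducial side found 400 pixels apart -> 0.125 mm/pixel.
print(mm_per_pixel((120, 310), (520, 310), 50.0))
```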
The DAC computer will set the sampling rate, sampling duration, and triggering parameters for each test measurement to be performed as outlined in step203. As outlined in steps204through206, the DAC computer adjusts the camera settings for the aperture, shutter speed, gain, and focus of the lens to obtain optimum focus and lighting. If the level of lighting is determined to be insufficient, then the auxiliary light source, if available, will be switched on to add sufficient lighting for the measurement to proceed. The measurement process is described in steps206through209. The DAC computer will make the measurements defined, stabilize or filter the video as needed to remove camera shake, screen the video data as specified, and store data per the data storage specifications. In some applications, a single video recording may be sufficient to capture a good video recording for all measurement locations. In other applications where measurement locations are located at significantly differing distances from the camera, camera settings may need to be adjusted and additional video recordings captured. In some embodiments, the data measured may include machine speeds, vibration waveforms, spectra, cross spectra, and then specific vibration parameters such as maximum peak values, symmetry of the waveform, phase, amplitudes in specific frequency intervals, or amplitudes at the set of N largest frequency peaks at specific measurement locations. Alternately, the DAU may search for objects in the FoV which match components of interest or have been graphically defined from a video frame or photo during an initial baseline measurement to spatially limit the areas in the FoV that will be screened. In some embodiments as described in step210, prior to screening the video data, a check may be executed to detect significant changes in the scene being monitored due to bad environmental conditions or obstructions in the FoV. This check can be accomplished by one of several techniques known to those skilled in the art. Typically, one frame or an averaged frame is compared against an equivalent frame collected during a baseline survey. If significant differences are present, then the FoV has been compromised and the survey measurements will not be meaningful. Another set of data may be obtained to repeat the check; however, if the integrity of the scene in the field of view cannot be established, then a note to this effect would be logged along with the compromised image frame and the DAU will move to the next equipment/structure to be tested. As persons skilled in the relevant art will appreciate, this check to prevent the collection of corrupted survey data could be performed at other points in sequence described in this flowchart, such as before the step217, and remain within the scope of the embodiments described herein. Motion waveforms will be constructed for individual or groups of pixels, edges, the most distinctive features inside the monitored objects, installed target marks, or spatially defined ROIs and auto or cross frequency spectra will be calculated. Features derived from the individual waveforms or spectra may be calculated or alternately, a composite spectrum constructed and the largest N peaks in the composite spectrum may be located. Data from waveforms with unrealistic amplitude values, such as 100 mils or greater, will be excluded from the measured parameters and/or the composite spectra. 
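A simplified sketch of this screening step is shown below: motion waveforms are collected per location, waveforms with unrealistic peak-to-peak amplitudes (100 mils or greater, per the example above) are excluded, and frequency spectra are computed for the remaining waveforms. The array layout and the peak-to-peak test are assumed conventions, not the specific implementation used by the DAC computer.

```python
# Sketch of the waveform validity screening just described: exclude motion waveforms
# with physically unrealistic amplitudes and compute spectra for the rest.
# The data layout (one row per pixel/ROI) is an assumed convention.

import numpy as np

def screen_and_transform(waveforms_mils, fps, pk_pk_limit_mils=100.0):
    """waveforms_mils: 2-D array (n_locations, n_samples) of displacement waveforms sampled at `fps`."""
    w = np.asarray(waveforms_mils, dtype=float)
    pk_pk = w.max(axis=1) - w.min(axis=1)
    valid = pk_pk < pk_pk_limit_mils                          # exclude unrealistic motion
    spectra = np.abs(np.fft.rfft(w[valid], axis=1)) * 2.0 / w.shape[1]
    freqs = np.fft.rfftfreq(w.shape[1], d=1.0 / fps)
    return valid, freqs, spectra

fps, n = 120, 1200
t = np.arange(n) / fps
good = 3.0 * np.sin(2 * np.pi * 29.7 * t)      # 6 mils pk-pk: retained
bad = 80.0 * np.sin(2 * np.pi * 29.7 * t)      # 160 mils pk-pk: excluded
valid, freqs, spectra = screen_and_transform(np.array([good, bad]), fps)
print(valid)                                   # [ True False ]
```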
Additionally, the features of the waveform may be screened to identify features that indicate an invalid measurement, such as highly skewed, truncated, or step-discontinuity characteristics. Waveforms such as these would be ignored but their presence logged to be reviewed by the PdM analyst. As described in step211, the features and techniques defined for each particular test site will be applied to screen the data for suspect conditions. In some embodiments, exceptional data may trigger a repeat of the measurement, with data only retained if the detected suspect conditions persist for M measurements as described in steps212through214. When monitoring rotating or reciprocating equipment, especially equipment which operates at variable speed, the speed of the machine will be measured, if possible. This can be done optically if there are areas of the shaft exposed for the machine under test. When this is not possible, then speed determination algorithms will be applied to the frequency spectrum of the machine to attempt to obtain an accurate value for the speed. This speed value will be used to determine the rotational order of the peaks present in the spectral data, which is necessary to accurately screen data and diagnose faults. In other applications, such as monitoring piping, support structures, or stationary equipment such as tanks and vessels, the measurement of speed is not applicable, and the frequencies of interest do not need to be normalized before analysis. In this application, the DAC computer will screen the vibration waveforms or frequency spectra for displacement/velocity amplitudes that exceed user-defined or learned alarm limits, for changes in other characteristics in the vibration waveform, or for the presence or emergence of significant peaks in the frequency spectrum. If exceptional values are detected in the screened vibration parameters, then the measurements may be repeated to satisfy a persistence criterion (steps212-214). In most embodiments, certain summary data will always be returned from test locations as outlined in step215; however, the video data would only be retained when establishing baseline conditions for new, replaced, or rebuilt equipment or when the screening process reveals suspect conditions. Even if suspect conditions are determined to be present after any persistence requirement is met, some embodiments will compare the current measurement against the previous data stored to detect a significant change before the video data will be stored as described in steps216and217. The stored data is retained in onboard memory in the DAC computer until a wireless network is available. Some or all of the stored data may be transmitted after the measurements are completed at a test site, during a scheduled transmission, when the DAC computer reaches the limits of available memory, or at the completion of a survey. If there are more test sites for the mobile or mounted DAU, then the unit will move to the next test site as described in step218, or mobile units return to their home station and mounted units suspend monitoring and wait for the next scheduled survey as described in step219. The home base for a DAU may be at a central monitoring location or may be located remotely in the field. Most of the vendors of the unmanned vehicles or robots provide docking stations which can be installed in a variety of environmental conditions. In some embodiments, the instrumentation mounted on the DAU may vary depending upon the type of equipment being monitored.
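The alarm screening and persistence criterion described above can be sketched as follows; the peak-to-peak feature, the alarm limit, and M = 3 are illustrative stand-ins for values that would come from the PdM database.

```python
# Sketch of alarm screening with a persistence criterion: a simple peak-to-peak
# feature is compared against an alarm limit, and a suspect condition is declared
# only when the limit is exceeded on M consecutive measurements.
# The limit and M below are illustrative assumptions.

import numpy as np

def exceeds_alarm(displacement_mils, alarm_pk_pk_mils):
    """True when the waveform's peak-to-peak amplitude exceeds the alarm limit."""
    w = np.asarray(displacement_mils, dtype=float)
    return float(w.max() - w.min()) > alarm_pk_pk_mils

def suspect_after_persistence(exceedance_history, m=3):
    """True when the last M screened measurements all exceeded the alarm limit."""
    return len(exceedance_history) >= m and all(exceedance_history[-m:])

t = np.linspace(0.0, 1.0, 2000)
measurement = 4.0 * np.sin(2.0 * np.pi * 30.0 * t)              # 8 mils pk-pk
history = [exceeds_alarm(measurement, alarm_pk_pk_mils=5.0) for _ in range(3)]
print(suspect_after_persistence(history, m=3))                  # True: retain the video data
```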
In some embodiments, the instrumentation package as a single unit is attached to a simple motorized panoramic rotational, pan/tilt mount in order to have more flexibility for capturing data in three spatial dimensions such as illustrated inFIG.6and labelled305and306. In more complex embodiments, individual sensors or sensor groups may be individually mounted on mechanisms to allow independent motion between sensors. In other applications, this flexibility may not be needed, and the mobile DAU may provide enough range of motion such that a rigid mount is sufficient for the instrumentation package. In applications where the instrumentation package is mounted at a fixed spatial location, the motorized panoramic rotational, pan/tilt mount may be used to reduce the number of locations where an instrument package would be installed and reduce the cost and maintenance requirements of the PdM implementation. FIG.6illustrates an exemplary use of a broad range of sensors installed in the instrumentation package. In some embodiments, the specific sensors included in the instrumentation package may be fixed. In other embodiments the sensors included may vary based on the survey to be executed. In this situation, the PdM technician responsible for the mobile DAUs would be alerted by the central PdM server that the next survey scheduled will require an instrumentation change and what sensors or sensor payload is required. Once the Mobile DAU has completed its current survey and returned to its home base, the central server provides a notice to the PdM technician of the time window available for switching to the next set of sensors. The specific sensors needed will vary depending on the type of equipment or structures to be monitored. If the instrument package is being mounted on a few mobile DAU units, then it may be cost effective to have a full complement of sensors and only activate those measurements specified in the PdM database at each monitoring location. It is preferable to utilize non-contact sensors; however, a mobile DAU might access installed wireless sensors or connect to a wireless communication link provided by the process computer to acquire operational state data. These installed sensors or external process links may serve as triggers for data collection or may provide additional data to supplement the interpretation of the other survey measurements. The applications described herein rely on data from a video camera labelled303inFIG.6. In some embodiments, a second visual camera may also be used to provide stereo visual data as indicated by label311. The visual camera(s) will supply the dynamic motion data for the equipment or structure under test. The camera(s) should be aligned with other sensors to allow images to be overlayed such as between the IR camera labelled301or the optional ultraviolet/multispectral camera labelled312and an image from the visual camera. Also, a fault detected by other sensors could be located by the beam of the laser pointer (labelled302) superimposed on the FOV of the visual camera. In some embodiments, one or more airborne ultrasonic sensors, labelled304and310, are employed to provide sensitivity to high frequency phenomena such as impacting events, leaks, and electrical discharges, and are useful in monitoring process equipment, electrical equipment, pipes, and steam traps. Multiple ultrasonic sensors can help locate the source of the high frequency events more accurately. 
The IR camera can capture a single thermal image as well as video of the temperature variations present in its FoV. The detection of unusual temperature conditions is extremely valuable in almost all PdM applications. The magnetic flux probe, partial discharge sensor, or other electrical sensors, labelled308, are valuable for detecting electrical faults in motors, generators, or power transmission equipment. The use of multiple sensors attached to the mobile DAU or even a variable position mounted DAU can be extremely advantageous in accomplishing a comprehensive screening of the equipment or areas under test and minimizes the time involved in collecting PdM data and the number of survey trips required from mobile DAUs. Furthermore, multiple sources of data collected at the same time provide a more comprehensive and interpretable evaluation of any fault conditions present and the severity of the degradation. In some embodiments, the data collected is based on triggering requirements. In these testing scenarios, the data is captured continuously in a circular buffer and only processed if a trigger occurs. The data collection process allows for the duration of pre- and post-trigger data to be specified in the PdM database. The trigger source can be any signal which can be accessed by the DAU, such as a speed signal, process measurement, wired or wireless accelerometer, a sonic/ultrasonic sensor, or an IR temperature sensor. In other embodiments, the trigger source may come from the video signal. There are several types of triggers that can be defined based on the live video signal. One or more ROIs can be defined spatially in the FoV, or mounted targets may serve as the virtual sensors from which one or more triggers can be defined. There is a great deal of flexibility available when specifying a trigger event. The trigger event can be derived based on the overall displacement/velocity measurements calculated from a virtual sensor located in the FoV or the change in those measured parameters. Additionally, frequency-based triggers can be defined based on the displacement or velocity amplitude at a specified frequency or frequency interval. Multiple trigger criteria can be defined on the same virtual sensor or from other sensors. A third method of triggering could result from a change in the pattern of the motion in an ROI. This type of trigger is useful in applications where a repetitive process such as packaging or bottling is occurring, and there is a need to detect jams or other types of process upsets. Another type of trigger could result from a target mounted on a component or a distinctive feature on the component, such as a robotic arm which is performing a repetitive operation. The transitory motion can be tracked so that a trigger occurs at the same point in each cycle, or a trigger can be defined based on deviations occurring in the path of the arm from one cycle to the next. In one of the preferred embodiments, the screening algorithms applied to the video data are user selectable and defined in the PdM database. The screening methodology may differ due to user preference, the type of equipment monitored, or the related production process. The screening methods may include combinations of time waveform parameters, phase readings, or amplitudes at selected frequencies or frequency intervals in a frequency spectrum at selected measurement locations or in a composite frequency spectrum. 
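The following is a minimal, non-limiting sketch of an amplitude-based trigger on a virtual sensor (ROI) with circular buffering of pre- and post-trigger frames, as described above. The ROI metric, baseline tracking, threshold, and pre/post durations are illustrative assumptions rather than values required by the PdM database.

```python
from collections import deque

import numpy as np


def roi_metric(frame, roi):
    """Virtual-sensor value: mean intensity inside a rectangular ROI."""
    r0, r1, c0, c1 = roi
    return float(np.mean(frame[r0:r1, c0:c1]))


def triggered_capture(frames, roi, threshold, pre=30, post=60):
    """Retain `pre` frames before and `post` frames after the first trigger event.

    Frames are held in a circular (fixed-length) buffer until the ROI metric
    deviates from a slowly tracked baseline by more than `threshold`; None is
    returned if no trigger occurs.
    """
    buffer = deque(maxlen=pre)
    baseline, captured, remaining = None, None, 0
    for frame in frames:
        value = roi_metric(frame, roi)
        baseline = value if baseline is None else 0.95 * baseline + 0.05 * value
        if captured is None:
            buffer.append(frame)
            if abs(value - baseline) > threshold:       # trigger event
                captured, remaining = list(buffer), post
        else:
            captured.append(frame)
            remaining -= 1
            if remaining == 0:
                break
    return captured


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frames = [rng.normal(100.0, 1.0, size=(64, 64)) for _ in range(200)]
    for f in frames[120:]:
        f[10:20, 10:20] += 40.0                         # simulated upset inside the ROI
    clip = triggered_capture(frames, roi=(10, 20, 10, 20), threshold=5.0)
    print(len(clip) if clip else "no trigger")          # roughly pre + post frames retained
```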
When screening rotating or reciprocating equipment, it is a preferred practice to screen spatially defined measurement locations. The spatially defined measurement locations are usually positioned close to or on the bearing housing of the component machines. These locations will normally be established by a combination of fiducial targets attached to the machine or by ROIs established graphically by the user during the initial setup of the PdM monitoring database. The PdM analyst who defines the PdM surveys will define the motion parameters to be screened at each measurement location. These will include a combination of time waveform parameters, such as RMS, Peak, or PK-PK amplitude, measures of asymmetry, skewness, or kurtosis, or amplitude histograms. Other parameters will be extracted from frequency spectrum features, such as amplitudes at specific frequencies, the amplitude for a frequency interval, or the largest N peaks in the spectrum. The amplitude of vibration may be expressed in displacement or velocity units. Other monitored parameters might be deviations in phase relationships between components or phase differences between two or more locations or changes from a baseline value. Although rotating or reciprocating equipment might be the main focus at a test site on the survey, there will be additional support/protective structures, piping, valves, tanks, or gauges present in the FoV captured by the visual camera. In some situations, the PdM analyst may elect to define specific spatial locations to be monitored on these objects as described above. However, in other cases, the entire area or spatially limited areas around the rotating/reciprocating equipment may be monitored by screening individual pixels, motion in a pixel grid, or the motion of the most distinctive features in the area. Spatial areas analyzed may be established by user definition, object recognition, edge detection, or a combination of these techniques. One or more of the time waveform parameters identified above may be selected as the features to be screened. Additionally, features as described above from the individual frequency spectra may be monitored. In some applications, the composite spectrum from the selected spatial areas may be constructed as defined in the PdM database and frequencies present in the composite spectrum screened to detect incipient problems. Some methods of construction for the composite spectrum will also provide an occurrence count for the number of pixels or spatial features exhibiting this defect frequency. Regardless of the features screened, a test data summary will be defined to characterize the state of the equipment under test. In cases where no rotating or reciprocating equipment is present in the FoV at the test site, the method described above which does not utilize specific test measurement locations may be applied to the scene. Certainly, when screening rotating or reciprocating equipment, the temperature data from the IR camera and the high frequency information from the ultrasonic sensors are extremely valuable for detecting phenomena associated with antifriction bearings and gears, or loose components. Leaks and steam trap or valve problems are other faults that can be detected using the ultrasonic sensor and IR camera present on the mobile DAU. In some embodiments, a mobile DAU could use multiple ultrasonic sensors and modify the position of the mounting mechanism to pinpoint the location of the leak and use the laser pointer to identify the leak on a visual image. 
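As a non-limiting sketch of the area-based screening and composite spectrum construction described above, per-cell spectra can be computed over a pixel grid and combined into a peak-hold composite along with an occurrence count per frequency line. The grid size, frame rate, normalization, and peak criterion used below are illustrative assumptions.

```python
import numpy as np


def grid_spectra(motion, cell=8):
    """Magnitude spectra of per-cell mean motion.

    `motion` has shape (frames, height, width); non-overlapping cell x cell
    blocks are averaged to form one time waveform (one virtual sensor) each.
    """
    n, h, w = motion.shape
    h, w = h - h % cell, w - w % cell
    blocks = motion[:, :h, :w].reshape(n, h // cell, cell, w // cell, cell).mean(axis=(2, 4))
    waves = blocks.reshape(n, -1).T                     # one row per grid cell
    waves = waves - waves.mean(axis=1, keepdims=True)   # remove the DC component
    return np.abs(np.fft.rfft(waves, axis=1)) / n


def composite_spectrum(spectra, peak_factor=5.0):
    """Peak-hold composite plus, per frequency line, how many cells show a peak."""
    composite = spectra.max(axis=0)
    noise_floor = np.median(spectra, axis=1, keepdims=True)
    occurrence = (spectra > peak_factor * noise_floor).sum(axis=0)
    return composite, occurrence


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    frames, fs = 512, 240.0                             # assumes 240 fps video
    t = np.arange(frames) / fs
    motion = 0.05 * rng.standard_normal((frames, 64, 64))
    motion[:, 16:32, 16:32] += np.sin(2 * np.pi * 29.7 * t)[:, None, None]  # localized 29.7 Hz defect
    comp, occ = composite_spectrum(grid_spectra(motion))
    freqs = np.fft.rfftfreq(frames, d=1 / fs)
    k = int(np.argmax(comp[1:]) + 1)
    print(f"dominant line {freqs[k]:.1f} Hz seen in {occ[k]} grid cells")
```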
Objects with unreasonably high motion, such as an instrument tag flapping in the wind or objects moving through the field of view, will be logged as a survey note and cause these localized areas to be omitted from analysis. In other instances, the presence of adverse environmental conditions such as weather, lighting, or camera shake may trigger a re-collection of the measured data or prevent the completion of data collection and analysis if severe conditions persist. The condition preventing completion of the survey at a test site will be logged as a survey note along with a visual image. In other monitoring applications such as robotic arms, stamp presses, or cranes, there is translational motion present which is designed to occur along a repeatable path or can be programmed to perform a repeated test operation. In this case, the path of the translational motion is separated from the vibratory motion which occurs along the path. The repeatability of the translational path can be compared against the original baseline path and features extracted from the waveform and spectrum of the vibratory motion measured over the entire path or for different sections of the path. These features can also be screened to detect deterioration in the operation of the equipment. This type of application may take advantage of triggered measurements based on one or more ROIs defined in the visual image or from one of the supplemental measurements such as an acoustic or ultrasonic sensor. Another monitoring application where triggered data capture would be important is in packaging or bottling process applications. In this application, the user may establish an amplitude- or frequency-based trigger or a pattern-based trigger from a spatially defined ROI in the FoV of the camera. The video data would be captured in a circular buffer until the pattern present in the ROI changes; the specified amount of pre-trigger data is then retained, and recording continues until the specified post-trigger data is acquired. These triggers could be established based on values specified by the user or learned from observing the process for a period to establish normal behavior. This approach would be necessary for processes that are subject to variable production speeds. In bridge monitoring applications, the data collection may be triggered using the visual camera on board the DAU, or it may be triggered by a mounted accelerometer or a camera that can transmit data wirelessly to the DAU. In these applications, it would be much more informative to collect data when vehicles are on or exiting the bridge. Additionally, it may be important to characterize the type and magnitude of the load being applied to the bridge in order to properly screen the measured data. Another PdM use case for the mobile DAU or a variable position mounted DAU is electrical switchyards or substations. The substation is an assembly of transformers, switches, power circuits, breakers, electrical lines, and auxiliary equipment to support the transmission of electricity. In this scenario, airborne ultrasonic sensors, electrical sensors, an IR camera, an ultraviolet or multispectral camera, and the visual camera would provide valuable information. The airborne ultrasonic sensors, electrical sensors, and the ultraviolet or multispectral camera would be sensitive to corona or other types of electrical discharge phenomena. 
The IR camera would provide the ability to detect hotspots on the equipment and the visual camera could detect excessive motion in lines or supporting structures as well as assist in identifying the location of faults detected by the other monitoring devices. All features extracted from the measured data must be evaluated against alarm limit values. In some cases, these limits may be defined by the PdM analyst based on his experience, by previous measurements, or by industry-established guidelines. For example, there are guidelines for overall vibration levels which have been established for piping and rotating/reciprocating equipment by industry groups. Similarly, there are guidelines which have been established for temperatures when screening plant equipment and electrical panels using an IR camera. In other cases, it is preferable or necessary to establish baseline values and alarm limits during an initial monitoring period and learn the normal variations that occur in the data. After the learning or training period, the screening will compare new measurements to see if they are outside of the range of normal variation. Learned alarm limits may be different for different operational states if those states can be established by onboard sensors, external wireless measurements, or through data links with production/operational control systems. If the screening begins to detect suspect behavior but it is determined that the equipment is behaving normally, then additional learning sessions may be needed to account for variations in the data that have not previously been encountered by the monitoring system. There are statistical and artificial intelligence methods well known to those skilled in the art that may be employed to establish the limits of normal behavior from data collected during the learning or training period. As discussed herein, the PdM database will define the persistence requirements for each test site. This is an important feature for reducing false alarms and avoiding the storage of unnecessary video recordings. Something as simple as a person or a vehicle crossing the line of sight or obstructing the view between the DAU and the equipment under test will generate false alarms. The detection of fundamental changes in the scene being monitored can be accomplished by performing a correlation between a frame collected during baseline measurements and the current scene available to the DAU. This check could be done before data is collected as a precursor to initiating the detailed survey of the equipment under test or as a test quality check when suspect conditions are detected. In the event that the scene in the FoV shows significant deviation from the baseline scene, the test will not be completed and a note of this issue with the distorted image will be logged. (A non-limiting sketch of this learned-limit and baseline-scene check logic is presented following the fleet management discussion below.) Finally, when the suspect condition is found to persist as specified in the monitoring requirements, test data must be compared against the last survey collected to determine if there have been significant changes. If not, then the alarm conditions are noted as substantially unchanged and only summary data are stored, but no video data is retained. If significant changes are detected, then this is noted and all data including the video data is stored. The PdM database will normally have the monitored plant equipment broken up into many surveys with their own schedule and repeat interval. The large amount of storage required when retaining video recordings will mean that the complete PdM database cannot be retained onboard DAUs. 
However, DAUs will reload the setup information and alarm limits from the PdM database, as well as data from the baseline and latest survey measurements, for the next scheduled survey. These mobile DAUs must return to their home station on a regular basis to recharge their batteries, and data communications could occur at this point, in the field through wireless communications as the different test sites are completed, or at the next opportunity when a communications link is available. Field transmissions are preferable since they provide real-time status of the survey in progress. In some embodiments, it would be possible to retain all of the video data and other survey measurements onboard the DAU as storage options continue to improve. In this scenario, the processing to determine if the complete set of measurement data is to be retained in the PdM database could be done by the DAU prior to transfer at the home base or even on the central PdM server. The processing techniques to screen the data and determine if the full data set or a simplified summary is retained would be identical regardless of when and where this processing is applied. In situations where a fleet of mobile DAUs is available to perform the monitoring tasks, the central PdM operations system, i.e., the central PdM server which is operatively connected to the master PdM database, maintains a status chart of each DAU and determines which survey tasks are assigned to a specific DAU based on various factors. These factors may include, but are not necessarily limited to, availability, amount of run time logged for the unit, and the current battery charge. If DAUs differ in the sensor payload mounted, then the central PdM operations system will select the DAU which has the correct sensor payload for the survey task under consideration. The central PdM operations system performs other tasks, including but not limited to scheduling DAUs for periodic maintenance and calibration checks; alerting technicians when DAUs have been damaged, stranded in the field, or have failed to complete their assigned PdM survey tasks; and applying optimization logic to the fleet of DAUs in an effort to maximize the number of survey tasks performed and the service life of the DAUs. An exemplary process for optimizing the use of the DAU fleet is outlined in a flowchart inFIGS.7A-B. The information required to manage the use of the DAU fleet is stored in the PdM database and the functions outlined in the flowchart are embodied in the operational software that runs on the central PdM server. The configuration of each of the DAUs in the fleet must be established, including mobile capability and range, sensor payload, runtime hours, current operational state, any previous maintenance actions required, battery capacity, battery level, and current state of readiness, as defined in step401. The equipment sites to be surveyed, the data collected at each survey site, and all subsequent processing methods to be applied are initially defined in step402and updated as the requirements of the PdM program are refined over time. The equipment sites to be surveyed are organized into survey routes, and a survey schedule and applicable DAU requirements are defined and stored in the PdM database in step403. 
This step may include using the vendor software for the appropriate type of DAU to construct a file that defines the geographical route to be followed to perform the survey and to determine the time to navigate the route, which is combined with the measurement times to establish an estimated duration for each survey. In step404, a particular DAU is assigned by the PdM server, generally by default, to a particular survey route to cover all defined test sites. Although a default assignment is made, step404may also allow for operational software executed by the PdM server to select a DAU for a particular survey based on various factors. These may include fleet availability, uniform usage of DAUs in the fleet, and other factors aimed at enabling the survey tasks to be performed in a manner that maximizes the likelihood that the mission requirements will be fulfilled on time, effectively, and efficiently. In step405, the selected DAU performs the assigned survey by loading the appropriate route file and moving to the initial or next survey test site. If the DAU does not arrive at the test site (406), then the problem is logged and reported to the central PdM server (407). The DAU will attempt to move to the next site (408) if more exist or return to its home base (412), if physically able to do so. If the DAU does arrive at the survey test site, then it will collect and process the survey measurements for this site in step409. If all survey measurements are completed (410), then the DAU will attempt to continue on the route or return to its home base as defined in step412. Otherwise, at step411, if more monitoring is to occur, the series of steps beginning with step405will be repeated. If the DAU arrives at its home base, it will connect to its recharging station and transmit survey statistics, such as duration of survey, power used during survey, notes of bad test conditions, sites where measurements were incomplete, and the current runtime of the DAU, as shown in step413. In the final step414, the central PdM operations software will send out alerts for needed maintenance for the DAU and update survey performance parameters to refine criteria used by the DAU scheduling software. Persons of ordinary skill in this art also will understand that certain conditions or situations may arise that will prevent the mobile DAUs from completing their survey. These may result from low battery power, malfunctions in attempting to position the mobile unit, obstructions, the loss of the ability to maneuver, or physical damage to the DAU. The presence of these conditions would generate a transmission to the central PdM server with location information and an attempt to return to the home base. In some cases, a unit may become stranded, and a distress signal will be generated on a periodic basis to assist with locating the unit. In some embodiments, additional data may be needed to verify a fault condition or diagnose the specific fault condition. A PdM analyst will be notified that a suspect condition has been detected, either during or at the conclusion of the survey once the data has been transferred back to the PdM server or, in other implementations, by a communication sent directly to the analyst from the DAU as soon as the condition is detected. The analyst has the option of interrupting the survey in progress or sending the DAU back to a survey location and taking control of the DAU to acquire additional measurements. 
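Returning to the learned alarm limits and baseline-scene correlation check described above, the following non-limiting sketch shows one simple statistical approach. The k-sigma limits, the correlation threshold, and the example feature set are illustrative assumptions and not requirements of this disclosure; the statistical or artificial intelligence method actually employed is selectable by the PdM analyst.

```python
import numpy as np


def learn_limits(training_values, k=3.0):
    """Learn per-feature alarm limits as mean +/- k standard deviations."""
    data = np.asarray(training_values, dtype=float)     # shape (samples, features)
    mean, std = data.mean(axis=0), data.std(axis=0, ddof=1)
    return mean - k * std, mean + k * std


def outside_limits(values, limits):
    """Indices of features that fall outside the learned range of normal variation."""
    lo, hi = limits
    v = np.asarray(values, dtype=float)
    return np.where((v < lo) | (v > hi))[0]


def scene_unchanged(baseline_frame, current_frame, min_corr=0.8):
    """Correlate the current scene with the baseline frame to catch obstructions."""
    a = baseline_frame.astype(float).ravel()
    b = current_frame.astype(float).ravel()
    return float(np.corrcoef(a, b)[0, 1]) >= min_corr


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    history = rng.normal([0.8, 2.0], [0.05, 0.2], size=(200, 2))  # e.g., RMS and peak values
    limits = learn_limits(history)
    print(outside_limits([0.82, 3.5], limits))          # peak is well above the learned range
    scene = rng.integers(0, 256, size=(120, 160)).astype(float)
    blocked = scene.copy()
    blocked[:, :80] = 0.0                               # half of the view obstructed
    print(scene_unchanged(scene, scene), scene_unchanged(scene, blocked))
```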
The analyst can remotely control the position of the DAU and view the recordings from the camera in real time. The analyst may also take control of the DAU and direct additional measurements to be collected. Once the analyst has completed collecting the desired measurements, the DAU would continue with the survey in progress or return to its home base. It will be understood that the embodiments described herein are not limited in their application to the details of the teachings and descriptions set forth, or as illustrated in the accompanying figures. Rather, the present embodiments and alternatives, as described and claimed herein, are capable of being practiced or carried out in various ways. Also, it is to be understood that words and phrases used herein are for the purpose of description and should not be regarded as limiting. The use herein of such words and phrases as “including,” “such as,” “comprising,” “e.g.,” “containing,” or “having” and variations of those words is meant to encompass the items listed thereafter, and equivalents of those, as well as additional items. Accordingly, the foregoing descriptions of several embodiments and alternatives are meant to illustrate, rather than to serve as limits on the scope of what has been disclosed herein. The descriptions herein are not intended to be exhaustive, nor are they meant to limit the understanding of the embodiments to the precise forms disclosed. In terms of the descriptions, it will be understood by those having ordinary skill in the art that modifications and variations of these embodiments are reasonably possible in light of the above teachings and descriptions.
43,073
11861821
DETAILED DESCRIPTION FIG.1outlines the general components of the invention and the basic process. Facing the front of the lumber pack1, the user prepares to take a picture of the bundle to be measured. Using his/her smart phone, tablet, or other mobile device with a built-in camera2, the user snaps a picture of the bundle from the SnapTally application running on the device. If the picture is satisfactory, the user confirms, and the picture is saved for the detection process. Next, the user initiates the “Detect” function in the application against the picture taken. The detection process initiates algorithms based on the artificial intelligence models trained for identifying individual objects, and detected items are displayed within bounding boxes on the screen3. FIG.2illustrates the application screen1upon completion of the detection process, containing the description of the product2(entered by the user) and all the detected items having bounding boxes drawn around them3. The user may inspect and further edit, add, or remove boxes if necessary. The user initiates the “Product” function to specify the type of product, its quality, and any other details related to this work. He/she then proceeds to run the actual measurement of the detections with the “Measure” function. In measuring each item, the pixels in the image are evaluated, and based on the boxes drawn around each object, the actual measurement value is computed and assigned to each of the pieces. The application displays product information and summary data resulting from the identification and measurement of the bundle4. All the tally details are shown with thickness, length, width, number of pieces, and volume5. The application finally computes all the individual volumes and a summary of the lumber bundle. This data is saved on the device and can be managed, edited, or re-detected later if necessary. Captured data can be uploaded or transmitted to a server system for further processing. It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, steps or components but does not preclude the presence or addition of one or more other features, steps, components or groups thereof. Description of the Invention The foregoing disclosure and the claims describe the preferred embodiments of the present invention. Particularly with respect to the claims, it should be understood that changes may be made without departing from its essence. In this regard, it is intended that such changes would still fall within the scope of the present invention. To the extent such revisions utilize the essence of the present invention, each naturally falls within the breadth of protection encompassed by this patent. The present invention SnapTally addresses the need to take measurements of products quickly and accurately. The system utilizes a high-definition camera to capture the image of the object; it processes the image and produces data related to the measurements and counts within the image. The main objective is to provide measurement and count data of objects in the real world by simply taking pictures. Benefits While a manual measuring operation is the most inefficient method, and other alternative solutions require apparatus and equipment that may be costly and impractical to install, the SnapTally invention can be employed by anyone with a smart phone, tablet, or another mobile device with a camera; it is practical to deploy, and it produces results quickly. 
Architecture and Methods The SnapTally system is based on two major components: a mobile device with a camera and the SnapTally application. The application runs on smart phones, tablets, and other devices with Android operating systems; however, other operating systems can also be supported. A built-in camera, or an external camera, is required to snap pictures, and network connectivity enables data transmission and processing. General Specifications Hardware and Equipment: Smartphone, Tablet, Rugged Handheld Device, or Mobile Device; Built-in or External Camera; Wi-Fi and/or GSM. Operating System: Android. Mobile Software: Sierra SnapTally to detect, measure, and manage related data. Process The SnapTally mobile software captures an object's image using the device's built-in camera, and saves the image on the device; a built-in flash or an external flash, as well as an external camera, can be used to aid in improving image quality. The object being measured is a package of lumber boards. The system recognizes and marks each individual board within the pack and measures its width. The lumber package for measurement is shown inFIG.1. The picture is snapped from the front end of the package of lumber, the face showing the widths of the boards to be measured. The image is then submitted to be processed, either on the mobile device or on a server running a model of a Machine Learning Library with neural networking and algorithms used in identifying pieces of objects contained in the picture. The model resolves each object and returns the data related to the detection performed. The image is displayed for the operator with all the object representations drawn as boxes around each piece. (SeeFIG.2) The user supplies primary data to convert and compute image information to actual measurement data; the system uses such data to compute product total volume and count. In most applications, the length data for a package is a fixed value, and the thickness is part of the product identification specified by the user.FIG.2sections4and5show all measurements of individual pieces and totals displayed on the screen. The operator may also add or edit objects manually as required, enter product information, and save and/or transfer the data for continued processing. Comparison to Other Inventions U.S. Pat. No. 5,307,294 “Automated End Tally System” is designed to perform the task of measuring lumber boards. This system requires sophisticated mechanical and electronic equipment to be installed. In comparison, the present invention requires no such costly installations; the only equipment used primarily is a handheld device with a camera and the software for detections. The prior art also requires lifting and moving heavy bundles of lumber, whereas the present invention performs all measuring on location without moving the objects. U.S. Pat. No. 7,431,212 “Practical mobile data collection and processing system with bar-coded tally ruler” is another invention that measures and counts products. The system uses a bar-coded ruler to scan each individual piece, whereas the present invention performs the measuring by detecting all the objects at once. Development The mobile application that drives the process of detection and management of data is built for the Android® platform, and the Java language has been used to develop the system. The application utilizes a local database management system, SQLite, to store and manage data related to the application. The application is easily portable to other platforms such as Apple iPhone®. 
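As a non-limiting illustration of the measurement step described above, detected bounding-box widths in pixels can be converted to real-world widths using a scale derived from user-supplied primary data, with the fixed package length and the user-specified thickness completing the volume computation. The scale derivation, the board-foot convention, and the example values below are illustrative assumptions rather than the application's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    x_min: float   # bounding-box pixel coordinates returned by the detector
    x_max: float


def tally(detections, inches_per_pixel, board_length_ft, thickness_in):
    """Convert detected board widths in pixels into a simple lumber tally.

    Board-foot volume uses the usual convention:
    thickness (in) x width (in) x length (ft) / 12.
    """
    rows = []
    for det in detections:
        width_in = (det.x_max - det.x_min) * inches_per_pixel
        volume_bf = thickness_in * width_in * board_length_ft / 12.0
        rows.append({"width_in": round(width_in, 2), "volume_bf": round(volume_bf, 2)})
    total = round(sum(r["volume_bf"] for r in rows), 2)
    return rows, total


if __name__ == "__main__":
    # Hypothetical detector output for four boards seen end-on.
    boxes = [Detection(102, 188), Detection(195, 300), Detection(310, 371), Detection(380, 520)]
    # Example scale: a reference feature of known width (say 7.25 in) spanning 120 px.
    rows, total_bf = tally(boxes, inches_per_pixel=7.25 / 120,
                           board_length_ft=8.0, thickness_in=1.0)
    print(rows)
    print("total board feet:", total_bf)
```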
On the server platform, a Machine Learning library model that has been specifically trained to process object detection is configured to service the mobile application. The server accepts images from the mobile application and provides coordinates of objects upon detection. A compact version of the Machine Learning model is also available to process images without the server. Object Detection One of the components of the invention is the object detection process to identify the items in the image presented. The methods utilized are explained in the following sections: Training a Custom Object Detection Model An Object Detection Model is part of an Artificial Intelligence system that includes a deep-learning network. The model must be trained to detect specific types of objects. A custom model is generally based on a framework of available models that are further trained and/or customized. TensorFlow is a well-known and widely utilized open-source platform with libraries and tools for machine learning that is provided by Google. Libraries contain various object detection models, as well as other features such as speech. TensorFlow is available for download as open-source software. Among various object detection models available for implementation, the invention utilizes a specific model as described below: Faster_RCNN: Faster Region-based Convolutional Neural Network. This base model performs better for object detection of smaller objects. Using this model does not limit the invention's technical architecture from using other models, combinations of different models, or other computer vision technologies now or in the future. The Faster RCNN model is the current technology utilized in object detection at the time of the filing of the patent. In order to train an object detection model, one must first annotate the objects being presented for training. This is one of the most painstaking and important tasks in object detection, and a key component of the invention. The present invention relies on a large collection of images taken on location at actual customer/user sites. Each individual object in the image is then annotated, with a box drawn around it to identify it. A single image may contain hundreds of objects; the training collection consists of thousands of images, both original and augmented, for performance improvements. Once the annotations are completed, the actual machine learning process starts. The data is presented to machines and software designed for learning about the data; this process involves testing, modifying parameters for improvements, presenting additional data, and training on multiple cycles until valid results are obtained. Fine-tuning for a specific application, such as that intended in the present invention, can take months or years. Therefore, while object detection has been a popular catch phrase utilized in many different practical applications, in the present invention it represents a focused solution and requires highly specialized concentration, algorithms, effort, and know-how to provide a particular industrial application. Serving the Model for Inference Once satisfactory results are obtained from training and testing, a trained model is generated to serve and produce consistent detection of objects for the application. In the present invention, SnapTally, such a model is presented with images and returns all of the objects contained within a particular image, with coordinates. 
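By way of a non-limiting sketch, serving such a model could take the form of a small HTTP endpoint that accepts an image posted by the mobile application and returns bounding-box coordinates. The endpoint name, payload fields, and the placeholder run_detector function below are illustrative assumptions that stand in for the trained Faster R-CNN model; they are not the actual SnapTally service.

```python
import numpy as np
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)


def run_detector(image_array):
    """Placeholder for the trained detection model.

    A real implementation would run the exported detection graph and return one
    bounding box per board; here a fixed example response is returned so the
    service can be exercised end to end.
    """
    h, w = image_array.shape[:2]
    return [{"y_min": 0.40 * h, "x_min": 0.10 * w,
             "y_max": 0.55 * h, "x_max": 0.22 * w, "score": 0.97}]


@app.route("/detect", methods=["POST"])
def detect():
    # The mobile application posts the snapped picture as multipart form data.
    image = Image.open(request.files["image"].stream).convert("RGB")
    boxes = run_detector(np.asarray(image))
    return jsonify({"detections": boxes})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The same placeholder function could equally be replaced by a compact on-device model, consistent with the compact version of the Machine Learning model mentioned above.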
The platform of the service can vary depending on the desired objectives and configuration; this service can also run on a mobile device. Summary and Status The SnapTally system is continually being improved; additional capabilities and features may be added, and existing algorithms may be optimized through developed versions of the solution. However, the objective of the invention and the method of producing real-world measurements from images remain the same: detecting all of the pieces and measuring them accurately. Major features included are: capturing images via camera, Object Detection, Measure and Compute, Data Management, Editing and Adding Objects, Uploading to Cloud, and Label Printing. CONCLUSION The ability to measure, count, and manage product information, particularly for wood and lumber inventories, presents a unique challenge. A fast and accurate method is required to keep up with business demands as products are continually on the move. The SnapTally invention is unique and offers new methods by simply snapping a picture of a product to detect and measure objects within the image; in comparison to other image-based detection systems, SnapTally does not require special apparatus or equipment. The system works with smart phones, tablets, and other devices with built-in cameras. SnapTally empowers users and managers and brings an effective solution to the problem of measuring and counting inventories.
12,007
11861822
It is noted that the drawings are illustrative and are not necessarily to scale. DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS OF THE DISCLOSURE Example embodiments consistent with the teachings included in the present disclosure are directed to an image recognition device and method which retrieve information on a marker by imaging recesses on the marker. According to an embodiment shown inFIG.1, an image recognition device10captures and processes an image of a marker12having recesses13and non-recesses15, as shown inFIGS.1and3. The marker12can be fastened to an asset such as a pipe60. Other examples of assets include pipelines, storage tanks, etc. The marker12can be fastened adjacent to a weld joint or seam62of the pipe60. As used herein, the term “joint” refers to a joining of two pipe members to create an overall pipe. The image recognition device10captures the image of the marker12with recesses13within a field of view (FOV)32of a camera28of the device10. By capturing and processing the image, the device10extracts information identifying the weld seam62. The recesses13and non-recesses15encode the information identifying the weld seam62. Alternatively, the information identifies the asset itself. The information can be encoded as a binary pattern. In certain embodiments, the binary pattern can be further encrypted. In a particular implementation consistent with this aspect of the disclosure, the further encryption is performed using an obfuscation procedure. The obfuscation procedure can be a hash function, for example, using a hash table. As will be understood by those of skill in the art, when further obfuscation is to be used, a different encryption process can be employed, including any known operation on the information to obfuscate it. Referring toFIGS.1-2, the image recognition device10includes a processor20, a memory22, an input device24, an output device26, and the camera28. The memory22is accessible by the processor20. The memory22can store predetermined software30including a plurality of modules each comprising code which is executed by the processor20to operate the image recognition device10. The predetermined software30can be composed of at least one module configured to perform a specific function, as described below. The input device24can include a keyboard, a keypad, a mouse, a touchscreen, or combinations thereof. The output device26can include a display. The display can be incorporated into a touchscreen. The input device24and the output device26can implement a graphical user interface. The image recognition device10can be implemented in any computing device, such as a handheld device. For example, the image recognition device10can be implemented in a smartphone, and the predetermined software30can be an app. Alternatively, the image recognition device10can be a tablet or a desktop personal computer (PC) running an application implementing the predetermined software30. Referring toFIG.3, the recesses13can extend through at least a top surface52of the substrate of the marker12. Alternatively, the recesses13are holes which extend through both the top surface52and a bottom surface54. The substrate can be composed of a radiopaque material, including metal such as lead. Accordingly, the camera28can include a radiographic imager, such as an X-ray camera. By using a radiopaque material, the marker12can be used during inspection of pipes and welds at joints employing radiography such as X-rays, since the marker12will scatter X-rays. 
Alternatively, the camera28can be an optical camera operating using visible light to capture an image of the recesses13and non-recesses15in visible light wavelengths. The binary punch marker12can be generally cuboidal in shape with a rectangular cross-section as shown inFIG.3. It is understood that the marker12can have any other shape. It is also understood that the marker12can have any other cross-section such as a circular cross-section. The recesses13can be circular or any other shape, such as a polygon. The marker12has a thickness56, as shown inFIG.3. The thickness56can be adjusted to the thickness of the asset, such as the underlying pipe to which the marker12is attached. The adjustment of the thickness56can provide sufficient contrast of the recesses13relative to the asset. For example, the thickness56can be less than 1 mm while still providing sufficient contrast. Such a thickness56has recesses13with sufficient contrast even with the marker12placed on top of several centimeters of steel. When the marker12is placed on a curved surface, such as the surface of the pipe60, the marker12can deform to complement a general curved shape of the surface, since the marker12is composed of metal such as lead. Accordingly, as shown inFIG.4, although a first set70of recesses13can remain substantially circular, a second set72of recesses13can deform to not be fully circular. In addition, the camera28views the set70of recesses13substantially straight on with no angular displacement. However, due to the complementing of the marker12to the curved shape of the surface, the camera28views the set72of recesses13at an angle, and so the set72of recesses13do not appear fully circular. Accordingly, as shown inFIGS.4-5, the recess13appears to the camera28as a narrow recess74with partial occlusion76. When processing the image of the recesses13and non-recesses15in a marker12, the image recognition device10overlays horizontal lines78and vertical lines80to form a mesh82, as shown inFIG.6. Using the mesh82, the locations of the recesses13and the non-recesses15can be determined. As shown inFIG.7, a method200is performed for image recognition of information stored by recesses13and non-recesses15of a marker12. The method200includes the step210of scanning the marker12having the recesses13and non-recesses15using the image recognition device10. The device10then generates an image of the marker12. For example, the image recognition device10includes the camera28. The camera28can be a radiographic camera configured to capture an image, for example, using X-rays. Alternatively, the camera28can be an optical camera configured to capture an image, for example, using light in visible wavelengths. The method200also includes the step220of mapping contrast in the image using a contrast mapping module to generate a set of contrasts from the image. The contrast can be an optical contrast of optical features in the image. Alternatively, the contrast is a variation of a parameter used to encode the information identifying the weld seam62as the recesses13and non-recesses15in the marker12. The method200further includes the step230of identifying variations in the contrast using a contrast variation identifying module to identify variations in the set of contrasts. The variations in the contrast can be determined relative to a predetermined threshold. Alternatively, the variations in the contrast can be determined relative to illumination of the marker during the image capturing by the camera28. 
In a further embodiment, the variations in the contrast can be determined relative to exposure of the marker during the image capturing by the camera28. In addition, the method200includes the step240of creating a mesh82using a mesh creation module. The mesh creation module responds to the variations in the contrast to determine at least one recess13. Using the at least one recess13, the mesh creation module is configured to create the mesh82. The mesh82overlays the image to locate the position of all possible recesses13in the marker12. The mesh82can have equally spaced lines. Alternatively, due to deformation of the marker12as described above, the mesh82can have lines spaced at irregular intervals. For example, referring toFIG.6, the mesh line through the set72of recesses may be closer to a nearby mesh line than such mesh lines through and near the set70of recesses. The method200also includes the step250of identifying the presence or absence of recesses13using a recess identifying module. The recess identifying module determines where in the mesh recesses13are expected to be. For example, the recess identifying module can measure the contrast along each of the lines78,80inFIG.6. In one embodiment, if the contrast is greater than a predetermined threshold, the recess identifying module determines that a recess13is present, as inFIG.3. Otherwise, the recess identifying module determines that a non-recess15is present, as inFIG.3. The recess identifying module then creates a transformed image with determined recesses13and non-recesses15. The method200further includes the step260of reading, from the transformed image, the binary pattern of the identifying information represented by the recesses13and non-recesses15in the marker12using a reading module. The reading module decodes the binary pattern of the recesses13and non-recesses15to extract the information identifying the weld seam62. The reading module can also decrypt the binary pattern if such identifying information has been encrypted. For example, the decryption can be a reverse hash function matching the original hash function. In one embodiment, the reverse hash function can use the original hash table to reverse the original hash function. Alternatively, the decryption can match the encryption process used to encrypt the identifying information. Portions of the methods described herein can be performed by software or firmware in machine readable form on a tangible (e.g., non-transitory) storage medium. For example, the software or firmware can be in the form of a computer program including computer program code adapted to cause the fabrication system to perform various actions described herein when the program is run on a computer or suitable hardware device, and where the computer program can be embodied on a computer readable medium. Examples of tangible storage media include computer storage devices having computer-readable media such as disks, thumb drives, flash memory, and the like, and do not include propagated signals. Propagated signals can be present in tangible storage media. The software can be suitable for execution on a parallel processor or a serial processor such that various actions described herein can be carried out in any suitable order, or simultaneously. 
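By way of a non-limiting sketch, the mesh-based reading of steps240through260might be carried out as follows on a contrast image of the marker. The regular grid geometry, the fixed threshold, and the row-major bit ordering are illustrative assumptions rather than requirements of the disclosure, and the optional decryption step is omitted.

```python
import numpy as np


def read_marker(image, rows, cols, threshold):
    """Decode a recess/non-recess pattern from a contrast image of the marker.

    The mesh is assumed to be a regular rows x cols grid over the image; a cell
    whose mean intensity exceeds `threshold` is treated as a recess (bit 1) and
    any other cell as a non-recess (bit 0).
    """
    h, w = image.shape
    r_edges = np.linspace(0, h, rows + 1).astype(int)
    c_edges = np.linspace(0, w, cols + 1).astype(int)
    bits = []
    for i in range(rows):
        for j in range(cols):
            cell = image[r_edges[i]:r_edges[i + 1], c_edges[j]:c_edges[j + 1]]
            bits.append(1 if cell.mean() > threshold else 0)
    return bits


def bits_to_id(bits):
    """Interpret the bit pattern (row-major) as an integer identifier."""
    return int("".join(str(b) for b in bits), 2)


if __name__ == "__main__":
    # Synthetic 2 x 4 marker image in which bright cells stand in for recesses.
    pattern = np.array([[1, 0, 1, 1],
                        [0, 1, 0, 1]])
    image = np.kron(pattern, np.ones((20, 20))) * 200.0 + 20.0
    bits = read_marker(image, rows=2, cols=4, threshold=100.0)
    print(bits, "->", bits_to_id(bits))                 # [1, 0, 1, 1, 0, 1, 0, 1] -> 181
```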
It is to be further understood that like or similar numerals in the drawings represent like or similar elements through the several figures, and that not all components or steps described and illustrated with reference to the figures are required for all embodiments or arrangements. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “contains”, “containing”, “includes”, “including,” “comprises”, and/or “comprising,” and variations thereof, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Terms of orientation are used herein merely for purposes of convention and referencing and are not to be construed as limiting. However, it is recognized these terms could be used with reference to an operator or user. Accordingly, no limitations are implied or to be inferred. In addition, the use of ordinal numbers (e.g., first, second, third) is for distinction and not counting. For example, the use of “third” does not imply there is a corresponding “first” or “second.” Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. While the disclosure has described several exemplary embodiments, it will be understood by those skilled in the art that various changes can be made, and equivalents can be substituted for elements thereof, without departing from the spirit and scope of the invention. In addition, many modifications will be appreciated by those skilled in the art to adapt a particular instrument, situation, or material to embodiments of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiments disclosed, or to the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims. The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes can be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the invention encompassed by the present disclosure, which is defined by the set of recitations in the following claims and by structures and functions or steps which are equivalent to these recitations.
13,463
11861823
DETAILED DESCRIPTION Lens surface build-up (deposition) may be a factor in determining lens-wear comfort and success. Therefore, there is considerable interest in easy-to-use methods and devices for reproducing, detecting, and quantifying lipid/protein deposition on contact lenses suitable for clinical settings. Described herein are microfluidic devices (including device design, manufacture, test, and usage) and methods for quantifying contact lens deposition using μL or sub-μL human tears and small contact lens coupons. Microfluidic Device Described herein is a microfluidic (e.g., lab-on-a-chip) system and method to evaluate the interaction between a contact lens and tears, which may enable the reproduction and quantification of tear deposits on small contact lens samples. The microfluidic device may be made of polydimethylsiloxane (PDMS), polystyrene, acrylic, cyclic olefin copolymer (COC), etc. by injection molding, replication molding, milling or lithography. Systems and devices of the present disclosure may comprise or be embodied as a lab-on-a-chip system including a fluid source and pressure sources (and associated pneumatic components), and further including one or more of: fluid handling (e.g., solution handling and flow control, reagent handling, sample handling), power supply/generation (e.g., port for power input, integrated battery, and/or the like), system control (e.g., a processor/microprocessor, a memory, electrical and pneumatic communication lines, and/or the like), communication (e.g., wired communication, wireless communications such as via Bluetooth), and/or the like. Example Microfluidic Chip Fabrication FIG.1Ashows an example microfluidic chip fabrication process.FIG.1Bshows an example schematic of an inset of a fully assembled chip. As shown, a microfluidic chip100may comprise one or more layers. As an example, the microfluidic chip100may comprise one or more of a fluidic layer120, a well layer104, or a substrate106(e.g., glass slide), or any combination thereof. The well layer104may be disposed on the substrate106. The well layer104may be interposed between the fluidic layer120and the substrate106. The fluidic layer120may comprise polydimethylsiloxane (PDMS). However, other materials may be used, such as, but not limited to, polystyrene, cyclic olefin copolymer (COC), acrylic, polyurethane, polypropylene, polycarbonate, or glass, or a combination thereof. The well layer104may comprise polydimethylsiloxane (PDMS). However, other materials may be used to form one or more of the layers120,104,106, such as polystyrene, cyclic olefin copolymer (COC), acrylic, polyurethane, polypropylene, polycarbonate, or glass, or a combination thereof. The well layer104may comprise one or more wells105configured to receive a material such as fluid and/or sample test materials (e.g., a sample contact lens). The one or more wells105may be configured to retain a volume of fluid therein. The one or more wells105may be configured to receive one or more samples such as a contact lens sample108. The contact lens sample may be or comprise a contact lens or a portion (e.g., coupon) of a contact lens. As an example, one or more of the wells105may be sized to receive the contact lens sample108or a portion of a contact lens sample108. One or more of the wells105may be sized to retain the contact lens sample108or a portion of the contact lens sample108. The fluidic layer120(and/or a control layer102(FIG.1B)) may comprise an inlet110and an outlet112. 
The inlet110may be defined by an orifice that allows passage of a fluid therethrough. The outlet112may be defined by an orifice that allows passage of a fluid therethrough. When the microfluidic chip100is assembled, the inlet110may be in fluid communication with the outlet112via a fluid conduit115or passage. The fluid conduit115may be at least partially formed in or defined by one or more of the control layer102(FIG.1B), fluidic layer120, or the well layer104. The fluid conduit115may be defined by at least a portion of one or more of the fluidic layer120or the well layer104. As an example, the well layer104may be disposed on the substrate106. One or more surfaces of the well layer104and/or the fluidic layer120may be plasma treated. One or more contact lens samples108may be disposed in respective wells105. The fluidic layer120may be aligned with the well layer104and sealed against the well layer104such that the inlet110and outlet112are in fluid communication with the one or more wells105(e.g., via the fluid conduit115). FIG.1Billustrates the assembled microfluidic device100. As shown, the well layer104is disposed on the substrate106. A contact lens sample108is disposed in each of the wells105of the well layer104. The fluidic layer120is disposed on the well layer104and defines the fluid conduit115in fluid communication between the wells105of the well layer104and the inlet110and outlet112of the fluidic layer120. The fluid layer120may be interposed between the well layer104and a control layer102, or may be formed as part of the control layer102). One or more fluids (e.g., tear fluid, multipurpose fluid) may be caused to pass through the fluid conduit115(e.g., via the inlet110and toward the outlet112). As an illustrative example, tear components116are shown passing over the contact lens samples108in the wells105. As such, deposition may occur on the contact lens samples108, which may be tested using the systems, devices, and methods of the present disclosure. As an example, one or more valves118(e.g. in the control layer102) may be configured to control flow of fluid in the fluid conduit115and between one or more of the wells105. As a non-limiting example, to fabricate the well layer104, RTV615 (PDMS) from R.S. Hughes (Sunnyvale, Calif.) or Sylgard 184 was mixed at a ratio of 10:1 (A:B), poured onto the well mold, degassed, and baked for 90 minutes in a 75° C. oven. After curing, the PDMS was peeled off the mold, cut into small squares, and bonded to a glass slide with air plasma (Electro-Technic Products, BD-20AC). To fabricate the control layer102, RTV615 was mixed at a ratio of 5:1, poured onto the mold, degassed, and par-baked for 1 hour. To fabricate the fluidic layer120, RTV615 was mixed at a ratio of 20:1, spun onto the mold at 1100 RPM, and par-baked for 1 hour. The control layer102was then peeled off the mold, cut into small squares, aligned on top of the fluidic layer120mold, and baked for another hour before lifting off and baking overnight. Inlet/outlet ports were cored using a 0.75 mm biopsy punch. Small (1 mm diameter) contact lens samples were cored from a full-size contact lens hydrogel using a biopsy punch. To assemble the chip, the top microfluidic layers (fluidic layer120and control layer102) and the bottom well layer104were plasma treated, lens samples were placed into the wells, and the two PDMS pieces were aligned and sealed followed by a 10-minute bake at 75° C. 
As a further example, three master molds (fluidic, control, and wells) were fabricated using standard photolithography on 3 inch silicon wafers. For both fluidic120and control102layers, AZ 9260 was spun at 900 RPM, soft-baked at 110° C. for 5 minutes, rehydrated for 30 minutes, exposed at 1800 mJ/cm2, developed for 5 minutes in a AZ 400K 1:3 developer, and reflowed at 130° C. for 1 minute (H=19 μm). For the well layer104, SU-8 2150 was spun to a thickness of 230 μm, soft-baked at 95° C. for 1 hour, exposed at 1480 mJ/cm2 with long pass filter (PL-360-LP), post-exposure baked at 95° C. for 20 minutes, developed in a SU-8 developer for 20 minutes, and hard-baked at 155° C. for 5 minutes. Example Microfluidic Chip Operation with 1 μL Samples and 1 mm Diameter Lens Example Capillary Driven Flow To simplify the setup and increase the throughput, the microfluidic chip was modified to make it hydrophilic by mixing PDMS with PDMS-PEO (1%). With such hydrophilic chips, tear samples may be dropped onto the well or pipetted into the inlet, then the tear samples may flow to the lens sample area without active pumping.FIG.2shows a 1 mm diameter example lens in a 1.5 mm diameter microfluidic well. 1 μL PBS solution was introduced into the well. FIG.2shows an example colored micrograph of a 1 mm diameter example lens in a 1.5 mm diameter microfluidic well on a PDMS-PEO microfluidic chip (left) and gray-scale optical micrograph of a 1 mm diameter example lens in a 1.2 mm diameter microfluidic well (right). Example Pressure Driven Flow To automate the microfluidic chip operation, liquid sample may be introduced by inserting a micropipette tip into the inlet (e.g., inlet110(FIG.1)), or dropping the sample into the inlet port and applying a pressure of 1 psi. Such pressure driven microfluidic flow may be automated and controlled by using on-chip valves and pumps, and a portable controller. Example Deposition Testing FIGS.3A-3Cshow example schematic side views of an example microfluidic chip300at different steps during an example deposition testing process. The microfluidic chip300may comprise a main body having a fluid conduit315formed therein. The fluid conduit315may extend between an inlet310and an outlet312. The fluid conduit315may be in fluid communication with one or more wells305. As an example, one or more valves318may be configured to control a flow of fluid through the fluid conduit315. As a further example, the one or more valves318may be configured to control a flow or retention of fluid over or in the one or more wells305. FIG.3Ashows a contact lens sample308disposed in the well305. The contact lens sample308may comprise a contact lens or a portion thereof. A tear fluid316may be disposed in the well305and/or on or around the lens sample308.FIG.3Bshows the microfluidic chip300disposed in a humidity chamber220. The humidity chamber220may comprise a fluid221and may be configured to maintain a target humidity and/or temperature. As such, the microfluidic chip300may emulate an on-eye environment. As an example, the tear fluid316may leave deposits222on the lens sample308.FIG.3Cshows a cover224disposed over the well to enclose the well305and to allow fluid to pass through the fluid conduit315without exiting through the well305. A fluid226such as a rinse or multipurpose solution may be caused to pass through the fluid conduit315from the inlet310through the well305and toward the outlet312. The fluid226may rinse at least a portion of the deposits222off the lens sample308as waste228. 
As an illustrative example, a 1 mm diameter lens coupon may be placed in the open reaction chamber (1.2 mm diameter). Pure water may be added to the chamber. Pre-tear optical microscope images of the lens may be taken. The on-chip valves may be closed to confine the tear in the chamber region. A 0.5 μL tear sample may be dropped onto the lens. The chamber may be left open for ˜10 minutes to allow the tear to evaporate. The microfluidic chip may be placed in a humidity chamber at 37° C. for another 20 minutes. A removable cover may be placed on the chamber to close the chip for automated washing and processing. The optical microscope images of the lens may be taken at this point (post tear). The on-chip valves may be opened and multipurpose solution may be pushed into the chamber to wash the lens. Post rinse optical microscope images may be taken. FIGS.4-5show an example implementation of a microfluidic device400using on-chip microvalves418and electronically controlled pressure-driven liquid flow. As shown, the microfluidic device400comprises an inlet410in fluid communication with an outlet412via a fluid conduit414A,414B. The fluid conduits414A,414B are configured in fluid communication with a well405. The well405is configured to receive a contact lens sample408, a fluid such as tear fluid416, or a combination of both. One or more valves418may be configured to control a flow of fluid through the conduits414A,414B, or the well405, or both. As shown inFIG.5, when the valves418are open, fluid may flow from the inlet410through the well405and toward the outlet412. A method for quantifying contact lens deposition using a microfluidic chip (e.g., microfluidic chip100(FIG.1), microfluidic chip300(FIG.3)) may comprise one or more of the following: 1) disposing a contact lens sample in the well of the microfluidic chip; disposing a first volume of first fluid in the well with the contact lens sample; 2) capturing first images of the contact lens sample; 3) causing evaporation of at least a portion of the first volume of the first fluid; 4) disposing a second volume of second fluid in the well with the contact lens sample; 5) causing evaporation of at least a portion of the second volume of the second fluid; disposing the microfluidic chip in a humidity chamber for a time period; 6) capturing second images of the contact lens sample after the time period has expired; 7) rinsing the contact lens sample with a third fluid; capturing third images of the contact lens after the rinsing; 8) determining, using one or more of the first images, the second images, or the third images, a deposition metric; and 9) outputting the deposition metric. FIG.6Aillustrates an example method for quantifying contact lens deposition using a microfluidic chip (e.g., microfluidic chip100(FIG.1), microfluidic chip300(FIG.3)). The method shown inFIG.6Amay use a microfluidic chip comprising a well in fluid communication with a fluid conduit, wherein the fluid conduit is in selective communication with a fluid inlet and a fluid outlet to control passage of fluid through the fluid conduit and into the well. The method shown inFIG.6Amay comprise one or more of the steps600-622. At600, a contact lens sample may be disposed in the well of the microfluidic chip. The contact lens sample may be or comprise a whole or part of a contact lens, such as a soft contact ophthalmic lens. The contact lens sample may be sized based on a size of the well. The contact lens sample may be or comprise a 1 mm lens coupon.
As an example, the microfluidic chip comprises hydrophilic material. As a further example, one or more of the well or the fluid conduit is configured to be hydrophilic. As yet a further example, multiple different types of contact lens coupons or materials may be pre-loaded in the microfluidic chip during the manufacturing processing. At602, a first volume of first fluid may be disposed in the well with the contact lens sample. As an example, the first fluid may be or comprise water. The first fluid may consist essentially of water. The first fluid may consist of water. The first volume may be based on a volume of the well. The first volume may be less than 1 μL. The first volume may be about 0.5 μL. The first volume may be between 0.3 μL and 3 μL. At604, one or more first images of the contact lens sample may be captured. The one or more first images may comprise an optical microscopic image of the contact lens sample. The one or more first images may comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. The one or more first images may be bright-field, dark-field, phase-contrast, differential interference contrast (DIC) or fluorescent microscope images, or a combination thereof. At606, at least a portion of the first volume of the first fluid in the well may be caused to evaporate. Such evaporation may be passive or active. At608, a second volume of second fluid may be disposed in the well with the contact lens sample. The second fluid may be or comprise tear fluid. The second fluid may consist essentially of tear fluid. The second fluid may consist of tear fluid. The second volume may be less than 1 μL. The second volume may be about 0.5 μL. The second volume may be between 0.3 μL and 3 μL. At610, at least a portion of the second volume of the second fluid may be caused to evaporate. Such evaporation may be passive or active. At612, the microfluidic chip may be disposed in a humidity chamber for a time period. The time period may be about 20 minutes. The time period may be 10-20 minutes. The time period may be 15-20 minutes. The time period may be adjusted to effect target conditions. At614, one or more second images of the contact lens sample may be captured after the time period has expired. The one or more second images may comprise an optical microscopic image of the contact lens sample. The one or more second images may comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. At616, the contact lens sample may be rinsed with a third fluid. The third fluid may be or comprise multipurpose solution. The third fluid may consist essentially of multipurpose solution. The third fluid may consist of multipurpose solution. The third fluid may be or comprise pure water, phosphate-buffered saline (PBS) solution, or other contact lens cleaning liquids. At618, one or more third images of the contact lens may be captured after the rinsing. The one or more third images may comprise an optical microscopic image of the contact lens sample. The one or more third images may comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. At620, a deposition metric may be determined. Determining the deposition metric may be implemented using one or more of the first images, the second images, or the third images. The deposition metric may comprise a normalized deposit area intensity. 
The deposition metric may comprise a normalized deposit area intensity before the rinsing. The deposition metric may comprise a normalized deposit area intensity after the rinsing. The deposition metric comprises a difference between a deposit area intensity before the rinsing and a deposit area intensity after the rinsing. At622, the deposition metric may be outputted. Such output may be via a user interface. FIG.6Billustrates an example method for quantifying contact lens deposition using a microfluidic chip (e.g., microfluidic chip100(FIG.1), microfluidic chip300(FIG.3)). The method shown inFIG.6Bmay be implemented using a microfluidic chip comprising a well in fluid communication with a fluid conduit, wherein the fluid conduit is in selective communication with a fluid inlet and a fluid outlet to control passage of fluid through the fluid conduit and into the well. The method shown inFIG.6Bmay comprise one or more of the steps630-642. At630, a contact lens sample may be disposed in the well of the microfluidic chip. The contact lens sample may be or comprise a whole or part of a contact lens, such as a soft contact ophthalmic lens. The contact lens sample may be sized based on a size of the well. The contact lens sample may be or comprise a 1 mm lens coupon. As an example, the microfluidic chip comprises hydrophilic material. As a further example, one or more of the well or the fluid conduit is configured to be hydrophilic. At632, a volume of tear fluid may be disposed in the well with the contact lens sample. The volume may be less than 1 μL. The volume may be about 0.5 μL. At634, one or more pre-rinse images may be captured of the contact lens sample. The one or more pre-rinse images may comprise an optical microscopic image of the contact lens sample. The one or more pre-rinse images may comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. At636, the contact lens sample may be rinsed. The contact lens sample may be rinsed with a fluid. The fluid may be or comprise multipurpose solution. The fluid may consist essentially of multipurpose solution. The fluid may consist of multipurpose solution. At638, one or more post-rinse images of the contact lens after the rinsing may be captured. The one or more post-rinse images may comprise an optical microscopic image of the contact lens sample. The one or more post-rinse images may comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. At640, a deposition metric may be determined. Determining the deposition metric may be implemented using one or more of the pre-rinse or post-rinse images. The deposition metric may comprise a normalized deposit area intensity. The deposition metric may comprise a normalized deposit area intensity before the rinsing. The deposition metric may comprise a normalized deposit area intensity after the rinsing. The deposition metric comprises a difference between a deposit area intensity before the rinsing and a deposit area intensity after the rinsing. At642, the deposition metric may be outputted. Such output may be via a user interface. FIG.6Cillustrates an example method for quantifying contact lens deposition using a microfluidic chip (e.g., microfluidic chip100(FIG.1), microfluidic chip300(FIG.3)). The method shown inFIG.6Cmay comprise one or more of the steps650-658. At650, a contact lens sample may be exposed to a volume of tear fluid from the wearer.
At652, pre-rinse data of the contact lens sample may be captured. The pre-rinse data may comprise or be based on one or more pre-rinse images captured of the contact lens sample. The one or more pre-rinse images may comprise an optical microscopic image of the contact lens sample. The one or more pre-rinse images may comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. At654, the contact lens sample may be rinsed. The contact lens sample may be rinsed with a fluid. The fluid may be or comprise multipurpose solution. The fluid may consist essentially of multipurpose solution. The fluid may consist of multipurpose solution. At656, post-rinse data of the contact lens sample after the rinsing may be captured. The post-rinse data may comprise or be based on one or more post-rinse images captured of the contact lens sample. The one or more post-rinse images may comprise an optical microscopic image of the contact lens sample. The one or more post-rinse images may comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. At658, the pre-rinse data and the post-rinse data may be compared. Such comparison may comprise determining a deposition metric. The deposition metric may comprise a normalized deposit area intensity. The deposition metric may comprise a normalized deposit area intensity before the rinsing. The deposition metric may comprise a normalized deposit area intensity after the rinsing. The deposition metric comprises a difference between a deposit area intensity before the rinsing and a deposit area intensity after the rinsing.
Example Assay Protocol (Operational Procedure)
Example testing procedures may comprise one or more of the following steps. Step 1. Centrifuge the tear at 3000 revolutions per minute (rpm) to bring the tear to the bottom of the tube. Step 2. Add a 1 millimeter (mm) lens into the microfluidic well (1.2 mm diameter, 250 micrometers (um) deep, i.e., 0.28 microliter (μL)). Step 3. Add 0.5 μL pure water to the well; cover the well with a coverslip; and take gray-scale microscope images of front and back surfaces (pre-tear). Step 4. Wait for a few minutes so that some of the pure water evaporates but the lens is still moist; and add 0.5 μL tear into the well. Step 5. Wait for another 10 minutes so that some tear evaporates but the lens is still moist; put the microfluidic chip in the humidity chamber at 37° C.; wait for 20 minutes; and take gray-scale microscope images of front and back surfaces (with a coverslip and pure water) (post tear). Step 6. Rinse the lens with 0.5 μL multi-purpose solution (e.g., MPS, Revitalens) 5 times; and take gray-scale microscope images of front and back surfaces (with a coverslip and MPS solution) (post rinse). Steps may be removed or added.
Example Computer Algorithm
One or more optical microscope images (bright-field, dark-field, fluorescent) may be processed by a computer image processing algorithm to give a quantitative number (e.g., score, etc.) based on a deposition area, intensities, fluorescent labels, morphology, the like, and/or any combination of the foregoing. Example algorithm steps are given below: 1) One or more image files, such as a file arranged in Tagged Image File Format (TIFF) with a true color (red-green-blue (RGB)) baseline, may be received as input. 2) The one or more image files may be converted to a grayscale, such as 8-bit grayscale, 16-bit grayscale, etc. 3) Thresholding (e.g.,
Otsu or adaptive) may be applied to convert the one or more grayscale images to black & white in order to identify deposit areas. 4) A circular region (e.g., region of interest) on the lens (no lens boundary within the circle) may be selected, either manually or automatically. 5) The normalized deposit area intensities may be calculated for the selected region both after rinse and before rinse. The normalized deposit area intensities may be calculated using the following formula: Deposit_intensity=total_deposit_area_intensity/total_area. 6) The difference in intensity between the calculated after rinse deposit intensity and the calculated before rinse deposit intensity may be calculated. The difference in intensity may be calculated using the following formula: Diff_intensity=deposit_intensity_after_rinse−deposit_intensity_before_rinse. 7) A deposition score may be calculated based on deposit area size, morphology, gray-scale intensity, fluorescent label intensities, and/or differences in such features between after-rinse and before-rinse. Steps may be removed or added. As an illustrative example, the following source MATLAB code may be used for image analysis and deposition quantification:
image_filename='KCl_postrinse.tif';
image_rgb=imread(image_filename); % if image is a tiff using cmyk color space
image_gray=rgb2gray(image_rgb);
image_double=im2double(image_gray);
imshow(image_double);
h=drawcircle('Color','r'); % manually select the circular region of interest on the lens
mask=createMask(h);
bg_value=mean(mean(image_double));
bg=ones(size(image_double)).*(1-mask)*bg_value;
white_bg=ones(size(image_double)).*(1-mask)*255;
% imshow(bg);
image_roi=image_double.*mask;
imshow(image_roi+white_bg);
% Code below is for histogram equalization, which seems not necessary
% figure
% imhist(image_gray)
% image_gray_histeq=histeq(image_gray);
% image_gray_histeq=adapthisteq(image_gray);
[level, EM]=graythresh(image_roi); % Otsu's method
% level=adaptthresh(image_roi);
image_wb=imbinarize(image_roi,level*1.8); % threshold level is 1.8 of the Otsu output
image_bw=(1-image_wb).*mask; % for graythresh, need to use 1-image_wb
imshow((image_bw)+white_bg);
% h=drawcircle('Color','r');
% mask=createMask(h);
% imshow(image_gray, [ ]);
% show gray scale deposit intensity image
imshow((image_bw.*image_double)+white_bg);
deposit_percentage=sum(sum((image_bw).*mask))/sum(sum(mask))*100 % [0 100]
deposit_intensity_normalized=sum(sum(image_double.*image_bw.*mask))/sum(sum(mask))*255 % [0 255] 255 is max gray scale level for this camera
deposit_intensity=sum(sum(image_double.*image_bw.*mask))*255 % [0 255] 255 is max gray scale level for this camera
Modifications may be made to the example code. Other codes and algorithms may be used.
Example On-Chip Lens Images after Tear Deposition and Cleaning
FIG.7Ashows example contact lens coupon images in the microfluidic well (top) and example computer-processed images with deposition score (bottom). As shown, KCl is a heavy depositing tear. In particular,FIG.7Ashows bright-field optical micrographs of an example lens before any tear deposition, right after tear deposition, and after rinsing with multi-purpose solution (top); and computer-processed gray-scale images with calculated deposition scores (bottom). The difference of the deposition scores between before rinse (post tear) and post rinse shows how easily the deposits may be removed by the multi-purpose solution: the larger the difference, the easier to remove the deposit.
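To make the before/after-rinse comparison concrete, the following minimal MATLAB sketch wraps the same thresholding and normalization logic into a reusable function and computes the difference score; the file names and the helper name deposit_score are illustrative assumptions rather than part of the disclosed code, and mask is the logical region-of-interest mask created with drawcircle/createMask as above.
% Sketch only: per-image scoring and the before/after-rinse difference (assumed names).
score_post_tear = deposit_score(imread('lens_post_tear.tif'), mask); % placeholder file name
score_post_rinse = deposit_score(imread('lens_post_rinse.tif'), mask); % placeholder file name
diff_score = score_post_tear - score_post_rinse; % larger difference => deposits more easily removed by the rinse
function score = deposit_score(img_rgb, mask)
% Normalized deposit-area intensity for one lens image, mirroring the script above.
g = im2double(rgb2gray(img_rgb)); % grayscale image scaled to [0,1]
roi = g .* mask; % keep only the selected lens region
lvl = graythresh(roi); % Otsu threshold, computed as in the script above
bw = (1 - imbinarize(roi, lvl)) .* mask; % deposit (dark) pixels within the region of interest
score = sum(sum(g .* bw)) / sum(sum(mask)) * 255; % intensity normalized by region area, scaled to [0,255]
end
Under this convention, a heavy depositing tear would be expected to show a large post-rinse score together with a small difference, consistent with the plot discussed below.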
The deposition score may be calculated by summing up all the deposit pixel intensities and normalizing the sum by the lens area. The deposit pixels may be identified by applying a threshold to the gray scale image and picking the pixels with intensity higher than the threshold. The deposit pixels are shown as the bright areas in the bottom panel ofFIG.7A.FIG.7Bshows the set of images for a normal tear sample, where the top panel ofFIG.7Bshows example contact lens coupon images in the microfluidic well and the bottom panel shows example computer-processed images with deposition score; as shown, BL1 is a normal tear. By plotting the difference (in deposition scores between before rinse and post rinse) vs. post rinse deposition score, the heavy depositors (i.e., KCl) are expected to occupy the top-right region of the plot (i.e., small difference and large post rinse deposition score). This expectation is met inFIG.8. In particular,FIG.8shows an example difference vs. post rinse deposit score plot. High deposition should occupy the top-right region of the plot. In summary, the specific examples described herein successfully demonstrate a novel microfluidic system and method for quantifying contact lens deposition using μL or sub-μL human tears and small contact lens coupons. Initial results show that reproduction of human tear deposition may be achieved within about 30 minutes and computer image analysis may provide quantitative deposition scores that may distinguish heavy depositors from normal tears.
Imaging System
Imaging of contact lens deposits may be performed by a conventional upright light field microscope to obtain bright-field images, a conventional dark-field microscope to obtain low-background high contrast dark-field images, or a fluorescent microscope (upright or inverted) to obtain fluorescently labelled images. In another aspect, a custom-built imaging setup (bright-field, dark-field or fluorescent) may be used to form a self-contained table-top system. In another aspect, a miniature imaging device including a smartphone (with camera) may be used to form a portable or handheld system.
Table-Top Setup
A table-top contact lens deposition analysis system may comprise: 1. a microfluidic cartridge with on-board lens materials for tear deposition generation. A potential implementation may be illustrated inFIGS.1and/or3; 2. an automated liquid handling system allowing liquid reagent manipulation (e.g., introduction, mixing, incubation, removal, etc.) on the microfluidic chip; 3. a custom-built imaging sub-system for bright-field, dark-field and/or fluorescent microscopy; 4. a built-in computer or microcontroller or FPGA to perform image analysis and deposition scoring; and 5. a user interface (e.g., a touchscreen) to allow user input/control and display the analysis results.
Smartphone Setup
FIG.9shows an example of a smartphone-based dark-field imaging system.FIG.9shows a schematic of a folded-path smartphone-based dark-field microscope900. As shown, a substrate902(e.g., phone case) may be configured with a light source904(e.g., light emitting diode) and a camera906. Light emitted from the light source904may pass through an optical element908, which may direct and/or focus light toward a condenser910. The condenser may direct light toward a sample912. A lens914may be disposed between the camera906and the sample912. As such, the camera906may capture images of the sample912.
Additional Analyses Capability
Beyond bright-field, dark-field, and fluorescent imaging with lipid or protein labels, microfluidic devices described herein may also be designed to perform other bioanalytical assays, such as protein quantification assays using ELISA, lipid quantification assays, bacterial detection for potential infection analysis, or other measurements such as electrolytes. For example, an example microfluidic device described herein may comprise additional reservoirs to house reagents to perform analysis on analytes such as lipids.
Examples
Example 1: A method for quantifying contact lens deposition using a microfluidic chip comprising a well in fluid communication with a fluid conduit, wherein the fluid conduit is in selective communication with a fluid inlet and a fluid outlet to control passage of fluid through the fluid conduit and into the well, the method comprising: disposing a contact lens sample in the well of the microfluidic chip; disposing a first volume of first fluid in the well with the contact lens sample; capturing first images of the contact lens sample; causing evaporation of at least a portion of the first volume of the first fluid; disposing a second volume of second fluid in the well with the contact lens sample; causing evaporation of at least a portion of the second volume of the second fluid; disposing the microfluidic chip in a humidity chamber for a time period; capturing second images of the contact lens sample after the time period has expired; rinsing the contact lens sample with a third fluid; capturing third images of the contact lens after the rinsing; determining, using one or more of the first images, the second images, or the third images, a deposition metric; and outputting the deposition metric. Example 2: A method for quantifying contact lens deposition using a microfluidic chip comprising a well in fluid communication with a fluid conduit, wherein the fluid conduit is in selective communication with a fluid inlet and a fluid outlet to control passage of fluid through the fluid conduit and into the well, the method comprising: disposing a contact lens sample in the well of the microfluidic chip; capturing a pre-tear image of the contact lens sample; disposing a volume of tear fluid in the well with the contact lens sample; capturing tear images of the contact lens sample; rinsing the contact lens sample; capturing post-rinse images of the contact lens after the rinsing; determining, using one or more of the pre-tear images, the tear images, or the post-rinse images, a deposition metric; and outputting the deposition metric. Example 3: A method for quantifying contact lens deposition using a microfluidic chip comprising a well in fluid communication with a fluid conduit, wherein the fluid conduit is in selective communication with a fluid inlet and a fluid outlet to control passage of fluid through the fluid conduit and into the well, the method comprising: disposing a contact lens sample in the well of the microfluidic chip; disposing a volume of tear fluid in the well with the contact lens sample; capturing pre-rinse images of the contact lens sample; rinsing the contact lens sample; capturing post-rinse images of the contact lens after the rinsing; determining, using one or more of the tear images or the post-rinse images, a deposition metric; and outputting the deposition metric.
Example 4: A method for quantifying contact lens deposition, the method comprising: disposing a contact lens sample in a fluid well; disposing a volume of tear fluid in the well with the contact lens sample; capturing pre-rinse images of the contact lens sample; rinsing the contact lens sample; capturing post-rinse images of the contact lens after the rinsing; determining, using one or more of the tear images or the post-rinse images, a deposition metric; and outputting the deposition metric. Example 5: A method for evaluating a contact lens wearer's compatibility with a lens material, the method comprising: exposing a contact lens sample to a volume of tear fluid from the wearer; capturing pre-rinse data of the contact lens sample; rinsing the contact lens sample; capturing post-rinse data of the contact lens sample after the rinsing, and comparing the pre-rinse data with the post-rinse data. Example 6: The method of any of examples 1-5, wherein the microfluidic chip comprises hydrophilic material. Example 7: The method of any of examples 1-6, wherein one or more of the well or the fluid conduit is configured to be hydrophilic. Example 8: The method of any of examples 1-7, wherein the contact lens sample comprises a 1 mm lens coupon. Example 9: The method of any of examples 1-8, wherein the first fluid comprises water. Example 10: The method of any of examples 1-9, wherein the first fluid consists essentially of water. Example 11: The method of any of examples 1-10, wherein the first fluid consists of water. Example 12: The method of any of examples 1-11, wherein the first volume is less than 1 μL. Example 13: The method of any of examples 1-12, wherein the first volume is about 0.5 μL. Example 14: The method of any of examples 1-13, wherein the causing evaporation of at least a portion of the first volume of the first fluid comprises allowing for passive evaporation. Example 15: The method of any of examples 1-14, wherein the first images comprise an optical microscopic image of the contact lens sample. Example 16: The method of any of examples 1-15, wherein the first images comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. Example 17: The method of any of examples 1-16, wherein the second fluid comprises tear fluid. Example 18: The method of any of examples 1-17, wherein the second fluid consists essentially of tear fluid. Example 19: The method of any of examples 1-18, wherein the second fluid consists of tear fluid. Example 20: The method of any of examples 1-19, wherein the second volume is less than 1 μL. Example 21: The method of any of examples 1-20, wherein the second volume is about 0.5 μL. Example 22: The method of any of examples 1-21, wherein the causing evaporation of at least a portion of the second volume of the second fluid comprises allowing for passive evaporation. Example 23: The method of any of examples 1-22, wherein the second images comprise an optical microscopic image of the contact lens sample. Example 24: The method of any of examples 1-23, wherein the second images comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. Example 25: The method of any of examples 1-24, wherein time period is about 20 minutes. Example 26: The method of any of examples 1-25, wherein the third fluid comprises multipurpose solution. Example 27: The method of any of examples 1-26, wherein the third fluid consists essentially of multipurpose solution. 
Example 28: The method of any of examples 1-27, wherein the third fluid consists of multipurpose solution. Example 29: The method of any of examples 1-28, wherein the third images comprise an optical microscopic image of the contact lens sample. Example 30: The method of any of examples 1-29, wherein the third images comprise an optical microscopic image of a first side and a second opposite side of the contact lens sample. Example 31: The method of any of examples 1-30, wherein the deposition metric comprises a normalized deposit area intensity. Example 32: The method of any of examples 1-31, wherein the deposition metric comprises a normalized deposit area intensity before the rinsing. Example 33: The method of any of examples 1-32, wherein the deposition metric comprises a normalized deposit area intensity after the rinsing. Example 34: The method of any of examples 1-33, wherein the deposition metric comprises a difference between a deposit area intensity before the rinsing and a deposit area intensity after the rinsing.
11861824
DETAILED DESCRIPTION
Reference will now be made in detail to the subject matter disclosed, which is illustrated in the accompanying drawings. The present disclosure has been particularly shown and described with respect to certain embodiments and specific features thereof. The embodiments set forth herein are taken to be illustrative rather than limiting. It should be readily apparent to those of ordinary skill in the art that various changes and modifications in form and detail may be made without departing from the spirit and scope of the disclosure. Embodiments of the present disclosure are directed to systems and methods for generating reference images for scatterometry overlay (SCOL) metrology measurements to mitigate tool induced shift (TIS) errors associated with measurement non-uniformities. A reference image may characterize measurement non-uniformities and may be used to at least partially remove the impact of such measurement non-uniformities and the associated TIS errors during a measurement. In embodiments, multiple reference images are generated that may be suitable for different groups of metrology data (e.g., metrology data from different groups of overlay targets). For example, metrology data from various overlay targets across a sample may be grouped based on a metric associated with measurement non-uniformities and different reference images may be generated for each group. Metrology data associated with subsequent measurements of overlay targets on the same or different samples may then be grouped based on the metric for selection of a suitable reference image. As an illustration, one source of measurement non-uniformities may include non-uniformities of printed features that may vary across a sample. As a result, a single reference image may not mitigate the associated TIS errors with the same effectiveness. In some embodiments, multiple reference images are generated and applied to different groups of features. SCOL measurements are based on metrology data generated by illuminating a portion of a sample having features associated with two patterning processes (e.g., lithographic exposures, etching processes, or the like) and collecting the resulting light from the sample, where the features associated with the different patterning processes may be on the same or different layers of the sample. SCOL techniques may generally utilize overlay targets including one or more cells, though this is not a requirement. As used herein, an overlay target may generally refer to any portion of the sample suitable for an overlay measurement and may include, but is not limited to, dedicated features designed for the purposes of an overlay measurement or device features associated with a device being fabricated. Further, features of an overlay target (referred to herein as target features) may be on any layer or combination of layers of a sample. For example, target features may be located on a photoresist layer. In this case, the features may be characterized by refractive index variations in the photoresist layer induced by a lithographic exposure. As another example, target features may be located on a process layer. In this case, the features may be characterized by variations in material composition induced by etching the sample after patterning with the photoresist or any other patterning process (e.g., direct etching or the like).
It is contemplated herein that TIS errors in SCOL metrology may be attributed to various sources including illumination non-uniformity in an overlay metrology system as well as the particular physical and optical characteristics of sample features being measured. Since process variations may result in slight deviations between properties of overlay targets across the sample, it may be the case that a single reference image may not be sufficient to mitigate TIS errors for metrology targets across the sample. Embodiments of the present disclosure are directed to systems and methods for classifying metrology data from different metrology targets across a sample into groups (referred to herein as reference groups) and providing different reference images for the different reference groups. The metrology data from different overlay targets may be separated into different reference groups using any suitable technique. In some embodiments, the reference groups are based on a metric derived from measured or expected process variations across a sample. In some embodiments, the reference groups are based on a metric derived from the metrology data itself. For example, in SCOL measurements based on pupil-plane images, the metric may be a pupil center slope associated with a variation of a determined overlay based on a selection of a pupil center location. As an illustration, some pupil-plane SCOL techniques may determine overlay based on difference signals related to a difference in amplitude between diffraction orders of opposite sign (e.g., +1 and −1 diffraction orders) or opposing pupil coordinates more generally. These difference signals are defined according to a pupil center location such that a determined overlay measurement may vary for different selected pupil center locations. It is contemplated herein that the pupil center slope may be a suitable metric for grouping metrology data from overlay targets across the sample for the purposes of reference image generation. Put another way, a reference image may effectively mitigate TIS errors for overlay targets with similar pupil center slope values. The pupil center slope may thus be an indirect metric of sample variations that may contribute to TIS errors. In some embodiments, groupings derived for one sample may be applied to additional samples. Continuing the previous example, pupil center slopes may be determined for metrology data associated with overlay targets on one or more additional samples. Subsequently, this data may be grouped based on the previously-defined bins representing pupil center slope ranges and corresponding reference images may be used for overlay determinations. Additional embodiments of the present disclosure are directed to monitoring the values of the pupil center slope (or any other selected metric) over time across multiple samples as a function of sample location. It is contemplated herein that the variations of the pupil center slope (or any other selected metric) over time across samples at a particular sample location may be indicative of process variations. This information may then be used directly for process control (e.g., for generating correctables for process tools to compensate for such process variations) and/or for adjusting the definitions of the reference groups. For example, it may be the case that reference groups may generally be associated with spatial regions of a sample used to define the reference groups (e.g., radial spatial regions associated with radial process variations).
However, if metrology data associated with a particular sample location begins to sort into a different reference group over time, it may be the case that the definitions of the reference groups (e.g., definitions of bins associated with pupil center slope ranges) may need to be updated. Referring now toFIGS.1A-8B, systems and methods for grouped reference image generation for overlay metrology are described in greater detail, in accordance with one or more embodiments of the present disclosure. FIG.1Ais a block diagram view of an overlay metrology system100, in accordance with one or more embodiments of the present disclosure. In some embodiments, the overlay metrology system100includes an optical sub-system102to generate metrology data from an overlay target104on a sample106. In some embodiments, the overlay metrology system100further includes a controller108with one or more processors110configured to execute program instructions maintained on memory112(e.g., a memory medium). The controller108may be communicatively coupled with any of the components of the overlay metrology system100such as, but not limited to, the detector124. In this way, the controller108may generate overlay measurements based on the metrology data. An overlay metrology system100may generally be configurable according to one or more metrology recipes. A metrology recipe may include a set of parameters for controlling various aspects of an overlay measurement such as, but not limited to, the illumination of a sample, the collection of light from the sample in the form of metrology data, the position of the sample during a measurement, or processing steps used to generate a measurement based on collected metrology data. In this way, the optical sub-system102may be configured to provide a selected type of measurement for a selected overlay target design. For example, a metrology recipe may include parameters of the illumination beams114such as, but not limited to, an illumination wavelength, an illumination pupil distribution (e.g., a distribution of illumination angles and associated intensities of illumination at those angles), a polarization of incident illumination, or a spatial distribution of illumination. As another example, a metrology recipe may include collection parameters such as, but not limited to, a collection pupil distribution (e.g., a desired distribution of angular light from the overlay target104to be used for a measurement and associated filtered intensities at those angles), collection field stop settings to select portions of the overlay target104of interest, polarization of collected light, wavelength filters, or parameters for controlling one or more detectors. As another example, a metrology recipe may include various parameters associated with a design of the overlay target104such as, but not limited to, positions and orientations of sample features (e.g., pitches of grating features along particular directions). As another example, a metrology recipe may include various parameters associated with the position of the sample106during a measurement such as, but not limited to, a sample height, a sample orientation, whether a sample is static during a measurement, or whether a sample is in motion during a measurement (along with associated parameters describing the speed, scan pattern, or the like).
As another example, a metrology recipe may include various processing steps used to generate a determined overlay measurement (e.g., a specific value of the overlay to be output) based on the metrology data. In some embodiments, the overlay metrology system100operates as a SCOL metrology tool such that the metrology recipe is associated with a SCOL measurement technique. In a general sense, an overlay target104suitable for SCOL measurement techniques (e.g., a SCOL overlay target104) may include target features for each patterning process of interest that are 180-degree rotationally symmetric and located in overlapping regions of the sample106. In some embodiments, target features associated with any of the patterning processes are periodic such that they generate discrete diffraction orders from incident illumination. However, this is not a requirement and target features associated with any of the patterning processes need not be periodic. Referring now toFIGS.2A-2B, a non-limiting example of a SCOL overlay target104is described. In a general sense, an overlay target104may include one or more cells202, each including overlapping target features associated with two or more patterning processes of interest. In some embodiments, an overlay target104includes target features dedicated to overlay measurements. In some embodiments, an overlay target104includes a region of the sample106including device features (e.g., features associated with a device being fabricated on the sample106). FIG.2Ais a top view of a single cell202of an overlay target104with periodic features, in accordance with one or more embodiments of the present disclosure.FIG.2Bis a side view of the single cell202inFIG.2A, in accordance with one or more embodiments of the present disclosure. In some embodiments, the cell202includes first-layer printed elements204located on a first layer206of the sample106and second-layer printed elements208located on a second layer210of the sample106oriented such that the regions including the first-layer printed elements204and the second-layer printed elements208overlap to form a grating-over-grating structure.FIG.2Bfurther depicts a substrate212beneath the first-layer printed elements204and the second-layer printed elements208. The first-layer printed elements204and the second-layer printed elements208in any particular cell202may be designed to have any intended offset (f0) along any direction (e.g., the X direction inFIG.2Bcorresponding to a measurement direction). For example, an intended offset of zero (f0=0) may provide that the first-layer printed elements204and the second-layer printed elements208fully overlap when the physical overlay is also zero (e.g., no overlay error). In this configuration, a relative shift between the first-layer printed elements204and the second-layer printed elements208is indicative of overlay error during fabrication. As another example, a non-zero intended offset (f0≠0) may provide that the first-layer printed elements204and the second-layer printed elements208exhibit this intended offset when the physical overlay is zero (e.g., no overlay error). An overlay target104may generally be formed from any number of cells202that may have any combination of intended offsets (e.g., values of f0). Although not explicitly illustrated, an overlay target104may be suitable for overlay measurements along multiple directions (e.g., orthogonal directions). 
In general, a measurement direction may correspond to direction of periodicity of the first-layer printed elements204and the second-layer printed elements208(e.g., a direction of periodicity of the grating-over-grating structure). As an illustration, the cell202depicted inFIGS.2A and2Bexhibits periodicity along the X direction and is suitable for overlay measurements along the X direction. In some embodiments, an overlay target104includes one or more cells202having periodicity along a first direction (e.g., the X direction as depicted inFIGS.2A and2B) and one or more cells having periodicity along a second direction (e.g., the Y direction as depicted inFIGS.2A and2B). In some embodiments, an overlay target104includes one or more cells202having periodicity along two directions simultaneously. For example, the first-layer printed elements204and the second-layer printed elements208may include structures that are periodic in both the X and Y directions (e.g., a hatch pattern, a grid of square or rectangular structures, or the like). Referring generally toFIGS.2A and2B, it is to be understood thatFIGS.2A and2Balong with the associated descriptions are merely illustrative and should not be interpreted as limiting on the present disclosure. For example, the first layer206and the second layer210may generally have any thicknesses. As another example, the sample106may include any number of layers between the first-layer printed elements204and the second-layer printed elements208and/or between the first-layer printed elements204and the substrate212. As another example, the first-layer printed elements204and the second-layer printed elements208may generally include any 180-degree rotationally symmetric features and need not include grating-over-grating or even periodic features. Various SCOL techniques have been developed to generate an overlay measurement based on metrology data from an overlay target104. Metrology data for SCOL techniques may generally be generated in a pupil-plane and/or a field plane. For example, a pupil-plane image may capture various diffraction orders from the target features associated with each patterning process. In this configuration, the intensity at any location of the pupil plane may be associated with the interference of diffraction from the target features associated with the different patterning processes. As another example, various field-plane images may be generated based on selected combinations of diffraction orders or light in different parts of a pupil-plane more generally. For instance, the overlay metrology system100may include various components to isolate desired diffraction orders for a particular field-plane image. As an illustration, first-order SCOL techniques may be based on metrology data associated with first-order diffraction from target features associated with different patterning processes. Such techniques may typically require metrology data from two cells202having different intended offsets (e.g., ±f0). In this configuration, an overlay measurement may be determined based on difference signals associated with differences between positive and negative first-order diffraction from each cell (e.g., opposing pupil coordinates more generally). These difference signals may be generated directly in pupil-plane images or based on separate field-plane images of each cell (e.g., one image based on positive first-order diffraction and one based on negative first-order diffraction). 
As another illustration, zero-order SCOL techniques may be based on metrology data associated with zero-order light from an overlay target such as, but not limited to, zero-order diffraction or opposite-order diffraction from overlapping target features (e.g., positive first-order diffraction from target features in one layer and negative first-order diffraction from target features in another layer). Various non-limiting examples of SCOL techniques are generally described in Adel, et al., “Diffraction order control in overlay metrology—a review of the roadmap options,” Proc. of SPIE Vol. 6922, 692202-1 (2008); U.S. Pat. No. 7,317,531 entitled “Apparatus and methods for detecting overlay errors using scatterometry” and issued on Jan. 8, 2008; U.S. Pat. No. 10,197,389 entitled “Approaches in first order scatterometry overlay based on introduction of auxiliary electromagnetic fields” and issued on Feb. 5, 2019; and International Publication Number WO 2017/044283 published on Mar. 16, 2017; all of which are incorporated herein by reference in their entireties. Referring now to Equations (1)-(8), overlay determinations and the use of reference images to mitigate TIS errors are described in greater detail, in accordance with one or more embodiments of the present disclosure. For the purposes of illustration, first-order SCOL techniques based on pupil-plane images are described. However, it is to be understood that this is not limiting and the concepts described herein may be extended by one of ordinary skill in the art to any SCOL techniques using any combination of pupil-plane metrology data or field-plane metrology data. In some embodiments, differential signals such as the following may be defined based on metrology data:
$$D^{+f_0}=\frac{I_{+1}^{+f_0}-I_{-1}^{+f_0}}{2}=B\cdot\sin\!\left(2\pi\,\frac{OVL+f_0}{P}\right)\qquad(1)$$
$$D^{-f_0}=\frac{I_{+1}^{-f_0}-I_{-1}^{-f_0}}{2}=B\cdot\sin\!\left(2\pi\,\frac{OVL-f_0}{P}\right),\qquad(2)$$
where I is an intensity at a particular location in the pupil plane, the subscripts (+/−1) refer to the diffraction order, and the superscripts (±f0) refer to the intended offset of the cell202being measured. In this way, D+f0corresponds to a differential signal generated from a cell202with an intended offset of +f0, while D−f0corresponds to a differential signal generated from a cell202with an intended offset of −f0. It is noted that the overlay target104need not include periodic features and thus need not provide discrete diffraction orders. Accordingly, Equations (1) and (2) may be generalized for all pupil coordinates. FIG.3is a conceptual view of a pupil image (e.g., an image in a collection pupil plane) illustrating the generation of differential signals, in accordance with one or more embodiments of the present disclosure. In particular,FIG.3illustrates 0-order diffraction302, positive first-order diffraction304, and negative first-order diffraction306. This distribution may be generated by illuminating a cell202including periodic first-layer printed elements204and second-layer printed elements208with a single illumination beam at a normal incidence angle. As depicted inFIG.3, a differential signal D(k) may correspond to a difference between an intensity at pupil coordinate k and an intensity at pupil coordinate −k, where coordinates k and −k are 180-degree rotationally symmetric about a pupil center position308. Further, the differential signals may be formed as an average (or a weighted average) of all pupil coordinates.
These differential signals may then be combined into so-called K and G signals:
$$K=\frac{D^{+f_0}+D^{-f_0}}{2}=B\,\sin\!\left(\frac{2\pi\,OVL}{P}\right)\cos\!\left(\frac{2\pi f_0}{P}\right)\qquad(3)$$
$$G=\frac{D^{+f_0}-D^{-f_0}}{2}=B\,\cos\!\left(\frac{2\pi\,OVL}{P}\right)\sin\!\left(\frac{2\pi f_0}{P}\right)\qquad(4)$$
An overlay measurement (OVL) may then be generated as
$$OVL=\frac{P}{2\pi}\tan^{-1}\!\left(\frac{K}{G}\tan\!\left(\frac{2\pi f_0}{P}\right)\right)\qquad(5)$$
as long as G≠0. Assuming OVL ≪ P and f0 ≪ P, Equations (3) and (4) indicate that the K signal is dependent on the actual value of the overlay OVL being measured at the location of the overlay target104, while the G signal is not. Rather, the G signal depends on $(OVL/P)^2$, which is negligible under these conditions. In this case, Equation (5) reduces to
$$OVL=\frac{K}{G}\cdot f_0.\qquad(6)$$
As described previously herein, reference images may be used to at least partially mitigate TIS errors in overlay measurements. TIS errors may be attributable to multiple sources including, but not limited to, non-uniformities of a measurement performed by the optical sub-system102(e.g., a non-uniform illumination beam, non-uniformities in an illumination pathway, non-uniformities in a collection pathway, or the like) and/or non-uniformities in the overlay target104. Such TIS errors may generally be observed from overlay measurements generated with the overlay target104in two different orientations with respect to the optical sub-system102. In particular, the TIS may be characterized as:
$$TIS=\frac{1}{2}\left(OVL_0-OVL_{180}\right),\qquad(7)$$
where OVL0corresponds to an overlay measurement of an overlay target104with the sample in a first orientation with respect to the optical sub-system102and where OVL180corresponds to an overlay measurement of the same overlay target104rotated by 180 degrees with respect to the optical sub-system102. In this characterization, a TIS-corrected overlay measurement (OVL) independent of TIS may be determined as:
$$OVL=\frac{1}{2}\left(OVL_0+OVL_{180}\right).\qquad(8)$$
It is noted that a TIS-corrected overlay measurement as defined by Equation (8) may generally require two measurements of the same overlay target104at 180-degree rotations. However, performing such double measurements on each overlay target104of interest across a sample may be time consuming and result in a relatively low measurement throughput. To mitigate the need to measure each overlay target104twice at different rotations, a reference image may be generated that may capture relevant non-uniformities. For example, a reference image may be generated based on metrology data captured for an overlay target104at two orientations (e.g., 0 and 180 degrees). As an example, in the case of an overlay metrology system100configured to determine overlay based on pupil images (e.g., images generated by a detector124at a pupil plane138), a reference image may be generated by summing pupil images (e.g., metrology data) generated at 180-degree rotations. Further, a reference image may be symmetrized to prevent noise amplification. For instance, a reference image may be symmetrized by dividing the reference image by an average of the reference image and a 180-degree rotated version of itself. The impact of asymmetries on and calibrations of SCOL measurements are generally described in U.S. Pat. No. 9,909,982 issued Mar. 6, 2018, U.S. Pat. No. 9,164,397 issued on Oct. 20, 2015, and U.S. Pat. No. 9,390,492 issued on Jul. 12, 2016, all of which are incorporated herein by reference in their entireties. Regardless of how a reference image is generated, the reference image may be used to correct metrology data associated with a single measurement of an overlay target104at a single orientation.
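As a rough MATLAB sketch of the pupil-plane processing in Equations (1)-(8), assume pupil images I0_p and I0_m for the +f0 and −f0 cells at the 0-degree orientation, a pupil image I180_p of the +f0 cell at the 180-degree orientation, and known P and f0; the variable names, the single-cell reference construction, and the half-pupil averaging are simplifying assumptions rather than the disclosed implementation.
% Sketch only: reference generation, symmetrization, and first-order pupil SCOL (Eqs. (1)-(5)).
ref = I0_p + rot90(I180_p, 2); % sum 0- and 180-degree pupils (180-degree pupil rotated back; convention may vary)
ref = ref ./ ((ref + rot90(ref, 2)) / 2); % symmetrize to prevent noise amplification
corr_p = I0_p ./ ref; % corrected +f0 pupil image
corr_m = I0_m ./ ref; % corrected -f0 pupil image
Dp = (corr_p - rot90(corr_p, 2)) / 2; % differential signal D^{+f0} at each pupil coordinate, Eq. (1)
Dm = (corr_m - rot90(corr_m, 2)) / 2; % differential signal D^{-f0} at each pupil coordinate, Eq. (2)
half = false(size(Dp)); half(:, 1:floor(end/2)) = true; % average over one half-pupil (one diffraction sign)
K = mean(Dp(half) + Dm(half)) / 2; % K signal, Eq. (3)
G = mean(Dp(half) - Dm(half)) / 2; % G signal, Eq. (4)
OVL = P / (2*pi) * atan((K / G) * tan(2*pi*f0 / P)); % overlay estimate, Eq. (5)
Here the half-pupil restriction stands in for the selection of the first-order region described above; a weighted average over the actual diffraction-order support would be used in practice.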
For example, a pupil image (e.g., the metrology data) may be divided by the reference image to mitigate the impact of TIS errors. However, the efficacy of a reference image for mitigation of TIS errors is related to how well the reference image captures the measurement non-uniformities that contribute to the TIS errors. It is recognized herein that many previous SCOL techniques have utilized a single reference image for all overlay targets104on a sample106based on an assumption that the measurement non-uniformities contributing to TIS errors are primarily due to non-uniformities of illumination and/or collection by the optical sub-system102. However, it is contemplated herein that sample variations between overlay targets104may also contribute to TIS errors such that a single reference image may not accurately mitigate TIS errors for all overlay targets104. FIG.4is a flow diagram illustrating steps performed in a method400for generating different reference images for metrology data associated with different groups of overlay targets104, in accordance with one or more embodiments of the present disclosure. Applicant notes that the embodiments and enabling technologies described previously herein in the context of the overlay metrology system100should be interpreted to extend to the method400. It is further noted, however, that the method400is not limited to the architecture of the overlay metrology system100. In some embodiments, the method400includes a step402of receiving metrology data associated with overlay targets104on one or more samples106. For example, the metrology data may include any combination of pupil-plane or field-plane images suitable for an overlay measurement using a SCOL technique (e.g., based on a selected metrology recipe). The metrology data generated in step402may be associated with all overlay targets104or a subset of overlay targets104on the one or more samples106. Further, the metrology data may be associated with overlay measurements along one or more measurement directions. In some embodiments, the method400includes a step404of generating a reference metric for each of the overlay targets104based on the metrology data, where the reference metric is associated with one or more properties of the respective overlay targets104that contribute to overlay error. In some embodiments, the method400includes a step406of classifying overlay targets104into one or more groups based on the reference metrics. In some embodiments, the method400includes a step408of generating a reference image for at least some of the one or more groups. In a general sense, the steps406and408may provide for any number of groups based on the reference metrics. It is contemplated herein that the number of groups may depend on the particular characteristics of each particular sample106or samples106within a particular lot and that a desired number of groups may not be known a priori. Accordingly, the method400may facilitate the analysis of the overlay targets104on each sample106based on the reference metric (e.g., step404), the determination of a suitable number of groups that would benefit from separate reference images (e.g., step406), and the generation of the associated reference images (e.g., step408). As described previously herein, the reference images may be generated using any suitable technique known in the art.
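A compact MATLAB sketch of steps 402-408 follows; the helpers reference_metric (step 404) and make_reference (step 408), the container metrology_data, and the bin count are hypothetical placeholders standing in for the metric computation and reference-image generation described herein.
% Sketch of method 400 (helper functions and bin count are assumptions, not part of the disclosure).
metric = cellfun(@reference_metric, metrology_data); % step 404: one reference metric per overlay target
edges = linspace(min(metric), max(metric), 4); % step 406: bin edges for, e.g., three reference groups
group = discretize(metric, edges); % reference-group index for each target
refs = cell(1, max(group));
for g = 1:max(group)
    rep = find(group == g, 1); % one representative target per group (one possible choice)
    refs{g} = make_reference(metrology_data{rep}); % step 408: reference image from that target's 0/180-degree data
end
% Subsequent measurements: compute the metric, look up the matching group, and correct with refs{group}.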
In some embodiments, a reference image for a particular group is generated based on metrology data from a single representative overlay target104in the group (e.g., metrology data from multiple orientations of the overlay target104). In some embodiments, a reference image for a particular group is generated based on metrology data from multiple representative overlay targets104. The reference metric may include any metric associated with properties of the overlay targets104that contribute to overlay error (e.g., TIS error). Such overlay errors may be associated with non-uniformities of the overlay target104and may vary based on location on a sample106. For example, it may be the case that a sample106may exhibit process variations with a radial distribution. As an illustration, characteristics such as, but not limited to, film thickness, sidewall angles of fabricated features, or asymmetries of fabricated features may vary with a radial distribution across a sample106. Further, one or more such process variations may impact overlay measurements such that a single reference image may not be suitable for mitigating TIS errors across the entire sample106. Accordingly, measurements of one or more such process variations may be used as a basis for separating metrology data into reference groups, each having a different reference image. Process variation detection is generally described in U.S. Pat. No. 10,699,969 issued on Jun. 30, 2020, which is incorporated herein by reference in its entirety. In some embodiments, the reference metric is based on at least one of a physical measurement, an expected measurement, or a simulation of one or more properties of the overlay targets104. Such measurements may be performed at locations of the overlay targets104or at different locations on the sample106and extrapolated to the locations of the overlay targets104. For example, the reference metric may be based on measurements such as, but not limited to, asymmetry of target features (e.g., the first-layer printed elements204and/or second-layer printed elements208as depicted inFIGS.2A-2B), sidewall angle along any direction, critical dimension (CD), or layer thickness (e.g., a thickness of the first layer206and/or the second layer210as depicted inFIGS.2A-2B). In some embodiments, the reference metric is derived from the metrology data itself. In this way, no additional measurements, simulations, or assumptions may be necessary. Generation of a reference metric based on pupil-plane SCOL techniques is now described, in accordance with one or more embodiments of the present disclosure. However, it is to be understood that this description is provided purely for illustrative purposes and should not be interpreted as limiting. Rather, concepts disclosed herein may be extended to field-plane SCOL techniques by one of ordinary skill in the art. In some embodiments, the reference metric includes or is based on a pupil center slope measurement generated based on metrology data prior to correction with a reference image, where the pupil center slope may correspond to a variation of a determined overlay measurement based on selection of the pupil center position308. As depicted inFIG.3, an overlay measurement may be determined based on difference signals generated from pupil-plane images of light emanating from an overlay target104. However, such an approach may be sensitive to the selection of the pupil center position308.
FIG.5is a conceptual view of a pupil image (e.g., an image in a collection pupil plane) illustrating the generation of differential signals with a different pupil center position308than inFIG.3, in accordance with one or more embodiments of the present disclosure. As shown inFIG.5, opposing pupil coordinates are defined by the pupil center position308. As a result, differential signals and a resulting overlay determination may be influenced by the selection of the pupil center position308. Additionally, the selection of the pupil center position308may impact the TIS error associated with a measurement as will be described in greater detail below. The relationship between the pupil center slope (PCslope), a resulting overlay measurement, and TIS may be described by Equations (9)-(11), which share common variables and definitions with Equations (7) and (8):

OVL0 = OVL + Δp·PCslope   (9)

OVL180 = OVL − Δp·PCslope   (10)

TIS = Δp·PCslope   (11)

where Δp corresponds to a pupil center shift from a reference position (e.g., an arbitrary position) and may have a unit of pixels as measured by a pupil-plane image (e.g., metrology data). FIG.6is a plot of TIS as a function of pupil center shift (Δp) for several overlay targets104illustrating the relationship between the pupil center position308, TIS, and the pupil center slope (PCslope) for different overlay targets104, in accordance with one or more embodiments of the present disclosure. For example, the TIS may generally vary approximately linearly with the selection of the pupil center position308(e.g., with the pupil center shift), where the pupil center slope (PCslope) corresponds to the slope of this linear relationship. However, metrology data for different overlay targets104may exhibit different values of the pupil center slope (PCslope) and may further have different linear constants affecting a value at which the TIS is minimized (e.g., zero). Although not explicitly shown, similar plots may be constructed relating a determined overlay value at a given orientation (e.g., OVL0, OVL180, or the like) to the pupil center position308(or pupil center shift as shown inFIG.6). It is noted that typical pupil-plane based SCOL techniques may select the pupil center position308in various ways. For example, the pupil center position308may be selected to minimize TIS for all measured samples. This technique is illustrated inFIG.6by a dashed line602indicating a selection of the pupil center shift (and associated pupil center position308) that seeks to jointly minimize the average TIS and the 3σ variation across the measured overlay targets104. Pupil center optimization techniques are also generally described in International Patent Publication No. 2014/138741 published on Sep. 12, 2014, which is incorporated herein by reference in its entirety. It is contemplated herein that the pupil center slope (PCslope) is a metric that characterizes properties of the sample106at a location of an overlay target104and is independent of the particular optical sub-system102used to generate the associated metrology data. In particular, simulations and SCOL theory indicate that the pupil center slope relates to the thickness of a measured layer (e.g., as illustrated inFIG.7A). As a result, the pupil center slope is well suited to operate as a reference metric for grouping overlay targets having similar physical properties based solely on metrology data (e.g., pupil-plane images of an overlay target104) without requiring additional measurements. 
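As a concrete illustration of Equations (9)-(11), the pupil center slope for a target can be estimated by sweeping the pupil center shift Δp and fitting the resulting TIS values with a line. The sketch below assumes pupil-plane images of a target at 0-degree and 180-degree orientations and uses TIS = (OVL0 − OVL180)/2, consistent with Equations (9) and (10); overlay_from_pupil stands in for the differential-signal overlay calculation of FIGS. 3 and 5 and is a hypothetical placeholder.

```python
import numpy as np

def overlay_from_pupil(pupil_image, center):
    """Hypothetical placeholder for the differential-signal overlay
    calculation (FIGS. 3 and 5); it pairs opposing pupil coordinates about
    the chosen pupil center position and returns an overlay value."""
    raise NotImplementedError

def pupil_center_slope(pupil_0deg, pupil_180deg, nominal_center, shifts_px):
    """Estimate PCslope by sweeping the pupil center shift (in pixels) and
    fitting TIS as a linear function of the shift, per Equation (11)."""
    tis_values = []
    for dp in shifts_px:
        center = (nominal_center[0] + dp, nominal_center[1])
        ovl_0 = overlay_from_pupil(pupil_0deg, center)
        ovl_180 = overlay_from_pupil(pupil_180deg, center)
        tis_values.append(0.5 * (ovl_0 - ovl_180))  # TIS from Eqs. (9)-(10)
    slope, _intercept = np.polyfit(shifts_px, tis_values, 1)
    return slope
```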
Further, when a reference image is generated based on metrology data from a particular overlay target104(e.g., based on metrology data from the particular overlay target104from two orientations rotated 180 degrees from each other), the pupil center slope (PCslope) calculated based on the corrected metrology data may be approximately zero (at least over a suitable range of locations of the pupil center position308). As a result, the TIS at this location also approaches zero or a constant (e.g., as illustrated by Equations (9)-(11) andFIGS.3,5, and6). FIGS.7A and7Billustrate simulations of the impact of the selection of a reference image on the TIS of multiple overlay targets104, in accordance with one or more embodiments of the present disclosure. In particular,FIGS.7A and7Bare based on simulations with overlay targets104in locations of different layer thickness.FIG.7Ais a plot of TIS as a function of pupil center shift based on simulated metrology data from multiple overlay targets104that are not corrected with a reference image, in accordance with one or more embodiments of the present disclosure. InFIG.7A, each overlay target104(represented by a site on the sample106) has a different pupil center slope (PCslope) indicating varying physical characteristics of the associated overlay targets104.FIG.7Bis a plot of TIS as a function of pupil center shift based on simulated metrology data from multiple overlay targets104that are corrected with a reference image generated based on simulated metrology data from a particular one of the overlay targets104, in accordance with one or more embodiments of the present disclosure. In particular, the reference image used to generateFIG.7Bwas generated based on metrology data from an overlay target104at site3. InFIG.7B, the corrected pupil center slope (and the TIS) are approximately zero for the range of pupil center shifts considered. Further, overlay targets104having metrology data with similar values of the pupil center slope prior to correction with the reference image (as depicted inFIG.7A) are also well-corrected by the same reference image as indicated by similarly low values of the corrected pupil center slope and TIS. However, overlay targets104having metrology data with substantially different values of the pupil center slope prior to correction with the reference image (as depicted inFIG.7A) are not as well-corrected by the reference image as indicated by relatively high corrected pupil center slope and TIS values inFIG.7B. It is thus contemplated herein that a reference image may characterize the interaction of an optical sub-system102with a measured overlay target104(or location of a measured overlay target104) in addition to the non-uniformities of the optical sub-system102alone. Since the characteristics of overlay targets104may vary across different locations of a sample106due to factors such as, but not limited to, reflectivity variations (which may require different illumination intensities per site) or other imperfections in the measurements or associated calculations, the efficacy of a particular reference image at mitigating TIS may vary across a sample106or with process variations more generally. 
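Tying this to FIGS. 7A and 7B, the effect of a chosen reference image can be checked by correcting the pupil images and re-estimating the pupil center slope. The sketch below assumes the per-pixel division correction mentioned earlier in this section and reuses the pupil_center_slope helper from the previous sketch; both the correction rule and the helper names are illustrative assumptions.

```python
def correct_with_reference(pupil_image, reference_image, eps=1e-12):
    """Sketch of reference-image correction: per-pixel division of the
    measured pupil image by the reference image (one option noted above);
    eps guards against division by zero."""
    return pupil_image / (reference_image + eps)

def residual_pc_slope(pupil_0deg, pupil_180deg, reference, nominal_center, shifts_px):
    """Re-estimate PCslope on corrected data (reuses pupil_center_slope from
    the previous sketch).  For targets whose physical properties match the
    reference target (e.g., site 3 in FIG. 7B), the corrected slope, and
    hence the TIS over the swept range, should be near zero; dissimilar
    targets retain a larger residual slope."""
    c0 = correct_with_reference(pupil_0deg, reference)
    c180 = correct_with_reference(pupil_180deg, reference)
    return pupil_center_slope(c0, c180, nominal_center, shifts_px)
```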
However, TIS may be reduced for multiple overlay targets104with varying characteristics by first grouping (e.g., in step406) the metrology data from the overlay targets104into one or more groups based on the values of the pupil center slope (e.g., as measured prior to correction by any reference image) followed by generating reference images for each group based on metrology data for overlay targets104within the respective groups (e.g., in step408). As illustrated by Equations (9)-(11) andFIGS.3,5, and6, the value of the pupil center slope may be generated based on determined overlay measurements (e.g., OVL0measurements) or based on calculated TIS values (e.g., associated with both OVL0and OVL180measurements). However, the use of overlay measurements from a single set of metrology data (e.g., OVL0measurements) may be faster and thus desirable in some applications. Referring again toFIG.4more generally, the step406of classifying metrology data from overlay targets104into groups based on a reference metric (e.g., pupil center slope or any other suitable metric) may be performed in various ways within the spirit and scope of the present disclosure. In some embodiments, a range of values of the reference metric observed in the metrology data from the overlay targets104may be divided into any number of bins representative of the one or more groups. In this way, metrology data associated with a particular overlay target104may be classified based on the corresponding bin containing the value of the calculated reference metric for this overlay target104. In a general sense, any number of groups and associated reference images may be generated. However, since the generation of reference images requires metrology data at two sample orientations (e.g., a 0 degree orientation and a 180 degree orientation), a large number of groups and their associated reference images will require more data collection at multiple sample orientations. In this case, there is a tradeoff between achieving minimal TIS (by having a large number of groups) and faster measurements of the entire sample (by having a smaller number of groups). FIGS.8A and8Bdepict the classification of metrology data from overlay targets104across the sample into three groups.FIG.8Ais a plot illustrating a map of pupil center slope (e.g., a selected reference metric) for overlay targets104across a sample106, in accordance with one or more embodiments of the present disclosure. InFIG.8A, the reference metric exhibits a clear radial pattern. As described previously herein, such a pattern may be associated with a radial layer thickness pattern across the sample106. However, it is to be understood thatFIG.8Ais merely illustrative and that the reference metric may have any distribution across a sample106. FIG.8Bis a plot illustrating a map of the groups into which the overlay targets104are classified (e.g., based on step406), in accordance with one or more embodiments of the present disclosure. As shown inFIG.8B, the groups may encompass spatial regions of the sample106having similar values of the reference metric. In this particular example, a first group802aincludes a central region of the sample, a second group802bincludes an intermediate radial region, and a third group802cincludes an edge region of the sample106. It is to be understood, however, that the groupings need not necessarily be contiguous on the sample106. For example, several disparate regions of a sample106may fall into a common group. 
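The bin-based classification of step406 described above can be sketched directly. The snippet below divides the observed range of the reference metric into equal-width bins, one per group, matching the three-region example of FIG. 8B; equal-width binning and the three-group count are illustrative assumptions, since the disclosure leaves the partition open.

```python
import numpy as np

def bin_reference_metric(metric_values, n_groups=3):
    """Step 406 (sketch): partition the observed range of the reference
    metric (e.g., pupil center slope) into n_groups equal-width bins and
    assign each target the index of the bin containing its metric value."""
    metric_values = np.asarray(metric_values, dtype=float)
    edges = np.linspace(metric_values.min(), metric_values.max(), n_groups + 1)
    # Interior edges give labels in 0..n_groups-1; clip keeps the maximum
    # metric value inside the last bin.
    labels = np.clip(np.digitize(metric_values, edges[1:-1]), 0, n_groups - 1)
    return labels, edges

# Example: labels, edges = bin_reference_metric(pc_slopes, n_groups=3)
# would reproduce a three-group partition like the one shown in FIG. 8B.
```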
Referring again toFIG.4, additional steps of the method400are now described, in accordance with one or more embodiments of the present disclosure. In some embodiments, the method400includes a step410of generating corrected metrology data for the overlay targets104using the associated reference image for the one or more groups (e.g., at least the overlay targets104in groups for which a reference image was generated). In some cases, all measured overlay targets104are classified into groups (e.g., in step406) and reference images are generated for each of the groups (e.g., in step408). In some embodiments, the method400includes a step412of generating overlay measurements for the plurality of overlay targets based on the corrected set of metrology data. In this way, the provided overlay measurements associated with each overlay target104may be well-corrected for errors such as, but not limited to, TIS based on appropriate selection of reference images. It is further contemplated herein that the reference groups and associated reference images may be applied to additional overlay targets104on the same or different samples106. In some embodiments, a set of overlay targets104used to classify reference groups and generate associated reference images is only a sub-set of overlay targets104on a sample106(e.g., as defined by a metrology recipe). In this case, metrology data generated for overlay targets104not in this original set may simply be classified based on the existing groups. As an illustration in the case of classification based on bins or ranges of the reference metric, values of the reference metric may be generated for additional overlay targets104(e.g., in a corollary to step404) such that the additional overlay targets104may be classified into a group based on the associated bin or range of the reference metric (e.g., in a corollary to step406). Subsequently, the associated reference image for the group may be used to generate a corrected overlay measurement for each of the additional overlay targets104(e.g., in a corollary to step410and step412). Similarly, the reference groups and associated reference images may be applied to additional overlay targets104on additional samples106. In this case, values of the reference metric may be determined for the additional overlay targets104on the additional samples106for classification based on the existing groups. Subsequently, the associated reference image for the group may be used to generate a corrected overlay measurement for each of the additional overlay targets104. It is noted that the metrology data associated with additional overlay targets104may be classified based on the reference metric and need not necessarily be classified into the same group as neighboring overlay targets104or overlay targets104in the same location on previous samples106. In this way, the most suitable reference image may be used for each overlay target104. It is further contemplated herein that the reference groups as defined at one point in time may need to be updated. For example, it may be the case that one or more fabrication processes drift sufficiently over time that the efficacy of one or more reference images for the associated groups may decrease. In a general sense, the method400or portions thereof (e.g., step406related to the definition of ranges of the reference metric that correspond to any of the groups and/or step408related to generation of associated reference images for any of the groups) may be repeated or updated using any suitable technique. 
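The corollary application to additional targets can be sketched by reusing the stored bin edges and per-group reference images from the original classification. The helper names below (compute_reference_metric from the first sketch, overlay_fn for the recipe's overlay calculation) are assumed placeholders.

```python
import numpy as np

def classify_additional_target(metric_value, edges):
    """Corollary to step 406: place a new target into an existing group
    using the previously stored bin edges."""
    n_groups = len(edges) - 1
    return int(np.clip(np.digitize(metric_value, edges[1:-1]), 0, n_groups - 1))

def corrected_overlay_for_additional_target(target, edges, reference_images, overlay_fn):
    """Corollary to steps 404, 406, 410 and 412: metric, group, correction
    with that group's reference image, then overlay from the corrected data."""
    metric = compute_reference_metric(target)          # sketched earlier
    group = classify_additional_target(metric, edges)
    corrected = target["pupil_0"] / reference_images[group]
    return overlay_fn(corrected)
```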
In some embodiments, the method400or portions thereof may be performed at periodic intervals. In some embodiments, the method400or portions thereof may be performed upon a trigger condition. Any suitable trigger condition may be used. One non-limiting example of a trigger condition may be the number of times that an overlay target104at a particular location on a sample106is classified into a different group than it was for an original sample106(either an absolute number or a number of times in a selected time window). For example, it may be expected that overlay targets104at a particular location of samples106should have relatively consistent physical parameters and thus relatively consistent reference metrics. Under this assumption, variations of a reference metric associated with a particular location over time may indicate sample-to-sample process variations. At some point, such sample-to-sample variations may render the reference image for one or more groups ineffective (e.g., TIS errors above a selected threshold value may be present). However, updating the classifications of the groups (e.g., ranges of the reference metric associated with any of the groups) and/or the reference images for any of the groups may again provide effective TIS error correction for all overlay targets104. Referring again toFIGS.1A and1B, additional aspects of the overlay metrology system100are described in greater detail, in accordance with one or more embodiments of the present disclosure. The one or more processors110of a controller108may include any processor or processing element known in the art. For the purposes of the present disclosure, the term “processor” or “processing element” may be broadly defined to encompass any device having one or more processing or logic elements (e.g., one or more micro-processor devices, one or more application specific integrated circuit (ASIC) devices, one or more field programmable gate arrays (FPGAs), or one or more digital signal processors (DSPs)). In this sense, the one or more processors110may include any device configured to execute algorithms and/or instructions (e.g., program instructions stored in memory). In one embodiment, the one or more processors110may be embodied as a desktop computer, mainframe computer system, workstation, image computer, parallel processor, networked computer, or any other computer system configured to execute a program configured to operate or operate in conjunction with the overlay metrology system100, as described throughout the present disclosure. Moreover, different subsystems of the overlay metrology system100may include a processor or logic elements suitable for carrying out at least a portion of the steps described in the present disclosure. Therefore, the above description should not be interpreted as a limitation on the embodiments of the present disclosure but merely as an illustration. Further, the steps described throughout the present disclosure may be carried out by a single controller108or, alternatively, multiple controllers. Additionally, the controller108may include one or more controllers housed in a common housing or within multiple housings. In this way, any controller or combination of controllers may be separately packaged as a module suitable for integration into the overlay metrology system100. The memory112may include any storage medium known in the art suitable for storing program instructions executable by the associated one or more processors110. 
For example, the memory112may include a non-transitory memory medium. By way of another example, the memory112may include, but is not limited to, a read-only memory (ROM), a random-access memory (RAM), a magnetic or optical memory device (e.g., disk), a magnetic tape, a solid-state drive and the like. It is further noted that memory112may be housed in a common controller housing with the one or more processors110. In one embodiment, the memory112may be located remotely with respect to the physical location of the one or more processors110and controller108. For instance, the one or more processors110of controller108may access a remote memory (e.g., server), accessible through a network (e.g., internet, intranet and the like). FIG.1Bis a simplified schematic of the optical sub-system102, in accordance with one or more embodiments of the present disclosure. In embodiments, the optical sub-system102is configurable according to a metrology recipe (e.g., an overlay recipe) to generate an overlay measurement associated with an overlay target104having a design based on the metrology recipe. For example, the optical sub-system102may direct one or more illumination beams114from an illumination source116to an overlay target104on the sample106(e.g., via an illumination pathway118), collect light or other radiation emanating from the overlay target104in response to the one or more illumination beams114(referred to herein as measurement light120) (e.g., via a collection pathway122), and generate metrology data from at least one detector124based on the measurement light120. For example, metrology data may be associated with diffraction of the illumination beams114from the overlay target104. The illumination pathway118and the collection pathway122may further include additional optical elements such as, but not limited to, lenses126and beam manipulation components128(e.g., polarizers, spectral filters, spatial filters, beam blockers, apertures, or the like) at any suitable locations. In some embodiments, as illustrated inFIG.1B, the optical sub-system102includes one or more beamsplitters130to allow simultaneous illumination and collection through a common objective lens132or other focusing element. In some embodiments, the optical sub-system102includes a translation stage134including any number of linear or rotational actuators to secure and/or position the sample106. The optical sub-system102may include one or more detectors124at any suitable locations for the collection of metrology data. For example, the optical sub-system102may include at least one detector124at a field plane136(e.g., a plane conjugate to the sample106), which is illustrated inFIG.1B. As another example, though not illustrated, the optical sub-system102may include at least one detector124at a pupil plane138(e.g., a diffraction plane corresponding to an angular distribution of light from the sample106). Further, although not illustrated, the optical sub-system102may include multiple channels, each having a separate detector124. In this way, the optical sub-system102may provide multiple simultaneous measurements using multiple detectors124at any combination of field planes136or pupil planes138. For example, the optical sub-system102may include one or more beamsplitters (e.g., non-polarizing beamsplitters, polarizing beamsplitters, dichroic mirrors providing spectral selectivity, or the like) to split the measurement light120into the different channels for detection. 
The optical sub-system102may further include optical components to modify the properties of the measurement light120within each channel such as, but not limited to, polarizers, polarization rotators, spectral filters, spatial filters, or pupil filters (e.g., beam blocks or apertures in a pupil plane to block or pass selected diffraction orders). The herein described subject matter sometimes illustrates different components contained within, or connected with, other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “connected” or “coupled” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “couplable” to each other to achieve the desired functionality. Specific examples of couplable include but are not limited to physically interactable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interactable and/or logically interacting components. It is believed that the present disclosure and many of its attendant advantages will be understood by the foregoing description, and it will be apparent that various changes may be made in the form, construction, and arrangement of the components without departing from the disclosed subject matter or without sacrificing all of its material advantages. The form described is merely explanatory, and it is the intention of the following claims to encompass and include such changes. Furthermore, it is to be understood that the invention is defined by the appended claims.
53,897
11861825
DETAILED DESCRIPTION OF EMBODIMENTS The illustration in the drawings is schematic. In different drawings, similar or identical elements are provided with the same reference numerals. FIG.1shows a schematic illustration of a classification system1for classifying a patient's vasculature. In the exemplary embodiment ofFIG.1, the classification system1comprises a training device100, an input unit200, an inference unit300, a display unit400and a user interface500. Further, the classification system1is communicatively coupled to a database2. In the embodiment according toFIG.1, the classification system1includes training device100which receives diagnostic image data10representing a first vessel tree. The training device100implements a deep learning algorithm which is trained using the diagnostic image data10as a training dataset. For that purpose, the diagnostic image data may particularly comprise hundreds to thousands of diagnostic images of the vasculature of multiple patients. These are used to generate an initial model representing the vessels in the first vessel tree. In the exemplary embodiment ofFIG.1, the diagnostic images comprised in the diagnostic image data have been acquired using X-ray imaging. Other imaging modalities may, however, likewise be used as long as they enable imaging of the vessels. Further, the diagnostic image data comprises a vessel labeling for one or more vessels, in particular for at least the vessels of a standard anatomical model of a vasculature. Using the diagnostic image data, the training device may then be trained, with a relatively small training dataset, to obtain an initial model of a vasculature which includes the vessels of the first vessel tree. The classification system1further comprises input unit200. Input unit200is configured to receive a diagnostic image20representing a second vessel tree. The diagnostic image20has been acquired for the patient whose vasculature is to be assessed using the classification system. In the exemplary embodiment ofFIG.1, the at least one diagnostic image20has been obtained using X-ray imaging. Other imaging modalities, such as ultrasound imaging, may likewise be used as long as they allow the vessels in the patient's vasculature to be identified. In some embodiments, the diagnostic image20has been preprocessed elsewhere and the input unit200further receives the extracted centerline information for the second vessel tree, such as to allow for identifying its geometry and topology, along with the at least one diagnostic image. In some embodiments, the input unit200only receives the diagnostic image20and provides the diagnostic image20to inference unit300which then performs centerline extraction to identify the geometry of the vessels in the second vessel tree. In the embodiment ofFIG.1, inference unit300thus receives the diagnostic image20representative of the second vessel tree from input unit200and, further, the initial model representative of the first vessel tree from training device100. Inference unit300compares the first and second vessel tree with one another and identifies one or more deviations between the first and second vessel tree. To that end, inference unit300may particularly employ a difference approach, i.e. it may obtain a difference value for a particular variation between the geometries of the vessels in the first and second vessel tree, respectively, and compare said difference value to a predetermined threshold. 
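To make the role of the training device100 concrete, the following PyTorch sketch trains a small per-pixel vessel-labeling network on the labeled diagnostic image data10. The architecture, label count, tensor shapes and hyperparameters are illustrative assumptions only; the disclosure does not specify a particular deep learning model.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class VesselLabeler(nn.Module):
    """Minimal sketch of a network predicting a per-pixel vessel label map
    from a single-channel diagnostic image (assumed architecture)."""
    def __init__(self, n_labels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_labels, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_initial_model(images, label_maps, epochs=10, lr=1e-3):
    """Train the initial model on the labeled diagnostic image data (10).
    images: float tensor (N, 1, H, W); label_maps: long tensor (N, H, W)."""
    model = VesselLabeler()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(images, label_maps), batch_size=4, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```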
If the difference value is below said threshold, the difference is considered to be within the error range of the imaging modality and no deviation is identified. If the threshold value is exceeded, however, it is assumed that a deviation is present. In case the inference unit300determines that a deviation is present, the inference unit300provides a respective indication to display unit400. In the particular embodiment ofFIG.1, display unit400comprises a computer screen, on which a visual indication that a deviation has been detected may be displayed to a user. Accordingly, display unit400may generate said visual indication. The display unit400may particularly provide a graphical representation of some or all of the vessels in the second vessel tree along with a marker or other indicator to indicate the deviation identified. The display unit400may further output a request to a user to provide a labeling for the deviation. This request may either be a suggestion as to how to label the deviation, which the user simply has to accept or decline, or a list of possible labeling suggestions from which the user may select. The request may also be a prompt to manually input a label for said deviation. Further requests may also be envisioned that prompt the user to interact with the system1. The user may particularly provide a respective user input indicating the labeling via user interface500. In the exemplary embodiment ofFIG.1, user interface500may particularly comprise a keyboard. Further, user interface500may comprise a touchscreen, a mouse, a remote control or the like. Once the user has input the labeling, the thus labeled diagnostic image20is then returned to inference unit300which transmits the diagnostic image20, along with the labeling input by the user, to training device100to expand the training set for training device100with said diagnostic image20and the labeling. Based on the thus expanded training set, training device100may then use the diagnostic image20and the labeling to adjust the (initial) model of the vasculature accordingly. It shall be understood that the adjusting of the model may be performed in a number of ways, e.g. by semi-supervised learning or by adding the labeled diagnostic image20to the diagnostic image data10and retraining the training device. In the specific embodiment according toFIG.1, the training device is retrained with a new model based on an updated training dataset including the newly labeled diagnostic image20. According to the embodiment ofFIG.1, the process is iteratively repeated for a plurality of diagnostic images20that have been acquired for the patient. Each diagnostic image20is hereby processed and labeled as described above and subsequently used to retrain the training device. By means of this iterative retraining, the model trained by the training device is gradually adjusted to approach the vasculature of the patient for whom the diagnostic images are collected. Thus, while at the beginning of the process the user interactions will be manifold, as many deviations may be identified, the required user input will diminish over time as the trained model resembles the patient-specific vasculature more and more closely. By means of this approach, an accurate automatic labeling of the vessels in the vasculature may gradually be achieved that also takes account of the anatomical variations for each patient. 
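A minimal sketch of the difference approach used by the inference unit300 is given below, assuming the two vessel trees have been reduced to corresponding (point-matched) centerline samples; the correspondence step and the Euclidean distance measure are assumptions, as the disclosure only requires a difference value compared against a predetermined threshold.

```python
import numpy as np

def find_deviations(model_points, patient_points, threshold):
    """Compare corresponding centerline points of the first (model) and
    second (patient) vessel tree and flag a deviation wherever the
    point-wise distance exceeds the predetermined threshold; smaller
    differences are attributed to the error range of the imaging modality."""
    diffs = np.linalg.norm(np.asarray(model_points) - np.asarray(patient_points), axis=1)
    deviation_idx = np.flatnonzero(diffs > threshold)
    return deviation_idx, diffs

# Indices in deviation_idx would be indicated to the display unit (400) so
# that the user can be prompted for a labeling via the user interface (500).
```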
By means of this automatic labeling approach, a medical representation, particularly a customized schematic representation of the patient's vasculature, may be obtained. This medical representation may be structured according to a predetermined format. In the particular embodiment according toFIG.1, the medical representation may particularly be structured according to the SNOMED CT medical terminology and coding standard. The thus structured data may then be transmitted to and stored in database2. Database2may be accessed by other physicians to obtain patient information and case reports and/or by third parties, such as registries or organizations for treatment and trend analysis and prognosis. Since the medical representation is provided in a pre-determined format, the information provided therein may be interpreted easily and in an unambiguous manner. FIG.2schematically illustrates a method for classifying a patient's vasculature according to an embodiment. In step S101, the training device100receives the diagnostic image data10representing a first vessel tree. In step S102, the training device100, implementing a deep learning algorithm, is trained using the diagnostic image data10as a training dataset. The diagnostic image data may hereby particularly comprise hundreds to thousands of diagnostic images of multiple patients representing a plurality of vessels and a corresponding vessel labeling for at least a subset of these vessels. In step S103, the initial model that has been trained using the diagnostic image data10, which represents the first vessel tree including the vessel labeling, is provided to inference unit300. In step S201, input unit200receives a diagnostic image20acquired from a patient representing a second vessel tree. In the exemplary embodiment ofFIG.2, the input unit200, in step S202, provides the diagnostic image20to inference unit300for further processing. In this context, it may be understood that, in alternative embodiments, the diagnostic image20received by input unit200may have been preprocessed and may thus comprise extracted centerline information which is then passed, along with the diagnostic image20, to inference unit300. In step S301, inference unit300receives the diagnostic image20representative of the second vessel tree from input unit200. Further, also in step S301, inference unit300receives the initial model representative of the first vessel tree from training device100. In step S302, inference unit300identifies the geometry of the vessels in the second vessel tree represented by the diagnostic image20received from input unit200. In the exemplary embodiment ofFIG.2, inference unit300particularly uses a centerline extraction approach to identify the geometry of the vessels. In step S303, inference unit300then compares the first and second vessel tree, in particular their respective geometries, with one another and identifies one or more deviations between them, e.g. by means of the above-described difference approach. When a deviation is found by inference unit300, inference unit300, in step S304, provides a respective indication to display unit400. In step S401, the indication is received by display unit400which presents this indication, in step S402, to a user. In the embodiment according toFIG.2, the indication provided to the user may particularly be a visual indication. The indication may further comprise a haptic and/or auditory component. 
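The paragraph above calls for structuring the labeled result into a predetermined, coded format before storage in database2. The sketch below assembles such a record as JSON; the field names and the code values are illustrative placeholders and are not actual SNOMED CT content.

```python
import json

def build_structured_report(patient_id, labeled_vessels):
    """Assemble a structured medical representation of the labeled vessel
    tree for storage in the database (2).  The coding dictionary below is a
    placeholder, not real SNOMED CT codes."""
    coding = {"LAD": "code-LAD-placeholder", "RCA": "code-RCA-placeholder"}
    findings = [
        {"vessel": name, "code": coding.get(name, "unknown"), "geometry": geometry}
        for name, geometry in labeled_vessels.items()
    ]
    return json.dumps({"patient": patient_id, "findings": findings}, indent=2)

# Example with hypothetical labels and geometry values:
# print(build_structured_report("P001", {"LAD": {"length_mm": 42.0}}))
```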
To that end, display unit400may optionally generate, in step S402, a graphical representation of the vessels in the second vessel tree along with a marker or other indicator to indicate the identified deviation and output a request to a user to label said deviation by means of the user interface500. In step S501, the indication is output to a user to prompt the user to provide, via the user interface, a respective interactive input for labeling the deviation. In step S502, the user may optionally manually label the deviation in response to the request by the system. In the particular embodiment according toFIG.2, the user input providing the respective labeling prompts the system to return to initial training step S102. That is, the at least one newly labeled diagnostic image20is input into training device100which then adjusts the initial model in accordance with the new labeling. In the exemplary embodiment ofFIG.2, this is achieved by adding the newly labeled diagnostic image to the diagnostic image data in order to expand the training dataset and retraining the training device with the expanded training dataset. These steps may be iteratively repeated for a plurality of diagnostic images that have been acquired for the patient. By means of this iterative retraining, the model trained by the training device is gradually adjusted to approach the patient-specific vasculature. This leads to a gradual reduction of necessary user interactions while at the same time keeping the used training dataset as small as possible. FIG.3schematically illustrates a detailed method for identifying the geometry of the vessels in the second vessel tree from at least one diagnostic image and a deviation in geometry between the vessels of said second vessel tree and the vessels of the first vessel tree according to an embodiment. Specifically, in step S301, the initial model as provided by the training device100and the diagnostic image20as provided by the input unit200are received at the inference unit300. In step S302a, the inference unit segments the diagnostic image20and, in step S302b, extracts, based on the segmentation, the centerlines according to a known centerline extraction approach. Based on these extracted centerlines, the inference unit, in step S302c, identifies the geometry of the vessels in the second vessel tree. In step S303a, the inference unit300then compares the geometry of the vessels in the first vessel tree as inferred from the trained initial model with the geometry of the vessels in the second vessel tree as identified from the diagnostic image20. The comparison may particularly be performed by determining a difference value for selected points in the vessels in the first and second vessel tree, respectively, and comparing the difference value to a respective threshold in order to determine whether the difference is due to inaccuracies or due to actual geometric (and, thus, anatomical) variations between the vessels of the first and second vessel tree. Based on this comparison, the inference unit, in step S303b, identifies at least one deviation, such as a geometric or topological variation, between the vessels of the first and second vessel tree. In step S304, the inference unit300indicates the one or more identified deviations to display unit400for indicating the deviations to the user and prompting the user to interact with the data. 
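The per-image processing of steps S302a through S304 can be outlined as a short pipeline. In the sketch below, segment_vessels and extract_centerlines are placeholders for a segmentation algorithm and a known centerline extraction approach, the centerlines are assumed to be resampled to point-matched arrays per vessel, and the thresholded difference follows the earlier deviation sketch; these are assumptions, not details fixed by the disclosure.

```python
import numpy as np

def segment_vessels(image):
    """S302a placeholder: vessel segmentation of the diagnostic image (20)."""
    raise NotImplementedError

def extract_centerlines(segmentation):
    """S302b placeholder: centerlines per vessel, e.g. {"vessel_name": Nx2 array}."""
    raise NotImplementedError

def identify_deviations(model_centerlines, image, threshold):
    """S302a-S303b (sketch): segment, extract centerlines, compare the
    patient geometry to the model geometry, and collect deviations."""
    patient_centerlines = extract_centerlines(segment_vessels(image))   # S302a-S302c
    deviations = []
    for vessel, pts in patient_centerlines.items():                     # S303a
        ref = model_centerlines.get(vessel)
        if ref is None:
            deviations.append((vessel, "topological variation"))        # S303b
            continue
        d = np.linalg.norm(np.asarray(pts) - np.asarray(ref), axis=1)
        if np.any(d > threshold):
            deviations.append((vessel, "geometric variation"))          # S303b
    return deviations   # S304: forwarded to the display unit (400)
```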
This allows the data set used to define the model of the patient's vasculature to be expanded interactively, gradually improving the model's accuracy and allowing for increasingly autonomous labeling and structured reporting, while keeping the initial training dataset rather small. Although in the above described embodiments, the diagnostic images have been obtained using X-ray imaging, it shall be understood that in other embodiments, the diagnostic images may likewise be retrieved by other imaging methods, such as positron emission tomography, single photon emission computed tomography, magnetic resonance imaging, X-ray scanning, ultrasound imaging or the like. Further, it shall be understood that, although in the above embodiments, the initial training dataset comprised diagnostic image data including a plurality of diagnostic images and corresponding vessel labeling, the initial training dataset may also be derived from a small set of three-dimensional coronary atlases containing 3D centerlines for the coronaries that may be projected onto a two-dimensional image by using the 3D geometry parameters of the X-ray system. Further, while in the above embodiments, the analysis has been performed on the coronary vasculature, in other embodiments, the analysis may likewise be performed on the vasculature in other parts of the human body, such as the peripheral vasculature. It shall also be understood that, although in the above embodiments, the data has been stored in the database according to SNOMED CT, other data structures that are easy to interpret and provide unambiguous information may likewise be used. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single unit or device may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Procedures like the receiving of the diagnostic image data, the receiving of the at least one diagnostic image, the segmenting of said diagnostic image, the extracting of the centerlines, the identifying of deviations between the first and second vessel topology, et cetera performed by one or several units or devices can be performed by any other number of units or devices. These procedures in accordance with the invention can hereby be implemented as program code means of a computer program and/or as dedicated hardware. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium, supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope. 
A method for classifying a vasculature comprising the steps of: a) training a training device with an initial model of the vasculature using diagnostic image data representing a first vessel tree, said diagnostic image data comprising a corresponding vessel labeling for at least one vessel of said first vessel tree, b) inputting at least one diagnostic image representing a second vessel tree, c) identifying at least one deviation between the first vessel tree and the second vessel tree, d) in response to the identifying, outputting an indication of said at least one deviation to a user and providing at least one labeling for said at least one deviation, and e) adjusting, based on the at least one deviation and the at least one labeling, the initial model to classify the vasculature. By means of this method, deviations from the initial model may be resolved interactively by presenting deviations to an expert, such as a physician, who may interactively label the deviations as represented in the images and, thereby, expand the training set with the necessary knowledge about the anatomical variation between patients. This gradually yields an accurate automatic labeling of the (coronary) vasculature.
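As a compact illustration of steps a) through e), the sketch below wires the pieces together as a human-in-the-loop retraining loop; train_fn, infer_fn and ask_user_fn are assumed callables standing in for the training device (100), inference unit (300) and user interface (500), and are not names from the disclosure.

```python
def classify_vasculature(initial_images, initial_labels, patient_images,
                         train_fn, infer_fn, ask_user_fn):
    """Sketch of steps a)-e): train an initial model, then, for each patient
    image, detect deviations, ask the user to label them, and retrain with
    the expanded dataset."""
    images, labels = list(initial_images), list(initial_labels)
    model = train_fn(images, labels)                       # step a)
    for image in patient_images:                           # step b)
        deviations = infer_fn(model, image)                # step c)
        if not deviations:
            continue
        new_labels = ask_user_fn(image, deviations)        # step d)
        images.append(image)
        labels.append(new_labels)
        model = train_fn(images, labels)                   # step e): retrain
    return model
```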
18,195